DOI: 10.3390/make4020022 | arXiv: 2109.00666 | PDF: https://arxiv.org/pdf/2109.00666v1.pdf
TabFairGAN: Fair Tabular Data Generation with Generative Adversarial Networks
Amirarsalan Rajabi [email protected]
Department of Computer Science, University of Central Florida, Orlando, FL, US
Department of Industrial Engineering and Management Systems, University of Central Florida, Orlando, FL, US

Ozlem Ozmen Garibay
Department of Computer Science, University of Central Florida, Orlando, FL, US
Department of Industrial Engineering and Management Systems, University of Central Florida, Orlando, FL, US
TabFairGAN: Fair Tabular Data Generation with Generative Adversarial Networks
Index Terms-Fairness in Artificial Intelligence, Generative Adversarial Networks, WGAN
With the increasing reliance on automated decision making, the issue of algorithmic fairness has gained growing importance. In this paper, we propose a Generative Adversarial Network for tabular data generation. The model includes two phases of training. In the first phase, the model is trained to accurately generate synthetic data similar to the reference dataset. In the second phase we modify the value function to add a fairness constraint, and continue training the network to generate data that is both accurate and fair. We test our results in both the unconstrained and the constrained fair data generation cases. In the unconstrained case, i.e. when the model is only trained in the first phase and is only meant to generate accurate data following the same joint probability distribution as the real data, the results show that the model beats state-of-the-art GANs proposed in the literature for producing synthetic tabular data. In the constrained case, in which the first phase of training is followed by the second phase, we train the network on four datasets studied in the fairness literature, compare our results with another state-of-the-art pre-processing method, and present the promising results that our model achieves. Compared to other studies utilizing GANs for fair data generation, our model is comparably more stable, as it uses only one critic and avoids major problems of the original GAN model, such as mode-dropping and non-convergence, by implementing a Wasserstein GAN.
I. INTRODUCTION
Artificial intelligence has gained paramount importance in contemporary human life. With an ever-growing body of research and the increasing processing capacity of computers, machine learning systems are being adopted by many firms and institutions for decision-making. Various industries such as insurance companies, financial institutions, and healthcare providers rely on automated decision making by machine learning models, which makes fairness-aware learning crucial, since many of these automated decisions could have major impacts on the lives of individuals.
There is ample evidence suggesting that bias exists in AI systems. One well-known example is the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool, a decision making system deployed by the US criminal justice system to assess the likelihood of a criminal defendant's recidivism (re-offending). It has been shown that COMPAS is biased against African American defendants [1]. Another example is Google's targeted advertising, which was found to show high-paying jobs significantly more often to males than to females [2].
The existence of such biased and unfair classifications in AI systems has led the research community to pay attention to the problem of bias in AI, and different approaches to improving fairness exist in the AI fairness literature. Let D = {X, S, Y} be a labelled dataset, where X ∈ R^n are the unprotected attributes, S is the protected attribute, and Y is the decision. From a legal perspective, a protected attribute is an attribute identified by law, based on which it is illegal to discriminate [3], e.g. gender or race. The fairness enforcement methods proposed in the literature can be categorized into three main classes: pre-process methods, in-process methods, and post-process methods.
Pre-process methods modify the training data before feeding it into the machine learning algorithm. For instance, one study [4] presents four methods to remove bias, including suppression, which removes attributes highly correlated with the protected attribute S; massaging the dataset, which changes the labels (Y) of some objects in the dataset; and reweighing, which assigns weights to different instances in the dataset. These are preliminary and simpler methods that result in fairer predictions, but they entail a higher fairness-utility cost; in other words, fairness is achieved at the expense of accuracy. Another pre-processing method proposed in the literature is the work of Feldman et al. [5], in which a repair mechanism is proposed to modify the unprotected attributes (X) and achieve fairness with higher accuracy compared to the aforementioned methods. This method will be discussed in more detail in Section V-B as the baseline method. In-process approaches involve modifying the learning algorithm to achieve fairness during training [3]. These methods mostly modify the objective function or add regularization terms to the cost function. For example, [6] proposes adding a regularization term to the objective function which penalizes mutual information between the protected attribute and the classifier predictions. Finally, post-process mechanisms modify the final decisions of the classifiers. For instance, Hardt et al. [7] propose a method to modify the final classification scores in order to enhance equalized odds.
The emergence of unfairness in AI systems is mostly attributed to: 1) direct bias existing in the historical datasets used to train the algorithms, 2) bias caused by missing data, 3) bias caused by proxy attributes, where bias against the minority population is present in non-protected attributes, and 4) bias resulting from algorithm objective functions, where the aggregate accuracy of the whole population is sought and therefore the algorithm might disregard the minority group for the sake of the majority [3]. Since historical datasets are a major source of discrimination in AI, we focus on generating unbiased datasets to achieve fairness. There is a rich and growing literature on generative models. The main idea behind a generative model is to capture the probabilistic distribution that could generate data similar to a reference dataset [8]. Broadly speaking, generative models can be divided into two main classes [8]: energy-based models such as Boltzmann machines [9], and cost function-based models such as autoencoders and generative adversarial networks (GANs) [10]. GANs address some deficiencies of traditional generative models and have been shown to excel in various tasks compared to other generative models, such as image generation [11] and video generation [12].
The original GAN consists of two networks, a generator and a discriminator [10]. The two networks play a minimax game. The generator takes a latent random variable Z as input and generates a sample G(Z) that is similar to the real data. The discriminator, on the other hand, is fed with both real and generated samples, and its task is to correctly classify the input sample as real or generated. If the networks have enough capacity, they are trained together and ideally optimized to reach an equilibrium state in which the generator produces data from the exact targeted distribution and the discriminator gives the real and generated samples an equal probability of 0.5. The work in [10] shows that training the discriminator to optimality is equal to minimizing the Jensen-Shannon divergence [13]. The work of Arjovsky et al. develops Wasserstein GANs, where a critic replaces the discriminator and minimizing the Earth-mover's distance [14] is used instead of minimizing the Jensen-Shannon divergence. They show that WGAN can address some common training problems attributed to GANs, such as the requirement to maintain a careful balance during training as well as mode dropping [15].
In recent studies, adversarial training has been used to remove discrimination. One such study, for example, formulates the model as a minimax problem and proposes an adversarial learning framework that can learn representations of data that are discrimination-free and do not contain explicit information about the protected attribute [16]. Other adversarial objectives are proposed in [17], [18] to achieve group fairness measures such as demographic parity and equality of odds. The application of generative adversarial networks to fairness in tabular datasets is not discussed much in the literature, but has recently attracted the attention of the research community. For instance, the work of Sattigeri et al. [19] proposes an approach to generate image datasets such that demographic fairness in the generated dataset is imposed. In their work, Xu et al. [20] design a GAN that produces discrimination-free tabular datasets. Their network includes one generator and two discriminators. The generator is adopted from [21] and produces fake pairs of data (X̂, Ŷ) following the conditional distribution P_G(X, Y | S). One discriminator's task is to ensure the generator produces data with good accuracy, and the second discriminator ensures the generator produces fair data.
In this paper, we propose a Wasserstein GAN, TabFairGAN, that can produce high quality tabular data with the same joint distribution as the original tabular dataset. In Section II, we discuss the fairness measure: demographic parity and the discrimination score. In Section III, we introduce the model architecture, data transformation, value functions, and the training process of the model. In Section IV, we compare the results of TabFairGAN with two other state-of-the-art GANs for tabular data generation, namely TGAN [22] and CTGAN [23]. In Section V, we show how the model can be used for fair data generation and test the model on four real datasets. We compare the results of our model with the method developed by [5], which is another pre-process method to enforce fairness. Finally, in Section V-D, we explore the fairness-accuracy trade-off. This work has two main contributions. First, we show that in the case of no constraints (no fairness), the model is able to produce high quality synthetic data, competing with the state-of-the-art GANs designed for tabular data generation. The second contribution is producing high quality fair synthetic data by adding a fairness constraint to the loss function of the generator. Compared to previous applications of GANs for fair tabular data generation, the model is more stable based on two merits: 1) the proposed model is a Wasserstein GAN, which has been shown to improve on the original GAN model in terms of some common GAN pitfalls, such as the mode-dropping phenomenon [15], and 2) the model only uses one critic instead of two [20] or three [24] discriminators.
II. DISCRIMINATION SCORE
Among the most frequently used fairness metrics specified in legal notions and the literature is demographic parity, also known as statistical parity/fairness. The goal of demographic fairness is to ensure that the overall proportion of members receiving a positive decision is identical across the protected groups. In a binary case, let D = {X, S, Y} be a labelled dataset, where X ∈ R^n are the unprotected attributes, S is the protected attribute, and Y is the decision. In this paper, we consider the binary case, and for notational convenience we assume that the protected attribute S takes two values, where S = 0 represents the underprivileged minority class and S = 1 represents the privileged majority class. For instance, in a binary racial discrimination study the value 0 would be assigned to "African-American", whereas 1 would be assigned to "White". We also assign Y = 1 to a successful decision (for instance an admission to a higher education institution), and Y = 0 to an unsuccessful decision (rejection). Demographic fairness for the labeled dataset is defined as follows [7]:
P(y = 1 | s = 1) = P(y = 1 | s = 0)    (1)
In this context, the degree of demographic disparity is measured by the difference between the two conditional probabilities. We define the discrimination with respect to the protected attribute S by the discrimination score (DS), calculated as DS = P(y = 1 | s = 1) − P(y = 1 | s = 0). A similar measure can be obtained for a labeled dataset D and a classifier f : (X, S) → Y, where the discrimination score of the classifier f with respect to the protected attribute S is:
P(ŷ = 1 | x, s = 1) − P(ŷ = 1 | x, s = 0)    (2)
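As a concrete illustration, the snippet below computes both versions of the discrimination score. It is a minimal sketch: the pandas DataFrame layout, the column names "s" and "y", and the 0/1 encoding of the decision are our own assumptions, not part of the paper.

```python
import pandas as pd

def discrimination_score(df: pd.DataFrame, s_col: str = "s", y_col: str = "y") -> float:
    """DS of a labelled dataset: P(y=1 | s=1) - P(y=1 | s=0)."""
    p_privileged = df.loc[df[s_col] == 1, y_col].mean()
    p_unprivileged = df.loc[df[s_col] == 0, y_col].mean()
    return p_privileged - p_unprivileged

def classifier_discrimination_score(clf, X, s) -> float:
    """DS of a trained binary classifier: P(yhat=1 | s=1) - P(yhat=1 | s=0)."""
    y_hat = clf.predict(X)
    return y_hat[s == 1].mean() - y_hat[s == 0].mean()
```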
III. MODEL DESCRIPTION
A. Tabular Dataset Representation and Transformation
A tabular dataset contains N_C numerical columns {c_1, ..., c_{N_C}} and N_D categorical columns {d_1, ..., d_{N_D}}. In this model, categorical columns are transformed and represented by one-hot vectors. Representing numerical columns, on the other hand, is non-trivial due to certain properties of numerical columns. One such property is that numerical columns are often sampled from multi-modal distributions. Some models such as [21] use min-max normalization to normalize and transform numerical columns. The work of Xu et al. [23] proposes a more complex process, namely a mode-specific normalization using a variational Gaussian mixture model (VGM) to estimate the number of modes and fit a Gaussian mixture model to each numerical column. In our model, each numerical column is transformed using a quantile transformation [25]:
c_i = Φ^{-1}(F(c_i))    (3)
Where c_i is the ith numerical feature, F is the CDF (cumulative distribution function) of the feature c_i, and Φ is the CDF of a uniform distribution. After transforming the numerical and discrete columns, the representation of each transformed row of the data is as follows:
r = c_1 ⊕ ... ⊕ c_{N_C} ⊕ d_1 ⊕ ... ⊕ d_{N_D}    (4)
l_i = dim(d_i)    (5)
l_w = dim(r)    (6)
where c_i represents the ith numerical column, d_i denotes the one-hot encoded vector of the ith categorical column, and ⊕ is the symbol denoting concatenation of vectors. Also, l_i denotes the dimension of the ith discrete column's one-hot encoding vector and l_w denotes the dimension of r.
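The following sketch shows one way this transformation could be implemented. It is illustrative only: the use of scikit-learn's QuantileTransformer, pandas one-hot encoding, and the function and argument names are our assumptions rather than the authors' implementation.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import QuantileTransformer

def transform_rows(df: pd.DataFrame, numeric_cols, categorical_cols):
    # Quantile transformation of the N_C numerical columns (Eq. 3)
    qt = QuantileTransformer(output_distribution="uniform")
    c = qt.fit_transform(df[numeric_cols])
    # One-hot encoding of the N_D categorical columns (d_1, ..., d_ND)
    d_frames = [pd.get_dummies(df[col], prefix=col) for col in categorical_cols]
    d = np.concatenate([f.to_numpy(dtype=float) for f in d_frames], axis=1)
    # r = c_1 + ... + c_NC concatenated with d_1 + ... + d_ND (Eq. 4)
    r = np.concatenate([c, d], axis=1)
    l_i = [f.shape[1] for f in d_frames]   # l_i = dim(d_i)  (Eq. 5)
    l_w = r.shape[1]                       # l_w = dim(r)    (Eq. 6)
    return r, l_i, l_w
```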
B. Network Structure
While traditional GANs suffer from problems such as non-convergence and mode collapse, the work of [15] developed Wasserstein GANs, which improve the training of GANs to some extent and replace the discriminator with a critic. The network designed in this model is a WGAN with gradient penalty [26]. The WGAN value function, using the Kantorovich-Rubinstein duality [27], is as follows [26]:
min_G max_{C∈C} E_{x∼P_data(x)}[C(x)] − E_{z∼P_z(z)}[C(G(z))]    (7)
Where C is the set of 1-Lipschitz functions. The generator receives a latent variable Z from a standard multivariate normal distribution and produces a sample data point which is then forwarded to the critic. Once the critic and the generator are trained together, eventually the generator would become like a deterministic transformation that produces data similar to the real data.
The generator consists of a fully-connected first layer with a ReLU activation function. The second hidden layer of the generator network is then formed by the concatenation of multiple vectors that together form data similar to the transformed original data. For the numerical variables, a fully connected layer FC_{l_w→N_C} with a ReLU activation is implemented. For the nodes that are supposed to produce discrete columns, multiple fully connected layers FC_{l_w→l_i} with Gumbel softmax [28] activations are used in order to produce one-hot vectors (d_i). The resulting nodes are then concatenated to produce data similar to the transformed original data (with the same dimension l_w), which is then fed to the critic network. The structure of the critic network is simple and includes 2 fully connected layers with Leaky ReLU activation functions.
The generator network's architecture is formally described as:
h_0 = Z (latent vector)
h_1 = ReLU(FC_{l_w→l_w}(h_0))
h_2 = ReLU(FC_{l_w→N_C}(h_1)) ⊕ gumbel_{0.2}(FC_{l_w→l_1}(h_1)) ⊕ gumbel_{0.2}(FC_{l_w→l_2}(h_1)) ⊕ ... ⊕ gumbel_{0.2}(FC_{l_w→l_{N_D}}(h_1))    (8)
Where FC_{a→b} denotes a fully connected layer with input size a and output size b, ReLU(x) denotes applying a ReLU activation on x, gumbel_τ(x) denotes applying Gumbel softmax with parameter τ on a vector x, and ⊕ denotes concatenation of vectors.
The critic network's architecture is formally described as:
h_0 = X (output of the generator or transformed real data)
h_1 = LeakyReLU_{0.01}(FC_{l_w→l_w}(h_0))
h_2 = LeakyReLU_{0.01}(FC_{l_w→l_w}(h_1))    (9)
Where LeakyReLU_τ(x) denotes applying the Leaky ReLU activation function [29] with slope τ on x. Fig. 1 shows the architecture of the model.
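To make the layer structure of Eqs. (8)-(9) concrete, the PyTorch sketch below mirrors the description above. It is an illustration under stated assumptions rather than the authors' code: the class and variable names are ours, and the final scalar projection in the critic is an assumption, since Eq. (9) only specifies the two hidden layers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    def __init__(self, l_w, n_numeric, cat_dims):
        # l_w: width of a transformed row; n_numeric: N_C; cat_dims: [l_1, ..., l_ND]
        super().__init__()
        self.fc1 = nn.Linear(l_w, l_w)
        self.num_head = nn.Linear(l_w, n_numeric)
        self.cat_heads = nn.ModuleList(nn.Linear(l_w, d) for d in cat_dims)

    def forward(self, z):
        h1 = F.relu(self.fc1(z))
        numeric = F.relu(self.num_head(h1))
        cats = [F.gumbel_softmax(head(h1), tau=0.2) for head in self.cat_heads]
        return torch.cat([numeric] + cats, dim=1)   # same dimension l_w as real rows

class Critic(nn.Module):
    def __init__(self, l_w):
        super().__init__()
        self.fc1 = nn.Linear(l_w, l_w)
        self.fc2 = nn.Linear(l_w, l_w)
        self.out = nn.Linear(l_w, 1)   # scalar critic score (our assumption)

    def forward(self, x):
        h1 = F.leaky_relu(self.fc1(x), negative_slope=0.01)
        h2 = F.leaky_relu(self.fc2(h1), negative_slope=0.01)
        return self.out(h2)
```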
C. Training
In this section we introduce the loss functions for the critic network and the generator network of the developed WGAN. The overall process of training the model includes two phases. Phase I of training only focuses on training the model such that the generator can generate data with a joint probability distribution similar to that of the real data. Phase II of training further trains the generator to produce samples which have a joint probability distribution similar to that of the real data and are also fair with respect to the discrimination score (DS) defined in Section II.
1) Phase I: Training for Accuracy: In the first phase, the generator and critic are trained with respect to their value functions. The critic's loss function with gradient penalty is [26]:
V_C = E_{x̂∼P_g}[C(x̂)] − E_{x∼P_r}[C(x)] + λ E_{x̄∼P_x̄}[(‖∇_x̄ C(x̄)‖_2 − 1)^2]    (10)
Where P_r and P_g are the real data distribution and the generated data distribution, respectively. Note that the third term is the gradient penalty enforcing the Lipschitz constraint, and P_x̄ is implicitly defined by sampling uniformly along straight lines between pairs of points sampled from the data distribution P_r and the generator distribution P_g [26].
The loss function for the generator network in Phase I of training is also as follows:
V_G = −E_{x̂∼P_g}[C(x̂)]    (11)
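A minimal PyTorch sketch of one Phase I update following Eqs. (10)-(11) is given below; this is the standard WGAN-GP step. The function names are ours, and the gradient-penalty weight (λ_p = 10) is the value stated in Algorithm 1.

```python
import torch

def critic_loss(critic, real, fake, lambda_p=10.0):
    # Interpolate between real and generated rows for the gradient penalty
    eps = torch.rand(real.size(0), 1, device=real.device)
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(critic(interp).sum(), interp, create_graph=True)[0]
    penalty = ((grad.norm(2, dim=1) - 1.0) ** 2).mean()
    # V_C = E[C(fake)] - E[C(real)] + lambda * penalty   (Eq. 10)
    return critic(fake).mean() - critic(real).mean() + lambda_p * penalty

def generator_loss_phase1(critic, fake):
    # V_G = -E[C(fake)]   (Eq. 11)
    return -critic(fake).mean()
```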
2) Phase II: Training for Fairness and Accuracy: In the second phase of training, the fairness constraint is enforced on the generator to produce fair data. Similar to the definitions in Section II, let D̂ = {X̂, Ŷ, Ŝ} be a batch of generated data, where X̂ is the unprotected attribute of the generated data, Ŷ is the decision, with Ŷ = 1 being the successful and favorable value for the decision (e.g. having an income of > 50K for an adult in the adult income dataset), and Ŝ is the protected attribute, with Ŝ = 0 denoting the unprivileged minority group (for example having a gender of "female" in the adult income dataset). The new loss function for the generator in Phase II of training is as follows:
V_G = −E_{(x̂,ŷ,ŝ)∼P_g}[C(x̂, ŷ, ŝ)] − λ_f (E_{(x̂,ŷ,ŝ)∼P_g}[ŷ | ŝ = 0] − E_{(x̂,ŷ,ŝ)∼P_g}[ŷ | ŝ = 1])    (12)
With the above loss function for the generator, the model aims to generate a fair dataset {X̂, Ŷ, Ŝ} ∼ P_g which achieves demographic fairness with respect to the protected attribute Ŝ in the generated samples, by minimizing the discrimination score of the generated data, P(Ŷ = 1 | Ŝ = 1) − P(Ŷ = 1 | Ŝ = 0). The goal in this phase of training is to train the generator to generate synthetic data which is both similar to the real data (D̂ ∼ D) and fair based on the demographic fairness measure. In the ideal case, the generator would produce synthetic data such that Ŷ ⊥ Ŝ. After training is done, the samples are generated and inverse transformed to the original data format. The formal procedure of training the model is shown in Algorithm 1.
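As an illustration of the Phase II generator objective of Eq. (12), the sketch below estimates the fairness term from a generated batch. How the protected-attribute and decision columns are indexed in the generated rows (s_idx, y_idx) depends on the data transformation and is assumed here; this is a sketch, not the authors' implementation.

```python
import torch

def generator_loss_phase2(critic, fake, s_idx, y_idx, lambda_f):
    s = fake[:, s_idx]                  # soft one-hot entry for S = 1 (privileged)
    y = fake[:, y_idx]                  # soft one-hot entry for Y = 1 (favorable)
    e_y_given_s0 = y[s < 0.5].mean()    # estimate of E[y_hat | s_hat = 0]
    e_y_given_s1 = y[s >= 0.5].mean()   # estimate of E[y_hat | s_hat = 1]
    # V_G = -E[C(fake)] - lambda_f * (E[y|s=0] - E[y|s=1])   (Eq. 12)
    return -critic(fake).mean() - lambda_f * (e_y_given_s0 - e_y_given_s1)
```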
IV. EXPERIMENT: ONLY PHASE I (NO FAIRNESS)
In this section, we evaluate the effectiveness of the model in producing synthetic data similar to data coming from a known probability distribution. We show that the model is able to generate synthetic data similar to the reference dataset and compare our results with two state-of-the-art GAN models for the generation of tabular datasets, namely TGAN [22] and CTGAN [23]. TGAN is a GAN-based model that generates relational tables by clustering numerical variables to deal with multi-modal distributions and adding noise and a KL divergence term to the loss function to generate discrete features. In CTGAN, mode-specific normalization is applied to numerical values and the generator works conditionally in order to overcome the imbalance in the training data. We evaluate the model on the UCI Adult Income Dataset [30]. The task we are trying to achieve is as follows: given a dataset D = {X, S, Y} ∼ P_data, generate a dataset D̂_syn = {X̂, Ŝ, Ŷ} ∼ P_syn such that P_syn ∼ P_data. We are not seeking to achieve fairness in this section; we solely seek to generate data following the same distribution as the real data to achieve data utility.
To compare data utility among the datasets generated by different models, we evaluate the performance of using synthetic data as training data for machine learning. First, the real dataset is divided into two parts: D_train and D_test. The Adult dataset contains a total of 48,842 rows; 90% of the data were assigned to D_train and the remaining 10% were assigned to D_test. Next, each model is trained on the training set D_train for 300 epochs, three times. With each training, the trained models are used to generate their corresponding synthetic data D_syn. Three machine learning classifiers are then trained on each generated D_syn, tested on D_test, and the accuracy and F1 score of classification are recorded. The classifiers used are a decision tree classifier (DTC), logistic regression (LR), and a multi layer perceptron (MLP). Table I reports the results of classification and compares them with the case in which a classifier is trained on the original D_train and tested on D_test (reporting the means and standard deviations of the evaluation metrics). The results show that TabFairGAN and CTGAN outperform TGAN in all cases. TabFairGAN outperforms CTGAN with a DT classifier. With an LR classifier, the performance of TabFairGAN and CTGAN is identical with respect to accuracy, and TabFairGAN performs slightly better than CTGAN with respect to F1 score. With an MLP classifier, CTGAN performs slightly better than TabFairGAN with respect to accuracy, while TabFairGAN outperforms CTGAN with respect to F1 score. These results display the effectiveness of TabFairGAN with respect to generating data similar to real tabular data.
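The evaluation protocol above ("train on synthetic, test on real") can be sketched as follows; the feature/label split and classifier settings are illustrative assumptions, not the exact configuration used in the paper.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, f1_score

def evaluate_synthetic(X_syn, y_syn, X_test, y_test):
    """Fit each classifier on synthetic data and score it on the real test split."""
    results = {}
    for name, clf in [("DTC", DecisionTreeClassifier()),
                      ("LR", LogisticRegression(max_iter=1000)),
                      ("MLP", MLPClassifier())]:
        clf.fit(X_syn, y_syn)
        y_pred = clf.predict(X_test)
        results[name] = (accuracy_score(y_test, y_pred), f1_score(y_test, y_pred))
    return results
```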
V. EXPERIMENTS: FAIR DATA GENERATION AND DATA UTILITY (TRAINING WITH BOTH PHASE I AND PHASE II)
In the second set of experiments, the effectiveness of the model in generating data which is both similar to the reference dataset and also fair is evaluated, and the tradeoff between machine learning efficacy and fairness is investigated. We will experiment with four datasets to test the fairness/utility tradeoff of the model. The four datasets and their attributes are first introduced. All four datasets used in experiments are studied in the literature of algorithmic fairness [3]. Next, we introduce the baseline method with which the results of TabFairGAN are compared. The results are presented and compared in Table II.
A. Datasets
The first dataset is the UCI Adult Dataset [30], available at http://archive.ics.uci.edu/ml/datasets/adult. This dataset is based on 1994 US census data and contains 48,842 rows with attributes such as age, sex, occupation, and education level for each person, and the target variable indicates whether that individual has an income that exceeds $50K per year or not. In our experiments, we consider the protected attribute to be sex (S = "Sex", Y = "Income").
The second dataset used in the experiments is the Bank Marketing Data Set [31]. This dataset contains information about a direct marketing campaign of a Portuguese banking institution. Each row of the dataset contains attributes about an individual, such as age, job, marital status, housing, and the duration of the marketing call, and the target variable determines whether that individual subscribed to a term deposit or not. The dataset contains 45,211 records. Similar to [32], we have considered age to be the protected attribute (a young individual has a higher chance of being labeled as "yes" to subscribing to a term deposit). In order to have a binary protected attribute, we set a cut-off value of 25: an age of more than 25 is considered "older", while an age of less than or equal to 25 is considered "younger" (S = "Age", Y = "Subscribed").
The third dataset used in this section is the ProPublica dataset from the COMPAS risk assessment system [33]. This dataset contains information about defendants from Broward County and contains attributes about defendants such as their ethnicity, language, marital status, and sex, and, for each individual, a score showing the likelihood of recidivism (re-offending). In these experiments we used a modified version of the dataset. First, attributes such as FirstName, LastName, MiddleName, CASE ID, and DateOfBirth are removed. Studies have shown that this dataset is biased against African Americans [1]. Therefore, ethnicity is chosen to be the protected attribute for this study. Only African American and Caucasian individuals are kept and the rest are dropped. The target variable in this dataset is a risk decile score provided by the COMPAS system, showing the likelihood of that individual re-offending, which ranges from 1 to 10. The final modified dataset contains 16,267 records with 16 features. To make the target variable binary, a cut-off value of 5 is considered: individuals with a decile score of less than 5 are considered "Low Chance", while the rest are considered "High Chance" (S = "Ethnicity", Y = "Recidivism Chance").
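A preprocessing step along these lines could look like the sketch below. It is purely illustrative: the exact column names (e.g. "Ethnicity", "DecileScore") and the pandas workflow are our assumptions about how the described filtering and binarization might be coded, not the authors' script.

```python
import pandas as pd

def prepare_compas(df: pd.DataFrame) -> pd.DataFrame:
    # Drop identifying attributes mentioned above (column names are assumptions)
    df = df.drop(columns=["FirstName", "LastName", "MiddleName", "CASE ID", "DateOfBirth"],
                 errors="ignore")
    # Keep only African American and Caucasian defendants
    df = df[df["Ethnicity"].isin(["African-American", "Caucasian"])].copy()
    # Binarize the decile score with a cut-off of 5
    df["Recidivism Chance"] = df["DecileScore"].apply(
        lambda score: "Low Chance" if score < 5 else "High Chance")
    return df
```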
The last dataset used in the experiments is the Law School Admissions dataset, compiled by the Law School Admission Council through a survey across 162 law schools in the United States [34]. This dataset contains information on 21,790 law students, such as their GPA (grade-point average), LSAT score, and race, and the target variable is whether the student had a high FYA (first year average grade). Similar to other studies (such as [35]), we have considered race to be the protected attribute. We only considered individuals with "Black" or "White" race. The modified data contains 19,567 records (S = "Race", Y = "FYA"). The discrimination scores (DS) of all datasets are reported in Table II.

B. Baseline Model: Certifying and Removing Disparate Impact

In their work, Feldman et al. [5] proposed a method to modify a dataset to remove bias and preserve relevant information in the data. In a dataset D = {X, S, Y}, given the protected attribute S and a single numerical attribute X, let X_s = Pr(X | S = s) denote the marginal distribution on X conditioned on S = s. Considering F_s : X_s → [0, 1], the cumulative distribution function for values x ∈ X_s, they define a "median" distribution A in terms of its quantile function
F_A^{-1} : F_A^{-1}(u) = median_{s∈S} F_s^{-1}(u)
They then propose a repair algorithm which creates X̄, such that for all x ∈ X_s the corresponding x̄ = F_A^{-1}(F_s(x)). To control the trade-off between fairness and accuracy, they define and calculate a λ-partial repair by:
F̄_s^{-1} = (1 − λ) F_s^{-1} + λ F_A^{-1}    (13)
The result of such a partial repair procedure is a dataset D̄ = {X̄, S, Y} which is more fair and preserves relevant information for the classification task. We call this method CRDI henceforth.
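A compact numpy sketch of this λ-partial repair for a single numerical column is shown below. It is our own illustrative reconstruction of Eq. (13), not the CRDI reference implementation; the quantile grid and the rank-based empirical CDF are implementation choices.

```python
import numpy as np

def partial_repair(x, s, lam, grid=np.linspace(0.0, 1.0, 101)):
    """lambda-partial repair of a numerical column x given protected attribute s."""
    groups = np.unique(s)
    # Group-wise quantile functions F_s^{-1} evaluated on a common grid
    q = {g: np.quantile(x[s == g], grid) for g in groups}
    # "Median" distribution A: point-wise median of the group quantile functions
    q_A = np.median(np.vstack([q[g] for g in groups]), axis=0)
    x_rep = np.empty_like(x, dtype=float)
    for g in groups:
        xg = x[s == g]
        # Empirical CDF value F_s(x) of each point within its group
        u = (np.argsort(np.argsort(xg)) + 0.5) / len(xg)
        # Partially repaired quantile function (Eq. 13); then x_bar = F_bar_s^{-1}(F_s(x))
        q_bar = (1.0 - lam) * q[g] + lam * q_A
        x_rep[s == g] = np.interp(u, grid, q_bar)
    return x_rep
```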
C. Results
The goal in this section is to train the proposed network on the datasets and produce similar data that is also fair with respect to the protected attributes defined for each dataset. The process is as follows. The models are first trained on each dataset. As mentioned in Section III-C, training of the network includes two phases: in the first phase, the network is only trained for accuracy for a certain number of epochs, and then in the second phase, the loss function of the generator is modified and the network is trained for accuracy and fairness. Once the training is finished, the generator of the network is used to produce synthetic data D_syn. We also generated repaired datasets using the CRDI method described in Section V-B to compare our results with. For each model, we train five times and report the means and standard deviations of the evaluation results in Table II.
The generated data D_syn is then evaluated from two perspectives: fairness and utility. To evaluate the fairness of D_syn, we adopt the discrimination score (DS): DS = P(y = 1 | s = 1) − P(y = 1 | s = 0). Looking into Table II, the results show that, compared with CRDI, TabFairGAN can more effectively produce datasets in which the demographic disparity of the generated data is almost removed. The discrimination scores of the datasets produced by TabFairGAN beat those of the repaired datasets produced by CRDI.
To evaluate data utility, we adopt a decision tree classifier with the default parameter setting [36]. For TabFairGAN data, we train the decision tree classifier on D_syn, test it on D_test, and report the accuracy and F1-score of the classifier. We also train decision tree classifiers on the repaired data D̄ produced by CRDI, test on D_test, and report accuracy and F1-score. Table II shows that the repaired data D̄ produced by CRDI has better data utility for the Adult, COMPAS, and Law School datasets by less than 5% in all cases, while for the Bank dataset the accuracy of D_syn produced by TabFairGAN is almost 8% higher than that of D̄ produced by CRDI.
The last evaluation we perform on the produced datasets is to examine the discrimination score (DS) of the classifier. We adopt the discrimination score (DS) for a classifier: DS = P(ŷ = 1 | s = 1) − P(ŷ = 1 | s = 0). The results in Table II show that the discrimination score of the decision tree classifier trained on D_syn is lower by almost 4% and 13% for the Adult and Law School datasets, respectively, while the discrimination score of the decision tree classifier trained on D̄ is lower by about 1% and 0.3% for the Bank and COMPAS datasets, respectively.
The parameter settings of the models on each dataset are reported in the Appendix. The results show that, while CRDI narrowly beats TabFairGAN in terms of data utility, TabFairGAN beats CRDI in terms of discrimination score in all cases for the generated data and in 2 out of 4 cases for the resulting classifiers. This is attributed to the fairness-utility trade-off of TabFairGAN governed by λ_f. The case of the COMPAS dataset is interesting, since neither of the models could decrease the discrimination score of the classifier much compared to the discrimination score in the original dataset. Looking into the data and performing a correlation analysis, the risk decile score (target variable) has a high Pearson correlation of 0.757 with one of the columns, named RecSupervisionLevel, which denotes the supervisory status of each individual. This reveals that although the generated dataset D_syn has a lower discrimination score of 0.009, disparate impact exists in the dataset, indicating that the discriminatory outcomes are not explicitly caused by the protected attribute, but also arise from proxy unprotected attributes [20].
D. Utility and Fairness Trade-off
To explore the trade-off between utility and fairness of the generated data, we perform the following experiment: λ_f was increased over [0.05, 0.7] in steps of 0.05, and for each value of λ_f the model was trained for 170 epochs in Phase I and 30 epochs in Phase II. For each λ_f value, five training runs were performed and the average discrimination score was recorded. Figure 2 shows the results, plotted along with standard deviations as confidence intervals. We can observe that the discrimination score of the generated synthetic datasets (D_syn) decreases significantly as λ_f increases. Meanwhile, the classifier accuracy layoff, i.e. the reduction in the decision tree classifier's accuracy compared to the case in which the classifier is trained on the real original training dataset (D_train), increases slightly as λ_f increases.
VI. CONCLUSION
In this paper, we proposed a Wasserstein Generative Adversarial Network that can generate synthetic data similar to a reference dataset. We showed that in the case of unconditional tabular data generation, i.e. with no fairness constraints, the model is able to produce data of high quality compared to other GANs developed for the same purpose. We also showed that by adding a fairness constraint to the generator, the model is able to achieve data generation which improves the demographic parity of the generated data. We tested the model on four datasets studied in the fairness literature and compared our results with those of [5]. As generative models, GANs have great potential to be utilized for fair data generation, especially in cases where the real dataset is limited. There are other fields in which GANs could be utilized for tabular data generation, such as research involving data privacy [37]. In future work, we will explore other, more sophisticated data generation constraints, e.g. enforcing other fairness metrics such as equality of odds and equality of opportunity. We also consider exploring the use of GANs for fairness in other data types, such as text and image data.
VII. APPENDIX
Table III reports the models' hyperparameters used in the Section V experiments.
Fig. 1. Model architecture. The generator consists of an initial fully connected layer with a ReLU activation function, and a second layer which uses ReLU for numerical attribute generation and Gumbel-softmax to form one-hot representations of categorical attributes. The final data is then produced by concatenating all attributes in the last layer of the generator. The critic consists of fully-connected layers with LeakyReLU activation functions.
Algorithm 1 Training algorithm for the proposed WGAN. We use n_crit = 4, a batch size of 256, λ_p = 10, and the Adam optimizer with α = 0.0002, β_1 = 0.5, and β_2 = 0.999.
1:  for T_1 do
2:      for t = 1, ..., n_crit do
3:          Sample a batch of m rows D(x, y, s) ∼ P_r, z ∼ P(z), and ε ∼ U[0, 1]
4:          D̂ = (x̂, ŝ, ŷ) ← G_θ(z)
5:          D̄ ← εD + (1 − ε)D̂
6:          Update the critic by descending the gradient:
7:              ∇_w (1/m) Σ_{i=1}^{m} [ C_w(D̂) − C_w(D) + λ_p (‖∇_D̄ C_w(D̄)‖_2 − 1)^2 ]
8:      end for
9:      Sample a batch of m latent vectors z ∼ P(z)
10:     Update the generator by descending the gradient:
11:         ∇_θ (1/m) Σ_{i=1}^{m} −C_w(G_θ(z))
12: end for
13: for T_2 do
14:     for t = 1, ..., n_crit do
15:         Sample a batch of m rows D(x, y, s) ∼ P_r, z ∼ P(z), and ε ∼ U[0, 1]
16:         D̂ = (x̂, ŝ, ŷ) ← G_θ(z)
17:         D̄ ← εD + (1 − ε)D̂
18:         Update the critic by descending the gradient:
19:             ∇_w (1/m) Σ_{i=1}^{m} [ C_w(D̂) − C_w(D) + λ_p (‖∇_D̄ C_w(D̄)‖_2 − 1)^2 ]
20:     end for
21:     Sample a batch of m generated rows D̂ = (x̂, ŝ, ŷ) ∼ P(G_θ(z))
22:     Update the generator by descending the gradient:
23:         ∇_θ (1/m) Σ_{i=1}^{m} [ −C_w(D̂) − λ_f (|D̂_{s=0,y=1}|/|D̂_{s=0}| − |D̂_{s=1,y=1}|/|D̂_{s=1}|) ]
24: end for
TABLE I
COMPARING THE RESULTS OF TABFAIRGAN FOR ACCURATE DATA GENERATION WITH TGAN AND CTGAN MODELS

Classifier:         DTC                               LR                                MLP
                    Accuracy        F1                Accuracy        F1                Accuracy        F1
Original Data       0.811 ± 0.001   0.606 ± 0.002     0.798 ± 0.000   0.378 ± 0.000     0.780 ± 0.051   0.488 ± 0.075
TabFairGAN          0.783 ± 0.001   0.544 ± 0.002     0.794 ± 0.020   0.239 ± 0.012     0.778 ± 0.045   0.405 ± 0.174
TGAN                0.661 ± 0.013   0.503 ± 0.012     0.765 ± 0.010   0.170 ± 0.008     0.623 ± 0.197   0.376 ± 0.159
CTGAN               0.777 ± 0.003   0.482 ± 0.004     0.794 ± 0.023   0.232 ± 0.012     0.784 ± 0.007   0.305 ± 0.104
TABLE II
COMPARING THE RESULTS OF TABFAIRGAN FOR FAIR DATA GENERATION WITH CRDI

Original data:
Dataset        Orig. Acc.      F1 Orig.        DS in Orig. Data
Adult          0.816 ± 0.005   0.619 ± 0.013   0.195
Bank           0.879 ± 0.004   0.491 ± 0.020   0.126
COMPAS         0.903 ± 0.007   0.914 ± 0.007   0.258
Law School     0.854 ± 0.008   0.918 ± 0.005   0.302

TabFairGAN:
Dataset        DS Gen. Data    Acc. Gen. Data   F1 Gen. Data    DS in Classifier
Adult          0.009 ± 0.027   0.773 ± 0.013    0.536 ± 0.022   0.082 ± 0.038
Bank           0.001 ± 0.011   0.854 ± 0.004    0.373 ± 0.024   0.060 ± 0.056
COMPAS         0.009 ± 0.102   0.860 ± 0.040    0.876 ± 0.033   0.208 ± 0.072
Law School     0.024 ± 0.036   0.847 ± 0.020    0.916 ± 0.012   0.153 ± 0.072

CRDI:
Dataset        DS Rep. Data    Acc. Rep. Data   F1 Rep. Data    DS in Classifier
Adult          0.165 ± 0.048   0.793 ± 0.011    0.558 ± 0.029   0.121 ± 0.024
Bank           0.122 ± 0.004   0.776 ± 0.004    0.384 ± 0.011   0.050 ± 0.017
COMPAS         0.119 ± 0.128   0.893 ± 0.021    0.906 ± 0.020   0.205 ± 0.055
Law School     0.233 ± 0.103   0.892 ± 0.004    0.941 ± 0.002   0.289 ± 0.057

Fig. 2. Exploring the trade-off between accuracy and fairness by incrementally increasing the parameter λ_f.
TABLE III
MODEL PARAMETERS

               TabFairGAN                   CRDI
Dataset        T_1     T_2     Lambda       Lambda
Adult          170     30      0.5          0.999
Bank           195     5       0.75         0.9
COMPAS         40      30      2.2          0.999
Law School     180     20      2.5          0.999
REFERENCES
[1] A. Chouldechova, "Fair prediction with disparate impact: A study of bias in recidivism prediction instruments," Big Data, vol. 5, no. 2, pp. 153-163, 2017.
[2] A. Lambrecht and C. Tucker, "Algorithmic bias? An empirical study of apparent gender-based discrimination in the display of STEM career ads," Management Science, vol. 65, no. 7, pp. 2966-2981, 2019.
[3] D. Pessach and E. Shmueli, "Algorithmic fairness," arXiv preprint arXiv:2001.09784, 2020.
[4] F. Kamiran and T. Calders, "Data preprocessing techniques for classification without discrimination," Knowledge and Information Systems, vol. 33, no. 1, pp. 1-33, 2012.
[5] M. Feldman, S. A. Friedler, J. Moeller, C. Scheidegger, and S. Venkatasubramanian, "Certifying and removing disparate impact," in Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2015, pp. 259-268.
[6] T. Kamishima, S. Akaho, H. Asoh, and J. Sakuma, "Fairness-aware classifier with prejudice remover regularizer," in Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, 2012, pp. 35-50.
[7] M. Hardt, E. Price, and N. Srebro, "Equality of opportunity in supervised learning," Advances in Neural Information Processing Systems, vol. 29, pp. 3315-3323, 2016.
[8] A. Oussidi and A. Elhassouny, "Deep generative models: Survey," in 2018 International Conference on Intelligent Systems and Computer Vision (ISCV). IEEE, 2018, pp. 1-8.
[9] S. E. Fahlman, G. E. Hinton, and T. J. Sejnowski, "Massively parallel architectures for AI: NETL, Thistle, and Boltzmann machines," in National Conference on Artificial Intelligence, AAAI, 1983.
[10] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," Advances in Neural Information Processing Systems, vol. 27, 2014.
[11] A. Brock, J. Donahue, and K. Simonyan, "Large scale GAN training for high fidelity natural image synthesis," arXiv preprint arXiv:1809.11096, 2018.
[12] C. Vondrick, H. Pirsiavash, and A. Torralba, "Generating videos with scene dynamics," Advances in Neural Information Processing Systems, vol. 29, pp. 613-621, 2016.
[13] M. Menéndez, J. Pardo, L. Pardo, and M. Pardo, "The Jensen-Shannon divergence," Journal of the Franklin Institute, vol. 334, no. 2, pp. 307-318, 1997.
[14] Y. Rubner, C. Tomasi, and L. J. Guibas, "The earth mover's distance as a metric for image retrieval," International Journal of Computer Vision, vol. 40, no. 2, pp. 99-121, 2000.
[15] M. Arjovsky, S. Chintala, and L. Bottou, "Wasserstein generative adversarial networks," in International Conference on Machine Learning. PMLR, 2017, pp. 214-223.
[16] H. Edwards and A. Storkey, "Censoring representations with an adversary," arXiv preprint arXiv:1511.05897, 2015.
[17] D. Madras, E. Creager, T. Pitassi, and R. Zemel, "Learning adversarially fair and transferable representations," in International Conference on Machine Learning. PMLR, 2018, pp. 3384-3393.
[18] B. H. Zhang, B. Lemoine, and M. Mitchell, "Mitigating unwanted biases with adversarial learning," in Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, 2018, pp. 335-340.
[19] P. Sattigeri, S. C. Hoffman, V. Chenthamarakshan, and K. R. Varshney, "Fairness GAN: Generating datasets with fairness properties using a generative adversarial network," IBM Journal of Research and Development, vol. 63, no. 4/5, pp. 3-1, 2019.
[20] D. Xu, S. Yuan, L. Zhang, and X. Wu, "FairGAN: Fairness-aware generative adversarial networks," in 2018 IEEE International Conference on Big Data (Big Data). IEEE, 2018, pp. 570-575.
[21] E. Choi, S. Biswal, B. Malin, J. Duke, W. F. Stewart, and J. Sun, "Generating multi-label discrete patient records using generative adversarial networks," in Machine Learning for Healthcare Conference. PMLR, 2017, pp. 286-305.
[22] L. Xu and K. Veeramachaneni, "Synthesizing tabular data using generative adversarial networks," arXiv preprint arXiv:1811.11264, 2018.
[23] L. Xu, M. Skoularidou, A. Cuesta-Infante, and K. Veeramachaneni, "Modeling tabular data using conditional GAN," in Advances in Neural Information Processing Systems, 2019.
[24] D. Xu, S. Yuan, L. Zhang, and X. Wu, "FairGAN+: Achieving fair data generation and classification through generative adversarial nets," in 2019 IEEE International Conference on Big Data (Big Data). IEEE, 2019, pp. 1401-1406.
[25] T. M. Beasley, S. Erickson, and D. B. Allison, "Rank-based inverse normal transformations are increasingly used, but are they merited?" Behavior Genetics, vol. 39, no. 5, pp. 580-595, 2009.
[26] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville, "Improved training of Wasserstein GANs," in Advances in Neural Information Processing Systems, vol. 30. Curran Associates, Inc., 2017. [Online]. Available: https://proceedings.neurips.cc/paper/2017/file/892c3b1c6dccd52936e27cbd0ff683d6-Paper.pdf
[27] C. Villani, Optimal Transport: Old and New. Springer, 2009, vol. 338.
[28] E. Jang, S. Gu, and B. Poole, "Categorical reparameterization with Gumbel-softmax," arXiv preprint arXiv:1611.01144, 2016.
[29] B. Xu, N. Wang, T. Chen, and M. Li, "Empirical evaluation of rectified activations in convolutional network," arXiv preprint arXiv:1505.00853, 2015.
[30] D. Dua and C. Graff, "UCI machine learning repository," 2017. [Online]. Available: http://archive.ics.uci.edu/ml
[31] S. Moro, P. Cortez, and P. Rita, "A data-driven approach to predict the success of bank telemarketing," Decision Support Systems, vol. 62, pp. 22-31, 2014.
[32] M. B. Zafar, I. Valera, M. G. Rogriguez, and K. P. Gummadi, "Fairness constraints: Mechanisms for fair classification," in Artificial Intelligence and Statistics. PMLR, 2017, pp. 962-970.
[33] J. Angwin, J. Larson, S. Mattu, and L. Kirchner, "Machine bias," ProPublica, 2016. [Online]. Available: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
[34] L. F. Wightman, "LSAC national longitudinal bar passage study. LSAC research report series," 1998.
[35] Y. Bechavod and K. Ligett, "Penalizing unfairness in binary classification," arXiv preprint arXiv:1707.00044, 2017.
[36] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg et al., "Scikit-learn: Machine learning in Python," Journal of Machine Learning Research, vol. 12, pp. 2825-2830, 2011.
[37] N. Park, M. Mohammadi, K. Gorde, S. Jajodia, H. Park, and Y. Kim, "Data synthesis based on generative adversarial networks," arXiv preprint arXiv:1806.03384, 2018.
DOI: 10.1051/0004-6361/202345874 | arXiv: 2303.16717 | PDF: https://export.arxiv.org/pdf/2303.16717v1.pdf
The CORALIE survey for southern extrasolar planets XIX. Brown dwarfs and stellar companions unveiled by radial velocity and astrometry
March 30, 2023
D Barbato
Department of Astronomy
University of Geneva
Chemin Pegasi 51CH-1290VersoixSwitzerland
INAF -Osservatorio Astrofisico di Torino
Via Osservatorio 20, I-10025 Pino TorineseItaly
D Ségransan
Department of Astronomy
University of Geneva
Chemin Pegasi 51CH-1290VersoixSwitzerland
S Udry
Department of Astronomy
University of Geneva
Chemin Pegasi 51CH-1290VersoixSwitzerland
N Unger
Department of Astronomy
University of Geneva
Chemin Pegasi 51CH-1290VersoixSwitzerland
F Bouchy
Department of Astronomy
University of Geneva
Chemin Pegasi 51CH-1290VersoixSwitzerland
C Lovis
Department of Astronomy
University of Geneva
Chemin Pegasi 51CH-1290VersoixSwitzerland
M Mayor
Department of Astronomy
University of Geneva
Chemin Pegasi 51CH-1290VersoixSwitzerland
F Pepe
Department of Astronomy
University of Geneva
Chemin Pegasi 51CH-1290VersoixSwitzerland
D Queloz
Department of Physics
ETH Zurich
Wolfgang-Pauli-Strasse 2CH-8093ZurichSwitzerland
Astrophysics Group, Cavendish Laboratory
JJ Thomson AvenueCB3 0HECambridgeUK
N C Santos
Instituto de Astrofísica e Ciências do Espaço
Universidade do Porto
CAUP
Rua das Estrelas4150-762PortoPortugal
Departamento de Física e Astronomia
Faculdade de Ciências
Universidade do Porto
Rua do Campo Alegre4169-007PortoPortugal
J B Delisle
Department of Astronomy
University of Geneva
Chemin Pegasi 51CH-1290VersoixSwitzerland
P Figueira
Department of Astronomy
University of Geneva
Chemin Pegasi 51CH-1290VersoixSwitzerland
M Marmier
Department of Astronomy
University of Geneva
Chemin Pegasi 51CH-1290VersoixSwitzerland
E C Matthews
Department of Astronomy
University of Geneva
Chemin Pegasi 51CH-1290VersoixSwitzerland
Max-Planck-Institut für Astronomie
Königstuhl 17D-69117HeidelbergGermany
G Lo Curto
European Southern Observatory
19001Casilla, SantiagoChile
J Venturini
Department of Astronomy
University of Geneva
Chemin Pegasi 51CH-1290VersoixSwitzerland
G Chaverot
Department of Astronomy
University of Geneva
Chemin Pegasi 51CH-1290VersoixSwitzerland
M Cretignier
Department of Astronomy
University of Geneva
Chemin Pegasi 51CH-1290VersoixSwitzerland
J F Otegi
Department of Astronomy
University of Geneva
Chemin Pegasi 51CH-1290VersoixSwitzerland
M Stalport
Department of Astronomy
University of Geneva
Chemin Pegasi 51CH-1290VersoixSwitzerland
Department of Astronomy
University of Geneva
Ch. d'Ecogia 16CH-1290VersoixSwitzerland
The CORALIE survey for southern extrasolar planets XIX. Brown dwarfs and stellar companions unveiled by radial velocity and astrometry
March 30, 2023
Received <date> / Accepted <date>
Astronomy & Astrophysics manuscript no. coralie-binaries ©ESO 2023
Key words: astrometry - proper motions - stars: fundamental parameters - binaries: general - techniques: radial velocities - planets and satellites: dynamical evolution and stability
Context. A historical planet-search on a sample of 1647 nearby southern main sequence stars has been ongoing since 1998 with the CORALIE spectrograph at La Silla Observatory, with a backup subprogram dedicated to the monitoring of binary stars. Aims. We review 25 years of CORALIE measurements and search for Doppler signals consistent with stellar or brown dwarf companions to produce an updated catalog of both known and previously unpublished binary stars in the planet-search sample, assessing the binarity fraction of the stellar population and providing perspective for more precise planet-search in the binary sample. Methods. We perform a new analysis on the CORALIE planet-search sample radial velocity measurements, searching for stellar companions and obtaining orbital solutions for both known and new binary systems. We perform simultaneous radial velocity and proper motion anomaly fits on the subset of these systems for which Hipparcos and Gaia astrometry measurements are available, obtaining accurate estimates of true mass for the companions. Results. We find 218 stars in the CORALIE sample to have at least one stellar companion, 130 of which are not yet published in the literature and for which we present orbital solutions. The use of proper motion anomaly allows us to derive true masses for the stellar companions in 132 systems, which we additionally use to estimate stability regions for possible planetary companions on circumprimary or circumbinary orbits. Finally, we produce detection limit maps for each star in the sample and obtain occurrence rates of 0.43 +0.23 −0.11 % and 12.69 +0.87 −0.77 % for brown dwarf and stellar companions respectively in the CORALIE sample.
Introduction
Since June 1998, the historical CORALIE exoplanet-search survey has been continuously monitoring a southern hemisphere volume-limited sample composed of 1647 main sequence stars located within 50 pc from the Sun and having spectral types ranging from F8 to K0 (Queloz et al. 2000;Udry et al. 2000). As of the time of writing, the survey has collected more than 60000 radial velocity measurements using the CORALIE Echelle spectrograph mounted on the Euler Telescope at La Silla Observatory, with average measurement precision of ∼5 ms−1 and an average timespan of ∼7600 d.
[Footnote: The radial velocity measurements and additional data products discussed in this paper are available on the DACE web platform at https://dace.unige.ch/radialVelocities. A copy of the data is also available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/. Appendices A and B, containing orbital solution and detection limit plots, are available in the online version of this paper.]
[Footnote: Based on observations collected with the CORALIE spectrograph mounted on the 1.2 m Swiss telescope at La Silla Observatory.]
This uniquely long and continuous survey is especially suited for the detection of giant planets with semimajor axes as large as 10 au (Tamuz et al. 2008;Ségransan et al. 2010;Marmier et al. 2013;Rickman et al. 2019) and brown dwarfs (Udry et al. 2002;Santos et al. 2002;Rickman et al. 2019), as well as contributing to statistical studies of the frequency of planetary companions and its dependence on stellar properties (see e.g. Santos et al. 2001;Udry & Santos 2007;Mayor et al. 2011), making the almost 25-year long CORALIE survey an invaluable asset to the field of exoplanetology. Finally, it is worth remarking that the continuous monitoring of the less-active stars in the volume-limited sample also makes the CORALIE survey a fertile ground for the search of low-mass exoplanetary candidates suitable for follow-up study with higher-precision instruments such as HARPS (Pepe et al. 2000;Mayor et al. 2003) and ESPRESSO (Pepe et al. 2014), expanding its contributions to the search of exoplanetary bodies toward the realm of terrestrial companions.
During the sample selection process (see Udry et al. 2000), known large amplitude binary stars were collected in a low-priority subprogram within the planet-search survey, due to the disruptive influence that a close-in stellar companion would have on the stability of the inner region of a planetary system (Holman & Wiegert 1999;Musielak et al. 2005;Turrini et al. 2005;Marzari & Gallina 2016) and in order to limit both the blending effect produced by double-lined spectroscopic binaries (SB2) and the observational effort necessary to disentangle the planetary signal from the higher-amplitude stellar contribution. On the other hand, longer-period binary companions producing only linear trends in radial velocity were instead considered still to be good candidates for the planet search survey, both due to the weak gravitational effect that the distant stellar companion would produce on the inner regions of planetary systems and the fact that such linear trends can easily be corrected for (such as Gl86 and HD41004AB, see Queloz et al. 2000;Santos et al. 2002). This selection strategy also reflected the initial bias against low-separation binaries in favour of stellar environments similar to that of the Solar System (Eggenberger & Udry 2010;Quarles et al. 2020). Still, the large number of measurements collected during almost 25 years of observations with CORALIE has unveiled the binary nature of a non-negligible portion of the stars selected over a wide range of orbital periods, and an updated assessment of the binary population in the sample is a necessary step for advancing the analysis of the CORALIE exoplanet-search survey.
The long-term search for stellar companions within the CORALIE survey represents a key contribution to the current endeavours to further our understanding of stellar formation. Both observational and theoretical studies have now shown that most stars form with at least one stellar companion, and more specifically that about half of FGK stars are part of a binary system (see e.g. Moe & Di Stefano 2017;Halbwachs et al. 2018;Offner et al. 2022) in which the main-sequence companions appear to follow a lognormal separation distribution peaking around 40 au and roughly uniform mass-ratio distributions (Duquennoy & Mayor 1991;Melo 2003;Raghavan et al. 2010;Tokovinin 2014). It has also been shown that tighter solar-type binaries seem to favor larger mass ratios (Lucy & Ricco 1979;Tokovinin 2000;Moe & Di Stefano 2017), suggesting a common formation and evolution history in a shared circumbinary disc; the fact that wider (a > 200 au) binaries also feature a small but significant fraction of high mass ratio systems (El-Badry et al. 2019) is similarly an indication that at least some wide stellar companions form at intermediate separation and undergo outward migration in later stages of their dynamical evolution. Many different theoretical models have been proposed to explain the formation and observed characteristics of binary systems, such as the fragmentation of filaments and cores in star-forming regions (Könyves et al. 2015;Pineda et al. 2015;Guszejnov & Hopkins 2015;Guszejnov et al. 2017) and of massive accretion discs around individual forming stars (Bonnell 1994;Gammie 2001;Kratter et al. 2010;Harsono et al. 2011), and continuous study of the statistics of binary systems is essential in deepening our understanding of stellar formation.
Considering instead the brown dwarf companion population around solar-type stars, robust study of its demographics is hindered by the currently low number of detections, as fewer than 100 brown dwarf companions are currently known to orbit such stars (see e.g. Ma & Ge 2014;Grieves et al. 2017), and recent work suggests that only ∼ 4% of solar-type stars have brown dwarf companions (Offner et al. 2022). However, a notable characteristic is a clear paucity of brown dwarf companions around solar-type primary stars on close-in orbits, a feature commonly referred to as the brown dwarf desert, which could be explained by post-formation migration processes (see e.g. Grether & Lineweaver 2006;Sahlmann et al. 2011b;Shahaf & Mazeh 2019;Kiefer et al. 2019).
The characterization of binary stellar systems is also important within the scope of a radial velocity survey aimed at the detection of planetary companions, since the effect that the presence or absence of an additional stellar companion has on the formation and stability of planetary bodies is a fundamental theme in exoplanetary science. While works such as Roell et al. (2012) found the binarity rate among planet-hosting stars to be about four times smaller than for single solar-type stars, the high sample heterogeneity and observational bias are still impediments toward a full understanding of exoplanet demographics in the binary environment (see e.g. Thebault & Haghighipour 2015;Quarles et al. 2020, and references therein for reviews on the subject). More recently, Ngo et al. (2017) found no evidence that host binarity alters the distribution of planet properties in systems characterized by radial velocity observations, while Su et al. (2021) reports a positive correlation between planetary multiplicity and stellar orbital separations in circumprimary planetary systems.
In this paper we present the results of a new analysis of the CORALIE measurements of the 1647 stars in the sample, specifically aimed at the search of radial velocity signals compatible with stellar or brown dwarf companions of the target stars. More precisely, in this study we focus on a specific region of the binary companion parameter space, namely a region limited both in orbital separation as a result of the 25 yr duration of the CORALIE survey and in mass regimes, as we focus on companions having minimum masses higher than 40 M Jup . Companions populating the rest of the parameter space will be the main focus of future papers in this series. We find a total of 218 stars in the sample to have at least one such companion, among which 130 are previously unpublished ones and 88 are instead already known and for which we present updated orbital solutions. Additionally, we present further refined orbital solutions for a subset of 132 binary stars in the sample using astrometry constraints provided by Hipparcos (Perryman et al. 1997) and Gaia Early Data Release 3 (Gaia EDR3, Gaia Collaboration et al. 2021) proper motion measurements.
Our paper is organised as follows: in Sect. 2 we describe the physical characteristics of the host stars in our sample. In Sect. 3 we present an overview of the CORALIE observational campaign and of the search for radial velocity signals compatible with brown dwarf and stellar companions, while in Sect. 4 we obtain estimates of dynamical masses for a subset of the presented companions using Hipparcos and Gaia proper motion measurements. In Sects. 5 and 6 we respectively discuss a few systems especially worthy of note and the prospects for follow-up search for exoplanets in the systems comprising our sample, while in Sect. 7 we derive occurrence rate values for brown dwarfs and stellar companions in the CORALIE exoplanetary search sample, before concluding and discussing the results of this work in Sect. 8.
Host stars characteristics
Out of the 1647 stars composing the CORALIE exoplanet-search sample, we first exclude known SB2s identified as such either by archive query or by identifying double peaks in the cross-correlation function (CCF) of the radial velocity spectra. From this we further identify a subset of 218 stars for which we detect a robust radial velocity signal hinting at the presence of a massive companion having minimum mass Msin i ≥ 40 M Jup ; the radial velocity analysis that led to this selection is fully detailed in Sect. 3. For clarity and simplicity, we'll refer to this subsample of 218 stars hosting stellar companions that is the main focus of this work simply as "the binary sample" throughout this paper, while the larger CORALIE exoplanet-search sample will be referred to as "the CORALIE sample". In order to have an updated and homogeneous characterization of the physical properties of every star in the sample, we fit the stellar Spectral Energy Distribution (SED) of each star, using the MESA Isochrones and Stellar Tracks (MIST) (Dotter 2016;Choi et al. 2016) via the IDL suite EXOFASTv2 (Eastman et al. 2019). With this method, the stellar parameters are simultaneously constrained by the SED and the MIST isochrones, since the SED primarily constrains the stellar radius R and effective temperature T eff , while a penalty for straying from the MIST evolutionary tracks ensures that the resulting star is physical in nature (see Eastman et al. 2019, for more details on the method). For each star, we fitted all available archival magnitudes from Tycho B T and V T bands (Høg et al. 2000), Johnson's B, V and 2MASS J, H, K bands from the UCAC4 catalog (Zacharias et al. 2012), WISE bands (Cutri et al. 2021), and Gaia G, G BP and G RP bands (Gaia Collaboration et al. 2016), imposing Gaussian priors on each star's effective temperature T eff and metallicity [Fe/H] based on their respective values in the Anders et al. (2019) catalog, as well as on the stellar parallax based on Gaia EDR3 astrometric measurement (Gaia Collaboration et al. 2021).
The stellar parameters derived from the SED fitting for the binary sample are listed in Table 1, along with each star's archival spectral type, while the distribution of a few selected parameters for both the binary and CORALIE sample are plotted in Fig. 1. The median value of host star mass in our sample is of 0.94 M , and the average relative error on this parameter is 10%; median values and average errors for other stellar parameters of interest are 1.02 R and 4% for stellar radii, 5985 K and 3.72% for effective temperature, 4.41 and 1.41% for surface gravity log g, -0.27 dex and 95% for metallicity. In order to compare the distributions of the binary sample with those of the larger CORALIE sample we perform a Kolmogorov-Smirnov test for each stellar parameter derived from the SED fits, finding p-values < 0.05 for M (p = 0.008) R (p = 0.009) and log g (p = 0.014), suggesting that the underlying population of the binary sample is not the same as the overall CORALIE sample. Indeed, we find the median values of stellar mass, radius and surface gravity in the CORALIE sample to be 0.91 M , 0.95 R and 4.45, suggesting therefore that the underlying population of our binary sample consists of slightly more massive and larger stars than the underlying population of the overall exoplanetary search sample.
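For illustration, a minimal sketch of the kind of two-sample Kolmogorov-Smirnov comparison used above; the mass arrays below are randomly generated placeholders standing in for the SED-derived values, not the actual CORALIE measurements:

```python
import numpy as np
from scipy import stats

# Placeholder arrays of stellar masses (in solar masses) for the binary
# subsample and for the full CORALIE sample; the real values come from
# the SED fits described in the text.
rng = np.random.default_rng(0)
mass_binary_sample = rng.normal(0.94, 0.15, 218)    # hypothetical values
mass_coralie_sample = rng.normal(0.91, 0.15, 1647)  # hypothetical values

# Two-sample Kolmogorov-Smirnov test: a p-value below 0.05 suggests the
# two samples are not drawn from the same underlying distribution.
statistic, p_value = stats.ks_2samp(mass_binary_sample, mass_coralie_sample)
print(f"KS statistic = {statistic:.3f}, p-value = {p_value:.3f}")
```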
Radial velocity observations and analysis
Since its first observations in June 1998, the CORALIE spectrograph went through two significant upgrades in June 2007 and in November 2014 in order to increase the overall efficiency and accuracy of the instrument. Specifically, the 2007 upgrade consisted in the replacement of CORALIE's fiber link and cross-disperser optics (Ségransan et al. 2010), while the 2014 upgrade consisted in replacing CORALIE's fiber link with octagonal fibers (Chazelas et al. 2012) and adding a Fabry-Pérot calibration unit (Cersullo et al. 2017). Both interventions on the instrument introduced small offsets between the radial velocity measurements collected before and after each upgrade, depending on such parameters as the spectral type and systemic velocity of the observed star; during the course of the timeseries analysis we therefore consider CORALIE as three different instruments, each marked by the different upgrades. During the course of this work we'll refer to the original CORALIE dataset as CORALIE-98 (C98), to the dataset collected after the first upgrade as CORALIE-07 (C07) and to the one collected after the most recent upgrade as CORALIE-14 (C14). For selected stars in our sample, we additionally include in the analysis the measurements collected at lower precision (∼300 ms −1 ) with the CORrelation-RAdial-VELocities (CORAVEL) spectrometer (Baranne et al. 1979) between 1981 and 1998, especially when the CORAVEL data are numerous enough to help identify long-period signals and constrain the orbital parameters of the companions found.
As of the time of writing, over the course of almost 25 years of observations on the CORALIE sample we collected a total of 62600 radial velocity measurements for the 1647 stars in the sample, averaging 38 datapoints per star, with median photon-noise uncertainty and timespan of 5.21 ms −1 and 7698 d. In order to search for Doppler signals consistent with the presence of stellar companions in the CORALIE radial velocity timeseries, we follow an iterative process of investigation of successive dominant peaks in the radial velocity periodogram, as described in Delisle et al. (2016). As mentioned in Sect. 2, we once again note that stars found to be SB2s in the CORALIE sample are excluded from the following analysis.
First of all, we consider in our analysis only the 1497 stars for which a total of at least 10 CORALIE measurements have been collected over the years, to ensure robust identification of significant signals. We model instrumental offsets, noise and stellar jitter for each star in the CORALIE sample following the formalism detailed in Díaz et al. (2016) and Delisle et al. (2018), computing false alarm probabilities (FAPs) on the periodogram of the residuals as described in Baluev (2008). The periodogram's main peak is considered significant if characterized by a FAP lower than 0.1% and is modeled as a Keplerian, and the radial velocity residuals thus obtained are again investigated for significant signals, re-adjusting the jitter, noise and offsets at each step of the iterative process. This method is however valid only when enough measurements are available to compute a value of FAP; for those cases in which no robust value of FAP is obtained but a clear variation having a scatter in excess of the observation formal errors is present in the radial velocity measurements, we still model the data with a Keplerian model, assessing its significance using the difference between the Bayesian Information Criterion (∆BIC) of the Keplerian and flat models, computed as:
\mathrm{BIC} = k \log n - 2 \log \mathcal{L} , \quad (1)
with k as the number of model parameters, n the number of datapoints and log L the maximised log-likelihood of the model evaluated following Delisle et al. (2020). Additionally, the longer-period signals found during this search are also modelled as linear or quadratic trends instead, and are included in the final sample that is the main focus of this work only if ∆BIC > 10 in favour of the Keplerian solution. Overall, the signal search process is similar to that undertaken in parallel in Unger et al., in prep., focused instead on identifying new giant planets and brown dwarf companions in the CORALIE sample. At the end of this analysis, we identified a total of 218 stars featuring at least one signal compatible with a companion minimum mass Msin i ≥ 40 M Jup within 1σ, a threshold between giant planets and brown dwarfs we select following the findings reported in Sahlmann et al. (2011a), composing the binary sample representing the main focus of this work. The CORALIE radial velocity dataset for this sample is comprised of a total of 7226 CORALIE measurements, averaging 33 datapoints per star and featuring a median radial velocity uncertainty of 5.31 ms −1 and an observational timespan of 7581 d; all data products are publicly available at the Data and Analysis Center for Exoplanets (DACE) 1 . We run a Markov chain Monte Carlo (MCMC) analysis for each star in our sample based on the algorithm described in Díaz et al. (2016) and Delisle et al. (2016, 2018) in order to obtain the posterior distribution of the model parameters, using initial conditions drawn from the orbital solutions we obtained during our preliminary iterative signal search and computing each parameter's confidence intervals for a 68.27% confidence level. A summary of the bestfit radial velocity orbital solutions for all companions found in the sample is listed in the left portion of Table 2, the distributions of selected orbital parameters and mass ratio are plotted in the histograms shown in Fig 2, while the companion distribution on the Msin i-a parameter space is shown in Fig 3, and finally the phase folded radial velocity curves for every companion in the sample are collected in Appendix A.
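As a rough illustration (not the actual pipeline, which follows Díaz et al. 2016, Delisle et al. 2016, 2018 and Baluev 2008), the sketch below computes a Lomb-Scargle periodogram and its analytic false alarm probability with astropy, then applies the ∆BIC > 10 criterion of Eq. (1) to compare a flat model against a circular-orbit model fitted at the periodogram period; the time series is synthetic, and the Gaussian likelihood and circular model are simplifying assumptions:

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(1)
# Synthetic radial velocity series (placeholders, not CORALIE data):
# a circular 200 d signal of 50 m/s amplitude on top of 5 m/s white noise.
t = np.sort(rng.uniform(0.0, 5000.0, 60))          # epochs [d]
sigma = np.full_like(t, 5.0)                       # per-point errors [m/s]
rv = 50.0 * np.sin(2 * np.pi * t / 200.0) + rng.normal(0.0, sigma)

# Periodogram and analytic false alarm probability of the highest peak.
ls = LombScargle(t, rv, sigma)
frequency, power = ls.autopower(maximum_frequency=1.0)
fap = ls.false_alarm_probability(power.max())
best_period = 1.0 / frequency[np.argmax(power)]
print(f"best period ~ {best_period:.1f} d, FAP = {fap:.2e}")

def bic(residuals, errors, n_params):
    """BIC = k log n - 2 log L for a Gaussian likelihood (Eq. 1)."""
    n = residuals.size
    loglike = -0.5 * np.sum((residuals / errors) ** 2
                            + np.log(2 * np.pi * errors ** 2))
    return n_params * np.log(n) - 2.0 * loglike

# Flat model (a single offset) versus a circular-orbit model at the
# periodogram period, fitted by weighted linear least squares.
flat_resid = rv - np.average(rv, weights=1.0 / sigma**2)
phase = 2 * np.pi * t / best_period
design = np.column_stack([np.ones_like(t), np.cos(phase), np.sin(phase)])
coeffs, *_ = np.linalg.lstsq(design / sigma[:, None], rv / sigma, rcond=None)
kep_resid = rv - design @ coeffs

delta_bic = bic(flat_resid, sigma, 1) - bic(kep_resid, sigma, 3)
print(f"Delta BIC (flat - circular) = {delta_bic:.1f}; keep only if > 10")
```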
It can be seen that the population of the companions having Msin i >40 M Jup identified by radial velocity in the sample peaks at ∼5.92 au and ∼290 M Jup (around 0.27 M ) and that the orbital elements cover a large variety, with periods ranging from 4 d (for HD196998B) to 65213 d (HD3795B), semimajor axes from 0.045 au (HD196998B) to 36.40 au (HD3795B), minimum masses from 41.60 M Jup (∼0.04 M , HD30774B) to ∼0.71 M (HD181199B) and eccentricity values from fully circular (HD207450B) to 0.95 (HD137763B). It can also be seen that all the companions in the sample have a minimum mass below the solar mass.
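For reference, a small sketch of how a companion's minimum mass Msin i can be recovered from an orbital solution through the binary mass function f(m) = P K³ (1 − e²)^(3/2) / (2πG) = (M₂ sin i)³ / (M₁ + M₂)², solved here by fixed-point iteration; the input values are placeholders loosely inspired by the text, not an actual entry of Table 2:

```python
import numpy as np

G = 6.674e-11            # m^3 kg^-1 s^-2
M_SUN = 1.989e30         # kg
M_JUP = 1.898e27         # kg
DAY = 86400.0            # s

def minimum_mass(K, P_days, e, m_primary_msun, n_iter=50):
    """Solve the binary mass function
       f(m) = P K^3 (1 - e^2)^(3/2) / (2 pi G) = (M2 sin i)^3 / (M1 + M2)^2
    for M2 sin i by fixed-point iteration (sin i = 1 gives the minimum mass).
    K in m/s, P in days, primary mass in solar masses; returns M2 sin i in M_Jup."""
    P = P_days * DAY
    m1 = m_primary_msun * M_SUN
    fm = P * K**3 * (1.0 - e**2) ** 1.5 / (2.0 * np.pi * G)
    m2 = (fm * m1**2) ** (1.0 / 3.0)          # first guess assuming M2 << M1
    for _ in range(n_iter):
        m2 = (fm * (m1 + m2) ** 2) ** (1.0 / 3.0)
    return m2 / M_JUP

# Placeholder orbital solution: K = 3.5 km/s, P = 1374 d, e = 0.275
# around a 0.88 Msun primary.
print(f"M sin i ~ {minimum_mass(3500.0, 1374.0, 0.275, 0.88):.1f} M_Jup")
```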
Another point of interest is the distinction between single-lined binaries (SB1) and double-lined spectroscopic binaries (SB2) in the sample. Following Halbwachs et al. (2003) we can use the minimum mass ratio parameter q min between secondary and primary component to identify possible SB2s as those having q > 0.8. By doing so we find no binary systems in the sample with such a high value of q as characterized by our radial velocity solutions.
To search for differences in the properties of inner (a <5 au) and outer (a ≥5 au) companions we again perform a Kolmogorov-Smirnov test of the orbital elements of the detected companions, finding p-values only marginally lower than 0.05 for minimum mass (p = 0.03) and eccentricity (p = 0.04), sug-
Finally, we find three stars in the sample to host more than one companion. As described by our radial velocity solution, HD94340 is a triple star system in which the 1.28 M primary is orbited by a Msin i ∼0.08 M companion on a 6.84 d orbit and by a Msin i ∼0.07 M body with an orbital period of ∼1123 d, while HD206276 hosts a Msin i ∼0.09 M companion on a 32 d orbit and a Msin i ∼0.16 M at 1374 d, and HD196885 hosts both an inner giant planet with minimum mass of 1.95 M Jup and orbital period of 1330 d and an outer stellar companion with Msin i ∼0.26 M on a 14912 d orbit. Two of these systems are already known in the literature (see Tokovinin et al. 2006 for HD94340 and Correia et al. 2008 for HD196885) but in the present work we provide updated orbital parameters for all components, especially with the use of astrometry constraints detailed in Sect. 4, while for HD206276 we provide an updated solution for the outer stellar companion, whose presence was already hinted at by astrometric observations, and present a new inner stellar companion (see Sections 5.2-5.4 for more details on the specific systems).
Astrometric constraints
In order to achieve a more complete characterization of the orbital parameters of the stellar companions of the CORALIE sample identified in our radial velocity analysis we use the variations in proper motion measurements δµ in conjunction with the radial velocity data already analysed in Sect. 3 to derive precise dynamical masses and orbits for the massive companions in the sample. Specifically, we use the proper motion anomalies between the Hipparcos (Perryman et al. 1997) and Gaia (Gaia Collaboration et al. 2016) epochs, which have been shown to be able to detect accelerations as small as a few µas yr −2 (Sahlmann 2016;Brandt 2018). Such an approach has been used more and more commonly in recent years to characterize both stellar and substellar companions (see e.g. Calissendorff & Janson 2018;Snellen & Brown 2018;Kervella et al. 2019a,b,c, 2020, 2022;Brandt et al. 2019, 2020, 2021a;Damasso et al. 2020a,b;Makarov et al. 2021a,b;Venner et al. 2021;Feng et al. 2021;Llop-Sayson et al. 2021), especially with the advent of more and more precise proper motion measurements provided by the different Gaia Data Releases.
In this work we use the proper motion measurements provided by Gaia EDR3, which are on average about 3-4 times more precise than those provided by the previous Gaia DR2 (Gaia Collaboration et al. 2021;Lindegren et al. 2021;Brandt 2021). More specifically, we use the variations ∆µ α,δ from a purely linear motion as reported in the Hipparcos-Gaia Catalog of Accelerations (HGCA, Brandt 2018, 2021), in which the Hipparcos and Gaia EDR3 catalogues have been cross-calibrated to account for systematics and bring all proper motions into the Gaia EDR3 reference frame. For this purpose, we use the open source code orvara (Brandt et al. 2021b), an MCMC orbit fitting code that is able to fit Keplerian orbits to any combination of HGCA proper motion variations, absolute astrometry, relative astrometry, and radial velocities to obtain precise dynamical masses and orbital elements. We use a version of orvara that has been especially modified to accept priors on orbital periods and semimajor axes 2 , in addition to the priors on primary mass and stellar jitter already included in the original version of the code, to help in achieving better convergence and more constrained orbital elements especially for intermediate-separation binaries.
We find 40 stars in our binary sample not to be included in the HGCA, leaving us with 178 stars for which the simultaneous use of proper motion anomalies and radial velocity is in principle viable. For each of them, we impose priors on the primary mass equal to the stellar masses obtained with the SED fits (see Sect. 2), and on orbital periods equal to the periods retrieved from the radial velocity analysis (see Sect. 3).
As the use of proper motion anomaly is based on two measurements separated by almost 25 yr, the method is clearly more efficient for long orbital periods (Kervella et al. 2019a); when analysing in such a manner the entirety of our sample, which as discussed in Sect. 3 features a large variety of orbital periods, special cautions must be taken. This is particularly evident when comparing the semimajor axes for the companions in the sample as characterized by radial velocities alone with those resulting from the orvara simultaneous fit of proper motion variations and radial velocities. Such a comparison, shown in Fig. 4, highlights a large discrepancy between the values obtained by the two methods for the companions that radial velocity characterized as having shorter separations, and that orvara typically overestimates the orbital periods of such close-in stellar companions. For the purposes of this work, we identify a_RV ∼1 au as a reasonable threshold above which astrometry constraints are indeed helpful in constraining the orbital elements of the stellar companions in our sample, and we decide to focus our following analysis only on the 132 stars in our sample for which our radial velocity analysis identified a companion with semimajor axis greater than 1 au. In the cases of HD206276 and HD94340, being both triple star systems hosting one companion below 1 au, we perform the orvara analysis using as radial velocity timeseries the residuals obtained after subtracting the Keplerian signal of the inner stellar companion at 0.19 and 0.08 au respectively, so that the timeseries used for the simultaneous fit contain in principle only the contribution of the outer companions in the systems. Lastly, we must note that binary systems with mass ratios q > 0.6 could, in principle, be affected by the luminosity of the secondary companion shifting the photocenter orbit from the primary orbit. To account for this, we analysed all the CCFs for the systems having such mass ratios, failing to detect any secondary peak; this corresponds to a lower limit on magnitude ratio of 2.5-3.0, which would lead at most to a barycentric semimajor axis underestimation of 5-9%.
[Figure caption residue (likely Fig. 5): ... of the stellar companions orbiting the 132 stars in the sample for which we performed simultaneous radial velocity and proper motion anomaly fits. The blue dots and histogram respectively show the parameter space position and distribution of semimajor axes and minimum mass as retrieved by the radial velocity analysis alone, while the respective red plot elements refer instead to the results of the simultaneous astrometry and radial velocity fits, and therefore true dynamical mass instead of minimum mass. The components of multiple systems are connected by gray dash-dotted lines, while the horizontal dashed brown, orange and yellow lines respectively indicate the brown dwarf (40 M Jup), dwarf star (80 M Jup) and solar-mass (1047.58 M Jup) thresholds.]
A summary of the best-fit solutions obtained for these 132 systems using orvara is listed in Table 2, while Fig 5 compares the distributions of the stellar companions as characterized by radial velocity alone and the simultaneous use of radial velocity and proper motion anomaly. Generally speaking, this plot shows again the good agreement in orbital separations for the companions in the sample between the two different solutions, with the semimajor axis distribution from the radial velocity fits alone and the one obtained with the addition of proper motion anomalies both peaking at ∼6.30 au. Of clear interest is also the comparison between the mass distributions obtained by the two solutions, as the determination of orbital inclinations derived from the use of astrometry permits the derivation of the true dynamical mass of the stellar companions, shifting the peak of the mass distribution upward to ∼343 M Jup (∼0.33 M ) from the peak value of ∼262 M Jup (∼0.25 M ) for the Msin i distribution derived from radial velocity alone. Fig 6 shows a comparison between the values of selected orbital parameters of Msin i >40 M Jup companions as obtained by radial velocities and those obtained using also astrometry constraints, while Fig 7 shows the distribution of the same parameters as obtained by the simultaneous radial velocity and proper motion anomaly fits. Once again, we generally find good agreement between the orbital separations found from the two solutions and between the eccentricities as well. As done for the orbital parameters obtained from the radial velocity solutions (see Sect. 3) we again perform Kolmogorov-Smirnov tests to search for significant differences in the distributions of the properties of inner (1≤ a <5 au) and outer (a ≥5 au) companions, finding this time no significant difference between the respective distributions of true mass (p = 0.39) and eccentricities (p = 0.66).
The true masses derived from the simultaneous fit once again show the majority of the companions found in the sample to lie below the solar mass value within the respective uncertainties, although the precise estimate of orbital inclination provided by the astrometry measurements allows some companions to reveal themselves to have a true mass greater than 1 M , namely the ones found orbiting HD27019 (M = 1.
[Figure caption residue (likely Fig. 7): the minimum mass, eccentricity and q distributions for inner (1 ≤ a < 5 au) and outer (a ≥ 5 au) companions found in the sample, as obtained by the joint radial velocity and proper motion anomaly analysis, are shown in yellow and green respectively.]
By virtue of the true masses derived, this time we find four systems to have q > 0.8, namely HD3795 (q = 0.86), HD39012 (q = 0.81), HD181199A (q = 0.90) and HD223084 (q = 0.90), which are therefore possible undetected blended binaries.
[Table 2 notes: the full table is available at the CDS; a portion is shown here for guidance regarding its form and content.]
In addition to the companions detected in this sample we also report some results from Unger et al., in prep., in which a similar joint analysis of radial velocity and astrometric measurements is performed on the planetary companions detected in the CORALIE sample. Specifically, the possible brown dwarf companions found to be orbiting stars HD162020 and HD112758, having respective minimum masses of 14.96 ± 0.53 M Jup and 32.7±1.9 M Jup, are found to have true masses of 0.392±0.005 M and 0.245 ± 0.001 M . While we shall not go into the details of the orbital solutions of these two companions in the present work as they are thoroughly analysed in Unger et al., in prep., these additional stellar objects in the CORALIE sample will be considered here in the occurrence rates analysis detailed in Sect. 7.
A few notable cases
Brown dwarfs in the sample
Following the radial velocity analysis detailed in Sect. 3, we find 28 companions in the sample to have a minimum mass between 40 and 80 M Jup and that are therefore describable as possible brown dwarfs. While the low number of such companions found in the sample does not allow for robust statistical analysis, it is possible to note that most of them are found at orbital separations larger than 0.5 au from the primary star, in a further example of the brown dwarf desert observed around solar-type stars (Grether & Lineweaver 2006;Shahaf & Mazeh 2019;Kiefer et al. 2019). As the simultaneous fit of radial velocities and proper motion anomalies performed and discussed in Sect. 4 allows for the determination of orbital inclination and therefore true mass of the companions, it is of clear interest to discuss how many and which of these possible brown dwarfs are confirmed to be as such by this analysis and which are instead revealed to be stellar companions. It is first of all important to note that the primary stars hosting three of these possible brown dwarf companions (namely those found orbiting HD53680, HD153284, HD184860A) are not included in the HGCA, and therefore no astrometric analysis is possible for these stars, and that seven additional possible brown dwarf companions (HD3277, HD28454, HD30774, HD89707, HD151528, HD164427A and HD219709) are found to be on orbits tighter than the 1 au threshold that we have set for reliable analysis with orvara; this therefore leaves us with 18 such companions for which we are able to determine true masses and confirm or reject their nature as brown dwarfs.
Of these 18 objects, our use of proper motion anomalies confirms 7 companions (those orbiting HD4747, HD17289, HD30501, HD74014, HD112863, HD167665 and HD206505) to remain in the brown dwarf mass range, while the remaining 11 (orbiting HIP22059, HD17155, HD20916, HD43848, HD78746, HD87359, HD94340, HD119559, HD154697, HD195010 and HD217580) are found to have a true mass above 80 M Jup and are therefore revealed to be stellar companions. Of the latter, the companion characterized by the larger difference in minimum and true mass is the one found orbiting star HD119559, which starting from a minimum mass of 76.267 +4.149
[Figure caption residue (likely Fig. 8): radial velocity timeseries and phase folded model curves for triple stellar system HD206276, with CORAVEL, CORALIE98, CORALIE07 and CORALIE14 measurements shown in orange, blue, green and purple respectively.]
but they will be considered in the present work as stellar companions in the occurrence rates analysis detailed in Sect. 7.
A new triple star system: HD206276 (HIP107143)
The possible presence of a long-period massive companion around K3 V star HD206276 was first noted in Goldin & Makarov (2007), in which the use of a genetic optimization-based algorithm allowed to obtain additional orbital solutions for a subsample of Hipparcos stars with previous stochastic solutions. In the cited work, HD206276 was reported as possibly hosting a massive companion on a 1338 +171 −73 d orbit with a 0.14 +0.21 −0.11 eccentricity and an orbital inclination of 40 ± 6 deg. However, no radial velocity measurements have been published ever since or used to confirm that companion or to provide a better orbital solution for the system.
Within the scope of the CORALIE exoplanetary search, we have observed HD206276 over a total of 6932 d collecting 35 radial velocity measurements (divided as 18 C98, 3 C07 and 14 C14); the additional usage of two CORAVEL measurements brings the total of datapoints available for our analysis to 37 over the course of 10584 d. We identify in the timeseries periodogram a highly significant peak (FAP=4.3 · 10 −6 ) at 32 d, with a one-Keplerian residual peak present at 1363 d with FAP=3.1 · 10 −4 corresponding to the astrometric signal reported in Goldin & Makarov (2007), with no further significant signals present in the two-Keplerian residuals. We therefore present our two-Keplerian bestfit model (shown in Fig. 8) with which we confirm the presence of the outer companion having an orbital period P C = 1374.27 ± 0.97 d, semiamplitude K C = 3.50 ± 0.03 kms −1 and eccentricity e C = 0.275 ± 0.008, while we also report the detection of an inner companion with P B = 32.005 ± 0.0002 d, K B = 6.8 ± 0.03 kms −1 and eccentricity e B = 0.255 ± 0.003. By virtue of the primary star having a mass of 0.88 +0.06
An updated triple star system: HD94340 (HIP53217)
First hints on the multiple nature of the G4 V star HD94340 were reported in Makarov & Kaplan (2005), in which the presence of a stellar companion to the primary star is suggested by the detection of a large proper motion acceleration between Hipparcos and Tycho-2 measurements. More specifically, using preliminary results from the CORALIE survey, Tokovinin et al. (2006) reports the presence of a stellar companion with an orbital period of 6.8 d and hints of a possible second massive companion at 1200 d; a joint analysis of Hipparcos proper motion anomaly, adaptive optics and speckle interferometry presented in Tokovinin et al. (2012) confirm the triple nature of the system, win an inner companion of 6.8 d and an outer one with an estimated orbital period of 3.3 yr period and 40mas axis unresolved by speckle interferometry.
Over the course of our survey, we collected a total of 63 radial velocity measurements (17 CORAVEL, 10 C98, 20 C07 and 16 C14) with an observational timespan of 11772 d. The timeseries periodogram features a high significance (FAP=1.8·10 −45 ) peak at 6.84 d and the one-Keplerian solution residuals show an additionally highly significant (FAP=9.6 · 10 −19 ) signal at 1120 d, both signals clearly consistent with literature values, with no further residual significant peak and no correspondence between the identified significant peaks and stellar activity signals among the activity indicators analysed. According to our two-Keplerian bestfit model (shown in the top two rows of Fig. 9), we find the inner companion to have an orbital period P B = 6.84 ± 0.01 d, semiamplitude K B = 8.182 ± 0.003 kms −1 and eccentricity e B = 0.009 ± 0.001, while for the outer com-Article number, page 12 of 34 D. Barbato et al.: Brown dwarfs and stellar companions unveiled by radial velocity and astrometry panion we find P C = 1122.96 +0.60 −0.48 d, K C = 1.356 +0.008 −0.010 kms −1 and e C = 0.305 ± 0.004; having obtained from the SED fit of the primary star (see Sect. 2) a stellar mass of 1.28 +0.15 −0.19 M we derive values of minimum masses and semimajor axes of 90.08 +8.71 −9.13 M Jup and 0.08 ± 0.01 au for the inner companion and of 77.85 +7.54 −7.93 M Jup and 2.34 +0.11 −0.12 au for the outer companion. As mentioned in Sect. 4 and already done for HD206276, we subtract from the radial velocity timeseries the Keplerian signal of the 6.84 d inner companion, performing the orvara fit using the thusly obtained residuals. From the results of this simultaneous radial velocity and proper motion anomaly fit we find for the outer companion a true mass of M C = 269 +51 −45 M Jup (corresponding to 0.26 +0.05 −0.04 M ), inclination i C = 18.2 +2.8 −2.3 deg, with values of semimajor axis (a C = 2.44 +0.11 −0.12 au) and eccentricity (e C = 0.332 +0.023 −0.022 ) in good agreement with those derived from fitting the radial velocity measurements alone.
We additionally note that the outer stellar companion in this system is part of the Gaia DR3 astrometric orbital solutions validated in Holl et al. (2022), in which it is characterized by having an orbital period of 1213.8 ± 22.0 d and eccentricity 0.30 ± 0.01. The inclination corresponding to this solution is i C = 20.9 +1.1 −1.3 deg, the companion true mass is M C = 0.37 ± 0.03 M , and the relative semi-major axis is a C = 2.63 +0.12 −0.13 au. This astrometric-only solution would correspond to a minimum mass of M C sin i C = 137 ± 15 M Jup or 0.131 ± 0.015 M , which we note is larger by a factor of 1.75 than the radial-velocity solution. The parallax is 22.508 ± 0.036 mas instead of 19.72 ± 0.56 mas for the Gaia DR3 single-star solution.
Moreover, we additionally analyzed jointly the radial velocity and Hipparcos epoch astrometric time series using the kepmodel python package (see Delisle et al. 2016;Delisle & Ségransan 2022) and the samsam MCMC sampler (e.g. Delisle et al. 2018). While the inclination of the inner 6.8 d companion remains, as expected, unconstrained, the outer 1100 d companion is detected by Hipparcos and its inclination and true mass are constrained. We find an inclination of i C = 13.2 ± 0.8 deg, corresponding to a true mass of M C = 0.39±0.04 M . The relative semi-major axis is a C = 2.5 ± 0.1 au (53.8 ± 2.7 mas), and we find an orbital period for the outer companion of 1122.9±0.4 d, an eccentricity of 0.305 +0.004 −0.003 , ω C = 98.9 +0.7 −0.8 deg, Ω = 8.6±4.0 deg and a revised parallax of 21.4 +0.7 −0.8 mas instead of the 19.58 ± 1.46 mas reported for the Hipparcos single-star solution.
We find some discrepancy between the joint Hipparcos and radial velocity solution, the orvara solution, and the Gaia DR3 astrometric orbit solution, in particular between the respective outer companion orbital inclination values. We note that the orbital period is slightly longer than the Gaia DR3 timespan, which makes the Gaia DR3 solution typically less accurate. Moreover, the errorbars of the Gaia DR3 solution are probably underestimated, which is confirmed by the poor matching between the Gaia DR3 period and the well determined RV period (about 4σ). Finally, in this regime of period, the orvara solution is expected to be more sensitive to the Gaia scanning-law, which could lead to inaccurate results. We therefore adopt the joint Hipparcos and radial velocity solution for this system.
A planet-hosting system: HD196885 (HIP 101966)
The presence of a possible stellar companion to the F8 V star HD196885 was first reported in Chauvin et al. (2006, 2007) as a result of a near-infrared adaptive optics survey, while Correia et al. (2008) additionally detected a planetary companion having a minimum mass of 3 M Jup and orbital period of 1349 d using CORAVEL, CORALIE and ELODIE radial velocity measurements. Follow-up relative astrometry and radial velocity observations (Fischer et al. 2009;Chauvin et al. 2011) confirmed the binary nature of the system as well as the orbital parameters of the planetary body found around the primary star.
In the scope of this work, we analyse 18 CORAVEL and 41 CORALIE radial velocity measurements (divided as 33 C98, 1 C07 and 7 C14), over a total of 14574 d, and we are able to provide yet another update on the HD196885 multiple system. The timeseries periodogram features a highly significant region (FAP=1.08 · 10 −8 ) at periodicity higher than 16800 d, indicating either a Keplerian signal longer than our observational timespan or a long-term drift trend, and a residual peak at 1323 d with a FAP level of 1 · 10 −3 after subtracting a one-Keplerian solution, with no further significant residual signals. In our two-Keplerian solution, shown in the top row of Fig. 10, we find the outer stellar companion to have orbital period P B = 14912.14 +1.13
The simultaneous usage of radial velocity and proper motion measurements therefore allows us to confirm the planetary and stellar nature of the inner and outer companion, respectively. Following Tokovinin (1993), we additionally compute the relative orientation of angular momenta ϕ to try and provide further information on the system's configuration. However, due to the highly unconstrained value of the longitude of the ascending node of the inner planetary companion (Ω b = 169 +142 −121 deg), we obtain a relative orientation of ϕ = 87 deg, a value too close to the 90 deg threshold proposed in Tokovinin (1993) to provide any further robust statement on the configuration of the HD196885 system components.
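For clarity, the relative orientation of the two orbital angular momenta follows the standard mutual-inclination relation cos ϕ = cos i_b cos i_B + sin i_b sin i_B cos(Ω_b − Ω_B); a small sketch, with placeholder angles rather than the fitted posteriors, is:

```python
import numpy as np

def mutual_inclination(i1_deg, i2_deg, Omega1_deg, Omega2_deg):
    """Relative orientation of two orbital angular momenta:
       cos(phi) = cos(i1) cos(i2) + sin(i1) sin(i2) cos(Omega1 - Omega2)."""
    i1, i2 = np.radians(i1_deg), np.radians(i2_deg)
    d_omega = np.radians(Omega1_deg - Omega2_deg)
    cos_phi = np.cos(i1) * np.cos(i2) + np.sin(i1) * np.sin(i2) * np.cos(d_omega)
    return np.degrees(np.arccos(np.clip(cos_phi, -1.0, 1.0)))

# Placeholder angles only; in practice phi is computed over the full MCMC
# posterior so that poorly constrained nodes propagate into its uncertainty.
print(f"phi = {mutual_inclination(116.0, 120.0, 169.0, 79.0):.1f} deg")
```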
Prospects for exoplanetary search
As discussed in Sect. 1, the binarity of a stellar system has a deep influence on the formation and evolution of planetary companions, especially when the orbital separations between the stellar components of the systems are below a few hundred au, in which case most studies in the literature suggest that the formation and survival of more massive and close-in exoplanets than those found around single stars (see e.g.
[Figure caption residue (likely Fig. 10): observed and fitted proper motions in right ascension and declination. The thick black line is the best-fit orbit obtained by the simultaneous fit of radial velocities and proper motion anomalies, with 50 orbits randomly drawn from the posterior distributions and color-coded according to the mass of the outer massive companion; the proper motion measurements from Hipparcos and Gaia EDR3 are shown in orange. In the radial velocity phase folded plot for HD196885 b the low-precision CORAVEL data are not shown in order to better show the radial velocity curve of the low-amplitude inner companion.]
(2019) derives a binary fraction of 79.0 +13.2 −14.7 % for stars hosting giant planets and brown dwarfs on orbits shorter than 1 au, again supporting the critical influence that the presence of stellar companions has on the formation and evolution of such planetary systems. Similarly, based on a literature search for binary companions to exoplanet-hosting stars within 200 pc, Fontanive & Bardalez Gagliuffi (2021) finds that while exoplanets found on circumprimary orbits in very wide binary systems show similar physical properties to those around single stars, tighter binary systems with separations up to a few hundred astronomical units tend to promote instead the formation and survival of more massive giant planets and brown dwarfs with shorter orbital periods and typically in single-planet configurations. The same work also suggests that the properties of close-in exoplanets in wide binary systems are consistent with them being the result of formation via fragmentation in a gravitationally unstable disc. This result is further supported by the simulations detailed in Cadman et al. (2022), which find that the intermediate separations between the components of a binary stellar system that promote fragmentation are consistent with those featured in the systems displaying an excess of close-in giant planets and brown dwarfs (Wang et al. 2014;Kraus et al. 2016;Ngo et al. 2016). While we have detected no new robust exoplanetary signal in the radial velocity analysis of our binary sample, the low-to-intermediate orbital separations we derived for the stellar companions discussed in this work, ranging from ∼0.045 au to ∼36.40 au, make our sample a significant opportunity for the search of exoplanetary companions in binary systems and a suitable testing field for planetary formation theoretical models, provided a larger number of radial velocity measurements with high enough density and precision are collected to successfully disentangle the stellar companion's radial velocity signal from that of any lower-mass body that can orbit the system.
Planetary stability in the binary sample
In order to provide a first assessment of the regions in which exoplanets could be found on stable orbits in the systems of our sample, we use the analytical stability criteria provided in Ballantyne et al. (2021) and based on the numerical simulations performed in Holman & Wiegert (1999).
Considering a planet on a circumprimary (or S-type) orbit in a binary system, Ballantyne et al. (2021) defines the critical semimajor axis a cS as the maximum stable distance from the primary star:
a_{cS} = a_{bin} \left( 0.464 - 0.38\,\mu - 0.631\,e + 0.586\,\mu e + 0.15\,e^2 - 0.198\,\mu e^2 \right) \quad (2)
being a bin and e the semimajor axis and eccentricity of the binary, and where:
\mu = \frac{m_s}{m_p + m_s} \quad (3)
with m_p and m_s as the masses of the primary and secondary stellar components of the binary. Similarly, for a circumbinary (or P-type) orbit the critical semimajor axis a_cP, being the minimum stable distance from the binary system, is given by:
a_{cP} = a_{bin} \left( 1.6 + 5.1\,e - 2.22\,e^2 + 4.12\,\mu - 4.27\,\mu e - 5.09\,\mu^2 + 4.61\,\mu^2 e^2 \right) \quad (4)
Finally, considering a planet on a circumsecondary orbit, the maximum stable distance from the secondary star a_cS,sec is given by using again Eq. 2 with instead:
\mu = \frac{m_p}{m_p + m_s} \quad (5)
As noted in Ballantyne et al. (2021), these stability criteria are to be taken as a first-order indication of the circumprimary or circumbinary stability of a planet, since there is a variety of mechanisms that can further enhance or disrupt the stability of such orbits (see e.g. Pilat-Lohinger & Dvorak 2002;Pilat-Lohinger et al. 2003;Parker & Quanz 2013;Lam & Kipping 2018;Quarles et al. 2018, 2020;Kong et al. 2021, and references therein). However, since a full dynamical characterization of the binary systems in our sample is beyond the scope of the present work, we assume the validity of the defined stability criteria for the purpose of producing a first estimate of the stability regions of the considered systems.
Additionally, we consider as the minimum stable distance from the primary star the Roche limit of the host star, defined as:
d_{Roche} = 2.423\, R \left( \frac{\rho}{\rho_{pl}} \right)^{1/3} \quad (6)
where R and ρ are the radius and density of the primary star and ρ_pl is the density of the orbiting planet. We use for each primary component the stellar parameters derived from the SED fits detailed in Sect. 2, while for the planetary density used to compute the Roche limits we use the Earth density ρ ⊕ as a lower limit scenario. The values of stability limits d_Roche, a_cS and a_cP computed for the systems in the sample are listed in Table 3.
We first focus our attention on the binary systems for which we obtained true mass values from the simultaneous fit of radial velocities and proper motion anomalies performed with orvara and detailed in Sect. 4, ignoring for the moment the 40 systems absent from the HGCA and the 44 systems with orbital period too short to be detected by proper motion anomalies. We therefore consider in the following analysis only the 132 systems for which we have derived values of companion true dynamical mass. Additionally we note that the dynamical stability of a planet in the triple systems HD206276 and HD94340 is likely beyond the validity of the Ballantyne et al. (2021) criteria and therefore warrants further study and focused analysis such as numerical simulations, and that the stability assessments presented here for these two systems should then be interpreted as first-order estimates. Fig. 11 shows the system architectures compared with that of the Solar System, with the stability regions represented by green bands.
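A compact sketch of the stability limits of Eqs. (2)-(6) as applied above; the binary parameters used in the example are placeholders, and, as noted, these expressions are only first-order criteria:

```python
import numpy as np

def critical_sma_s_type(a_bin, e, m_p, m_s):
    """Maximum stable circumprimary (S-type) semimajor axis, Eq. (2)."""
    mu = m_s / (m_p + m_s)
    return a_bin * (0.464 - 0.38 * mu - 0.631 * e + 0.586 * mu * e
                    + 0.15 * e**2 - 0.198 * mu * e**2)

def critical_sma_p_type(a_bin, e, m_p, m_s):
    """Minimum stable circumbinary (P-type) semimajor axis, Eq. (4)."""
    mu = m_s / (m_p + m_s)
    return a_bin * (1.6 + 5.1 * e - 2.22 * e**2 + 4.12 * mu - 4.27 * mu * e
                    - 5.09 * mu**2 + 4.61 * mu**2 * e**2)

def critical_sma_s_type_secondary(a_bin, e, m_p, m_s):
    """Maximum stable circumsecondary semimajor axis: Eq. (2) with mu of Eq. (5)."""
    return critical_sma_s_type(a_bin, e, m_s, m_p)  # swap the roles of the two stars

def roche_limit(r_star_rsun, rho_star, rho_planet):
    """Roche limit of Eq. (6), in au, for a stellar radius in solar radii and
    densities given in the same (arbitrary) units."""
    RSUN_AU = 0.00465047
    return 2.423 * r_star_rsun * RSUN_AU * (rho_star / rho_planet) ** (1.0 / 3.0)

# Placeholder binary: 1.0 + 0.4 Msun stars, a_bin = 8 au, e = 0.3.
a_bin, e, m_p, m_s = 8.0, 0.3, 1.0, 0.4
print(f"S-type stable out to  ~{critical_sma_s_type(a_bin, e, m_p, m_s):.2f} au")
print(f"P-type stable beyond  ~{critical_sma_p_type(a_bin, e, m_p, m_s):.2f} au")
print(f"circumsecondary limit ~{critical_sma_s_type_secondary(a_bin, e, m_p, m_s):.2f} au")
print(f"Roche limit           ~{roche_limit(1.0, 1.41, 5.51):.4f} au")
```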
From our stability estimates, we note that only 4 systems in our sample do not allow for stable S-type orbits, namely the HD58696, HD43848, HD139696 and HD118598 systems, due to the high eccentricities (0.98, 0.97, 0.98 and 0.95 respectively) of their secondary components as derived by the simultaneous radial velocity and proper motion analysis. Additionally, we find both triple systems HD206276 and HD94340 to have a very narrow stable region for S-type orbits, spanning below 0.05 au and therefore unlikely to host circumprimary planetary companions, especially by virtue of the presence of the two stellar companions discussed in Sects. 5.2-5.3. The system featuring the widest S-type stability region in the sample is HD30517, with said region spanning 5.16 au, while the system for which the secondary companion has the largest impact on planetary stability is instead HD3795, in which no planetary orbit appears to be stable from an orbital separation of 5.07 au to 144.91 au from the primary star.
The latter system is also the one characterized by the widest minimum P-type stable orbit in the sample, while the system with the tightest circumbinary stable orbit is HD120559, having an a cP of 2.58 au. Finally, focusing on planetary companions orbiting the secondary component of our binary systems, the largest stable circumsecondary region in our sample is found in the HD3795 system, with an a cS,sec of 4.65 au.
Considering instead the 86 binary systems in the sample for which we have only a radial velocity orbital solution and therefore only minimum mass values for the companions detected in these systems, we apply the same stability criterion using the value of Msin i as the m s in Eqs. 2-4; the stable regions thus obtained are then to be considered estimates of the maximum range of semimajor axes in which stable orbits for planetary companions are possible. The architecture of these systems is then shown in Fig. 12, from which it can be noted that 3 systems (HD8129, HD89707 and HD137763) do not allow for stable S-type orbits, again by virtue of the large eccentricities (0.95 for all of them) of the detected companions.
Detection limits
In order to investigate the detection capabilities of the CORAVEL and CORALIE data analysed so far, we compute the detection limits for the binary sample that is the object of the present work, especially focusing on the substellar (Msin i < 80 M Jup ) regime.
To this end we follow an injection and retrieval scheme similar to the one pursued in Barbato et al. (2018), injecting synthetic companion signals into the radial velocity residual timeseries of each star in the sample, obtained by subtracting from the original radial velocity data the contribution of the companions detected and characterized as described in Sect. 3. The synthetic signals are generated over a 40×40 grid of semimajor axes a inj evenly spaced in logarithm from 0.01 to 100 au and minimum masses M inj similarly spaced from 2 M ⊕ to 80 M Jup . For each of the 1600 (a inj , M inj ) realizations we generate and inject into the residuals 500 synthetic radial velocity curves with randomly drawn values of mean longitude λ 0,inj , eccentricity e inj and argument of periastron ω inj . Lastly, we add to each synthetic dataset thus generated a random Gaussian noise with an amplitude equal to the standard deviation of the instrumental uncertainties of the original timeseries. Each of the resulting 8×10^5 synthetic radial velocity timeseries is then fitted with a flat model and a Keplerian one, and the injected signal is considered as detected only if the ∆BIC between the two models is at least 10 points in favour of the Keplerian model. The resulting detection limit maps obtained for each star in our binary sample are collected in Appendix B, highlighting both the dynamically unstable regions estimated in Sect. 6.1 and the parameter space region of additional companions that we can exclude based on the available radial velocity measurements.
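For readers who want to reproduce the flavour of this procedure, a deliberately simplified sketch of a single grid cell is given below: it injects circular signals and compares a constant model against a sinusoid at the injected period through the ∆BIC ≥ 10 criterion, whereas the actual analysis draws eccentric orbits and fits full Keplerian models. All variable names and numerical settings are illustrative assumptions rather than the exact pipeline used in the paper.

```python
import numpy as np

def recovered_fraction(t, rv_resid, sigma, a_inj_au, msini_mjup, m_star=1.0,
                       n_trials=500, threshold=10.0):
    """Fraction of injected signals recovered with Delta(BIC) >= threshold.
    Simplified: circular orbits and a sinusoidal fit stand in for the full
    Keplerian modelling (eccentric orbits, drawn omega and lambda_0) of the paper."""
    rng = np.random.default_rng(1)
    period = 365.25 * np.sqrt(a_inj_au**3 / m_star)                 # days, Kepler's third law
    k_amp = 28.4 * msini_mjup / np.sqrt(m_star * a_inj_au)          # m/s, circular-orbit semiamplitude
    n, n_det = len(t), 0
    for _ in range(n_trials):
        y = rv_resid + k_amp * np.sin(2 * np.pi * t / period + rng.uniform(0, 2 * np.pi)) \
            + rng.normal(0.0, sigma, size=n)
        chi2_flat = np.sum(((y - y.mean()) / sigma) ** 2)           # constant ("flat") model
        X = np.column_stack([np.sin(2 * np.pi * t / period),
                             np.cos(2 * np.pi * t / period), np.ones(n)])
        coeff, *_ = np.linalg.lstsq(X, y, rcond=None)               # sinusoid at the injected period
        chi2_sin = np.sum(((y - X @ coeff) / sigma) ** 2)
        dbic = (chi2_flat + 1 * np.log(n)) - (chi2_sin + 3 * np.log(n))
        n_det += dbic >= threshold
    return n_det / n_trials

# Toy residual timeseries: 60 epochs over 25 yr with 5 m/s white noise
t = np.sort(np.random.default_rng(0).uniform(0.0, 25 * 365.25, 60))
print(recovered_fraction(t, np.zeros_like(t), sigma=5.0, a_inj_au=1.0, msini_mjup=1.0))
```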
We show in Fig. 13 the averaged completeness map for the whole binary sample; while the CORALIE and CORAVEL measurements collected for the sample do not have the precision and sampling necessary to ensure detection completeness for planetary companions below 10 M ⊕ , and we have only partial completeness for giant companions below the mass of Jupiter in the explored range of orbital separation, we are instead complete for companions above 1 M Jup within 1.80 au.
Occurrence rates for stellar and brown dwarf companions in the sample
From the detection limit map produced in Sect. 6.2 and shown in Fig. 13 it is possible to notice that the radial velocity measurements collected for the sample analysed in the present paper allow us to reach full detection completeness for both brown dwarf companions (40 < Msin i < 80 M Jup ) with semimajor axis below ∼62 au and stellar (Msin i > 80 M Jup ) companions within 100 au from the primary stars. The thorough analysis of the detection limit maps produced instead for the whole CORALIE sample will be the focus of a future paper in the series, but for the purposes of the present work it is possible to report that the detection completeness for the aforementioned brown dwarf and stellar companion parameter space is confirmed for the overall CORALIE sample. We exclude from the following analysis the five companions with q > 0.8 identified in Sect. 3 and Sect. 4. We can use this information to provide an assessment of the occurrence rate f for brown dwarfs and stellar companions using the binomial distribution:
p(m; N, f) = N! / [m! (N − m)!] · f^m (1 − f)^(N−m)    (7)
where N is the size of the CORALIE search sample and m is the number of detections within the parameter space here taken into account. In order to derive f we follow the approach described in Burgasser et al. (2003) and McCarthy & Zuckerman (2004), previously applied in different occurrence rate studies (such as Sozzetti et al. 2009; Faria et al. 2016; Barbato et al. 2018; Santos et al. 2011, to name a few), where the inverse binomial function p′( f ; m, N) ∝ p(m; N, f ) is normalized to 1 over the range of f values bound between 0 and 1. This yields the result:
∫_0^1 p′( f ; m, N) d f = 1  →  p′ = (N + 1) p    (8)
and the occurrence rate f is then found as the value corresponding to the mode of p′. Lastly, upper and lower uncertainty limits f U , f L corresponding to 1σ confidence limits are computed by finding the range covering 68% of the p′ distribution by numerically solving the relation:
∑_(i=0)^(m) (N + 1)! / [i! (N + 1 − i)!] · x^i (1 − x)^(N+1−i) = 0.84 for x = f L , 0.16 for x = f U    (9)
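In practice Eqs. 7-9 can be evaluated on a dense grid of f values; the minimal sketch below returns the mode of p′ and the equal-tail 68% bounds f L and f U . The m and N values in the example are illustrative only, not the exact sample sizes used for the quoted rates.

```python
import numpy as np
from scipy.stats import binom

def occurrence_rate(m, n_sample, n_grid=100001):
    """Mode and equal-tail 68% bounds of the normalized inverse binomial p'(f; m, N), Eqs. 7-9."""
    f = np.linspace(0.0, 1.0, n_grid)
    p = binom.pmf(m, n_sample, f)                       # Eq. 7 read as a function of f
    p /= np.trapz(p, f)                                 # Eq. 8: normalize p' to unity on [0, 1]
    cdf = np.concatenate(([0.0], np.cumsum(0.5 * (p[1:] + p[:-1]) * np.diff(f))))
    f_mode = f[np.argmax(p)]
    f_lo, f_hi = np.interp([0.16, 0.84], cdf, f)        # Eq. 9: 1-sigma confidence limits
    return f_mode, f_mode - f_lo, f_hi - f_mode

# Illustrative numbers only, e.g. 13 detections in a search sample of ~1600 stars
f, err_lo, err_hi = occurrence_rate(m=13, n_sample=1600)
print(f"f = {100 * f:.2f} +{100 * err_hi:.2f} -{100 * err_lo:.2f} %")
```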
Considering the 13 detections with 40 < Msin i < 80 M Jup for which the astrometric analysis either confirms the brown dwarf nature or is not possible (see Sect. 5.1 and Unger et al., in prep.) we obtain an occurrence rate for brown dwarf companions in the CORALIE sample of f BD = 0.79 +0.29 −0.16 %; considering only the 8 such companions found within 5 au from the primary star we obtain an occurrence rate of close-in brown dwarfs of 0.49 +0.24 −0.12 %, while for the 5 wider brown dwarf companions we obtain an occurrence rate of 0.30 +0.21 −0.07 %, values that we note to be compatible within 1σ. While this apparent surplus of brown dwarfs on closer orbits might be interpreted as in opposition with the known brown dwarf desert, it is important to remember that, while the CORALIE survey is certainly able to detect the large amplitude signals of brown dwarf companions on wide orbits, as evident from Fig. 13, its 25 yr timespan allows only for the robust identification of Keplerian signals corresponding to an orbital separation up to ∼7 au, while wider companions would instead be detected as radial velocity trends, which have not been considered for analysis in the present work (see Sect. 3): a number of possible long-period brown dwarf companions in the CORALIE sample are therefore possibly still to be detected, leading to a larger occurrence rate difference between the two populations, and the same applies to the stellar companions in the sample.
By instead considering only the 7 brown dwarfs in the sample confirmed as such by the joint radial velocity and proper motion analysis we obtain an occurrence rate of f BD = 0.43 +0.23 −0.11 %, which we therefore propose as a lower limit on the occurrence rate of brown dwarfs in the sample. Selecting again a threshold of 5 au between inner and outer brown dwarfs, we find occurrence rate values of 0.12 +0.17 −0.03 % and 0.30 +0.21 −0.07 % respectively. While we now find a lower occurrence rate of inner brown dwarfs, we must again note that these two values are compatible within 1σ.
Discussion and conclusions
We present in this paper the results of the homogeneous analysis performed in search of brown dwarf and stellar companions to the 1647 stars comprising the long-term CORALIE exoplanetary survey, in order to produce an updated catalog of binary objects in the sample using a combination of radial velocity and astrometry measurements. As a result, we find 218 stars in the CORALIE sample to host at least one stellar or brown dwarf companion, 88 of which are already known in the literature and for which we present updated orbital solutions, and 130 of which were previously unknown and for which we therefore provide a first assessment of the orbital parameters.
Furthermore, by combining radial velocity measurements and astrometric accelerations as computed between the Hipparcos and Gaia EDR3 we are able to derive precise dynamical masses of 132 stellar and brown dwarf companions with an orbital separation down to 1 au. Notably, we are also able to confirm the planetary nature of HD196885 b as well as the brown dwarf nature of 7 companions with 40≤Msin i ≤80 M Jup , while we find 11 companions with minimum masses within this range to be revealed as stellar-mass companions, again stressing the power of joint usage of radial velocity and astrometric measurements in painting a full picture of system characterization.
The detection completeness analysis we perform on the sample also allows us to derive occurrence rates f = 12.69 +0.87 −0.77 % and f BD = 0.43 +0.23 −0.11 % for stellar and brown dwarf companions respectively. While our occurrence rates also show an apparent overabundance of stellar and brown dwarf companions below 5 au compared to those found on wider orbits, it is imperative to stress that in the present work we have considered only those companions that are found to be best characterized by Keplerian models instead of linear or parabolic trends; a possibly large number of wide massive companions therefore remain to be found and fully characterized by continued observations with spectrographs and especially direct imaging instruments, and will be the subject of future papers in this series.
The binary sample presented and characterized in this work not only represents an important element for follow-up studies on binary statistics and for comparison with theoretical formation and evolution models for both stellar and brown dwarf companions, but also an unparalleled opportunity for the search for exoplanetary bodies in binary systems, as theoretical models have shown that planetary formation and survival are deeply influenced by stellar companions within ∼100 au such as those detailed in the present study. Both the dynamical stability assessment and the detection limit maps we have produced show that there is still significant room for the discovery of exoplanets on circumprimary and circumbinary orbits around the stars analysed here, and continued follow-up observations will allow in the near future to deeply probe the exoplanetary discovery space in the sample, allowing this catalog to be used as a testing field for models of planetary formation in binary systems.
Acknowledgements. The authors wish to thank the referee, Dr. F. Kiefer, for the thorough and useful comments which significantly improved the quality of the manuscript. This work has been carried out within the framework of the National Centre of Competence in Research PlanetS supported by the Swiss National Science Foundation under grants 51NF40_182901 and 51NF40_205606. The authors acknowledge the financial support of the SNSF. The 120 cm EULER telescope and the CORALIE spectrograph were funded by the SNSF and the University of Geneva. This publication makes use of the Data & Analysis Center for Exoplanets (DACE), which is a facility based at the University of Geneva (CH) dedicated to extrasolar planets data visualisation, exchange and analysis. DACE is a platform of the Swiss National Centre of Competence in Research (NCCR) PlanetS, federating the Swiss expertise in exoplanet research. The DACE platform is available at https://dace.unige.ch. NCS acknowledges support from the European Research Council through the grant agreement 101052347 (FIERCE) and by FCT - Fundação para a Ciência e a Tecnologia through national funds and by FEDER through COMPETE2020 - Programa Operacional Competitividade e Internacionalização by these grants: UIDB/04434/2020; UIDP/04434/2020. This work has made use of data from the European Space Agency (ESA) mission Gaia (https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. The authors made use of ASTROPY (a community-developed core Python package for Astronomy; Astropy Collaboration et al. 2013, 2018), MATPLOTLIB (Hunter 2007), NUMPY (Harris et al. 2020), SCIPY (Jones et al. 2001) and SEABORN (Waskom 2021). DB also wishes to thank N. Gaiman for his inspiring words about the illusion of permanence and stellar transience.
Fig. 2: Distributions of orbital period, semimajor axis, minimum mass, eccentricity and mass ratio q of the Msin i > 40 M Jup companions identified in the sample and characterized via radial velocity analysis. The minimum mass, eccentricity and q distributions for inner (a < 5 au) and outer (a ≥ 5 au) companions found in the sample are shown in yellow and green respectively.
Fig. 3: Distribution in the Msin i-a parameter space of the Msin i > 40 M Jup companions identified in the sample and characterized via radial velocity analysis. In the main plot, the kernel density estimation of the population is plotted as contour levels, the components of multiple systems are connected by black dash-dotted lines, while the horizontal dashed brown, orange and yellow lines respectively indicate the brown dwarf (40 M Jup ), dwarf star (80 M Jup ) and solar-mass thresholds. The top-left and right-hand histograms show the distribution and kernel density estimation of the semimajor axes and minimum mass of the companions, respectively.
Fig. 4: Comparison between the best-fit semimajor axes obtained by the simultaneous fitting of radial velocity timeseries and proper motion variations and those obtained by the fitting of radial velocities alone.
…suggesting a possible difference in the respective distributions for inner and outer companions. Indeed, as shown in Fig. 2, more companions are found on low-eccentricity inner orbits than outer ones, likely as a result of orbit circularization effects.
Fig. 5: Comparison between the distributions in the Msin i-a parameter space of the Msin i > 40 M Jup companions
Fig. 6: Comparison between the best-fit orbital period, semimajor axis, minimum (or true) mass and eccentricity values of Msin i > 40 M Jup companions as obtained by the simultaneous fitting of radial velocity timeseries and proper motion variations and those obtained by the fitting of radial velocities alone for the 132 stars considered in Sect. 4.
…74 ± 0.17 M☉), HD223084 (M = 0.93 ± 0.06 M☉), HD173872 (M = 1.10 +0.47 −0.29 M☉), HD181199A (M = 1.18 ± 0.06 M☉) and HD3795 (M = 0.90 +0.15 −0.17 M☉). As previously done in Sect. 3, we follow Halbwachs et al. (2003) and select a threshold of mass ratio q > 0.8 for SB2s in the sample, using this time the companion true mass values.
Fig. 7: Distributions of orbital period, semimajor axis, minimum mass, eccentricity and mass ratio q of Msin i > 40 M Jup companions as obtained by the simultaneous radial velocity and proper motion anomaly fits for the 132 stars considered in Sect. 4.
…−4.274 M Jup is revealed by astrometric constraints to have a true mass of 0.35 ± 0.02 M☉. Finally, we again report some results from Unger et al., in prep.: as done in Sect. 4, in that work the usage of Gaia DR3 astrometric measurements allows four of the aforementioned eleven possible brown dwarf companions to be revealed as stellar companions having a true mass above 80 M Jup , namely the ones found around HD3277 (0.468 +0.023 −0.005 M☉), HD89707 (0.100 ± 0.001 M☉), HD151528 (0.1403 +0.0001 −0.0030 M☉), and HD164427A (0.3369 +0.003 −0.002 M☉). As mentioned before, we refer to Unger et al., in prep. for further details on the respective orbital solutions,
Fig. 8: Radial velocity best-fit solution
…−0.005 M☉ (see Sect. 2), we derive values of minimum masses and semimajor axes of 168.89 +7.59 −7.66 M Jup (corresponding to 0.161 ± 0.007 M☉) and 2.45 ± 0.05 au for the outer companion and of 93.96 +4.09 −4.19 M Jup (corresponding to 0.089 ± 0.004 M☉) and 0.195 ± 0.004 au for the inner one.
Fig. 9: Top two rows: same as Fig. 8 but for HD94340. Bottom panel: best-fit solution of the astrometric orbit induced by HD94340C, with Hipparcos intermediate astrometric data shown in red.
…−1.116 d, semiamplitude K B = 2.102 +0.001 −0.001 km s −1 and eccentricity e B = 0.322 +0.001 −0.003 , for which we then derive, using a primary mass of 1.24 +0.12 −0.15 M☉ (see Sect. 2), values of minimum mass M B sin i B = 278.19 +21.98 −22.84 M Jup and semimajor axis a B = 13.59 +0.50 −0.54 au. Similarly, we characterize the inner planetary companion as having orbital period P b = 1330.64 +0.07 −0.43 d, semiamplitude K b = 36.94 +1.24 −1.67 m s −1 and eccentricity e b = 0.521 +0.325 −0.085 , and we derive a minimum mass M b sin i b = 1.96 +0.34 −0.53 M Jup and semimajor axis a b = 2.54 +0.10 −0.11 au. As both companions of the primary star orbit beyond our 1 au threshold, we perform the orvara fit on the original radial velocity timeseries instead of removing one of the companions. As a result, we find for the outer companion a true mass of M B = 334 +26 −27 M Jup (corresponding to 0.32 ± 0.02 M☉) and inclination i B = 101.9 +1.6 −1.5 deg, and for the inner one M b = 2.67 +1.4 −0.63 M Jup and inclination i b = 89 +42 −44 deg; the best-fit solutions for the proper motion anomaly curves are shown in the bottom row of
Fontanive et al. 2019; Fontanive & Bardalez Gagliuffi 2021; Cadman et al. 2022). Additionally and more closely related to the present work, different studies have highlighted how planetary formation and evolution are especially affected by binary separations less than 100 au (see e.g. Mayer et al. 2005; Moe & Kratter 2021). Fontanive et al.
Fig. 10: Best-fit orbital solutions for HD196885. Top row: Radial velocity solution and phase folded model curves, with CORAVEL, CORALIE98, CORALIE07 and CORALIE14 measurements shown in orange, blue, green and purple respectively. Bottom row:
Fig. 12: Same as Fig. 11 but for the 86 systems with only radial velocity solutions available, for which the dynamically stable and unstable regions for additional planetary companions are computed using the Msin i value of the detected companions.
Fig. 13: Completeness map of the binary sample, focused on the substellar (2 M ⊕ < Msin i < 80 M Jup ) companion regime. Detection frequency contour levels of 10, 50 and 100% are respectively shown as solid, dashed and dotted curves, while the companions detected in our search are shown as white circles.
Table 1: Stellar parameters for the sample discussed in this work, derived by the SED fits described in Sect. 2 except for the spectral types, which are retrieved from Simbad.

Name | α(J2000) | δ(J2000) | SpType | M [M☉] | R [R☉] | L [L☉] | ρ [cgs] | log g [cgs] | T_eff [K] | [Fe/H] [dex]
HD225155 | 00h03m53.37s | -28°23'37.70" | G5IV | 1.15+0.15−0.16 | 1.41±0.10 | 2.78+0.75−0.59 | 0.59+0.16−0.13 | 4.21+0.08−0.09 | 6290+370−340 | −0.22+0.27−0.28
HD1815 | 00h22m23.56s | -27°01'57.05" | K2V | 0.71±0.04 | 0.67±0.03 | 0.24+0.04−0.03 | 3.30+0.41−0.36 | 4.63±0.03 | 4930+160−130 | −0.30+0.20−0.21
HD1926 | 00h23m04.73s | -65°07'16.11" | F8/G0V | 1.00+0.14−0.12 | 1.15+0.14−0.15 | 1.90+0.64−0.56 | 0.95+0.41−0.28 | 4.33±0.10 | 6320±260 | −0.49+0.29−0.28
HD2070 | 00h24m44.81s | -51°02'37.90" | G0V | 1.15+0.14−0.15 | 1.36±0.04 | 2.66+0.51−0.37 | 0.65+0.12−0.11 | 4.23+0.06−0.07 | 6320+330−270 | −0.23+0.24−0.28
HD2098 | 00h25m01.41s | -30°41'51.41" | G2V | 1.01+0.15−0.12 | 1.16±0.17 | 1.68+0.67−0.56 | 0.94+0.47−0.31 | 4.33+0.11−0.12 | 6080+310−300 | −0.22+0.32−0.33
HD3222 | 00h35m02.81s | -63°41'42.64" | K2V | 0.78+0.05−0.04 | 0.76±0.02 | 0.46±0.07 | 2.54+0.25−0.23 | 4.57±0.03 | 5450+200−190 | −0.41±0.21
HD3277 | 00h35m34.25s | -39°44'46.65" | G8V | 0.94+0.11−0.10 | 1.02±0.14 | 1.22+0.44−0.36 | 1.28+0.55−0.38 | 4.40±0.10 | 5990±230 | −0.30+0.27−0.28
HD3359 | 00h36m04.40s | -49°07'41.28" | G8V | 0.94+0.09−0.08 | 0.98±0.07 | 1.00+0.23−0.20 | 1.42+0.31−0.26 | 4.43±0.06 | 5830±230 | −0.17±0.28
HD3795 | 00h40m32.79s | -23°48'17.72" | K0V | 1.05+0.34−0.17 | 1.25+0.80−0.34 | 1.76+3.90−0.96 | 0.79+0.94−0.57 | 4.28+0.20−0.36 | 5800+500−260 | −0.04+0.28−0.35
HD4392 | 00h45m41.87s | -48°18'04.56" | G4V | 0.92+0.11−0.09 | 1.02+0.14−0.13 | 1.30+0.44−0.37 | 1.26+0.54−0.38 | 4.40±0.10 | 6100+210−200 | −0.47±0.24
HD4747 | 00h49m26.76s | -23°12'44.86" | G8V | 1.02±0.09 | 1.70±0.04 | 2.45+0.26−0.16 | 0.29±0.04 | 3.98±0.05 | 5540+160−120 | −0.03+0.22−0.24
HD5562 | 00h56m21.26s | -63°57'30.03" | G8IV | 1.19+0.22−0.17 | 1.89±0.07 | 4.22+1.10−0.86 | 0.25+0.07−0.05 | 3.96±0.09 | 6030+430−400 | −0.14+0.25−0.29
HD7320 | 01h13m18.82s | -01°51'43.72" | G5V | 0.88±0.06 | 0.87±0.02 | 0.75+0.09−0.07 | 1.87+0.19−0.18 | 4.50±0.04 | 5760+180−150 | −0.30+0.19−0.20
HD8129 | 01h20m30.01s | -19°56'56.73" | G7V | 0.94+0.11−0.10 | 1.00±0.13 | 1.06+0.41−0.33 | 1.36+0.55−0.39 | 4.42+0.09−0.10 | 5850±270 | −0.20+0.31−0.32
HD9770 | 01h35m01.00s | -29°54'37.34" | K1V | 1.12+0.23−0.19 | 1.40+0.26−0.29 | 2.80+1.90−1.30 | 0.61+0.46−0.23 | 4.22+0.15−0.14 | 6290+570−460 | −0.27+0.32−0.33
HD9905 | 01h36m10.09s | -29°23'32.47" | K1V | 0.95+0.21−0.13 | 1.11+0.75−0.29 | 1.38+2.40−0.73 | 0.99+1.10−0.75 | 4.33+0.20−0.39 | 5830±270 | −0.28+0.28−0.29
HD10519 | 01h42m14.91s | -17°53'19.47" | G2V | 1.08+0.19−0.15 | 1.46+0.21−0.24 | 2.77+1.10−0.87 | 0.50+0.31−0.17 | 4.16+0.14−0.13 | 6180+330−310 | −0.29±0.27
HD11131 | 01h49m23.34s | -10°42'13.08" | G3V | 0.96+0.17−0.12 | 1.06+0.30−0.20 | 1.30+1.00−0.54 | 1.16+0.82−0.57 | 4.38+0.14−0.19 | 5960±260 | −0.27+0.32−0.33
HD11264 | 01h49m35.56s | -46°46'07.19" | G5V | 1.02+0.14−0.13 | 1.17±0.14 | 2.10+0.73−0.59 | 0.92+0.37−0.25 | 4.32+0.09−0.10 | 6430+310−270 | −0.57+0.30−0.31
HD11352 | 01h51m31.19s | -07°44'23.57" | G5V | 0.89±0.07 | 0.89+0.06−0.05 | 0.91+0.18−0.16 | 1.79+0.32−0.29 | 4.49+0.05−0.06 | 5990±200 | −0.45+0.23−0.24
HD13945 | 02h15m16.20s | -23°16'52.93" | G6IV | 1.03+0.10−0.11 | 1.12±0.03 | 1.63+0.23−0.20 | 1.04±0.14 | 4.35+0.05−0.06 | 6170+230−220 | −0.25+0.24−0.27
HD14629 | 02h20m42.92s | -39°02'01.44" | K3V | 0.74±0.04 | 0.71±0.02 | 0.36+0.05−0.04 | 2.88+0.23−0.21 | 4.60±0.03 | 5270+180−170 | −0.40+0.19−0.20
HD14802 | 02h22m32.59s | -23°49'00.47" | G0V | 1.46+0.20−0.23 | 1.84±0.07 | 6.20+2.70−1.40 | 0.33+0.09−0.07 | 4.07±0.09 | 6730+750−510 | −0.12+0.25−0.28
HD15064 | 02h24m33.88s | -40°50'25.64" | G1V | 1.29+0.15−0.20 | 1.66±0.05 | 3.70+0.84−0.54 | 0.40±0.08 | 4.11+0.07−0.09 | 6210+390−290 | −0.03+0.23−0.27
HD16287 | 02h36m41.76s | -03°09'22.09" | K1V | 0.81±0.05 | 0.78±0.04 | 0.41+0.07−0.06 | 2.38+0.29−0.28 | 4.56+0.03−0.04 | 5220+170−160 | −0.09±0.20
HD17155 | 02h43m34.21s | -46°27'17.51" | K4V | 0.75±0.04 | 0.71±0.02 | 0.27+0.04−0.03 | 2.96+0.23−0.22 | 4.61±0.03 | 4960+150−140 | −0.14+0.15−0.16
HD17289 | 02h43m35.47s | -62°55'09.10" | G0V | 1.10+0.18−0.16 | 1.37+0.21−0.24 | 2.65+1.30−1.00 | 0.62+0.40−0.21 | 4.22±0.13 | 6290+410−380 | −0.29+0.28−0.27
HD17152 | 02h44m28.95s | -24°24'56.33" | G8V | 0.92±0.08 | 0.96±0.06 | 0.98+0.19−0.16 | 1.49+0.28−0.24 | 4.44+0.05−0.06 | 5870+210−200 | −0.26+0.24−0.25
HD18168 | 02h54m02.78s | -35°54'16.87" | K3V | 0.91+0.08−0.07 | 0.94±0.06 | 0.91+0.17−0.16 | 1.56+0.29−0.26 | 4.46+0.05−0.06 | 5790±210 | −0.20+0.25−0.27
HD18809 | 03h00m19.71s | -37°27'16.16" | G4V | 0.93±0.08 | 0.96±0.03 | 1.03+0.15−0.13 | 1.50±0.19 | 4.45+0.04−0.05 | 5940+200−190 | −0.31+0.24−0.25
HD18907 | 03h01m37.62s | -28°05'29.37" | G9V | 0.48+0.52−0.33 | 0.48+0.66−0.30 | 0.05+1.40−0.04 | 6.30+32.00−5.40 | 4.77+0.35−0.46 | 3990+2000−650 | −0.23+0.32−0.30
HD20916 | 03h20m11.75s | -52°01'54.67" | K0V | 0.81±0.05 | 0.80±0.02 | 0.59+0.08−0.07 | 2.25+0.21−0.20 | 4.54±0.03 | 5670±190 | −0.44+0.21−0.22
HD22705 | 03h36m53.40s | -49°57'28.87" | G2V | 1.01+0.11−0.12 | 1.11±0.06 | 1.94+0.43−0.35 | 1.05+0.21−0.20 | 4.36+0.06−0.07 | 6470+310−300 | −0.61+0.28−0.29
HD24492 | 03h40m48.90s | -81°47'20.65" | G6V | 0.95+0.11−0.10 | 1.02±0.12 | 1.23+0.41−0.34 | 1.28+0.47−0.35 | 4.41+0.08−0.09 | 6010±250 | −0.30+0.29−0.30
HD23308 | 03h42m09.85s | -45°57'28.39" | F7V | 1.16+0.33−0.25 | 1.35+0.50−0.39 | 2.70+3.80−1.70 | 0.67+0.87−0.36 | 4.25±0.21 | 6340+530−500 | −0.23±0.29
HD23576 | 03h44m45.42s | -38°49'05.05" | G1V | 1.03+0.10−0.11 | 1.12±0.03 | 1.65+0.21−0.16 | 1.03±0.14 | 4.35+0.05−0.06 | 6170+220−180 | −0.26+0.23−0.25
HD25874 | 04h02m26.97s | -61°21'25.16" | G2V | 1.00+0.12−0.11 | 1.11±0.11 | 1.50+0.41−0.35 | 1.05+0.34−0.26 | 4.36+0.08−0.09 | 6060+240−230 | −0.22±0.27
...

Notes. Full table is available at the CDS. A portion is shown here for guidance regarding its form and content.
Table 2: Best-fit orbital solutions for the binary systems identified in the sample, the left side reporting the results from the radial velocity fits (see Sect. 3) and the right side referring to the simultaneous radial velocities and proper motions fits (see Sect. 4). We note that systems with mass ratio q > 0.6 could in principle be affected by photocenter bias, leading to underestimating the barycentric semimajor axis by at most 5-9%.
Table 3: Boundaries of the circumprimary, circumbinary and circumsecondary stability regions of the systems in the binary sample as described in Sect. 6.1. The Note column indicates which value of the detected companion mass is used for the stability estimation.

Name | d Roche [au] | a cS [au] | a cP [au] | a cS,sec [au] | Note
HD225155 | 0.008 | 0.889 | 13.035 | 0.428 | M true
HD1815 | 0.006 | 0.646 | 6.016 | 0.340 | Msin i
HD1926 | 0.007 | 0.229 | 1.396 | 0.086 | Msin i
HD2070 | 0.008 | 0.115 | 1.843 | 0.058 | Msin i
HD2098 | 0.007 | 1.185 | 12.659 | 0.756 | M true
HD3222 | 0.007 | 2.839 | 41.274 | 1.336 | M true
HD3277 | 0.007 | 0.071 | 0.766 | 0.021 | Msin i
HD3359 | 0.007 | 0.036 | 0.566 | 0.018 | Msin i
HD3795 | 0.007 | 5.067 | 144.914 | 4.650 | M true
HD4392 | 0.007 | 0.145 | 5.935 | 0.109 | M true
HD4747 | 0.007 | 1.096 | 39.734 | 0.387 | M true
HD5562 | 0.008 | 1.233 | 20.116 | 0.702 | M true
HD7320 | 0.007 | 0.166 | 51.340 | 0.095 | M true
HD8129 | 0.007 | - | 27.509 | - | Msin i
HD9770 | 0.008 | 0.766 | 9.656 | 0.269 | Msin i
HD9905 | 0.007 | 2.421 | 31.232 | 0.951 | M true
HD10519 | 0.007 | 0.477 | 38.102 | 0.345 | M true
HD11131 | 0.007 | 0.410 | 20.108 | 0.260 | M true
HD11264 | 0.007 | 3.054 | 46.823 | 2.032 | M true
HD11352 | 0.007 | 0.315 | 4.046 | 0.183 | M true
...

Notes. Full table is available at the CDS. A portion is shown here for guidance regarding its form and content.
Firstly, considering the 209 stellar companions detected (203 from this work and 6 from Unger et al., in prep.) we find an occurrence rate for stellar companions of f = 12.69 +0.87 −0.77 %. If again we distinguish between the 127 inner and 82 outer stellar companions setting the threshold at 5 au, we find occurrence rate values of 7.71 +0.71 −0.60 % and 4.98 +0.59 −0.48 % respectively. It can be noted that these occurrence rates are much lower than the ∼50% computed for the CORAVEL survey presented in Duquennoy & Mayor (1991), but it is important to underline not only the fact that the sample analysed in the cited work was composed of 164 stars and therefore much smaller in size than the whole CORALIE sample we instead considered in the present study, but also that the CORAVEL sample also included a number of SB2s that we instead excluded from our analysis, making a direct comparison between the two studies non-trivial.
https://dace.unige.ch
https://github.com/nicochunger/orvara/tree/period-prior
Detection frequency contour levels of 10, 50 and 100% are respectively shown as solid, dashed and dotted curves, while the companions detected around each star are shown as white circles. In each plot the hatched red and orange boxes represent the dynamically unstable semimajor axis range for additional substellar companions computed respectively using the true mass and minimum mass values of the detected companions in the sample as described in Sect. 6.1, with a vertical yellow line representing the detected companion semimajor axis. The hatched grey boxes show the region of parameter space for which we can exclude the presence of additional circumprimary companions based on the CORAVEL and CORALIE data analysed.
Anders, F., Khalatyan, A., Chiappini, C., et al. 2019, A&A, 628, A94
Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123
Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33
Ballantyne, H. A., Espaas, T., Norgrove, B. Z., et al. 2021, MNRAS, 507, 4507
Baluev, R. V. 2008, MNRAS, 385, 1279
Baranne, A., Mayor, M., & Poncet, J. L. 1979, Vistas in Astronomy, 23, 279
Barbato, D., Sozzetti, A., Desidera, S., et al. 2018, A&A, 615, A175
Boffin, H. M. J. & Pourbaix, D. 2003, The Observatory, 123, 126
Bonnell, I. A. 1994, MNRAS, 269, 837
Brandt, G. M., Brandt, T. D., Dupuy, T. J., Li, Y., & Michalik, D. 2021a, AJ, 161, 179
Brandt, T. D. 2018, ApJS, 239, 31
Brandt, T. D. 2021, ApJS, 254, 42
Brandt, T. D., Dupuy, T. J., & Bowler, B. P. 2019, AJ, 158, 140
Brandt, T. D., Dupuy, T. J., Bowler, B. P., et al. 2020, AJ, 160, 196
Brandt, T. D., Dupuy, T. J., Li, Y., et al. 2021b, AJ, 162, 186
Burgasser, A. J., Kirkpatrick, J. D., Reid, I. N., et al. 2003, ApJ, 586, 512
Cadman, J., Hall, C., Fontanive, C., & Rice, K. 2022, MNRAS, 511, 457
Calissendorff, P. & Janson, M. 2018, A&A, 615, A149
Cersullo, F., Wildi, F., Chazelas, B., & Pepe, F. 2017, A&A, 601, A102
Chauvin, G., Beust, H., Lagrange, A. M., & Eggenberger, A. 2011, A&A, 528, A8
Chauvin, G., Lagrange, A. M., Udry, S., et al. 2006, A&A, 456, 1165
Chauvin, G., Lagrange, A. M., Udry, S., & Mayor, M. 2007, A&A, 475, 723
Chazelas, B., Pepe, F., & Wildi, F. 2012, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 8450, Modern Technologies in Space- and Ground-based Telescopes and Instrumentation II, ed. R. Navarro, C. R. Cunningham, & E. Prieto, 845013
Choi, J., Dotter, A., Conroy, C., et al. 2016, ApJ, 823, 102
Correia, A. C. M., Udry, S., Mayor, M., et al. 2008, A&A, 479, 271
Cutri, R. M., Wright, E. L., Conrow, T., et al. 2021, VizieR Online Data Catalog, II/328
Damasso, M., Del Sordo, F., Anglada-Escudé, G., et al. 2020a, Science Advances, 6, eaax7467
Damasso, M., Sozzetti, A., Lovis, C., et al. 2020b, A&A, 642, A31
Delisle, J. B., Hara, N., & Ségransan, D. 2020, A&A, 638, A95
Delisle, J. B. & Ségransan, D. 2022, A&A, 667, A172
Delisle, J. B., Ségransan, D., Buchschacher, N., & Alesina, F. 2016, A&A, 590, A134
Delisle, J. B., Ségransan, D., Dumusque, X., et al. 2018, A&A, 614, A133
Díaz, R. F., Ségransan, D., Udry, S., et al. 2016, A&A, 585, A134
Dotter, A. 2016, ApJS, 222, 8
Duquennoy, A. & Mayor, M. 1991, A&A, 248, 485
Eastman, J. D., Rodriguez, J. E., Agol, E., et al. 2019, arXiv e-prints, arXiv:1907.09480
Eggenberger, A. & Udry, S. 2010, in Astrophysics and Space Science Library, Vol. 366, Planets in Binary Star Systems, ed. N. Haghighipour, 19
El-Badry, K., Rix, H.-W., Tian, H., Duchêne, G., & Moe, M. 2019, MNRAS, 489, 5822
Faria, J. P., Santos, N. C., Figueira, P., et al. 2016, A&A, 589, A25
Fekel, F. C., Willmarth, D. W., Abt, H. A., & Pourbaix, D. 2018, AJ, 156, 117
Feng, F., Butler, R. P., Jones, H. R. A., et al. 2021, MNRAS, 507, 2856
Fischer, D., Driscoll, P., Isaacson, H., et al. 2009, ApJ, 703, 1545
Fontanive, C. & Bardalez Gagliuffi, D. 2021, Frontiers in Astronomy and Space Sciences, 8, 16
Fontanive, C., Rice, K., Bonavita, M., et al. 2019, MNRAS, 485, 4967
Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2021, A&A, 649, A1
Gaia Collaboration, Prusti, T., de Bruijne, J. H. J., et al. 2016, A&A, 595, A1
Gammie, C. F. 2001, ApJ, 553, 174
Goldin, A. & Makarov, V. V. 2007, ApJS, 173, 137
Gomez, J., Docobo, J. A., Campo, P. P., & Mendez, R. A. 2016, AJ, 152, 216
Grether, D. & Lineweaver, C. H. 2006, ApJ, 640, 1051
Grieves, N., Ge, J., Thomas, N., et al. 2017, MNRAS, 467, 4264
Guszejnov, D. & Hopkins, P. F. 2015, MNRAS, 450, 4137
Guszejnov, D., Hopkins, P. F., & Krumholz, M. R. 2017, MNRAS, 468, 4093
Halbwachs, J. L., Mayor, M., & Udry, S. 2018, A&A, 619, A81
Halbwachs, J. L., Mayor, M., Udry, S., & Arenou, F. 2003, A&A, 397, 159
Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Nature, 585, 357
Harsono, D., Alexander, R. D., & Levin, Y. 2011, MNRAS, 413, 423
Høg, E., Fabricius, C., Makarov, V. V., et al. 2000, A&A, 355, L27
Holl, B., Sozzetti, A., Sahlmann, J., et al. 2022, arXiv e-prints, arXiv:2206.05439
Holman, M. J. & Wiegert, P. A. 1999, AJ, 117, 621
Hunter, J. D. 2007, Computing in Science and Engineering, 9, 90
Jenkins, J. S., Díaz, M., Jones, H. R. A., et al. 2015, MNRAS, 453, 1439
Jones, E., Oliphant, T., Peterson, P., et al. 2001, SciPy: Open source scientific tools for Python
Kervella, P., Arenou, F., Mignard, F., & Thévenin, F. 2019a, A&A, 623, A72
Kervella, P., Arenou, F., & Schneider, J. 2020, A&A, 635, L14
Kervella, P., Arenou, F., & Thévenin, F. 2022, A&A, 657, A7
Kervella, P., Gallenne, A., Evans, N. R., et al. 2019b, A&A, 623, A117
Kervella, P., Gallenne, A., Remage Evans, N., et al. 2019c, A&A, 623, A116
Kiefer, F., Hébrard, G., Sahlmann, J., et al. 2019, A&A, 631, A125
Kong, Z., Jiang, J. H., Zhu, Z.-H., Fahy, K. A., & Burn, R. 2021, arXiv e-prints, arXiv:2101.02316
Könyves, V., André, P., Men'shchikov, A., et al. 2015, A&A, 584, A91
Kratter, K. M., Matzner, C. D., Krumholz, M. R., & Klein, R. I. 2010, ApJ, 708, 1585
Kraus, A. L., Ireland, M. J., Huber, D., Mann, A. W., & Dupuy, T. J. 2016, AJ, 152, 8
Lam, C. & Kipping, D. 2018, MNRAS, 476, 5692
Latham, D. W., Stefanik, R. P., Torres, G., et al. 2002, AJ, 124, 1144
Lindegren, L., Klioner, S. A., Hernández, J., et al. 2021, A&A, 649, A2
Llop-Sayson, J., Wang, J. J., Ruffio, J.-B., et al. 2021, AJ, 162, 181
Lucy, L. B. & Ricco, E. 1979, AJ, 84, 401
Ma, B. & Ge, J. 2014, MNRAS, 439, 2781
Makarov, V. V. 2007, ApJ, 654, L81
Makarov, V. V. & Kaplan, G. H. 2005, AJ, 129, 2420
Makarov, V. V., Zacharias, N., & Finch, C. T. 2021a, Research Notes of the American Astronomical Society, 5, 108
Makarov, V. V., Zacharias, N., & Finch, C. T. 2021b, arXiv e-prints, arXiv:2107.01090
Marmier, M., Ségransan, D., Udry, S., et al. 2013, A&A, 551, A90
Marzari, F. & Gallina, G. 2016, A&A, 594, A89
Mayer, L., Wadsley, J., Quinn, T., & Stadel, J. 2005, MNRAS, 363, 641
Mayor, M., Marmier, M., Lovis, C., et al. 2011, arXiv e-prints, arXiv:1109.2497
Mayor, M., Pepe, F., Queloz, D., et al. 2003, The Messenger, 114, 20
McCarthy, C. & Zuckerman, B. 2004, AJ, 127, 2871
Melo, C. H. F. 2003, A&A, 410, 269
Moe, M. & Di Stefano, R. 2017, ApJS, 230, 15
Moe, M. & Kratter, K. M. 2021, MNRAS, 507, 3593
Musielak, Z. E., Cuntz, M., Marshall, E. A., & Stuit, T. D. 2005, A&A, 434, 355
Ngo, H., Knutson, H. A., Bryan, M. L., et al. 2017, AJ, 153, 242
Ngo, H., Knutson, H. A., Hinkley, S., et al. 2016, ApJ, 827, 8
Nidever, D. L., Marcy, G. W., Butler, R. P., Fischer, D. A., & Vogt, S. S. 2002, ApJS, 141, 503
Offner, S. S. R., Moe, M., Kratter, K. M., et al. 2022, arXiv e-prints, arXiv:2203.10066
Parker, R. J. & Quanz, S. P. 2013, MNRAS, 436, 650
Pepe, F., Mayor, M., Delabre, B., et al. 2000, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 4008, Optical and IR Telescope Instrumentation and Detectors, ed. M. Iye & A. F. Moorwood, 582-592
Pepe, F., Molaro, P., Cristiani, S., et al. 2014, arXiv e-prints, arXiv:1401.5918
Peretti, S., Ségransan, D., Lavie, B., et al. 2019, A&A, 631, A107
Perryman, M. A. C., Lindegren, L., Kovalevsky, J., et al. 1997, A&A, 323, L49
Pilat-Lohinger, E. & Dvorak, R. 2002, Celestial Mechanics and Dynamical Astronomy, 82, 143
Pilat-Lohinger, E., Funk, B., & Dvorak, R. 2003, A&A, 400, 1085
Pineda, J. E., Offner, S. S. R., Parker, R. J., et al. 2015, Nature, 518, 213
Quarles, B., Li, G., Kostov, V., & Haghighipour, N. 2020, AJ, 159, 80
Quarles, B., Satyal, S., Kostov, V., Kaib, N., & Haghighipour, N. 2018, ApJ, 856, 150
Queloz, D., Mayor, M., Weber, L., et al. 2000, A&A, 354, 99
Raghavan, D., McAlister, H. A., Henry, T. J., et al. 2010, ApJS, 190, 1
Rickman, E. L., Matthews, E., Ceva, W., et al. 2022, arXiv e-prints, arXiv:2209.12957
Rickman, E. L., Ségransan, D., Marmier, M., et al. 2019, A&A, 625, A71
Roell, T., Neuhäuser, R., Seifahrt, A., & Mugrauer, M. 2012, A&A, 542, A92
Sahlmann, J. 2016, IAU Focus Meeting, 29A, 217
Sahlmann, J., Ségransan, D., Queloz, D., & Udry, S. 2011a, in The Astrophysics of Planetary Systems: Formation, Structure, and Dynamical Evolution, ed. A. Sozzetti, M. G. Lattanzi, & A. P. Boss, Vol. 276, 117-120
Sahlmann, J., Ségransan, D., Queloz, D., et al. 2011b, A&A, 525, A95
Santos, N. C., Israelian, G., & Mayor, M. 2001, A&A, 373, 1019
Santos, N. C., Mayor, M., Bonfils, X., et al. 2011, A&A, 526, A112
Santos, N. C., Mayor, M., Naef, D., et al. 2002, A&A, 392, 215
Ségransan, D., Udry, S., Mayor, M., et al. 2010, A&A, 511, A45
Shahaf, S. & Mazeh, T. 2019, MNRAS, 487, 3356
Snellen, I. A. G. & Brown, A. G. A. 2018, Nature Astronomy, 2, 883
Sozzetti, A., Torres, G., Latham, D. W., et al. 2009, ApJ, 697, 544
Su, X.-N., Xie, J.-W., Zhou, J.-L., & Thebault, P. 2021, AJ, 162, 272
Tamuz, O., Ségransan, D., Udry, S., et al. 2008, A&A, 480, L33
Thebault, P. & Haghighipour, N. 2015, in Planetary Exploration and Science: Recent Results and Advances, 309-340
Tokovinin, A. 2014, AJ, 147, 87
Tokovinin, A., Hartung, M., Hayward, T. L., & Makarov, V. V. 2012, AJ, 144, 7
Tokovinin, A., Thomas, S., Sterzik, M., & Udry, S. 2006, A&A, 450, 681
Tokovinin, A. A. 1993, Astronomy Letters, 19, 383
Tokovinin, A. A. 2000, A&A, 360, 997
Turrini, D., Barbieri, M., Marzari, F., Thebault, P., & Tricarico, P. 2005, Memorie della Societa Astronomica Italiana Supplementi, 6, 172
Udry, S., Mayor, M., Naef, D., et al. 2002, A&A, 390, 267
Udry, S., Mayor, M., Naef, D., et al. 2000, A&A, 356, 590
Udry, S. & Santos, N. C. 2007, ARA&A, 45, 397
Venner, A., Vanderburg, A., & Pearce, L. A. 2021, AJ, 162, 12
Wang, J., Xie, J.-W., Barclay, T., & Fischer, D. A. 2014, ApJ, 783, 4
Waskom, M. L. 2021, Journal of Open Source Software, 6, 3021
Watson, L. C., Pritchard, J. D., Hearnshaw, J. B., Kilmartin, P. M., & Gilmore, A. C. 2001, MNRAS, 325, 143
Zacharias, N., Finch, C. T., Girard, T. M., et al. 2012, VizieR Online Data Catalog, I/322A
Zúñiga-Fernández, S., Bayo, A., Elliott, P., et al. 2021, A&A, 645, A30
| [
"https://github.com/nicochunger/orvara/tree/"
]
|
[
"Large-scale phase retrieval",
"Large-scale phase retrieval"
]
| [
"Xuyang Chang \nSchool of Information and Electronics\nBeijing Institute of Technology\n100081BeijingChina\n\nAdvanced Research Institute of Multidisciplinary Science\nBeijing Institute of Technology\nBei-jing 100081China\n",
"Liheng Bian *[email protected] \nSchool of Information and Electronics\nBeijing Institute of Technology\n100081BeijingChina\n\nAdvanced Research Institute of Multidisciplinary Science\nBeijing Institute of Technology\nBei-jing 100081China\n",
"Jun Zhang \nSchool of Information and Electronics\nBeijing Institute of Technology\n100081BeijingChina\n\nAdvanced Research Institute of Multidisciplinary Science\nBeijing Institute of Technology\nBei-jing 100081China\n"
]
| [
"School of Information and Electronics\nBeijing Institute of Technology\n100081BeijingChina",
"Advanced Research Institute of Multidisciplinary Science\nBeijing Institute of Technology\nBei-jing 100081China",
"School of Information and Electronics\nBeijing Institute of Technology\n100081BeijingChina",
"Advanced Research Institute of Multidisciplinary Science\nBeijing Institute of Technology\nBei-jing 100081China",
"School of Information and Electronics\nBeijing Institute of Technology\n100081BeijingChina",
"Advanced Research Institute of Multidisciplinary Science\nBeijing Institute of Technology\nBei-jing 100081China"
]
| []
| High-throughput computational imaging requires efficient processing algorithms to retrieve multi-dimensional and multi-scale information. In computational phase imaging, phase retrieval (PR) is required to reconstruct both amplitude and phase in complex space from intensity-only measurements. The existing PR algorithms suffer from the tradeoff among low computational complexity, robustness to measurement noise and strong generalization on different modalities. In this work, we report an efficient large-scale phase retrieval technique termed as LPR. It extends the plug-and-play generalized-alternating-projection framework from real space to nonlinear complex space. The alternating projection solver and enhancing neural network are respectively derived to tackle the measurement formation and statistical prior regularization. This framework compensates the shortcomings of each operator, so as to realize high-fidelity phase retrieval with low computational complexity and strong generalization. We applied the technique for a series of computational phase imaging modalities including coherent diffraction imaging, coded diffraction pattern imaging, and Fourier ptychographic microscopy. Extensive simulations and experiments validate that the technique outperforms the existing PR algorithms with as much as 17dB enhancement on signal-tonoise ratio, and more than one order-of-magnitude increased running efficiency. Besides, we for the first time demonstrate ultra-large-scale phase retrieval at the 8K level (7680×4320 pixels) in minute-level time. | 10.1186/s43593-021-00004-w | [
"https://arxiv.org/pdf/2104.03148v1.pdf"
]
| 233,168,791 | 2104.03148 | 4573523cfe2380c29256f3d5f039dee6a9f63f2f |
Large-scale phase retrieval
Xuyang Chang
School of Information and Electronics
Beijing Institute of Technology
100081BeijingChina
Advanced Research Institute of Multidisciplinary Science
Beijing Institute of Technology
Bei-jing 100081China
Liheng Bian *[email protected]
School of Information and Electronics
Beijing Institute of Technology
100081BeijingChina
Advanced Research Institute of Multidisciplinary Science
Beijing Institute of Technology
Bei-jing 100081China
Jun Zhang
School of Information and Electronics
Beijing Institute of Technology
100081BeijingChina
Advanced Research Institute of Multidisciplinary Science
Beijing Institute of Technology
Bei-jing 100081China
Large-scale phase retrieval
High-throughput computational imaging requires efficient processing algorithms to retrieve multi-dimensional and multi-scale information. In computational phase imaging, phase retrieval (PR) is required to reconstruct both amplitude and phase in complex space from intensity-only measurements. The existing PR algorithms suffer from the tradeoff among low computational complexity, robustness to measurement noise and strong generalization on different modalities. In this work, we report an efficient large-scale phase retrieval technique termed as LPR. It extends the plug-and-play generalized-alternating-projection framework from real space to nonlinear complex space. The alternating projection solver and enhancing neural network are respectively derived to tackle the measurement formation and statistical prior regularization. This framework compensates the shortcomings of each operator, so as to realize high-fidelity phase retrieval with low computational complexity and strong generalization. We applied the technique for a series of computational phase imaging modalities including coherent diffraction imaging, coded diffraction pattern imaging, and Fourier ptychographic microscopy. Extensive simulations and experiments validate that the technique outperforms the existing PR algorithms with as much as 17dB enhancement on signal-tonoise ratio, and more than one order-of-magnitude increased running efficiency. Besides, we for the first time demonstrate ultra-large-scale phase retrieval at the 8K level (7680×4320 pixels) in minute-level time.
Wide field of view and high resolution are both desirable for various imaging applications, such as medical imaging [1][2][3][4] and remote sensing 5 , providing multi-dimensional and multi-scale target information. As the recent development of computational imaging, large-scale detection has been widely employed in a variety of computational imaging modalities 3,4,6,7 . These computational imaging techniques largely extend the spatial-bandwidth product (SBP) of optical systems from million scale to billion scale. As an example, the SBP of the RUSH microscope platform 4 has reached as high as 1.7 × 10 8 . Such large amount of data poses great challenge for post software processing. Therefore, large-scale processing algorithms with low computational complexity and high fidelity are of great significance for those imaging and perception applications in various dimensions 8 .
In computational phase imaging, phase retrieval (PR) is required to reconstruct both amplitude and phase in complex space from intensity-only measurements. This problem originates from the limitation of low response speed of photodetectors that impedes direct acquisition of light wavefront. Mathematically, the underlying goal of PR is to estimate an unknown complexfield signal from the intensity-only measurements of its complex-valued transformation, which is described as
I = |Au|² + ω,    (1)
where u is the underlying signal to be recovered (u ∈ C n×1 ), I contains the intensity-only measurements (I ∈ R m×1 ), A represents measurement matrix (A ∈ R n×n or C n×n ), and ω stands for measurement noise. Phase retrieval has been widely applied in plenty fields such as astronomy, crystallography, electron microscopy and optics 9 . It solves various nonlinear inverse problems in optical imaging, such as coherent diffraction imaging 10 (CDI), coded diffraction pattern imaging 11 (CDP), Fourier ptychographic microscopy 3 (FPM) and imaging through scattering medium 12 .
In the past few decades, different phase retrieval algorithms have been developed. Gerchberg and Saxton pioneered the earliest alternating projection (AP) algorithm in the 1970s 13 , which was then extended by Fienup et al. with several variants 14 . Due to its strong generalization ability, AP has been widely employed in multiple phase imaging models. Nevertheless, it is sensitive to measurement noise, suffering from poor noise robustness. Afterwards, researchers introduced optimization into PR, deriving a series of semi-definite programming (SDP) based algorithms 15,16 and Wirtinger flow (WF) based algorithms [17][18][19] . These techniques enhances robustness to measurement noise, but they require high computational complexity and high sampling rate, making them inapplicable for large-scale phase retrieval. Although the sparsity prior of natural images in transformed domains can be incorporated as an additional constraint to lower sampling rate [20][21][22] , it further increases computational complexity.
Recently, the booming deep learning (DL) technique has also been introduced for phase retrieval 23 . Following the large-scale training framework, the DL strategy outperforms the above traditional PR techniques with higher fidelity. However, it provides poor generalization that each suits only for specific models, such as holography 23 and FPM 24 . For different models and even different system parameters, the deep neural network requires to be retrained with new large-scale data sets. To sum, despite of different workflows, the above existing PR algorithms suffer from the tradeoff among low computational complexity, robustness to measurement noise and strong generalization, making them inapplicable for large-scale phase retrieval.
In this work, we report an efficient large-scale phase retrieval technique termed as LPR, as sketched in Fig. 1. It builds on the plug-and-play (PNP) 25 optimization framework, and extends the efficient generalized-alternating-projection (GAP) 8,26,27 strategy from real space to nonlinear complex space. The complex-field PNP-GAP scheme ensures strong generalization of LPR on various imaging modalities, and outperforms the conventional first-order PNP techniques (such as ISTA 28 and ADMM 25 ) with fewer auxiliary variables, lower computational complexity and faster convergence. As PNP-GAP decomposes reconstruction into separate sub-problems including measurement formation and statistical prior regularization 8,29 , we further introduce an alternating projection solver and an enhancing neural network respectively to solve the two sub-problems. These two solvers compensate the shortcomings of each other, allowing the optimization to bypass the poor generalization of deep learning and poor noise robustness of AP. As a result, LPR enables generalized large-scale phase retrieval with high fidelity and low computational complexity, making it a state-of-the-art method for various computational phase imaging applications.
We compared LPR with the existing PR algorithms on extensive simulation and experiment data of different imaging modalities. The results validate that, compared to the AP based PR algorithms, LPR is robust to measurement noise with as much as 17dB enhancement on signal-to-noise ratio. Compared with the optimization based PR algorithms, the running time is significantly reduced by more than one order of magnitude. Finally, we for the first time demonstrated ultra-large-scale phase retrieval at the 8K level (7680×4320 pixels) in minute-level time, where most of the other PR algorithms fail due to unacceptably high computational complexity.
Results
We applied LPR and the existing PR algorithms on both simulation and experiment data of three computational phase imaging modalities including CDI, CDP and FPM, to investigate their respective pros and cons. The competing algorithms for comparison include the alternating projection technique (AP) 13,14 , the SDP based techniques (PhaseMax (PMAX) 30 , PhaseLift (PLIFT) 15 , PhaseLamp (PLAMP) 31 ), the Wirtinger flow based techniques (Wirtinger Flow (WF) 17 , Reweighted Wirtinger Flow (RWF) 32 ), the amplitude flow based techniques 33,34 (AmpFlow (AF), Truncated AmpFlow (TAF), Reweighted AmpFlow (RAF)), Coordinate Descent (CD) 35 , Kaczmarz (KAC) 36 and the deep learning based prDeep technique 22 . All the algorithm parameters were tuned based on Phasepack 37 for their respective best performance. Convergence is declared when the intensity difference of the reconstructed image between two successive iterations is smaller than a preset threshold. We employed the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) 38 to quantify reconstruction quality. All the calculations were run on a desktop PC with an Intel i7-9700 CPU, 16G RAM and an Nvidia GTX 1660s GPU.
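For readers who want to reproduce this kind of quality comparison, the two metrics can be computed with standard tooling. The snippet below is a minimal illustration using scikit-image; it is our own sketch (not the authors' benchmarking code), and the array sizes and noise level are arbitrary.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def quality(reference, reconstruction):
    """PSNR (dB) and SSIM between a reference image and a reconstruction,
    both assumed to be real-valued arrays scaled to [0, 1]."""
    psnr = peak_signal_noise_ratio(reference, reconstruction, data_range=1.0)
    ssim = structural_similarity(reference, reconstruction, data_range=1.0)
    return psnr, ssim

# toy usage: compare a random "image" with a noisy copy of itself
rng = np.random.default_rng(0)
img = rng.random((128, 128))
noisy = np.clip(img + 0.05 * rng.standard_normal(img.shape), 0.0, 1.0)
print(quality(img, noisy))
```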
Coherent diffraction imaging. CDI is a representative non-interferometric phase imaging technique, and has been widely applied in physics, chemistry and biology due to its simple setup 9 .
It illuminates a target using a coherent plane wave, and records the intensity of the far-field diffraction pattern. By oversampling the diffracted light field and applying phase retrieval, both the target's amplitude and phase information can be reconstructed. Mathematically, the measurement formation of CDI is
I = |F(u)| 2 ,(2)
where u denotes the target information, and F represents the Fourier transformation that approximates the far-field diffraction.
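The following sketch illustrates how such CDI measurements can be synthesized numerically from Eq. (2): the target is zero-padded to realize the Fourier-domain oversampling, the squared modulus of its FFT is taken, and white Gaussian noise is added at a chosen SNR. The oversampling factor, SNR convention and sizes here are illustrative assumptions, not the exact settings used in the paper.

```python
import numpy as np

def synthesize_cdi(u, oversample=2, snr_db=30, rng=None):
    """Simulate CDI measurements I = |F(u)|^2 for a (possibly complex) target u.

    The target is zero-padded by `oversample` in each dimension before the FFT,
    and white Gaussian noise is added to reach the requested SNR (in dB), using
    the intensity variance as the signal-power reference (one common convention).
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = u.shape
    padded = np.zeros((oversample * h, oversample * w), dtype=complex)
    padded[:h, :w] = u                               # zero padding = Fourier oversampling
    intensity = np.abs(np.fft.fft2(padded)) ** 2
    noise_power = intensity.var() / (10 ** (snr_db / 10))
    noisy = intensity + np.sqrt(noise_power) * rng.standard_normal(intensity.shape)
    return np.maximum(noisy, 0.0)                    # clip unphysical negative intensities

# toy target: random amplitude with a random phase
rng = np.random.default_rng(1)
u = rng.random((256, 256)) * np.exp(1j * rng.random((256, 256)))
I = synthesize_cdi(u, oversample=2, snr_db=25, rng=rng)
```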
Following the above formation model, we employed a high-resolution image (1356×2040 pixels) from the DIV2K 39 dataset as the latent signal to synthesize CDI measurements. Due to the uniqueness guarantee of solution, CDI requires at least 4 times oversampling in the Fourier domain 40 . Correspondingly, we padded zeros around the image matrix to generate a 2712×4080
image. We applied the Fourier transform to the padded image and retained only its intensity as measurements. Additionally, to investigate the techniques' robustness to measurement noise, we further added different levels of white Gaussian noise (WGN) to the measurements. Table 1 presents the quantitative reconstruction evaluation of the different techniques. The results show that the CD and KAC methods failed with no convergence. This is because these techniques require a higher sampling ratio. The PLIFT and PLAMP methods do not work either, because they require matrix lifting and involve higher-dimensional matrices that exceed memory in large-scale reconstruction. The other methods, except for prDeep, obtain little improvement compared to the AP algorithm. Specifically, the WF, AF and PMAX methods even degrade due to the limited sampling ratio and noise corruption. The reconstruction of prDeep is better than the conventional algorithms, but with only 2dB enhancement on PSNR, and almost no SSIM improvement compared to AP. In contrast, LPR produces significant enhancement on reconstruction quality, with as much as 6dB and 0.29 improvement on PSNR and SSIM, respectively. Due to limited space, the detailed visual comparison of different techniques is presented in Fig. S1 (supplementary information), which coincides with the above quantitative results. We further compared these algorithms on experiment CDI data 41 , to validate their effectiveness in practical applications. The imaging sample is the live glioblastoma cell line U-87 MG. The setup includes a HeNe laser (543nm, 5mW), a dual pinhole aperture which consists of two 100 µm pinholes spaced 100 µm apart from edge to edge, a 35 mm objective lens and a CCD camera (1340×1300, 16 bits). The sequential measurements contain far-field diffraction patterns of several moments in the cell fusion process. Because the conventional algorithms obtain little improvement compared to AP and prDeep is not applicable for complex-field samples 22 , we only present the reconstruction results of AP and LPR in Fig. 2. The results show that there exist serious noise artifacts in the AP reconstruction, especially in the amplitude images. The cells are almost submerged by background noise at 0 and 135 min, and the contours and edges of cells cannot be clearly observed. In comparison, LPR produces high-fidelity results that effectively preserve fine details while attenuating measurement noise. The complete results of all the 48 moments are shown in Fig. S2 - Fig. S5 (supplementary information).
Coded diffraction pattern imaging. CDP 11 is a coded version of CDI, which introduces wavefront modulation to increase observation diversity. The strategy of multiple modulations and acquisitions enables to effectively bypass the oversampling limitation of the conventional CDI. Generally, the target light field is additionally modulated by a spatial light modulator (SLM), and the measurements after far-field Fraunhofer diffraction can be modeled as
I = |F(u ⊙ d)| 2 ,   (3)
where d represents the modulation pattern, and ⊙ denotes the Hadamard product.
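A minimal sketch of the CDP measurement formation of Eq. (3), with Gaussian-distributed modulation patterns, is given below; the number of modulations and the pattern statistics are illustrative assumptions.

```python
import numpy as np

def synthesize_cdp(u, num_patterns=5, rng=None):
    """Coded diffraction patterns I_k = |F(u ⊙ d_k)|^2 with random complex
    Gaussian modulation patterns d_k (one pattern per acquisition)."""
    rng = np.random.default_rng() if rng is None else rng
    patterns = (rng.standard_normal((num_patterns, *u.shape))
                + 1j * rng.standard_normal((num_patterns, *u.shape)))
    measurements = np.abs(np.fft.fft2(u[None, :, :] * patterns, axes=(-2, -1))) ** 2
    return measurements, patterns

# toy usage
u = np.random.default_rng(2).random((128, 128)).astype(complex)
I, d = synthesize_cdp(u, num_patterns=5)
```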
We simulated CDP measurements with five modulations and with a single modulation, respectively. The modulation patterns d follow a Gaussian distribution 11 . We employed the same image as in CDI as the ground-truth signal, and added various levels of WGN to the measurements. Table 2 presents the quantitative evaluation of different techniques under the CDP modality (5 modulations). The results show that the Wirtinger flow based techniques (WF and RWF) failed because of insufficient measurements 17 . The PLIFT and PLAMP methods are still out of memory. The other conventional methods produce either little improvement or even worse reconstruction compared to AP. Although prDeep outperforms AP, it consumes around triple the running time with high computational complexity. In comparison, the reported LPR obtains the best reconstruction performance, with as much as 8.3dB on PSNR and 0.61 on SSIM. Besides, it shares the same level of running time as AP, which maintains the highest efficiency among all the algorithms. The detailed visual comparison of the different methods is presented in Fig. S6 (supplementary information).
To further demonstrate the strong reconstruction performance of LPR, we also compared these algorithms in the case of limited sampling ratio with only a single modulation, as shown in Tab. 3 and Fig. 3. Due to extremely insufficient measurements, most of the methods failed with either no convergence or poor reconstruction quality. Under heavy measurement noise, the target information is either buried or smoothed. In contrast, the reported LPR technique enables as much as 17dB enhancement on PSNR and 0.8 improvement on SSIM. As validated by the close-ups in Fig. 3, LPR is able to retrieve fine details, even in the case of heavy measurement noise. Meanwhile, it effectively attenuates noise and artifacts, producing a smooth background.
Fourier ptychographic microscopy. FPM is a novel technique to increase an optical system's bandwidth for wide-field and high-resolution imaging. It illuminates the target with coherent light at different incident angles, and acquires corresponding images that contain information of different sub-regions of the target's spatial spectrum. Mathematically, the measurement formation model of FPM is
I = |F −1 [ P ⊙ F{u ⊙ S} ]| 2 ,   (4)
where F −1 is the inverse Fourier transform, P denotes the system's pupil function, and S represents the wave function of the incident light.
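The sketch below instantiates the FPM formation model of Eq. (4) for a single oblique plane-wave illumination, using a circular pupil as the low-pass filter. It is a simplified illustration (no camera downsampling, and the cut-off radius and grid sizes are our own choices), not the exact simulation code used in the paper.

```python
import numpy as np

def fpm_low_res(u, pupil, kx, ky):
    """One FPM measurement I = |F^-1[ P ⊙ F{u ⊙ S} ]|^2 for a tilted plane wave S.

    `u` is the high-resolution complex object, `pupil` a binary low-pass mask of the
    same size, and (kx, ky) the illumination tilt in frequency (pixel) units.
    """
    n, m = u.shape
    yy, xx = np.meshgrid(np.arange(n), np.arange(m), indexing="ij")
    S = np.exp(2j * np.pi * (kx * xx / m + ky * yy / n))      # oblique plane-wave illumination
    spectrum = np.fft.fftshift(np.fft.fft2(u * S))
    return np.abs(np.fft.ifft2(np.fft.ifftshift(pupil * spectrum))) ** 2

# illustrative circular pupil (NA cut-off) on a 256x256 grid
n = 256
fy, fx = np.meshgrid(np.arange(n) - n // 2, np.arange(n) - n // 2, indexing="ij")
pupil = (fx ** 2 + fy ** 2 <= (0.08 * n) ** 2).astype(float)  # cut-off radius is illustrative
u = np.random.default_rng(3).random((n, n)) * np.exp(1j * 0.5)
I_low = fpm_low_res(u, pupil, kx=3.0, ky=-2.0)
```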
Following the formation model, we first implemented a simulation comparison with the following setup parameters: the wavelength is 625nm, the numerical aperture (NA) of the objective lens is 0.08, the height from the light source to the target is 84.8mm, and the distance between adjacent light sources is 4mm. The pixel size of the camera is 3.4µm. Two microscopy images of blood cells 42 (2048×2048 pixels) were employed as the latent high-resolution (HR) amplitude and phase, respectively. The size of the captured low-resolution (LR) images was one fourth of the HR images. Figure 4 presents the reconstruction results of AP 3 , WF 43 and LPR. AP is sensitive to measurement noise. WF can better handle noise, but it requires high computational complexity and a long running time (more than one order of magnitude). Compared with AP, LPR obtains as much as nearly 10dB enhancement on PSNR (SNR = 10). Besides, it consumes the same order of running time as AP. The visual comparison also validates that LPR enables high-fidelity reconstruction of both amplitude and phase. Due to space limitation, we present another set of simulation results in Fig. S7 (supplementary information).
We also implemented the algorithms on experiment FPM measurements. The imaging sample is a blood smear stained by HEMA 3 Wright-Giemsa. The setup consists of a 15×15 LED array, a 2× 0.1 NA objective lens (Olympus), and a camera with 1.85µm pixel size. The central wavelength of the LEDs is 632nm, and the lateral distance between adjacent LEDs is 4mm. The LED array is placed 80mm from the sample. We captured two sets of 225 LR images that correspond to the 15×15 LEDs, respectively under 1000ms and 250ms exposure time. The reconstructed results are presented in Fig. 5, which show that AP is seriously degraded under limited exposure. Only the cell nucleus can be observed in amplitude, and other details are lost. LPR produces state-of-the-art reconstruction performance: the measurement noise is effectively removed, and the cell structure and morphology details are clearly retrieved.
Ultra-large-scale phase retrieval. In ultra-large-scale imaging applications such as 4K (4096×2160 pixels) or 8K (7680×4320 pixels), most reconstruction algorithms are not applicable due to either a very large memory requirement or an extremely long running time. Nevertheless, the reported LPR technique still works well in such applications. As a demonstration, we implemented a simulation of 8K-level CDP (5 modulations), using an 8K outer space color image as ground truth (released by NASA using the Hubble Telescope). Its spatial resolution is 7680×4320 (each color channel) with in total 33.1 million pixels. Figure 6 presents the reconstruction results of AP and LPR, with the input SNR being 5dB. The close-ups show that the result of AP is drowned out by measurement noise, leading to dimness and loss of target details. In comparison, LPR performs much better with strong robustness. Both running times lie at the minute level. Another set of 8K reconstruction results is shown in Fig. S8 (supplementary information).
Methods
Following optimization theory, the phase retrieval task can be modeled as
û = arg min_u f(u) + λ g(u),   (5)
where u denotes the target complex field to be recovered, f(u) is a data-fidelity regularizer that ensures consistency between the reconstructed result and the measurements, and g(u) is a regularizer that imposes certain statistical prior knowledge. Conventionally, Eq. (5) is solved following first-order proximal gradient methods, such as ISTA and ADMM, which are time-consuming to calculate gradients in large-scale nonlinear tasks 29 . In this work, instead, we employ the efficient generalized-alternating-projection (GAP) strategy 29 to transform Eq. (5) with fewer variables to
(û, v̂) = arg min_{u,v} (1/2) ||u − v||_2^2 + λ g(v)   s.t.   I = |Au|^2 ,   (6)
where v is an auxiliary variable balancing the data-fidelity term and the prior regularization, A denotes the measurement matrix, and I represents the measurements. The difference between the conventional ADMM and GAP optimization lies in the constraint on the measurements 29 : ADMM minimizes ||I − |Au|^2||, while GAP imposes the hard constraint I = |Au|^2 .
To tackle the large-scale phase retrieval task, we extend the efficient plug-and-play (PNP) optimization framework 25 from real space to nonlinear complex space. Fundamentally, PNP decomposes optimization into two separate sub-problems, including measurement formation and prior regularization, so as to incorporate inverse recovery solvers together with various image enhancing solvers to improve reconstruction accuracy, providing high flexibility for different applications.
Mathematically, Eq. (6) is decomposed into the following two sub-problems, to alternately update the two variables u and v.
• Updating u: given v (k) , u (k+1) is updated via a Euclidean projection of v (k) onto the manifold I = |Au|^2 as
u (k+1) = v (k) + PR( I − |A v (k)|^2 ),   (7)
where PR is the phase retrieval solver. Considering its great generalization ability on various imaging modalities and its low computational complexity, we employ the AP method as the PR solver. It alternates between the target and observation planes, allowing any available information on the variables to be incorporated, and requires only a low sampling rate.
• Updating v: given u (k+1) , v (k+1) is updated by an image enhancing solver EN as
v (k+1) = EN( u (k+1) ).   (8)
Although iterative image enhancing research has made great progress in recent years, with methods such as non-local optimization and dictionary learning 44 , these methods maintain high computational complexity for large-scale reconstruction 45 . In this work, considering its flexible and fast solution, we employed the deep learning based FFDNet 46 to tackle this sub-problem with high fidelity and self-adaptation. The neural network consists of a series of 3×3 convolution layers. Each layer is composed of a specific combination of three types of operations: convolution, rectified linear units and batch normalization. The architecture provides a balanced trade-off between noise suppression and detail fidelity. When an image is input into the network, it is first down-sampled into several sub-blocks, which then flow through the network for quality enhancement. Finally, these optimized blocks are stitched together to the original size. Such a workflow provides great generalization across different image sizes.
After initialization, the variables are updated alternately following Eq. (7) and Eq. (8).
When the intensity difference of the reconstructed image between two successive iterations is smaller than a given threshold, the iteration stops with convergence. Since both solvers PR and EN are highly efficient and flexible, the entire reconstruction maintains low computational complexity and great generalization. The demo code has been released at bianlab.github.io.
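A structural sketch of the complex-field PNP-GAP iteration for the CDP model is given below. The AP sweep plays the role of the projection solver PR in Eq. (7), and a simple Gaussian filter on amplitude and phase stands in for the FFDNet enhancing solver EN of Eq. (8); it is therefore only a schematic of the released implementation, not a reproduction of it, and all sizes and parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ap_projection(u, I, patterns, eps=1e-6):
    """One alternating-projection sweep enforcing I_k ≈ |F(u ⊙ d_k)|^2 (CDP model)."""
    estimates = []
    for d_k, I_k in zip(patterns, I):
        field = np.fft.fft2(u * d_k)
        field = np.sqrt(I_k) * np.exp(1j * np.angle(field))       # impose measured magnitudes
        estimates.append(np.conj(d_k) * np.fft.ifft2(field) / (np.abs(d_k) ** 2 + eps))
    return np.mean(estimates, axis=0)

def lpr_pnp_gap(I, patterns, shape, n_iter=50, sigma=1.0):
    """Sketch of the PNP-GAP loop: data-fidelity projection (Eq. 7) followed by a
    plug-in enhancing step (Eq. 8).  Smoothing the wrapped phase directly is crude;
    it only stands in for the learned denoiser."""
    v = np.ones(shape, dtype=complex)                              # crude initialization
    for _ in range(n_iter):
        u = ap_projection(v, I, patterns)                          # Eq. (7)
        amp = gaussian_filter(np.abs(u), sigma)                    # Eq. (8), placeholder EN
        pha = gaussian_filter(np.angle(u), sigma)
        v = amp * np.exp(1j * pha)
    return v

# toy end-to-end usage on synthetic 5-modulation CDP data
rng = np.random.default_rng(0)
truth = rng.random((64, 64)) * np.exp(1j * rng.random((64, 64)))
masks = rng.standard_normal((5, 64, 64)) + 1j * rng.standard_normal((5, 64, 64))
meas = np.abs(np.fft.fft2(truth * masks, axes=(-2, -1))) ** 2
recon = lpr_pnp_gap(meas, masks, truth.shape, n_iter=20)
```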
Conclusion and Discussion
In this work, we set out to tackle the large-scale phase retrieval problem, and reported a generalized LPR optimization technique with low computational complexity and strong robustness. It extends the efficient PNP-GAP framework from real space to nonlinear complex space, and incorporates an alternating projection solver and an enhancing neural network. As validated by extensive simulations and experiments on three different computational phase imaging modalities (CDI, CDP and FPM), LPR exhibits unique advantages in large-scale phase retrieval tasks with high fidelity and efficiency.
The LPR technique can be further extended. First, it involves multiple algorithm parameters that are currently adjusted manually. We can introduce the reinforcement learning technique 47 in our future work to automatically adjust these parameters for best performance. Second, LPR is sensitive to initialization, especially under a low sampling rate. The optimal spectral initialization 48 technique can be incorporated for stronger robustness. Third, it is interesting to investigate the influence of employing other image enhancing solvers such as super-resolution 49 and deblurring networks. This may open new insights for phase retrieval with boosted quality.
Figures and tables
Figure 1: The schematic of the reported LPR technique for large-scale phase retrieval. LPR decomposes the large-scale phase retrieval problem into two sub-problems under the PNP-GAP framework, and introduces the efficient alternating projection (AP) and enhancing network solvers for alternating optimization. The workflow realizes robust phase retrieval with low computational complexity and strong generalization on different imaging modalities.
Figure 2: Comparison of experiment results under the CDI modality 41 . A dual-pinhole aperture is illuminated by a coherent light. A live glioblastoma cell sample is imaged in a time series of diffraction patterns. The reconstructed results describe the fusion process of two glioblastoma cells forming a high-density area. The AP technique is sensitive to measurement noise, and produces unsatisfying results. The reported LPR technique enables to remove noise artifacts and preserve fine details with high fidelity.
Figure 3: Visual comparison under the CDP imaging modality (single modulation). At such a low sampling ratio with measurement noise, all the conventional algorithms produce low-contrast results. The prDeep technique also produces serious reconstruction artifacts. The reported LPR technique outperforms the other methods with much higher fidelity.
Figure 4: Comparison of simulation results under the FPM modality. The left table presents the quantitative comparison, while the right images show the visual comparison. AP suffers from poor noise robustness. WF requires high computational complexity with a longer running time (more than one order of magnitude). In contrast, LPR produces the highest reconstruction quality with as much as nearly 10dB enhancement on PSNR (SNR = 10), and consumes the same order of running time as AP.
Figure 5: Comparison of experiment results under the FPM modality. The target is a red blood cell sample prepared on a microscope slide stained with the Hema 3 stain set (Wright-Giemsa). The limited exposure results in serious measurement noise, which directly flows into the reconstruction results of AP. The WF technique outperforms AP, but it still degrades a lot under short exposure time (250ms). The reported LPR technique maintains strong robustness to measurement noise, and enables to retrieve clear cell structure and morphology details.
Figure 6: The first demonstration of ultra-large-scale phase retrieval at the 8K level (7680×4320×3 pixels). The imaging modality is CDP with 5 modulations. At such a large scale, only the AP and the reported LPR techniques still work, while the other ones fail due to high computational complexity. The results validate that LPR significantly outperforms AP with effective noise removal and detail preservation.
Table 1 also presents the running time of these techniques. Because all the other algorithms used the result of AP as initialization, we recorded the excess time as the running time of these algorithms. From the results, we can see that prDeep consumes the most running time. LPR takes the same level of running time compared to the conventional algorithms, but with significantly improved reconstruction quality.
Table 1: Quantitative comparison under the CDI modality. CD and KAC fail with no convergence. PLIFT and PLAMP are out of computer memory. Most of the conventional algorithms produce little improvement over AP. LPR outperforms the other algorithms, with as much as 6dB (SNR = 30) and 0.29 (SNR = 20) improvement on PSNR and SSIM, respectively. We use the excess time beyond AP as the other algorithms' running time, which shows that prDeep consumes the most running time. In comparison, LPR takes the same level of running time as the conventional methods.

Algorithm   SNR=20dB (PSNR / SSIM / TIME)   SNR=25dB (PSNR / SSIM / TIME)   SNR=30dB (PSNR / SSIM / TIME)
AP          18.46 / 0.50 / 819.67           21.75 / 0.58 / 854.37           22.29 / 0.65 / 863.14
WF          19.05 / 0.52 / +27.15           20.84 / 0.62 / +31.98           21.27 / 0.70 / +32.41
RWF         18.52 / 0.50 / +25.69           21.98 / 0.61 / +27.53           22.41 / 0.71 / +27.98
AF          16.55 / 0.42 / +28.61           19.63 / 0.49 / +29.74           19.83 / 0.54 / +27.29
TAF         18.57 / 0.53 / +26.04           21.81 / 0.59 / +25.99           22.30 / 0.65 / +26.49
RAF         18.52 / 0.53 / +22.55           21.79 / 0.58 / +21.80           22.27 / 0.65 / +22.19
PLIFT       memory limitation               memory limitation               memory limitation
PLAMP       memory limitation               memory limitation               memory limitation
PMAX        16.64 / 0.42 / +38.48           19.73 / 0.49 / +39.04           19.97 / 0.54 / +38.11
CD          no convergence                  no convergence                  no convergence
KAC         no convergence                  no convergence                  no convergence
prDeep      20.60 / 0.52 / +49.01           21.83 / 0.58 / +43.36           23.33 / 0.65 / +35.46
LPR         23.30 / 0.79 / +28.52           25.52 / 0.83 / +29.97           28.11 / 0.86 / +27.19
Table 2: Quantitative comparison under the CDP modality (5 modulations). The Wirtinger flow based techniques (WF, RWF) fail because of insufficient measurements. PLIFT and PLAMP are out of memory. The other methods produce little improvement or consume extremely long running time compared to AP. In comparison, LPR consumes the same level of running time as AP, and obtains the best performance with as much as 8.3dB on PSNR (SNR = 15) and 0.61 on SSIM (SNR = 10).

Algorithm   SNR=10dB (PSNR / SSIM / TIME)   SNR=15dB (PSNR / SSIM / TIME)   SNR=20dB (PSNR / SSIM / TIME)
AP          15.60 / 0.21 / 105.76           18.61 / 0.33 / 110.73           23.22 / 0.55 / 174.98
WF          insufficient measurements       insufficient measurements       insufficient measurements
RWF         insufficient measurements       insufficient measurements       insufficient measurements
AF          13.93 / 0.19 / 247.07           17.84 / 0.33 / 231.38           23.13 / 0.60 / 211.39
TAF         13.40 / 0.16 / 257.57           18.14 / 0.34 / 225.67           22.71 / 0.59 / 213.65
RAF         13.88 / 0.19 / 261.59           17.86 / 0.38 / 222.38           23.10 / 0.59 / 212.09
PLIFT       memory limitation               memory limitation               memory limitation
PLAMP       memory limitation               memory limitation               memory limitation
PMAX        11.08 / 0.13 / 295.84           11.36 / 0.14 / 300.21           11.66 / 0.15 / 296.28
CD          8.69 / 0.22 / 357.52            9.47 / 0.20 / 321.81            9.78 / 0.20 / 264.89
KAC         10.83 / 0.13 / 192.44           10.97 / 0.15 / 161.48           11.01 / 0.16 / 114.75
prDeep      22.67 / 0.61 / 301.41           24.42 / 0.72 / 282.14           26.85 / 0.76 / 380.60
LPR         22.73 / 0.82 / 124.80           26.92 / 0.88 / 137.33           31.89 / 0.94 / 228.42
Table 3: Quantitative comparison under the CDP modality (single modulation). Most of the conventional algorithms fail with either no convergence or poor reconstruction quality because of extremely insufficient measurements. In comparison, LPR still obtains the best reconstruction quality, with more than 17dB improvement on PSNR and nearly 0.8 on SSIM (SNR=20).

Algorithm   SNR=10dB (PSNR / SSIM / TIME)   SNR=15dB (PSNR / SSIM / TIME)   SNR=20dB (PSNR / SSIM / TIME)
AP          11.71 / 0.08 / 13.96            12.82 / 0.09 / 13.55            13.02 / 0.10 / 13.34
WF          insufficient measurements       insufficient measurements       insufficient measurements
RWF         insufficient measurements       insufficient measurements       insufficient measurements
AF          10.47 / 0.08 / 24.61            10.53 / 0.08 / 23.73            10.82 / 0.09 / 23.36
TAF         10.52 / 0.08 / 24.05            10.93 / 0.07 / 24.21            11.02 / 0.08 / 23.09
RAF         10.38 / 0.06 / 26.17            10.43 / 0.07 / 25.83            10.78 / 0.08 / 25.82
PLIFT       memory limitation               memory limitation               memory limitation
PLAMP       memory limitation               memory limitation               memory limitation
PMAX        insufficient measurements       insufficient measurements       insufficient measurements
CD          insufficient measurements       insufficient measurements       insufficient measurements
KAC         insufficient measurements       insufficient measurements       insufficient measurements
prDeep      18.29 / 0.39 / 153.41           19.21 / 0.54 / 142.34           23.92 / 0.68 / 104.84
LPR         21.11 / 0.81 / 77.80            25.64 / 0.87 / 81.51            30.10 / 0.89 / 62.89
Competing Interests
The authors declare no competing financial interests.
Nano-optic endoscope for high-resolution optical coherence tomography in vivo. H Pahlevaninezhad, Nat. Photonics. 12Pahlevaninezhad, H. et al. Nano-optic endoscope for high-resolution optical coherence to- mography in vivo. Nat. Photonics 12, 540-547 (2018).
High-resolution multimodal flexible coherent Raman endoscope. A Lombardini, Light: Sci. Appl. 7Lombardini, A. et al. High-resolution multimodal flexible coherent Raman endoscope. Light: Sci. Appl. 7, 1-8 (2018).
Wide-field, high-resolution Fourier ptychographic microscopy. G Zheng, R Horstmeyer, C Yang, Nat. Photonics. 7Zheng, G., Horstmeyer, R. & Yang, C. Wide-field, high-resolution Fourier ptychographic microscopy. Nat. Photonics 7, 739-745 (2013).
Video-rate imaging of biological dynamics at centimetre scale and micrometre resolution. J Fan, Nat. Photonics. 13Fan, J. et al. Video-rate imaging of biological dynamics at centimetre scale and micrometre resolution. Nat. Photonics 13, 809-816 (2019).
Space-time coding MIMO-OFDM SAR for high-resolution imaging. W.-Q Wang, IEEE T. Geosci. Remote. 49Wang, W.-Q. Space-time coding MIMO-OFDM SAR for high-resolution imaging. IEEE T. Geosci. Remote 49, 3094-3104 (2011).
Multiscale gigapixel photography. D J Brady, Nature. 486Brady, D. J. et al. Multiscale gigapixel photography. Nature 486, 386-389 (2012).
Computational out-of-focus imaging increases the space-bandwidth product in lens-based coherent microscopy. H Wang, Optica. 3Wang, H. et al. Computational out-of-focus imaging increases the space-bandwidth product in lens-based coherent microscopy. Optica 3, 1422-1429 (2016).
Plug-and-play algorithms for large-scale snapshot compressive imaging. X Yuan, Y Liu, J Suo, Q Dai, Conference on Computer Vision and Pattern Recognition (CVPR). Yuan, X., Liu, Y., Suo, J. & Dai, Q. Plug-and-play algorithms for large-scale snapshot com- pressive imaging. In Conference on Computer Vision and Pattern Recognition (CVPR), 1447- 1457 (2020).
Phase retrieval with application to optical imaging: a contemporary overview. Y Shechtman, IEEE Signal Proc. Mag. 32Shechtman, Y. et al. Phase retrieval with application to optical imaging: a contemporary overview. IEEE Signal Proc. Mag. 32, 87-109 (2015).
Extending the methodology of X-ray crystallography to allow imaging of micrometre-sized non-crystalline specimens. J Miao, P Charalambous, J Kirz, D Sayre, Nature. 400Miao, J., Charalambous, P., Kirz, J. & Sayre, D. Extending the methodology of X-ray crys- tallography to allow imaging of micrometre-sized non-crystalline specimens. Nature 400, 342-344 (1999).
Phase retrieval from coded diffraction patterns. E J Candes, X Li, M Soltanolkotabi, Appl. Comput. Harmon. A. 39Candes, E. J., Li, X. & Soltanolkotabi, M. Phase retrieval from coded diffraction patterns. Appl. Comput. Harmon. A. 39, 277-299 (2015).
Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations. O Katz, P Heidmann, M Fink, S Gigan, Nat. Photonics. 8Katz, O., Heidmann, P., Fink, M. & Gigan, S. Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations. Nat. Photonics 8, 784-790 (2014).
A practical algorithm for the determination of phase from image and diffraction plane pictures. R W Gerchberg, Optik. 35Gerchberg, R. W. A practical algorithm for the determination of phase from image and diffrac- tion plane pictures. Optik 35, 237-246 (1972).
Phase retrieval algorithms: a comparison. J R Fienup, Appl. Optics. 21Fienup, J. R. Phase retrieval algorithms: a comparison. Appl. Optics 21, 2758-2769 (1982).
Phaselift: Exact and stable signal recovery from magnitude measurements via convex programming. E J Candes, T Strohmer, V Voroninski, Commun. Pur. Appl. Math. 66Candes, E. J., Strohmer, T. & Voroninski, V. Phaselift: Exact and stable signal recovery from magnitude measurements via convex programming. Commun. Pur. Appl. Math. 66, 1241-1274 (2013).
. L Vandenberghe, S Boyd, Semidefinite Programming, Rev, 38Vandenberghe, L. & Boyd, S. Semidefinite programming. SIAM Rev. 38, 49-95 (1996).
Phase retrieval via Wirtinger flow: Theory and algorithms. E J Candes, X Li, M Soltanolkotabi, IEEE T. Inform. Theory. 61Candes, E. J., Li, X. & Soltanolkotabi, M. Phase retrieval via Wirtinger flow: Theory and algorithms. IEEE T. Inform. Theory 61, 1985-2007 (2015).
Solving random quadratic systems of equations is nearly as easy as solving linear systems. Y Chen, E Candes, International Conference on Neural Information Processing Systems (NIPS). Chen, Y. & Candes, E. Solving random quadratic systems of equations is nearly as easy as solving linear systems. In International Conference on Neural Information Processing Systems (NIPS), 739-747 (2015).
Coordinate descent algorithms for phase retrieval. Signal Process. W.-J Zeng, H.-C So, 169107418Zeng, W.-J. & So, H.-C. Coordinate descent algorithms for phase retrieval. Signal Process. 169, 107418 (2020).
Phase retrieval from noisy data based on sparse approximation of object phase and amplitude. V Katkovnik, arXiv:1709.01071arXiv preprintKatkovnik, V. Phase retrieval from noisy data based on sparse approximation of object phase and amplitude. arXiv preprint arXiv:1709.01071 (2017).
Compressive phase retrieval based on BM3D denoising. C A Metzler, A Maleki, R G Baraniuk, Bm3d-Prgamp, International Conference on Image Processing (ICIP). IEEEMetzler, C. A., Maleki, A. & Baraniuk, R. G. BM3D-PRGAMP: Compressive phase retrieval based on BM3D denoising. In International Conference on Image Processing (ICIP), 2504- 2508 (IEEE, 2016).
prDeep: robust phase retrieval with a flexible deep network. C Metzler, P Schniter, A Veeraraghavan, International Conference on Machine Learning (ICML). PMLRMetzler, C., Schniter, P., Veeraraghavan, A. et al. prDeep: robust phase retrieval with a flexible deep network. In International Conference on Machine Learning (ICML), 3501-3510 (PMLR, 2018).
Phase recovery and holographic image reconstruction using deep learning in neural networks. Y Rivenson, Y Zhang, H Günaydın, D Teng, A Ozcan, Light: Sci. Appl. 7Rivenson, Y., Zhang, Y., Günaydın, H., Teng, D. & Ozcan, A. Phase recovery and holographic image reconstruction using deep learning in neural networks. Light: Sci. Appl. 7, 17141- 17141 (2018).
Ptychnet: CNN based Fourier ptychography. A Kappeler, S Ghosh, J Holloway, O Cossairt, A Katsaggelos, International Conference on Image Processing (ICIP). IEEEKappeler, A., Ghosh, S., Holloway, J., Cossairt, O. & Katsaggelos, A. Ptychnet: CNN based Fourier ptychography. In International Conference on Image Processing (ICIP), 1712-1716 (IEEE, 2017).
Plug-and-play priors for model based reconstruction. S V Venkatakrishnan, C A Bouman, B Wohlberg, Global Conference on Signal and Information Processing (GlobalSIP). IEEEVenkatakrishnan, S. V., Bouman, C. A. & Wohlberg, B. Plug-and-play priors for model based reconstruction. In Global Conference on Signal and Information Processing (GlobalSIP), 945-948 (IEEE, 2013).
Generalized alternating projection for weighted-2,1 minimization with applications to model-based compressive sensing. X Liao, H Li, L Carin, SIAM J. Imaging Sci. 7Liao, X., Li, H. & Carin, L. Generalized alternating projection for weighted-2,1 minimization with applications to model-based compressive sensing. SIAM J. Imaging Sci. 7, 797-823 (2014).
Generalized alternating projection based total variation minimization for compressive sensing. X Yuan, International Conference on Image Processing (ICIP). IEEEYuan, X. Generalized alternating projection based total variation minimization for compres- sive sensing. In International Conference on Image Processing (ICIP), 2539-2543 (IEEE, 2016).
A new TwIST: Two-step iterative shrinkage/thresholding algorithms for image restoration. J M Bioucas-Dias, M A Figueiredo, IEEE T. Image Process16Bioucas-Dias, J. M. & Figueiredo, M. A. A new TwIST: Two-step iterative shrink- age/thresholding algorithms for image restoration. IEEE T. Image Process. 16, 2992-3004 (2007).
Rank minimization for snapshot compressive imaging. Y Liu, X Yuan, J Suo, D J Brady, Q Dai, IEEE T. Pattern Anal. 41Liu, Y., Yuan, X., Suo, J., Brady, D. J. & Dai, Q. Rank minimization for snapshot compressive imaging. IEEE T. Pattern Anal. 41, 2990-3006 (2018).
Phasemax: Convex phase retrieval via basis pursuit. T Goldstein, C Studer, IEEE T. Inform. Theory. 64Goldstein, T. & Studer, C. Phasemax: Convex phase retrieval via basis pursuit. IEEE T. Inform. Theory 64, 2675-2689 (2018).
Phase retrieval via linear programming: Fundamental limits and algorithmic improvements. O Dhifallah, C Thrampoulidis, Y M Lu, Annual Allerton Conference on Communication, Control, and Computing. AllertonIEEEDhifallah, O., Thrampoulidis, C. & Lu, Y. M. Phase retrieval via linear programming: Funda- mental limits and algorithmic improvements. In Annual Allerton Conference on Communica- tion, Control, and Computing (Allerton), 1071-1077 (IEEE, 2017).
Phase retrieval via reweighted Wirtinger flow. Z Yuan, H Wang, Appl. Optics. 56Yuan, Z. & Wang, H. Phase retrieval via reweighted Wirtinger flow. Appl. Optics 56, 2418- 2427 (2017).
Solving systems of random quadratic equations via truncated amplitude flow. G Wang, G B Giannakis, Y C Eldar, IEEE T. Inform. Theory. 64Wang, G., Giannakis, G. B. & Eldar, Y. C. Solving systems of random quadratic equations via truncated amplitude flow. IEEE T. Inform. Theory 64, 773-794 (2017).
Phase retrieval via reweighted amplitude flow. G Wang, G B Giannakis, Y Saad, J Chen, IEEE T. Signal Proces. 66Wang, G., Giannakis, G. B., Saad, Y. & Chen, J. Phase retrieval via reweighted amplitude flow. IEEE T. Signal Proces. 66, 2818-2833 (2018).
W.-J Zeng, H.-C So, arXiv:1706.03474Coordinate descent algorithms for phase retrieval. arXiv preprintZeng, W.-J. & So, H.-C. Coordinate descent algorithms for phase retrieval. arXiv preprint arXiv:1706.03474 (2017).
Solving systems of phaseless equations via Kaczmarz methods: A proof of concept study. K Wei, Inverse Probl. 31125008Wei, K. Solving systems of phaseless equations via Kaczmarz methods: A proof of concept study. Inverse Probl. 31, 125008 (2015).
Phasepack: A phase retrieval library. R Chandra, T Goldstein, C Studer, International conference on Sampling Theory and Applications (SampTA). IEEEChandra, R., Goldstein, T. & Studer, C. Phasepack: A phase retrieval library. In International conference on Sampling Theory and Applications (SampTA), 1-5 (IEEE, 2019).
Image quality assessment: from error visibility to structural similarity. Z Wang, A C Bovik, H R Sheikh, E P Simoncelli, IEEE T. Image Process. 13Wang, Z., Bovik, A. C., Sheikh, H. R. & Simoncelli, E. P. Image quality assessment: from error visibility to structural similarity. IEEE T. Image Process. 13, 600-612 (2004).
Ntire 2017 challenge on single image super-resolution: Dataset and study. E Agustsson, R Timofte, Conference on Computer Vision and Pattern Recognition (CVPR). Agustsson, E. & Timofte, R. Ntire 2017 challenge on single image super-resolution: Dataset and study. In Conference on Computer Vision and Pattern Recognition (CVPR), 126-135 (2017).
Beyond crystallography: Diffractive imaging using coherent X-ray light sources. J Miao, T Ishikawa, I K Robinson, M M Murnane, Science. 348Miao, J., Ishikawa, T., Robinson, I. K. & Murnane, M. M. Beyond crystallography: Diffractive imaging using coherent X-ray light sources. Science 348, 530-535 (2015).
Y H Lo, situ coherent diffractive imaging. 9Lo, Y. H. et al. In situ coherent diffractive imaging. Nat. Commun. 9, 1-10 (2018).
Choksawatdikorn. Blood cells under microscope view for histology education.
43. Bian, L. et al. Fourier ptychographic reconstruction using Wirtinger flow optimization. Opt. Express 23, 4856-4866 (2015).
44. Elad, M. & Aharon, M. Image denoising via sparse and redundant representations over learned dictionaries. IEEE T. Image Process. 15, 3736-3745 (2006).
45. Zhang, K., Zuo, W., Chen, Y., Meng, D. & Zhang, L. Beyond a gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE T. Image Process. 26, 3142-3155 (2017).
46. Zhang, K., Zuo, W. & Zhang, L. FFDNet: Toward a fast and flexible solution for CNN-based image denoising. IEEE T. Image Process. 27, 4608-4622 (2018).
47. Wei, K. et al. Tuning-free plug-and-play proximal algorithm for inverse imaging problems. In International Conference on Machine Learning (ICML), 10158-10169 (PMLR, 2020).
48. Luo, W., Alghamdi, W. & Lu, Y. M. Optimal spectral initialization for signal recovery with applications to phase retrieval. IEEE T. Signal Proces. 67, 2347-2356 (2019).
49. Wang, Z., Chen, J. & Hoi, S. C. Deep learning for image super-resolution: A survey. IEEE T. Pattern Anal. (2020).
| []
|
[
"Four-qubit Systems and Dyonic Black Hole-Black Branes in Superstring Theory",
"Four-qubit Systems and Dyonic Black Hole-Black Branes in Superstring Theory"
]
| [
"A Belhaj \nDépartement de Physique\nFaculté Polydisciplinaire\nLIRST\nUniversité Sultan Moulay Slimane\n\n\nDepartamento de Física Téorica\nUniversidad de Zaragoza, E-50009-ZaragozaSpain\n",
"M Bensed \nDépartement de Physique, LabSIMO\nFaculté des Sciences\nUniversité Ibn Tofail Kénitra\nMorocco\n",
"Z Benslimane \nDépartement de Physique, LabSIMO\nFaculté des Sciences\nUniversité Ibn Tofail Kénitra\nMorocco\n",
"M B Sedra \nDépartement de Physique, LabSIMO\nFaculté des Sciences\nUniversité Ibn Tofail Kénitra\nMorocco\n",
"A Segui \nDepartamento de Física Téorica\nUniversidad de Zaragoza, E-50009-ZaragozaSpain\n",
"MoroccoBéni Mellal "
]
| [
"Département de Physique\nFaculté Polydisciplinaire\nLIRST\nUniversité Sultan Moulay Slimane\n",
"Departamento de Física Téorica\nUniversidad de Zaragoza, E-50009-ZaragozaSpain",
"Département de Physique, LabSIMO\nFaculté des Sciences\nUniversité Ibn Tofail Kénitra\nMorocco",
"Département de Physique, LabSIMO\nFaculté des Sciences\nUniversité Ibn Tofail Kénitra\nMorocco",
"Département de Physique, LabSIMO\nFaculté des Sciences\nUniversité Ibn Tofail Kénitra\nMorocco",
"Departamento de Física Téorica\nUniversidad de Zaragoza, E-50009-ZaragozaSpain"
]
| []
| Using dyonic solutions in the type IIA superstring theory on Calabi-Yau manifolds, we reconsider the study of black objects and quantum information theory using string/string duality in six dimensions. Concretely, we relate four-qubits with a stringy quaternionic moduli space of type IIA compactification associated with a dyonic black solution formed by black holes (BH) and black 2-branes (B2B) carrying 8 electric charges and 8 magnetic charges. This connection is made by associating the cohomology classes of the heterotic superstring on T 4 to four-qubit states. These states are interpreted in terms of such dyonic charges resulting from the quaternionic symmetric space SO(4,4)SO(4)×SO(4) corresponding to a N = 4 sigma model superpotential in two dimensions. The superpotential is considered as a functional depending on four quaternionic fields mapped to a class of Clifford algebras denoted as Cl 0,4 . A link between such an algebra and the cohomology classes of T 4 in heterotic superstring theory is also given. | 10.1142/s0219887818500652 | [
"https://arxiv.org/pdf/1705.02811v2.pdf"
]
| 119,028,834 | 1705.02811 | bab02539e9511b3139aa43a9e212bbbda92f5bc0 |
Four-qubit Systems and Dyonic Black Hole-Black Branes in Superstring Theory
1 Dec 2017
A Belhaj
Département de Physique
Faculté Polydisciplinaire
LIRST
Université Sultan Moulay Slimane
Departamento de Física Téorica
Universidad de Zaragoza, E-50009-ZaragozaSpain
M Bensed
Département de Physique, LabSIMO
Faculté des Sciences
Université Ibn Tofail Kénitra
Morocco
Z Benslimane
Département de Physique, LabSIMO
Faculté des Sciences
Université Ibn Tofail Kénitra
Morocco
M B Sedra
Département de Physique, LabSIMO
Faculté des Sciences
Université Ibn Tofail Kénitra
Morocco
A Segui
Departamento de Física Téorica
Universidad de Zaragoza, E-50009 Zaragoza, Spain
Béni Mellal, Morocco
Four-qubit Systems and Dyonic Black Hole-Black Branes in Superstring Theory
1 Dec 2017
Keywords: Qubit information systems, superstring theory, string/string duality, quaternionic manifolds
Using dyonic solutions in the type IIA superstring theory on Calabi-Yau manifolds, we reconsider the study of black objects and quantum information theory using string/string duality in six dimensions. Concretely, we relate four-qubits with a stringy quaternionic moduli space of type IIA compactification associated with a dyonic black solution formed by black holes (BH) and black 2-branes (B2B) carrying 8 electric charges and 8 magnetic charges. This connection is made by associating the cohomology classes of the heterotic superstring on T 4 to four-qubit states. These states are interpreted in terms of such dyonic charges resulting from the quaternionic symmetric space SO(4,4)SO(4)×SO(4) corresponding to a N = 4 sigma model superpotential in two dimensions. The superpotential is considered as a functional depending on four quaternionic fields mapped to a class of Clifford algebras denoted as Cl 0,4 . A link between such an algebra and the cohomology classes of T 4 in heterotic superstring theory is also given.
Extremal black branes have been extensively studied in the framework of superstring theory on the Calabi-Yau (CY) manifolds [1,2,3,4]. These black solutions have been approached by exploring the attractor mechanism and the topological string theory [5,6,7,8,9]. In the attractor mechanism scenario, the scalars could be fixed in terms of the black brane charges by extremising the associated potential with respect to the stringy moduli obtained from the superstring theory compactified on the Calabi-Yau manifolds. Moreover, the corresponding entropy functions have been computed using the string duality symmetries acting on the invariant black brane charges. In this way, several Calabi-Yau compactifications have been examined producing various results dealing with black objects in type II superstrings using D-brane physics [10,11].
The black objects, embedded in superstring theory compactifications, can be connected to quantum information theory using the qubit analysis [12][13][14][15][16][17][18][19][20][21][22][23][24]. Precisely, a fascinating correspondence has been discovered between quantum information theory and superstring theory.
The main obtained relations are between the entropy formulas for specific black hole solutions in supergravity theories and entanglement measures for certain multiqubit systems [17,18]. Alternative studies have been conducted using toric geometry and graph theory [24,25,26,27].
The underlying idea is a link between the N = 2 STU black hole charges and three-qubit states which has been established in [15,16]. Furthermore, the analysis based on three-qubits has been developed to describe the structure of extremal black hole solutions in terms of four-qubit systems [12,19,22,23,28]. In all the works on four-qubits, one uses the complex geometry to deal with the corresponding black hole entropy. For more details, we refer to [22,29].
The main goal of this work is to contribute to these activities by approaching four-qubit systems using string dualities and a quaternionic description of stringy moduli spaces. Concretely, we reconsider the investigation of black objects and quantum information theory using string/string duality in the context of dyonic solutions in type II superstring compactifications on Calabi-Yau manifolds. This may offer a new take on the moduli space of black objects and quantum information theory. More precisely, we link four-qubits with a stringy quaternionic moduli space of type IIA superstring compactification associated with a dyonic black solution formed by black holes (BH) and black 2-branes (B2B), referred to as the (0, 2) dyonic object, with eight electric charges and eight magnetic charges, producing sixteen charges in total. This connection is made by associating the cohomology classes of the heterotic superstring on T 4 to four-qubit states. These states can be interpreted in terms of charges of such dyonic solutions resulting from the quaternionic symmetric space SO(4,4)/(SO(4)×SO(4)), corresponding to a superpotential of an N = 4 sigma model in two dimensions. The superpotential has been considered as a functional depending on quaternionic fields related to a class of the Clifford algebras developed in [30]. This algebra, denoted as Cl 0,4 , provides a link with the cohomology classes of T 4 in the heterotic superstring compactification.
The organization of the paper is as follows. Section 2 is a concise review on the study of dyonic solutions in type II superstrings compactified on n-dimensional Calabi-Yau manifolds.
We emphasize the black solutions carrying charges in 10 − 2n dimensions sharing electric and magnetic charge dualities. A classification according to black brane dimensions results in two dyonic solutions which can be generalized to arbitrary dimension. Section 3 contains a link between four-qubit systems and a quaternionic geometry considered as a reduction of the moduli space of the heterotic superstring compactified on T 4 , which is dual to the type IIA superstring on the K3 surface. This moduli space will be considered as the moduli space of a dyonic BH-B2B object with 8 electric charges and 8 magnetic charges. In section 4, the four-qubit states are related to dyonic charges living in the moduli space SO(4,4)/(SO(4)×SO(4)). In section 5, we claim that the usual decomposition SO(4) × SO(4) −→ SU(2) × SU(2) × SU(2) × SU(2) results in four quaternionic fields that can be interpreted in terms of four-qubit states. We suggest that these fields can produce a quaternionic superpotential associated with an N = 4 sigma model in two dimensions [31]. Precisely, we regard the superpotential as an element of a particular class of the Clifford algebras denoted as Cl 0,4 . We end in Section 6 with some discussions and open questions.
Dyonic solutions in type II superstrings
We start by reconsidering the study of dyonic solutions in type II superstrings compactified on CY manifolds, which we will need later. It is recalled that an n-dimensional CY manifold (CY n-fold) is a complex manifold with a Kähler structure. It involves a global non-vanishing holomorphic n-form, which is equivalent to a Kähler manifold with vanishing first Chern class c 1 = 0, as required by the SU(n) holonomy group [32]. The superstring compactification on such manifolds preserves only 1/2^{n−1} of the ten-dimensional supercharges. It has been remarked that each manifold is associated with a Hodge diagram playing an important rôle in the determination of the superstring theory spectrum in 10 − 2n dimensions [32,33,34,35]. To have a general idea of such data, we list the Hodge diagrams of the T 2 torus, the K3 surface and the Calabi-Yau threefold:
n = 1 (T 2):
          1
        1   1
          1

n = 2 (K3 surface):
          1
        0   0
      1  20   1
        0   0
          1

n = 3 (Calabi-Yau threefold):
            1
          0   0
        0  h^{1,1}  0
      1  h^{2,1}  h^{1,2}  1
        0  h^{1,1}  0
          0   0
            1

The resulting black objects live in 10 − 2n dimensions. In this way, the near horizon of these black objects is usually defined by the product of AdS spaces and spheres as follows
AdS_{p+2} × S^{8−2n−p} ,   (2.1)
where p is the internal dimension of the black brane. n and p verify the following constraint
2 ≤ 8 − 2n − p. (2.2)
In the compactified theory living in 10 − 2n dimensions, the electric/magnetic duality linking a p-dimensional electric black brane to a q-dimensional magnetic one is assured by the constraint
p + q = 6 − 2n. (2.3)
A priori, this equation can be solved in different ways according to the (p, q) couple values.
The solution can be classified as follows
• (p, q) = (0, 6 − 2n), describing an electrically charged black hole (BH);
• (p, q) = (3 − n, 3 − n), describing dyonic black branes (DB(3 − n)B);
• (p, q) ≠ (0, 6 − 2n) and (p, q) ≠ (3 − n, 3 − n), describing black objects like strings, membranes and higher-dimensional branes.
However, a closer inspection shows that we have two kinds of dyonic solutions carrying electric and magnetic charges, obtained by considering objects like doublets. They are listed as follows: 1. a doublet solution (3 − n, 3 − n) consisting of the same object, associated with
p = q = 3 − n. (2.4)
2. a doublet solution (p, 6 − 2n − p) consisting of an electrically charged black object and its magnetic dual, corresponding to
p,   q = 6 − 2n − p,   p ≠ 3 − n.   (2.5)
(An explicit enumeration of these cases for n = 1, 2, 3 is sketched below.)
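As a quick illustration of this classification, the short script below enumerates, for n = 1, 2, 3, the (p, q) pairs allowed by the duality condition (2.3) together with the near-horizon constraint (2.2), and separates the self-dual doublets from the electric/magnetic ones. It is a plain counting exercise added here for illustration, not part of the original analysis.

```python
# Enumerate electric/magnetic dual pairs allowed by p + q = 6 - 2n, Eq. (2.3),
# together with the horizon constraint 2 <= 8 - 2n - p, Eq. (2.2).
for n in (1, 2, 3):
    pairs = [(p, 6 - 2 * n - p)
             for p in range(0, 7 - 2 * n)
             if 6 - 2 * n - p >= 0 and 2 <= 8 - 2 * n - p]
    self_dual = [(p, q) for (p, q) in pairs if p == q]      # the (3-n, 3-n) solutions
    doublets = [(p, q) for (p, q) in pairs if p != q]       # the (p, 6-2n-p) doublets
    print(f"n={n}: self-dual {self_dual}, electric/magnetic doublets {doublets}")
# For n=2 this reproduces the (0, 2) BH-B2B doublet and the (1, 1) dyonic string.
```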
The last solution is considered as a single object sharing similar features with the usual dyonic solution described by the same object. We believe that one could build it in any dimension.
To see that, let us consider a model obtained by the compactification of the type IIA superstring on the K3 surface, characterized by h 1,0 = h 0,1 = h 2,1 = h 1,2 = 0. Indeed, it is recalled that the type IIA superstring perturbative bosonic massless sector contains the following fields
NS-NS :  g_{MN} ,  B_{MN} ,  φ ;      R-R :  A_M ,  C_{MNK}    (2.6)
where M, N, K = 0, . . . , 9. This compactification produces a N = 2 supergravity in six dimensions with the following bosonic spectrum
g µν , B µν , φ, A µ , C µνρ , C µij , φ α , α = 1, . . . , 80. (2.7)
In this spectrum, g µν is the six dimensional graviton metric, B µν and C µνρ are the six dimensional antisymmetric gauge fields. The field A µ and C µij represent the gravi-photon and Maxwell gauge fields in six dimensions. These fields are obtained from the compactification of C µνρ on the real 2-cycles of the K3 surface. Since C µνρ is dual to a vector in six dimensions, the theory has an U (1) 24 abelian gauge symmetry. Besides the antisymmetric gauge field B µν , the total gauge symmetry reads
G = G 1 × G 2 = U (1) 24 × U (1) (2.8)
associated with one and two-form gauge fields, respectively. In six dimensions, these gauge fields are coupled to the scalar fields. In addition to the dilaton φ in six dimensions, there are 80 scalar fields φ α which can be arranged to form the moduli space of type IIA superstring on the K3 surface. The latter can be viewed as a scalar manifold of half-maximal, non-chiral Type IIA supergravity in six dimensions, coupled to 20 vector multiplets, which reads as
SO(4, 20)/(SO(4) × SO(20)) × SO(1, 1).   (2.9)
It has been shown that the first factor SO(4,20)/(SO(4)×SO(20)) represents the geometric deformations of the K3 surface in the presence of the antisymmetric B-field of the NS-NS sector, and that it is linked to the symmetry group G 1 = U(1)^{24}. However, the second factor SO(1, 1) represents the dilaton scalar field, which is associated with G 2 = U(1) [35,36]. It has been remarked that the space (2.9) is related to the electric/magnetic duality assured by the condition (2.3).
Having discussed the compactification of the type IIA superstring on the K3 surface, we would like to relate the corresponding black objects and quantum information theory by combining string/string duality and the non-trivial cycles appearing in the toroidal compactification of the heterotic superstring on T 4 . A special emphasis is put on four-qubit systems, which will be linked with a particular quaternionic geometry considered as a subspace of the one appearing in (2.9).
3 Four-qubit systems and dyonic solutions on the symmetric space SO(4,4)/(SO(4)×SO(4))
Several structural similarities between quantum information theory and superstring theory have been established, forming the so-called black hole qubit correspondence (BHQC). The first mapping was between the entropy formulae of certain black holes and the entanglement measures of qubit systems using Cayley's hyperdeterminant [37,38,39]. In particular, it has been shown that the square root of Cayley's hyperdeterminant is linked to the eight charges of extremal black holes in the STU model by the entropy formula as follows
S = π √|Det a_{ABC}| = (π/2) √τ_{ABC} .   (3.1)
Here τ_{ABC} and S are the 3-tangle measure and the black hole entropy, respectively. Furthermore, it is worth recalling that Cayley's hyperdeterminant, denoted Det A, is defined as
Det A ≡ − (1/2) ε^{A_1 A_3} ε^{A_2 A_4} ε^{B_1 B_2} ε^{B_3 B_4} ε^{C_1 C_2} ε^{C_3 C_4} a_{A_1 B_1 C_1} a_{A_2 B_2 C_2} a_{A_3 B_3 C_3} a_{A_4 B_4 C_4} .   (3.2)
It is not hard to see that this is a homogeneous quartic polynomial that admits an interesting physical interpretation in terms of the STU black hole charges embedded in type II superstrings.
Precisely, the solution of the STU black hole, in the case of spherical symmetry, is given in terms of 8 charges (q 0 , q 1 , q 2 , q 3 , p 0 , p 1 , p 2 , p 3 ). In this way, the square of the extremal STU black hole entropy is proportional to a quartic polynomial of q 0 , q 1 , q 2 , q 3 , p 0 , p 1 , p 2 and p 3 [15,16]
S 2 = π 2 {−(p 0 q 0 + p 1 q 1 + p 2 q 2 + p 3 q 3 ) 2 + 4((p 1 q 1 )(p 2 q 2 ) + (p 1 q 1 )(p 3 q 3 ) + (p 3 q 3 )(p 2 q 2 ) + q 0 p 1 p 2 p 3 − p 0 q 1 q 2 q 3 )}. (3.3)
Under a suitable mapping, the eight charges of the STU black hole correspond to the states of a three-qubit system as follows [47]
(q_0 , q_1 , q_2 , q_3 , p_0 , p_1 , p_2 , p_3) ←→ (a_{000} , −a_{001} , −a_{010} , −a_{100} , a_{111} , a_{110} , a_{101} , a_{011}) .
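As a numerical sanity check of Eqs. (3.1)-(3.3) and of the charge dictionary above, the sketch below evaluates Cayley's hyperdeterminant through the ε-contraction of Eq. (3.2) and compares π√|Det a| with the entropy computed directly from the quartic charge invariant, for an illustrative BPS charge configuration of our own choosing.

```python
import numpy as np

eps = np.array([[0.0, 1.0], [-1.0, 0.0]])   # 2x2 Levi-Civita symbol

def cayley_hyperdet(a):
    """Cayley's hyperdeterminant of a 2x2x2 hypermatrix a[A,B,C], via Eq. (3.2)."""
    return -0.5 * np.einsum('ac,bd,ef,gh,ij,kl,aei,bfj,cgk,dhl->',
                            eps, eps, eps, eps, eps, eps, a, a, a, a)

def stu_entropy_sq(q, p):
    """Square of the STU entropy from the quartic charge invariant, Eq. (3.3)."""
    i4 = (-(p[0]*q[0] + p[1]*q[1] + p[2]*q[2] + p[3]*q[3])**2
          + 4*((p[1]*q[1])*(p[2]*q[2]) + (p[1]*q[1])*(p[3]*q[3])
               + (p[3]*q[3])*(p[2]*q[2]) + q[0]*p[1]*p[2]*p[3] - p[0]*q[1]*q[2]*q[3]))
    return np.pi**2 * i4

# illustrative charge configuration (q0, q1, q2, q3) and (p0, p1, p2, p3)
q = np.array([1.0, 0.0, 0.0, 0.0])
p = np.array([0.0, 1.0, 1.0, 1.0])

# charge -> three-qubit amplitude dictionary quoted in the text
a = np.zeros((2, 2, 2))
a[0, 0, 0], a[0, 0, 1], a[0, 1, 0], a[1, 0, 0] = q[0], -q[1], -q[2], -q[3]
a[1, 1, 1], a[1, 1, 0], a[1, 0, 1], a[0, 1, 1] = p[0], p[1], p[2], p[3]

S_charges = np.sqrt(stu_entropy_sq(q, p))               # via Eq. (3.3)
S_hyperdet = np.pi * np.sqrt(abs(cayley_hyperdet(a)))   # via Eqs. (3.1)-(3.2)
print(S_charges, S_hyperdet)                            # both give 2*pi for this configuration
```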
Motivated by the STU black hole, three-qubits and certain extended works, we would like to link four-qubits with a particular stringy moduli space given by
SO(4, 4)/(SO(4) × SO(4)) .   (3.4)
We will show that this moduli space can be considered as a subpart of a general quaternionic geometry, related to the c-map image of the symmetric projective special Kähler manifold SL(2, R)^3/U(1)^3, which is the vector multiplet scalar manifold of the N = 2, D = 4 STU supergravity model [48]. Then, we will show that this moduli space corresponds to a dyonic black object in six dimensions carrying eight electric and eight magnetic charges, playing the same role as a black string in six dimensions. Concretely, these black solutions are mapped to four-qubit physical systems using string dualities. It is recalled that the physics of the qubit has been extensively investigated from different physical and mathematical aspects [40,41,42].
Using Dirac notation, one-qubit is described by the following state
|ψ⟩ = a_0 |0⟩ + a_1 |1⟩ .   (3.5)
Here, a i are considered as complex numbers verifying the probability condition
|a 0 | 2 + |a 1 | 2 = 1. (3.6)
It should be noted that this condition can be interpreted geometrically in terms of the so-called Bloch sphere, CP 1 . Similarly, the two-qubits are represented by the general state |ψ⟩ = a_{00} |00⟩ + a_{10} |10⟩ + a_{01} |01⟩ + a_{11} |11⟩ .
(3.7)
In this case, the probability condition is |a 00 | 2 + |a 10 | 2 + |a 01 | 2 + |a 11 | 2 = 1, (3.8) defining a 3-dimensional complex projective space CP 3 generalizing the Bloch sphere. This analysis can be extended to N -qubits having 2 N configuration states. For instance, the general state of the four-qubits reads as
|ψ⟩ = Σ_{i,j,k,ℓ=0,1} a_{ijkℓ} |ijkℓ⟩ ,   (3.9)
where the a_{ijkℓ} verify the normalization condition Σ_{i,j,k,ℓ} |a_{ijkℓ}|^2 = 1.
Examining the string/string duality in six dimensions, the factor SO(4,16)/(SO(4)×SO(16)) corresponds to the twistor sector on the type IIA superstring side. It is associated with the fixed points of the orbifold compactification. It is interesting to note that this sector has played a primordial rôle in solving a serious problem of the type IIA spectrum in six dimensions, namely the absence of non-abelian gauge symmetries. However, this sector will be ignored here and we consider only the factor SO(4,4)/(SO(4)×SO(4)). This will be done by restricting the quaternionic dimensions living in six-dimensional supergravity, where certain parts of the stringy moduli space take zero values.
In superstring theory, this factor can be obtained from the toroidal compactification of the heterotic superstring by ignoring the contribution of the gauge symmetry derived from the 26-dimensional bosonic sector. In particular, the compactification of the heterotic superstring on T 4 produces 4×5/2 = 10 degrees of freedom associated with the metric g_{ij} and 4×3/2 = 6 degrees of freedom corresponding to the antisymmetric field B_{ij}. Then, we have 4 × 4 = 16 real scalars parameterizing the symmetric space SO(4,4)/(SO(4)×SO(4)). Besides such scalar fields, we also have the configuration representing the abelian gauge fields, which can be obtained from the B_{µi} and g_{µi} fields. This generates the gauge symmetry
G 1 = U (1) 4 × U (1) 4 (3.15)
which provides an SO(4) × SO(4) isotropy symmetry. In six dimensions, this symmetry corresponds to
• 8 electric charges of the BH solution with the AdS 2 × S 4 near horizon geometry;
• 8 magnetic charges of the B2B solution with the AdS 4 × S 2 near horizon geometry.
This dyonic system, carrying 16 charges in total, can be considered as a four-qubit system supported by string/string duality in six dimensions [43].
Table 1: (p, q)-forms on T 4 and their numbers.

(p, q)-forms                                                                        number
1                                                                                   1
dz_1 , dz̄_1 , dz_2 , dz̄_2                                                          4
dz_1 ∧ dz̄_1 , dz_1 ∧ dz_2 , dz_1 ∧ dz̄_2                                            3
dz_2 ∧ dz̄_2 , dz̄_1 ∧ dz_2 , dz̄_1 ∧ dz̄_2                                           3
dz_1 ∧ dz̄_1 ∧ dz_2 , dz_1 ∧ dz̄_1 ∧ dz̄_2 , dz_1 ∧ dz_2 ∧ dz̄_2 , dz̄_1 ∧ dz_2 ∧ dz̄_2   4
dz_1 ∧ dz̄_1 ∧ dz_2 ∧ dz̄_2                                                          1
String/string duality interpretation of four-qubits
In this section, we would like to present a stringy interpretation of four-qubits using the string/string duality relating type IIA and heterotic superstings [43]. Indeed, instead of thinking in terms of type IIA D-barnes wrapping non trivial cycles, as done in the second section of the present work, we consider an equivalent description in heterotic superstring using cycles in T 4 . More precisely, we associate to each element of the cohomology classes of the heterotic superstring on T 4 a state of the four-qubit basis. The basis states can be interpreted in terms of the trivial fibration T 4 = T 2 × T 2 . To see that, let us consider the complex realization of
T^2 × T^2, defined by the identifications z^α ∼ z^α + 1 and z^α ∼ z^α + i, with i^2 = −1 and α = 1, 2. (4.1)
The cohomology classes of this trivial fibration correspond to the holomorphic and anti-holomorphic (p, q)-forms, which are listed in Table 1.
The table arrangement is motivated by the binomial expansion 2^4 = C^0_4 + C^1_4 + C^2_4 + C^3_4 + C^4_4, which can be exploited to divide the six 2-forms into two categories,
dz^1 ∧ d\bar z^1, dz^1 ∧ dz^2, dz^1 ∧ d\bar z^2
d\bar z^1 ∧ dz^2, d\bar z^1 ∧ d\bar z^2, dz^2 ∧ d\bar z^2
as required by the normalized volume form on T 2 × T 2
∫_{T^4} dz^1 ∧ d\bar z^1 ∧ dz^2 ∧ d\bar z^2 = 1. (4.2)
To make contact with four-qubit states, we consider the following map applied first on one factor T 2
ω^1_{ij} = (dz^1)^i ∧ (d\bar z^1)^j −→ |ij⟩, i, j = 0, 1, (4.3)
producing the two-qubit states. Similarly, the basis states of four-qubits can be obtained by fibering trivially the T 2 × T 2 complex manifold. Indeed, we define the factorization
ω_{ijkℓ} = ω^1_{ij} ∧ ω^2_{kℓ} = (dz^1)^i ∧ (d\bar z^1)^j ∧ (dz^2)^k ∧ (d\bar z^2)^ℓ, i, j, k, ℓ = 0, 1, (4.4)
representing the basis states of the four-qubits
ω_{ijkℓ} −→ |ijkℓ⟩. (4.5)
The normalization condition may be ensured by
(ω_{ijkℓ}, ω_{i′j′k′ℓ′}) = δ_{i i′} δ_{j j′} δ_{k k′} δ_{ℓ ℓ′}, (4.6)
where the scalar product can be defined by
(ω_{iji′j′}, ω_{klmn}) = ∫_{T^4} ω_{iji′j′} ∧ *ω_{klmn}. (4.7)
Here * is the Hodge duality operator. In order to establish a connection with the dyonic solutions of the type IIA superstring, each state |ijkℓ⟩ should correspond to a charge of the D-brane system {D0, D2, D4, D6} in the presence of U(1)^8 gauge fields rotated by the SO(4) × SO(4) isotropy symmetry. On the heterotic superstring side, these vectors, obtained from the graviton and the antisymmetric B-field, can be supported by the fact that a real vector of SO(4) fits perfectly with the above table of differential complex forms on the trivial fibration of T^2 × T^2. In this way, the eight electric charges are linked with the 1 + 3 vectors of type g_µa and the 4 vectors of type B_µa of the heterotic superstring in six dimensions. The 8 magnetic charges q_a can be associated with the dual objects, as required by the electric and magnetic duality condition p_a q_a = 2πk. (4.8) These objects, forming a dyonic pair of a black solution 0 2 carrying 8 electric and 8 magnetic charges, are associated with the four-qubit states.
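A minimal sketch of the dictionary of Eqs. (4.3)-(4.8): each exponent vector (i, j, k, ℓ) labels both a wedge product on T^2 × T^2 and a basis state |ijkℓ⟩, and the 16 labels are split into 8 "electric" and 8 "magnetic" ones. The split used below (on the first bit) is only an illustrative bookkeeping choice; in the text the charges are assigned through the g_µa and B_µa vectors of the heterotic superstring.

```python
from itertools import product

# Basis states |ijkl>  <->  forms (dz1)^i ^ (dz1bar)^j ^ (dz2)^k ^ (dz2bar)^l, Eq. (4.4)
labels = ("dz1", "dz1bar", "dz2", "dz2bar")
basis = list(product([0, 1], repeat=4))
forms = {b: " ^ ".join(f for f, e in zip(labels, b) if e) or "1" for b in basis}

# Orthonormality of Eq. (4.6): (w_b, w_b') = product of Kronecker deltas
def inner(b1, b2):
    return int(all(x == y for x, y in zip(b1, b2)))

assert all(inner(b1, b2) == (1 if b1 == b2 else 0) for b1 in basis for b2 in basis)

# Illustrative 8 + 8 split of the 16 labels into "electric" and "magnetic" charges
electric = [b for b in basis if b[0] == 0]
magnetic = [b for b in basis if b[0] == 1]
print(len(electric), "electric labels +", len(magnetic), "magnetic labels")
for b in basis[:4]:
    print("|%d%d%d%d>  <->  %s" % (b[0], b[1], b[2], b[3], forms[b]))
```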
Quaternionic description of four-qubits
The quaternionic character of the moduli space of the type IIA superstring on the K3 surface, or of the heterotic superstring on T^4, pushes us to think about a quaternionic analysis of four-qubits.
This could help to clarify certain issues by drawing a clear contrast between the attractor mechanism of black holes developed in superstring theory and quantum information theory.
In particular, we would like to give such a description using the symmetric space SO(4, 4)/(SO(4) × SO(4)), parameterized by 16 scalar fields associated with a dyonic solution 0 2 carrying 8 electric charges and 8 magnetic charges. It is recalled that a quaternionic field takes the form
q = x 0 + ix 1 + jx 2 + kx 3 ,(5.1)
where x 0 , x 1 , x 2 and x 3 are real numbers, i, j and k are imaginary numbers such that
i^2 = j^2 = k^2 = −1, ij = −ji = k, jk = −kj = i, ki = −ik = j. (5.2)
Usually, it is convenient to use the matrix representation of quaternionic fields. It is defined by
q = x_0 σ_0 + i x · σ, (5.3)
where x = (x_1, x_2, x_3), σ = (σ_1, −σ_2, σ_3) is built from the usual Pauli matrices, and σ_0 is the 2 × 2 identity matrix. In this way, a quaternion number is represented by the matrix

x_0 + x_1 i + x_2 j + x_3 ij −→
( x_0 + x_3   −x_1 + x_2 )
( x_1 + x_2    x_0 − x_3 ). (5.4)
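To make the algebra of Eqs. (5.1)-(5.4) concrete, the sketch below (illustrative only) multiplies quaternions with the Hamilton rules and checks them against a 2 × 2 complex matrix representation built from the Pauli matrices; the convention q → x_0 σ_0 − i x·σ used here is a standard one and may differ by signs from the convention quoted in the text.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions a = (a0,a1,a2,a3), b = (b0,b1,b2,b3)."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return np.array([a0*b0 - a1*b1 - a2*b2 - a3*b3,
                     a0*b1 + a1*b0 + a2*b3 - a3*b2,
                     a0*b2 - a1*b3 + a2*b0 + a3*b1,
                     a0*b3 + a1*b2 - a2*b1 + a3*b0])

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rep(q):
    """2x2 matrix representation q -> x0*s0 - i*(x1*sx + x2*sy + x3*sz)."""
    return q[0]*s0 - 1j*(q[1]*sx + q[2]*sy + q[3]*sz)

i, j, k = np.array([0, 1, 0, 0]), np.array([0, 0, 1, 0]), np.array([0, 0, 0, 1])
assert np.allclose(qmul(i, j), k) and np.allclose(qmul(j, i), -k)   # ij = -ji = k
assert np.allclose(qmul(i, i), [-1, 0, 0, 0])                        # i^2 = -1

rng = np.random.default_rng(1)
qa, qb = rng.normal(size=4), rng.normal(size=4)
assert np.allclose(rep(qa) @ rep(qb), rep(qmul(qa, qb)))             # homomorphism
assert np.isclose(np.linalg.det(rep(qa)).real, np.sum(qa**2))        # det = |q|^2
print("quaternion algebra and 2x2 matrix representation verified")
```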
It turns out that the scalars of the dyonic solutions, studied in the present work, can be combined to form a quaternionic geometry in terms of four quaternionic blocks. To see that,
we first recall that these scalar fields belong to the (4, 4) bifundamental representation of SO(4) × SO(4). In this way, they are specified by two indices a and b,

φ ≡ φ^a_b. (5.6)

Then, we consider the following decomposition of the SO(4) × SO(4) symmetry,

SO(4) × SO(4) −→ SU(2) × SU(2) × SU(2) × SU(2). (5.7)

The corresponding representations are labeled by four integers (m_1, m_2, m_3, m_4), where the m_s are the dimensions of the particle state vector spaces. It is recalled that

m_s = 2 j_s + 1, s = 1, 2, 3, 4, (5.8)

where the j_s are spins. A priori, there are many ways to decompose the bifundamental representation (4, 4) in terms of (m_1, m_2, m_3, m_4). A way which could be related to quaternionic geometry is

(4, 4) = (4, 1, 1, 1) ⊕ (1, 4, 1, 1) ⊕ (1, 1, 4, 1) ⊕ (1, 1, 1, 4). (5.9)

This decomposition shows that the symmetric space SO(4, 4)/(SO(4) × SO(4)) can be parameterized in terms of four quaternionic fields associated with the 16 charges of the 0 2 dyonic black object. In fact, the scalars can be combined to form four quaternionic fields labeled by a single index,
φ^a_b −→ φ^a_{b_1 b_2} −→ φ^{a_1 a_2}_{b_1 b_2} −→ φ_A, A = 1, 2, 3, 4, (5.10)
where a_1 and a_2 refer to SU(2) indices, and similarly for b_1 and b_2. These quaternionic fields can be exploited to produce a superpotential of the N = 4 sigma model in two dimensions,
W = W(φ_1, φ_2, φ_3, φ_4, p_a, q_a), a = 1, . . . , 8. (5.11)
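The bookkeeping behind Eqs. (5.9)-(5.11) can be illustrated by grouping 16 real scalars φ^a_b of the (4, 4) bifundamental into four 4-component blocks, each read as one quaternionic field φ_A; the grouping by columns below is only one illustrative choice, not the unique one.

```python
import numpy as np

rng = np.random.default_rng(2)
phi = rng.normal(size=(4, 4))    # 16 real scalars phi^a_b in the (4, 4) bifundamental

# Decomposition (4, 4) -> (4,1,1,1) + (1,4,1,1) + (1,1,4,1) + (1,1,1,4), Eq. (5.9):
# group the scalars into four 4-component blocks, each read as one quaternion phi_A
quaternions = [phi[:, A] for A in range(4)]     # illustrative grouping by columns

assert sum(q.size for q in quaternions) == 16   # 16 scalars, matching the 8 + 8 dyonic charges
for A, q in enumerate(quaternions, start=1):
    print("phi_%d = %s,  |phi_%d|^2 = %.3f" % (A, np.round(q, 3), A, np.sum(q**2)))
```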
In what follows, we will show that this superpotential can be viewed as a general state of four-qubit systems. Indeed, the non-commutative character of the quaternionic fields can be used to make contact with a particular class of Clifford algebras. In this way, the superpotential W(φ) can be interpreted as an element of such a Clifford algebra. Assuming that the fields φ_A form a normalized basis of a vector space V and using the work developed in [30], the algebra spanned by all the reduced products of the form

Span{ φ_1^i φ_2^j φ_3^k φ_4^ℓ }, i, j, k, ℓ = 0, 1, (5.12)

defines a class of Clifford algebras, denoted Cl_{0,4}. It has been shown that this algebra can be decomposed as

Cl_{0,4} = ⊕_{k=0}^{4} Cl^{(k)}_{0,4}, (5.13)

where Cl^{(k)}_{0,4} is known as the space of k-multivectors [30]. In connection with the heterotic superstring compactification, a close inspection shows that the algebra Cl_{0,4} can be associated with the cohomology classes of T^4. More precisely, we have

Cl^{0}_{0,4} : the scalar,
Cl^{1}_{0,4} : the 1-forms,
Cl^{2}_{0,4} : the 2-forms,
Cl^{3}_{0,4} : the 3-forms,
Cl^{4}_{0,4} : the volume form. (5.14)

Inspired by such a decomposition, we propose the mapping

|ijkℓ⟩ −→ φ_1^i φ_2^j φ_3^k φ_4^ℓ, i, j, k, ℓ = 0, 1.

In this way, the general state of four-qubits corresponds to a quaternionic superpotential,

|ψ⟩ = Σ_{i,j,k,ℓ=0,1} a_ijkℓ |ijkℓ⟩ −→ W = W(φ_1, φ_2, φ_3, φ_4, p_a, q_a). (5.15)

In this mapping, the numbers a_ijkℓ should correspond to the (p_a, q_a) charges of the dyonic object 0 2. The superpotential could be viewed as a holomorphic section of line bundles on four-dimensional quaternionic manifolds. We expect that these sections may be encoded in non-trivial polytopes going beyond the toric graphs associated with the projective complex geometry used in [44].
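A small sketch of the correspondence of Eqs. (5.12)-(5.15) (illustrative only): the reduced products φ_1^i φ_2^j φ_3^k φ_4^ℓ are labelled by bit strings, the grading Cl_{0,4} = ⊕_k Cl^{(k)}_{0,4} has dimensions C(4, k) summing to 16, and each monomial is matched to a four-qubit basis state |ijkℓ⟩, whose coefficients a_ijkℓ stand in for the dyonic charges (p_a, q_a).

```python
from itertools import product
from math import comb

# Exponents (i, j, k, l) of the reduced products phi_1^i phi_2^j phi_3^k phi_4^l, Eq. (5.12)
monomials = list(product([0, 1], repeat=4))

# Grading of Cl_{0,4} by the number of generators appearing in each product, Eq. (5.13)
grades = {k: [m for m in monomials if sum(m) == k] for k in range(5)}
dims = [len(grades[k]) for k in range(5)]
assert dims == [comb(4, k) for k in range(5)]   # [1, 4, 6, 4, 1], cf. Eq. (5.14)
assert sum(dims) == 2**4                        # 16 states in total

def monomial(m):
    """Label of the reduced product phi_1^i phi_2^j phi_3^k phi_4^l."""
    return " ".join("phi_%d" % (n + 1) for n, e in enumerate(m) if e) or "1"

# Mapping |ijkl> -> phi_1^i phi_2^j phi_3^k phi_4^l of Eq. (5.15); shown for the bivectors
for m in grades[2]:
    print("|%d%d%d%d>  <->  %s" % (m[0], m[1], m[2], m[3], monomial(m)))
```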
Conclusion and open questions
In this work, we have approached four-qubit systems in the context of type II superstring compactifications using string dualities between type IIA and heterotic superstrings. This connection has been elaborated by giving a classification according to black brane dimensions, which results in two kinds of dyonic solutions generalized to arbitrary dimension, where the electric and magnetic charges have been linked formally to Calabi-Yau Hodge numbers. We have shown that the four-qubit systems are related to a stringy moduli space SO(4, 4)/(SO(4) × SO(4)), which is a reduction of the moduli space of the heterotic superstring on T^4 appearing in a six-dimensional supergravity model. Using string/string duality, the four-qubit states have been related to a dyonic black object carrying 8 electric and 8 magnetic charges (p_a, q_a). Moreover, it has been remarked that the usual decomposition SO(4) × SO(4) −→ SU(2) × SU(2) × SU(2) × SU(2) results in four quaternionic fields that can be interpreted as states of the four-qubits. These states are linked with a quaternionic superpotential W(φ) of the N = 4 sigma model, interpreted as an element of a particular class of Clifford algebras denoted Cl_{0,4}. The present work comes up with certain open questions related to quantum information theory. It is recalled that interesting works dealing with four-qubits from an algebraic geometry point of view, including ADE singularities, have been elaborated in [45,46]. It would be interesting to see whether the present approach has any possible connection with such activities. Moreover, it should be relevant to approach quantum information concepts using the quaternionic geometry associated with Clifford algebras. This includes the study of entanglement and quantum discord. It is clearly interesting to better understand such concepts with geometric methods. We anticipate that many concepts used in the quantum information of four-qubits could be discussed using quaternionic manifolds. This will be addressed elsewhere.
Acknowledgments: The authors would like to thank M. Asorey for discussions. AB would like to thank the Departamento de Física Teórica, Universidad de Zaragoza for very kind hospitality and scientific support during the realization of a part of this work. He also acknowledges the warm hospitality of the Montanez and Naz families during his travel in Spain, and he also thanks Hajja Fatima (his mother) for her patience and support. AS is supported by FPA2012-35453.
It should be pointed out that h^{p,q} denotes the number of holomorphic and anti-holomorphic (p, q)-forms. Deleting the zeros, one observes that each Hodge diagram contains two central orthogonal lines. For CY n-folds, the vertical line encodes the parameters describing the Kähler deformations; it has been shown that the number of such size parameters, representing the Kähler deformations of the metric, is fixed by h^{1,1}. The horizontal line represents the parameters of the complex structure (shape parameters), given by h^{n−1,1}. Besides these parameters, the Hodge diagram can be used to produce all the physical data of lower-dimensional superstring compactifications. Indeed, the moduli space of CY type II superstring compactifications is determined by the geometric deformations of the CY metric, including the antisymmetric B-field of the NS-NS sector, the dilaton specifying the string coupling constant, and the scalars derived from the R-R gauge fields on non-trivial cycles of the CY space. In connection with black solutions in type II superstring compactifications, these scalar fields, associated with supergravity models having 2^{6−n} supercharges, are coupled to an abelian gauge symmetry providing the electric and magnetic charges of black objects in 10 − 2n dimensions.

[Figure captions: a B2B with near-horizon geometry AdS_2 × S^4 and a black string (BS) with near-horizon geometry AdS_3 × S^3; a dyonic solution carrying 24 magnetic charges together with the corresponding electric charges, realized in terms of a D-brane system {D0, D2, D4, D6} placed on the corresponding Hodge diagram, with entries (D2, D4), (D2, D4), (D2, D4), (D4, D6).]
Microscopic Origin of the Bekenstein-Hawking Entropy. A Strominger, C Vafa, arXiv:hep-th/9601029Phys.Lett. 37999A. Strominger, C. Vafa, Microscopic Origin of the Bekenstein-Hawking Entropy, Phys.Lett. B379 (1996) 99, arXiv:hep-th/9601029.
Black Holes and Calabi-Yau Threefolds. C Vafa, hep-th/9711067Adv.Theor.Math.Phys. 2C. Vafa, Black Holes and Calabi-Yau Threefolds, Adv.Theor.Math.Phys. 2 (1998) 207, hep-th/9711067.
J Maldacena, A Strominger, E Witten, arXiv:hep-th/9711053Black Hole Entropy in M-Theory. 97122J. Maldacena, A. Strominger, E. Witten, Black Hole Entropy in M-Theory, JHEP9712 (1997)002, arXiv:hep-th/9711053.
B Haghighat, S Murthy, C Vafa, S Vandoren, F-Theory , arXiv:1509.00455Spinning Black Holes and Multistring Branches. B. Haghighat, S. Murthy, C. Vafa, S. Vandoren, F-Theory, Spinning Black Holes and Multistring Branches, arXiv:1509.00455.
N = 2 Extremal Black Holes. S Ferrara, R Kallosh, A Strominger, hep-th/9508072Phys. Rev. 52S. Ferrara, R. Kallosh, A. Strominger, N = 2 Extremal Black Holes, Phys. Rev. D52 (1995) 5412, hep-th/9508072.
Supersymmetry and Attractors. S Ferrara, R Kallosh, hep-th/9602136Phys. Rev. 54S. Ferrara and R. Kallosh, Supersymmetry and Attractors, Phys. Rev. D54 (1996) 1514, hep-th/9602136.
Extremal Black Brane Attractors on The Elliptic Curve. R Laamara, M Asorey, A Belhaj, A Segui, arXiv:0907.0093J.Phys. 43105401R. Ahl Laamara, M. Asorey, A. Belhaj, A, Segui, Extremal Black Brane Attractors on The Elliptic Curve, J.Phys. A43 (2010) 105401, arXiv:0907.0093.
P Bueno, R Davies, C S Shahbazi, arXiv:1210.2817Quantum black holes in Type-IIA String Theory. P. Bueno, R. Davies, C. S. Shahbazi, Quantum black holes in Type-IIA String Theory, arXiv:1210.2817.
Black Hole Attractors and the Topological String. H Ooguri, A Strominger, C Vafa, arXiv:hep-th/0405146Phys.Rev. 70106007H. Ooguri, A. Strominger, C. Vafa, Black Hole Attractors and the Topological String, Phys.Rev. D70(2004)106007, arXiv:hep-th/0405146.
S Bellucci, S Ferrara, A Marrani, A Yeranyan, hep-th/0608091Mirror Fermat Calabi-Yau threefolds and Landau-Ginzburg Black Hole Attractors. 029S. Bellucci, S. Ferrara, A. Marrani and A. Yeranyan, Mirror Fermat Calabi-Yau three- folds and Landau-Ginzburg Black Hole Attractors, Riv. Nuov o Cim. 029 (2006)1, hep-th/0608091.
A Belhaj, arXiv:0809.1114On Black Objects in Type IIA Superstring Theory on Calabi-Yau Manifolds. 649A. Belhaj, On Black Objects in Type IIA Superstring Theory on Calabi-Yau Manifolds, African Journal Of Math. Phys. Vol. 6 (2008)49, arXiv:0809.1114.
M. J. Duff, S. Ferrara, A. Marrani, D = 3 Unification of Curious Supergravities, JHEP 1701 (2017) 023, arXiv:1610.08800 [hep-th].
Qubit and Fermionic Fock Spaces from Type II Superstring Black Hole. A Belhaj, M Bensed, Z Benslimane, M B Sedra, A Segui, arXiv:1604.03998Int. J. Geom. Methods Mod. Phys. 141750087A. Belhaj, M. Bensed, Z. Benslimane, M. B. Sedra, A. Segui, Qubit and Fermionic Fock Spaces from Type II Superstring Black Hole, Int. J. Geom. Methods Mod. Phys. 14 (2017)1750087 arXiv:1604.03998.
A Belhaj, Z Benslimane, M B Sedra, A Segui, arXiv:1601.07610Qubits from Black Holes in M-theory on K3 Surface. 131650075A. Belhaj, Z. Benslimane, M. B. Sedra, A. Segui, Qubits from Black Holes in M-theory on K3 Surface, Int. J. Geom. Methods Mod. Phys. 13 (2016)1650075 arXiv:1601.07610.
Four curious supergravities. M J Duff, S Ferrara, arXiv:1010.3173Phys. Rev. 8346007M. J. Duff, S. Ferrara, Four curious supergravities, Phys. Rev. D83 (2011)046007, arXiv:1010.3173.
Qubits from extra dimensions. P Levay, Phys. Rev. 84125020P. Levay, Qubits from extra dimensions, Phys. Rev. D84 (2001)125020.
String triality, black hole entropy and Cayley s hyperdeterminant. M J Duff, hep-th/0601134Phys. Rev. 7625017M. J. Duff, String triality, black hole entropy and Cayley s hyperdeterminant, Phys. Rev. D76 (2007) 025017, hep-th/0601134.
Stringy Black Holes and the Geometry of Entanglement. P Levay, arXiv:0603136Phys. Rev. 7424030P. Levay, Stringy Black Holes and the Geometry of Entanglement , Phys. Rev. D74, 024030 (2006), arXiv:0603136.
Embedding qubits into fermionic Fock space, peculiarities of the four-qubit case. P Levay, F Holweck, arXiv:1502.04537P. Levay, F. Holweck, Embedding qubits into fermionic Fock space, peculiarities of the four-qubit case, (2015), arXiv:1502.04537.
The magic three-qubit Veldkamp line: A finite geometric underpinning for form theories of gravity and black hole entropy. P Levay, F Holweck, M Saniga, arXiv:1704.01598P. Levay, F. Holweck, M. Saniga, The magic three-qubit Veldkamp line: A finite geometric underpinning for form theories of gravity and black hole entropy, arXiv:1704.01598.
M Cvetic, G W Gibbons, C N Pope, arXiv:1507.07585Compactifications of Deformed Conifolds, Branes and the Geometry of Qubits. M. Cvetic, G.W. Gibbons, C.N. Pope, Compactifications of Deformed Conifolds, Branes and the Geometry of Qubits, arXiv:1507.07585.
Four-qubit entanglement from string theory. L Borsten, D Dahanayake, M J Duff, A Marrani, W Rubens, arXiv:1005.4915Phys.Rev.Lett. 105100507L. Borsten, D. Dahanayake, M. J. Duff, A. Marrani, W. Rubens, Four-qubit entanglement from string theory, Phys.Rev.Lett. 105 (2010)100507, arXiv:1005.4915.
L. Borsten, M. J. Duff, A. Marrani, W. Rubens, On the Black-Hole/Qubit Correspondence, Eur. Phys. J. Plus 126 (2011) 37, arXiv:1101.3559 [hep-th].
Qubit Systems from Colored Toric Geometry and Hypercube Graph Theory. Y Aadel, A Belhaj, M Bensed, Z Benslimane, M B Sedra, A Segui, Commun. Theor. Phys. 68285Y. Aadel, A. Belhaj, M. Bensed, Z. Benslimane, M. B. Sedra, A. Segui, Qubit Systems from Colored Toric Geometry and Hypercube Graph Theory, . Commun. Theor. Phys. 68(2017) 285
Graph Theory and Qubit Information Systems of Extremal Black Branes. A Belhaj, M B Sedra, A Segui, arXiv:1406.2578J.Phys. 4845401A. Belhaj, M. B. Sedra, A. Segui, Graph Theory and Qubit Information Systems of Ex- tremal Black Branes, J.Phys. A48 (2015)045401, arXiv:1406.2578.
A Belhaj, arXiv:1612.09356Multi-qubits and Polyvalent Singularity in Type II Supestring Theory. A. Belhaj, Multi-qubits and Polyvalent Singularity in Type II Supestring Theory, arXiv:1612.09356.
A Belhaj, A Belhaj, L Machkouri, M M Sedra, S Ziti, arXiv:1609.03534Graph Theory Representation of Quantum Information Inspired by Lie Algebras. A. Belhaj, A. Belhaj, L. Machkouri, M. M. Sedra, S. Ziti, Graph Theory Representation of Quantum Information Inspired by Lie Algebras, arXiv:1609.03534.
STU Black Holes as Four Qubit Systems. P Levay, arXiv:1004.3639Phys. Rev. D. 8226003P. Levay, STU Black Holes as Four Qubit Systems, Phys. Rev. D 82(2010)026003, arXiv:1004.3639.
Grassmannian Connection Between Three-and Four-Qubit Observables, Mermin's Contextuality and Black Holes. P Levay, M Planat, Metod Saniga, arXiv:1305.5689JHEP. 0937P. Levay, M. Planat, Metod Saniga, Grassmannian Connection Between Three-and Four-Qubit Observables, Mermin's Contextuality and Black Holes, JHEP 09 (2013)037, arXiv:1305.5689.
Analysis of Functions of Split-Complex, Multicomplex, and Split-Quaternionic Variables and Their Associated Conformal Geometries. J A Emanuello, The Florida State UniversityPhD thesisJ. A. Emanuello, Analysis of Functions of Split-Complex, Multicomplex, and Split- Quaternionic Variables and Their Associated Conformal Geometries. PhD thesis, The Florida State University, 2015.
Manifolds of G 2 Holonomy from N=4 Sigma Model. A Belhaj, arXiv:hep-th/0201155J.Phys. 358903A. Belhaj, Manifolds of G 2 Holonomy from N=4 Sigma Model, J.Phys. A35(2002)8903, arXiv:hep-th/0201155.
Vacuum configurations for superstrings. P Candelas, G Horowitz, A Strominger, E Witten, Nucl. Phys. 25846P. Candelas, G. Horowitz, A. Strominger, E. Witten, Vacuum configurations for super- strings, Nucl. Phys. B258 (1985)46.
B R Greene, hep-th/9702155String Theory on Calabi Yau Manifolds. B.R. Greene, String Theory on Calabi Yau Manifolds, hep-th/9702155.
P. Aspinwall, K3 surfaces and String Duality, hep-th/961117.
N=2 Supersymmetric Black Attractors in Six and Seven Dimensions. A Belhaj, L B Drissi, E H Saidi, A Segui, arXiv:0709.0398Nucl. Phys. 796521A. Belhaj, L.B. Drissi, E.H. Saidi, A. Segui, N=2 Supersymmetric Black Attractors in Six and Seven Dimensions, Nucl. Phys. B796 (2008)521, arXiv:0709.0398.
Entropy of Pairs of Dual Attractors in 6D/7D. E H Saidi, A Segui, arXiv:0803.2945JHEP. 0807128E.H. Saidi, A. Segui, Entropy of Pairs of Dual Attractors in 6D/7D, JHEP 0807(2008)128, arXiv:0803.2945.
G Ottavian, arXiv:1301.0472Introduction to the Hyperdeterminant and to the Rank of Multidimensional Matrices. G. Ottavian, Introduction to the Hyperdeterminant and to the Rank of Multidimensional Matrices, arXiv:1301.0472.
On the theory of linear transformations. A Cayley, Camb. Math. J. 4A. Cayley, On the theory of linear transformations, Camb. Math. J. 4 193-209,1845.
Resultants and Multidimensional Determinants. I M Gelfand, M M Kapranov, A V Zelevinsky Discriminants, BirkhauserI.M. Gelfand, M.M. Kapranov, A.V. Zelevinsky Discriminants, Resultants and Multidi- mensional Determinants, Birkhauser, 1994.
M A Nielsen, I L Chuang, Quantum Computation and Quantum Information. New York, NY, USACambridge University PressM. A. Nielsen, I. L. Chuang, Quantum Computation and Quantum Information, Cam- bridge University Press, New York, NY, USA, 2000.
D R Terno, arXiv:quant-ph/0508049Introduction to relativistic quantum information. D. R. Terno, Introduction to relativistic quantum information, arXiv:quant-ph/0508049.
Entanglement properties of topological color codes. M Kargarian, arXiv:0809.4276Phys. Rev. 7862312M. Kargarian, Entanglement properties of topological color codes, Phys. Rev. A78 (2008)062312, arXiv:0809.4276.
C Vafa, arXiv:hep-th/970220Lectures on Strings and Dualities. C. Vafa, Lectures on Strings and Dualities, arXiv:hep-th/970220.
Toric Geometry and String Theory Descriptions of Qudit Systems. A Belhaj, H Ez-Zahraouy, M B Sedra, arXiv:1408.3952J. Geom. Phys. 9521A. Belhaj, H. Ez-Zahraouy, M. B. Sedra, Toric Geometry and String Theory Descriptions of Qudit Systems, J. Geom. Phys. 95 (2015)21, arXiv:1408.3952.
F Holweck, J-G Luque, M Planat, arXiv:1312.0639Singularity of type D 4 arising from four qubit systems. F. Holweck, J-G. Luque, M. Planat, Singularity of type D 4 arising from four qubit systems, arXiv:1312.0639.
F Holweck, H Jaffali, arXiv:1606.05537Three-qutrit entanglement and simple singularities. F. Holweck, H. Jaffali, Three-qutrit entanglement and simple singularities, arXiv:1606.05537.
Strings, black holes, and quantum information. R Kallosh, A D Linde, arXiv:hep-th/0602061Phys.Rev. D. 73R.Kallosh, A.D.Linde, Strings, black holes, and quantum information,Phys.Rev. D 73 (2006) 104033 arXiv:hep-th/0602061.
Symmetric Spaces in Supergravity. S Ferrara, A Marrani, arXiv:0808.3567Contemp. Math. 490203hep-thS.Ferrara, A.Marrani, Symmetric Spaces in Supergravity, Contemp. Math. 490(2009) 203, arXiv:0808.3567 [hep-th].
On-demand quantum spin Hall insulators controlled by two-dimensional ferroelectricity

2 May 2021 (Dated: December 27, 2021)

Jiawei Huang (School of Science, Westlake University, 310024 Hangzhou, Zhejiang, China)
Xu Duan (School of Science, Westlake University, 310024 Hangzhou, Zhejiang, China)
Sunam Jeon (Department of Energy Science, Sungkyunkwan University, 16419 Suwon, Korea)
Youngkuk Kim (Department of Physics, Sungkyunkwan University, 16419 Suwon, Korea)
Jian Zhou (Center for Alloy Innovation and Design, State Key Laboratory for Mechanical Behavior of Materials, Xi'an Jiaotong University, 710049 Xi'an, China)
Jian Li (School of Science, Westlake University; Institute of Natural Sciences, Westlake Institute for Advanced Study; Key Laboratory for Quantum Materials of Zhejiang Province, 310024 Hangzhou, Zhejiang, China)
Shi Liu (School of Science, Westlake University; Institute of Natural Sciences, Westlake Institute for Advanced Study; Key Laboratory for Quantum Materials of Zhejiang Province, 310024 Hangzhou, Zhejiang, China)
The coexistence of ferroelectric and topological orders in two-dimensional (2D) atomic crystals allows non-volatile and switchable quantum spin Hall states. Here we offer a general design principle for 2D bilayer heterostructures that can host ferroelectricity and nontrivial band topology simultaneously using only topologically trivial building blocks. The built-in electric field arising from the out-of-plane polarization across the heterostructure enables a robust control of the band gap size and band inversion strength, which can be utilized to manipulate topological phase transitions. Using first-principles calculations, we demonstrate that a series of bilayer heterostructures are 2D ferroelectric topological insulators (2DFETIs) characterized by a direct coupling between band topology and polarization state. We propose several 2DFETI-based quantum electronic devices, including domain-wall quantum circuits and a topological memristor.
Band topology and ferroelectricity, two extensively studied properties of bulk insulators representing two distinct "ordered states", can manifest themselves in two dimensions. Graphene, the first discovered two-dimensional (2D) material [1], is also the first predicted 2D topological insulator (TI), characterized by counter-propagating edge currents with opposite spin polarization and an insulating interior [2]. A 2D TI is also called a quantum spin Hall (QSH) insulator for its quantized edge conductance (2e^2/h, where e is the elementary charge and h is Planck's constant). Finding 2D TIs with large band gaps for room-temperature applications remains an actively pursued goal [3,4].
Ferroelectrics (FEs) with inversion symmetry breaking often exhibit a strong size effect: the spontaneous polarization diminishes with reduced dimensionality due to the depolarization field from the incomplete screening of surface charges [5]. More recently, facilitated by first-principles calculations based on density functional theory (DFT), a range of 2D FEs were discovered, followed by confirming experiments [6-10]. Specifically, α-In2Se3 exhibits both out-of-plane and in-plane electric polarization [7,11], a feature beneficial for practical device applications and high-density integration. As the surfaces of 2D materials do not suffer from dangling bonds, it is feasible to stack different 2D sheets to construct van der Waals (vdW) heterostructures in a precisely controlled layering sequence less impacted by lattice mismatch [12]. A 2D material system with both ferroelectric and topological orders, referred to as a 2D ferroelectric topological insulator (2DFETI), remains rarely reported beyond occasional chance discoveries [13]. The coexistence of ferroelectricity and nontrivial band topology in general has to reconcile conflicting requirements for band gaps [14]; TIs are often narrow-gap semiconductors with a band gap determined by the strength of spin-orbit coupling (SOC), whereas archetypal FEs such as transition metal perovskites are mostly wide-band-gap insulators with the gap size dictated by the electronegativity difference between oxygen and transition metals. Unlike bulk FEs in 3D, many 2D FEs are semiconductors with moderate band gaps [11], and are thus better suited for the coexistence of the topological order. However, a mere coexistence of these two ordered states does not guarantee a strong coupling between topological and polarization states.
Here we propose a design principle for the realization of 2DFETIs in bilayer heterostructures comprising only trivial 2D FEs. The ability to create nontrivial quantum materials using trivial building blocks broadens the materials design space. Moreover, the band topology is directly coupled to the polarization state: the QSH state is non-volatile and can be fully switched on and off by a voltage. The key requirement for the constituent 2D FEs is the presence of an out-of-plane polarization (P_OP).
For a free-standing monolayer, the polarization bound charges on the two surfaces create a depolarization field (E_d) that runs against the polarization. As a result, the valence band maximum (VBM) is located on the negatively-charged (Q−) surface while the conduction band minimum (CBM) is on the positively-charged (Q+) surface (Fig. 1a). This reflects the tendency of the system to generate the free carriers needed for bound charge compensation: the screening of the Q− (Q+) surface needs mobile holes (electrons) that can be generated by the crossing of the VBM (CBM) over the Fermi level (E_F).
Another heuristic way to understand such band bending is that electrons close to the Q − surface are at a high energy level (thus being at VBM) because of the Coulomb repulsion. The band diagram of a 2D FE with out-of-plane polarization resembles that of an unbiased p-n junction, and E d across the monolayer is similar to the electric field confined to the depletion region around the junction interface (see Fig. S2 in Supplemental Materials). Since the potential step (∆Φ) scales with P OP , one might expect the crossover of VBM and CBM given a sufficiently large P OP . However, such band inversion is unlikely to happen in a monolayer as the system would become metallic at the crossover, providing more free carriers for surface charge compensation thus reducing E d and band bending. Therefore, a 2D FE by itself has a tendency to avoid the band inversion by maintaining an optimal trade-off between imperfect screening and a minimal necessary band bending [15]. In contrast, a bilayer heterostructure made of two different 2D FEs allows for a band inversion process when the following condition is satisfied
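As a rough order-of-magnitude illustration of how ∆Φ scales with P_OP (not a calculation from this work), one can use a simple slab-capacitor estimate ∆Φ ≈ P_OP · t / (ε_0 ε_r) for a film of thickness t and effective relative permittivity ε_r; all numbers in the sketch below are placeholder values chosen only to show the trend.

```python
# Parallel-plate estimate of the potential step across a polarized slab.
# Placeholder numbers; real values depend on the material and on screening.
EPS0 = 8.854e-12          # vacuum permittivity, F/m

def potential_step(p_op, thickness, eps_r):
    """Delta Phi (V) ~ P_OP * t / (eps0 * eps_r) for out-of-plane polarization P_OP (C/m^2)."""
    return p_op * thickness / (EPS0 * eps_r)

p_op = 2e-2               # ~ 2 uC/cm^2, a typical scale for 2D ferroelectrics (assumed)
t = 1e-9                  # ~ 1 nm slab thickness (assumed)
for eps_r in (5, 10, 20): # assumed effective dielectric screening
    print("eps_r = %2d  ->  Delta Phi ~ %.2f V" % (eps_r, potential_step(p_op, t, eps_r)))
```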
∆Φ[P_OP] + X^{Q+} > W^{Q−}, (1)
where X Q + is the electron affinity of the Q + surface and W Q − is the work function of the Q − surface of the heterosrtructure, respectively. The superscripts (Q + and Q − ) are simply used to reflect the direction of outof-plane polarization. It is evident from Fig. 1b that E VBM = −W Q − and E CBM = −(∆Φ + X Q + ) relative to the vacuum level, and Eq. 1 naturally leads to E CBM < E VBM and thus a band inversion. According to the celebrated Neumann-Wigner theorem [16], the presence of crossing between two bands often demands a symmetry-related protection (e.g., mirror symmetry). For a generic system without special crystalline symmetries, the SOC will then lead to a double group system where a gap generically opens between valence and conduction bands, likely resulting in a QSH state. Because X Q + and W Q − are coming from two different materials, a 2DFETI can be realized by selecting one layer with large X and another layer with small W . Furthermore, by choosing a pair of 2D FEs (labeled as A and B respectively) satisfying
∆Φ[P_OP] + X^{Q+}_A > W^{Q−}_B; ∆Φ[P_OP] + X^{Q+}_B < W^{Q−}_A, (2)
we can ensure that the configuration with P pointing from B to A has band inversion ( Fig. 1c) while the other configuration with P pointing from A to B is a normal insulator (Fig. 1d), creating a pair of topologically different, non-volatile states in the absence of external electric fields. This can be accomplished by choosing A with large X and W whereas B with small W and X. An advantage of the proposed 2DFETI is the "ondemand" topological quantum phase transition. In general, X and W are less sensitive to the change of polarization for a given material; the potential step ∆Φ is almost singularly determined by P OP and E d . The band inversion strength (λ ∝ E d ) can thus be tuned nearly continuously by applying external electric/stress fields. This implies the access to multiple electronic band configurations that are effectively characterized by the magnitude of λ and can belong to distinct topological phases. Using a generic model (see details in Supplemental Materials), we explore the evolution of the band topology with increasing value of λ and identify successive phase transitions associated with two band inversion events, occurring at Γ-point and then non-high-symmetry points in the Brillouin zone. The two quantum phase transition points (band gap E g = 0, see Fig. 2a) separate trivial, nontrivial, and trivial phases, respectively, as verified by Wilson loop calculations (Fig. 2b).
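The generic model referred to above is given in the Supplemental Materials. As a stand-in, the sketch below uses a minimal BHZ-like two-band Bloch Hamiltonian (an assumption, not the authors' model) and scans a band-inversion parameter λ to locate the gap closing at Γ; reproducing the second, re-entrant transition described in the text would require the fuller model.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_h(kx, ky, m, lam):
    """Minimal BHZ-like two-band Hamiltonian with band-inversion strength lam."""
    return ((m - lam * (np.cos(kx) + np.cos(ky))) * sz
            + np.sin(kx) * sx + np.sin(ky) * sy)

def band_gap(m, lam, nk=61):
    """Smallest direct gap over a nk x nk grid of the 2D Brillouin zone."""
    ks = np.linspace(-np.pi, np.pi, nk)
    gap = np.inf
    for kx in ks:
        for ky in ks:
            e = np.linalg.eigvalsh(bloch_h(kx, ky, m, lam))
            gap = min(gap, e[1] - e[0])
    return gap

m = 1.0
for lam in np.linspace(0.0, 1.5, 7):
    print("lambda = %.2f   gap = %.3f" % (lam, band_gap(m, lam)))
# For m = 1 the gap closes near lam = 0.5 (band inversion at Gamma); beyond that the
# toy model reopens an inverted gap, illustrating how lam drives the phase transition.
```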
Using bilayer heterostructures made of III2-VI3-type 2D FEs as model systems, we demonstrate the feasibility of the design principle based on DFT calculations. A typical III2-VI3-type 2D FE comprises five atomic planes in the order VI-III-VI-III-VI, in which the central layer is displaced relative to the top and bottom III-VI layers, resulting in both out-of-plane and in-plane polarization (P_OP and P_IP, Fig. 3a). DFT calculations are performed using the generalized gradient approximation of the Perdew-Burke-Ernzerhof (PBE) type as implemented in QUANTUM ESPRESSO [17,18] (see Supplemental Materials). To recognize the topological state of this system, we obtained the Z2 topological index by calculating the Wilson loop spectrum using the maximally localized Wannier function tight-binding Hamiltonian constructed with Wannier90 [19] interfaced with QUANTUM ESPRESSO.
We focus on the bilayer heterostructure made of monolayer In2Te3 and In2Se3, which has W_In2Se3 = 6.0 eV, X_In2Se3 = 4.7 eV, W_In2Te3 = 5.3 eV, and X_In2Te3 = 3.8 eV. Note that In2Se3 has larger W and X than In2Te3, hinting at a possible material realization of Eq. 2. The configuration with out-of-plane polarization pointing from In2Se3 (Q− surface) to In2Te3 (Q+ surface) is denoted as UU (up-up, Fig. 3b). The computed electronic electrostatic potential across the UU bilayer (Fig. 3c) is consistent with the design principle illustrated in Fig. 1d. The layer-resolved local density of states confirms that the VBM and CBM are located at the Q− and Q+ surfaces, respectively. According to Eq. 2, the UU configuration is a normal insulator as X^{Q+}_In2Te3 ≪ W^{Q−}_In2Se3. In comparison, the configuration with a reversed polarization, denoted as DD (down-down), has In2Se3 as the Q+ surface and In2Te3 as the Q− surface. The band inversion can then occur if ∆Φ + X^{Q+}_In2Se3 > W^{Q−}_In2Te3; the threshold value of ∆Φ is just 0.6 eV, which is much more feasible.
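Plugging the quoted work functions and electron affinities into the criterion of Eqs. (1)-(2) reproduces the thresholds mentioned in the text; the short sketch below simply repeats that arithmetic.

```python
# Work functions W and electron affinities X (eV) quoted in the text
W = {"In2Se3": 6.0, "In2Te3": 5.3}
X = {"In2Se3": 4.7, "In2Te3": 3.8}

def threshold_dphi(qplus, qminus):
    """Minimal potential step for band inversion: Delta Phi > W(Q-) - X(Q+), Eq. (1)."""
    return W[qminus] - X[qplus]

# UU: polarization from In2Se3 (Q-) to In2Te3 (Q+)  ->  large threshold, trivial insulator
print("UU threshold: %.1f eV" % threshold_dphi(qplus="In2Te3", qminus="In2Se3"))
# DD: polarization reversed, In2Se3 is Q+ and In2Te3 is Q-  ->  threshold of only 0.6 eV
print("DD threshold: %.1f eV" % threshold_dphi(qplus="In2Se3", qminus="In2Te3"))
```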
The hetero-bilayer also allows two configurations with intermediate values of P_OP, denoted as UD and DU (up-down and down-up, respectively, illustrated in the insets of Fig. 3e). Figure 3e shows the atomic-orbital-resolved band structures of In2Te3/In2Se3 for four different configurations including the effects of SOC. The computed band structures reveal a band inversion between Se-4p and Te-5p states in the DU, UD, and DD configurations, and the inverted gap at Γ increases during the polarization reversal process from UU to DD (gauged by the out-of-plane electrical dipole µ_OP), consistent with the design principle. We note that we carefully checked the band energies for bilayers of the DU, UD, and DD configurations with a high k-point density that samples the whole 2D Brillouin zone (Fig. S16 in Supplemental Materials). The values of the band gaps in these three configurations, albeit small, are physical. Moreover, the bilayer system with out-of-plane polarization investigated here does not have any crystalline symmetry (e.g., a mirror plane) to protect band crossings in the presence of SOC [20,21]. Calculations of the Z2 invariant using the Wilson loop approach confirm that both DU and UD configurations are in the QSH insulator phase. As a consequence, topological edge states occur at the boundaries of these samples, which can be clearly seen from the boundary spectral functions shown in Fig. 3f-g.
Surprisingly, the DD configuration appears to be Z2 trivial by the same calculation method. This indicates an interesting scenario in which the enhanced band inversion can convert a topological insulator into a trivial one through a phase transition, consistent with the model prediction in Fig. 2a. More importantly, this highlights that reversing the polarization of a ferroelectric hetero-bilayer, including its metastable intermediate states, generally enables access to various phase points across a wide range of the topological phase diagram.
Following the same design principle, we find that a number of bilayer heterostructures consisting of III2-VI3-type 2D FEs are simultaneously topological insulators and ferroelectrics, i.e., the DD and UD configurations of Al2Te3/Al2Se3, and the DU and UD configurations of Al2Te3/In2S3. The nontrivial band topologies of these systems are confirmed by Z2 calculations (see Supplemental Materials).
We comment on the well-known issue of band gap underestimation in (semi-)local DFT such as generalized gradient approximation used in this work. Our design principle by itself is quite robust and less sensitive to the DFT model as P OP (and thus ∆Φ) in principle can always be readily tuned, either by finding appropriate 2D FEs or by applying external electric/stress fields, to satisfy Eq. 1. If a systematic underestimation of the band gap exists, the true phase diagram shown in Fig. 3a will shift to the right horizontally relative to the DFT-predicted one. This may cause both "false positive" (predicting a non-trivial insulator being trivial) and "false negative" (predicting a trivial phase being non-trivial) due to the nature of successive phase transitions. We believe highthroughput calculations based on more accurate (albeit more expensive) DFT methods such as hybrid functionals can lead to promising 2DFETIs for experimental synthesis and characterizations.
For a band inversion process driven by SOC, the inversion strength λ is intrinsically limited by the atomic numbers of the heavy elements contributing to the states near E_F. In comparison, the central quantities of the design principle proposed in this work are P_OP and the associated E_d, which can be continuously tuned. In experimental realizations, we suggest that various factors, such as the dielectric constants of substrates, unintentional doping due to chemicals used in the device fabrication process, and in-plane strains induced by the lattice mismatch, can serve as knobs for precise control of E_d and λ. For example, one can use another layer of semiconducting 2D material or a substrate to partially screen the surface charges of the bilayer, thus setting the magnitude of E_d and λ to the desired value. Surface doping can be utilized to change the surface work function and electron affinity to configure the topological state.
FIG. 4. Schematic diagram of 2DFETI-based devices. A hetero-bilayer is topologically trivial in the UU configuration and non-trivial in the DD configuration. (a) Ferroelectric domain walls as field-configurable and moveable quasi-1D channels carrying dissipationless spin current. (b) Non-volatile topological memristor made of hetero-bilayers. The UU configuration is trivial and the DD configuration is nontrivial. The edge conductance can be written to any value between 0 and 2N e^2/h for a stacked structure containing N layers of 2DFETIs. The edge states remain non-volatile in the absence of a vertical electric field between the top and bottom gates.
The 2DFETIs exhibit a few features distinct from their 3D counterparts. First, a 2DFETI is expected to possess more robust switchability than a 3DFETI. The conducting surface states of a 3DFETI, though they can in principle serve as innate metallic electrodes to stabilize ferroelectricity at the nanoscale [14], may also strongly screen the external electric field, hindering the polarization reversal process. In contrast, a 2DFETI behaves like a normal insulator along the out-of-plane direction, making it easier to switch P_OP. Because the band topology is strongly coupled with the direction and magnitude of P_OP, it is feasible to use an external electric field to drive a trivial-nontrivial topological phase transition, corresponding to an OFF-ON switch of the quantized edge conductance. According to our design principle (Eq. 2), the UU and DD configurations of a bilayer heterostructure can have different band topologies. Unlike 1T'-MoS2, which requires a sustained electric field to maintain the trivial state [3], the UU and DD configurations are intrinsically stable in the absence of an electric field, allowing a non-volatile topological field-effect transistor. Additionally, the 180° domain wall (DW) separating UU and DD domains will support helical metallic states protected from back-scattering. These DWs in a 2DFETI can serve as field-configurable and moveable quasi-1D dissipationless charge/spin transport channels (Fig. 4a), offering new opportunities for domain-wall-based quantum electrical circuits.
We propose a topological memristor (illustrated in Fig. 4b) for non-volatile multi-state applications based on vdW heterostructures of 2DFETIs separated by 2D wide-band-gap insulators such as hexagonal boron nitride (hBN). The device setup is similar to a topological transistor [3] but with the advantages of being non-volatile and multistate. Note that a field-effect transistor made completely from 2D materials has already been demonstrated [22], indicating the possibility of constructing a similar unit using only 2D materials. The hetero-bilayer-based 2DFETI is topologically trivial in the UU configuration and non-trivial in the DD configuration. It has been demonstrated in ferroelectric thin films that the polarization state can be deterministically set to a desired value in an on-demand fashion by controlling the voltage and width of pulsed electric fields [23,24]. Following a similar spirit, the edge conductance of a vdW heterostructure containing N sheets of 2DFETIs can be written electrically to any value between 0 and 2N e^2/h by varying the number of bilayers in the DD configuration, and it retains this value without bias. The proposed 2DFETI-based topological memristor, combining the advantages of topological insulators, ferroelectrics, and two-dimensional materials, may hold promise for energy-efficient, high-density synaptic electronics and neuromorphic systems.
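The multi-state behaviour of the proposed memristor follows from simple conductance counting: each bilayer written into the non-trivial DD configuration contributes one pair of helical edge channels, i.e. 2e^2/h. A minimal bookkeeping sketch (illustrative only):

```python
E_CHARGE = 1.602176634e-19        # elementary charge, C
PLANCK_H = 6.62607015e-34         # Planck constant, J s
G0 = 2 * E_CHARGE**2 / PLANCK_H   # one pair of helical edge channels, 2e^2/h, in siemens

def edge_conductance(n_layers_total, n_layers_dd):
    """Edge conductance of a stack with n_layers_dd bilayers written to the DD (nontrivial) state."""
    assert 0 <= n_layers_dd <= n_layers_total
    return n_layers_dd * G0       # any value between 0 and 2N e^2/h

N = 4
for n_dd in range(N + 1):
    print("%d of %d bilayers nontrivial -> G = %.2e S (= %d x 2e^2/h)"
          % (n_dd, N, edge_conductance(N, n_dd), n_dd))
```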
FIG. 2. Successive trivial-nontrivial-trivial phase transitions driven by the strength (λ) of band inversion. (a) Numerically obtained band gap as a function of λ. (b) Wilson loop calculations for three different phases.

FIG. 3. Electronic structures of In2Te3/In2Se3 bilayer heterostructures. Atomic structures of (a) III2-VI3 2D ferroelectrics and (b) the In2Te3/In2Se3 bilayer heterostructure in the UU configuration. (c) Electronic electrostatic potential of the UU configuration and (d) layer-resolved local density of states (LDOS) for the Q+ and Q− surfaces computed with DFT. The potential step across the bilayer and the locations of the VBM and CBM are consistent with Fig. 1d. (e) Atomic-orbital-resolved band structures of the UU, DU, UD, and DD configurations. µ_OP is the out-of-plane dipole. Spectral functions at the edges of (f) DU and (g) UD configurations. Because of the in-plane polarization P_IP, the two edges acquire opposite bound charges (ρ+_IP and ρ−_IP), leading to shifted Dirac cones relative to the Fermi level.
ACKNOWLEDGMENTS J.H., X.D., and S.L. acknowledge the supports from Westlake Education Foundation, Westlake Multidisciplinary Research Initiative Center, and National Natural Science Foundation of China (52002335). Y.K. acknowledges the support from the NRF Grant (2020R1F1A106926111) The computational resource is provided by Westlake HPC Center and the Korea Institute of Science and Technology Information (KISTI).
FIG. 1. Design principle for a two-dimensional ferroelectric topological insulator. Band bending in (a) a 2D ferroelectric and (b) a bilayer heterostructure consisting of two different 2D ferroelectrics. The solid line represents the energy of electrons. The band inversion can be controlled by the switching of the ferroelectric polarization. (c) Potential topological insulator phase due to band inversion. (d) Trivial insulator phase with uninverted bands.
[1] K. S. Novoselov, Electric field effect in atomically thin carbon films, Science 306, 666 (2004).
[2] C. L. Kane and E. J. Mele, Quantum spin Hall effect in graphene, Phys. Rev. Lett. 95, 226801 (2005).
[3] X. Qian, J. Liu, L. Fu, and J. Li, Quantum spin Hall effect in two-dimensional transition metal dichalcogenides, Science 346, 1344 (2014).
[4] L. Kou, Y. Ma, Z. Sun, T. Heine, and C. Chen, Two-dimensional topological insulators: Progress and prospects, J. Phys. Chem. Lett. 8, 1905 (2017).
[5] I. P. Batra, P. Wurfel, and B. D. Silverman, Phase transition, stability, and depolarization field in ferroelectric thin films, Phys. Rev. B 8, 3257 (1973).
[6] A. Belianinov, Q. He, A. Dziaugys, P. Maksymovych, E. Eliseev, A. Borisevich, A. Morozovska, J. Banys, Y. Vysochanskii, and S. V. Kalinin, CuInP2S6 room temperature layered ferroelectric, Nano Lett. 15, 3808 (2015).
[7] Y. Zhou, D. Wu, Y. Zhu, Y. Cho, Q. He, X. Yang, K. Herrera, Z. Chu, Y. Han, M. C. Downer, H. Peng, and K. Lai, Out-of-plane piezoelectricity and ferroelectricity in layered α-In2Se3 nanoflakes, Nano Lett. 17, 5508 (2017).
[8] K. Chang, J. Liu, H. Lin, N. Wang, K. Zhao, A. Zhang, F. Jin, Y. Zhong, X. Hu, W. Duan, et al., Discovery of robust in-plane ferroelectricity in atomic-thick SnTe, Science 353, 274 (2016).
[9] S. Yuan, X. Luo, H. L. Chan, C. Xiao, Y. Dai, M. Xie, and J. Hao, Room-temperature ferroelectricity in MoTe2 down to the atomic monolayer limit, Nat. Commun. 10 (2019).
[10] J. Xiao, Y. Wang, H. Wang, C. D. Pemmaraju, S. Wang, P. Muscher, E. J. Sie, C. M. Nyby, T. P. Devereaux, X. Qian, X. Zhang, and A. M. Lindenberg, Berry curvature memory through electrically driven stacking transitions, Nat. Phys. 16, 1028 (2020).
[11] W. Ding, J. Zhu, Z. Wang, Y. Gao, D. Xiao, Y. Gu, Z. Zhang, and W. Zhu, Prediction of intrinsic two-dimensional ferroelectrics in In2Se3 and other III2-VI3 van der Waals materials, Nat. Commun. 8, 1 (2017).
[12] D. L. Duong, S. J. Yun, and Y. H. Lee, van der Waals layered materials: Opportunities and challenges, ACS Nano 11, 11803 (2017).
[13] J.-J. Zhang, D. Zhu, and B. I. Yakobson, Heterobilayer with ferroelectric switching of topological state, Nano Lett. 21, 785 (2020).
[14] S. Liu, Y. Kim, L. Z. Tan, and A. M. Rappe, Strain-induced ferroelectric topological insulator, Nano Lett. 16, 1663 (2016).
[15] T. Sluka, A. K. Tagantsev, D. Damjanovic, M. Gureev, and N. Setter, Enhanced electromechanical response of ferroelectrics due to charged domain walls, Nat. Commun. 3, 748 (2012).
[16] J. von Neumann and E. Wigner, No crossing rule, Phys. Z. 30, 467 (1927).
[17] P. Giannozzi, S. Baroni, N. Bonini, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, G. L. Chiarotti, M. Cococcioni, I. Dabo, et al., QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials, J. Phys. Condens. Matter 21, 395502 (2009).
[18] P. Giannozzi, O. Andreussi, T. Brumme, O. Bunau, M. B. Nardelli, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, M. Cococcioni, et al., Advanced capabilities for materials modelling with QUANTUM ESPRESSO, J. Phys. Condens. Matter 29, 465901 (2017).
[19] G. Pizzi, V. Vitale, R. Arita, S. Blügel, F. Freimuth, G. Géranton, M. Gibertini, D. Gresch, C. Johnson, T. Koretsune, J. Ibañez-Azpiroz, H. Lee, J.-M. Lihm, D. Marchand, A. Marrazzo, Y. Mokrousov, J. I. Mustafa, Y. Nohara, Y. Nomura, L. Paulatto, S. Poncé, T. Ponweiser, J. Qiao, F. Thöle, S. S. Tsirkin, M. Wierzbowska, N. Marzari, D. Vanderbilt, I. Souza, A. A. Mostofi, and J. R. Yates, Wannier90 as a community code: new features and applications, J. Phys.: Condens. Matter 32, 165902 (2020).
[20] C. Fang, Y. Chen, H.-Y. Kee, and L. Fu, Topological nodal line semimetals with and without spin-orbital coupling, Phys. Rev. B 92, 081201 (2015).
[21] H. Gao, J. W. Venderbos, Y. Kim, and A. M. Rappe, Topological semimetals from first principles, Annu. Rev. Mater. Res. 49, 153 (2019).
[22] T. Roy, M. Tosun, J. S. Kang, A. B. Sachid, S. B. Desai, M. Hettick, C. C. Hu, and A. Javey, Field-effect transistors built from all two-dimensional material components, ACS Nano 8, 6259 (2014).
[23] A. Chanthbouala, V. Garcia, R. O. Cherifi, K. Bouzehouane, S. Fusil, X. Moya, S. Xavier, H. Yamada, C. Deranlot, N. D. Mathur, M. Bibes, A. Barthélémy, and J. Grollier, A ferroelectric memristor, Nat. Mater. 11, 860 (2012).
[24] R. Xu, S. Liu, S. Saremi, R. Gao, J. J. Wang, Z. Hong, H. Lu, A. Ghosh, S. Pandya, E. Bonturim, Z. H. Chen, L. Q. Chen, A. M. Rappe, and L. W. Martin, Kinetic control of tunable multi-state switching in ferroelectric thin films, Nat. Commun. 10, 1282 (2019).
| []
|
[
"Sources of Irreproducibility in Machine Learning: A Review",
"Sources of Irreproducibility in Machine Learning: A Review"
]
| [
"Erik Odd [email protected] ",
"Gundersen ",
"Kevin Coakley [email protected] ",
"Christine R Kirkpatrick [email protected] ",
"Yolanda Gil [email protected] ",
"\nNorwegian University of Science and Technology Trondheim\nNorway Aneo AS Trondheim\nNorway\n",
"\nSan Diego Supercomputer Center\nSan Diego Supercomputer Center\nInformation Sciences Institute\nNorwegian University of Science and Technology Trondheim\nUC San Diego La Jolla, UC San Diego La JollaLos AngelesNorway, USA, USA, USC, USA\n"
]
| [
"Norwegian University of Science and Technology Trondheim\nNorway Aneo AS Trondheim\nNorway",
"San Diego Supercomputer Center\nSan Diego Supercomputer Center\nInformation Sciences Institute\nNorwegian University of Science and Technology Trondheim\nUC San Diego La Jolla, UC San Diego La JollaLos AngelesNorway, USA, USA, USC, USA"
]
| []
| Background: Many published machine learning studies are irreproducible. Issues with methodology and not properly accounting for variation introduced by the algorithm themselves or their implementations are attributed as the main contributors to the irreproducibility.Problem: There exist no theoretical framework that relates experiment design choices to potential effects on the conclusions. Without such a framework, it is much harder for practitioners and researchers to evaluate experiment results and describe the limitations of experiments. The lack of such a framework also makes it harder for independent researchers to systematically attribute the causes of failed reproducibility experiments. Objective: The objective of this paper is to develop a framework that enable applied data science practitioners and researchers to understand which experiment design choices can lead to false findings and how and by this help in analyzing the conclusions of reproducibility experiments. Method: We have compiled an extensive list of factors reported in the literature that can lead to machine learning studies being irreproducible. These factors are organized and categorized in a reproducibility framework motivated by the stages of the scientific method. The factors are analyzed for how they can affect the conclusions drawn from experiments. A model comparison study is used as an example. Conclusion: We provide a framework that describes machine learning methodology from experimental design decisions to the conclusions inferred from them.CCS CONCEPTS• Computing methodologies → Machine learning.KEYWORDSMachine learning, research methodology, reproducibility, model comparison experiments . Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). | 10.48550/arxiv.2204.07610 | [
"https://export.arxiv.org/pdf/2204.07610v2.pdf"
]
| 248,227,686 | 2204.07610 | 5818cae37a473d279faf096295f4f06a358d3054 |
Sources of Irreproducibility in Machine Learning: A Review
Erik Odd [email protected]
Gundersen
Kevin Coakley [email protected]
Christine R Kirkpatrick [email protected]
Yolanda Gil [email protected]
Norwegian University of Science and Technology Trondheim
Norway Aneo AS Trondheim
Norway
San Diego Supercomputer Center
San Diego Supercomputer Center
Information Sciences Institute
Norwegian University of Science and Technology Trondheim
UC San Diego La Jolla, UC San Diego La JollaLos AngelesNorway, USA, USA, USC, USA
Sources of Irreproducibility in Machine Learning: A Review
Background: Many published machine learning studies are irreproducible. Issues with methodology and not properly accounting for variation introduced by the algorithms themselves or their implementations are attributed as the main contributors to the irreproducibility. Problem: There exists no theoretical framework that relates experiment design choices to potential effects on the conclusions. Without such a framework, it is much harder for practitioners and researchers to evaluate experiment results and describe the limitations of experiments. The lack of such a framework also makes it harder for independent researchers to systematically attribute the causes of failed reproducibility experiments. Objective: The objective of this paper is to develop a framework that enables applied data science practitioners and researchers to understand which experiment design choices can lead to false findings and how, and by this to help in analyzing the conclusions of reproducibility experiments. Method: We have compiled an extensive list of factors reported in the literature that can lead to machine learning studies being irreproducible. These factors are organized and categorized in a reproducibility framework motivated by the stages of the scientific method. The factors are analyzed for how they can affect the conclusions drawn from experiments. A model comparison study is used as an example. Conclusion: We provide a framework that describes machine learning methodology from experimental design decisions to the conclusions inferred from them. CCS CONCEPTS: • Computing methodologies → Machine learning. KEYWORDS: Machine learning, research methodology, reproducibility, model comparison experiments.
INTRODUCTION
In recent years, many machine learning studies have shown to be very challenging to reproduce. The areas of machine learning that have reported issues are very diverse and include forecasting [Makridakis et al. 2018], natural language processing [Belz et al. 2021a], generative adversarial networks , deep reinforcement learning [Henderson et al. 2018], recommender systems [Dacrema et al. 2019], and image recognition [Bouthillier et al. 2019]. The authors above point to many methodological issues that are commonly found in machine learning research. Since applications of machine learning reach into many other fields [Gibney 2022], methodological shortcomings can have far reaching effects particularly in domains with high-stakes decisions such as medicine [Roberts et al. 2021;Varoquaux and Cheplygina 2022], social sciences , psychology [Hullman et al. 2022] and many more [Raji et al. 2022].
Proper methodology requires a good understanding of which experiment design choices can lead to false findings. An experiment conducted by Pham et al. [2020] illustrated 16 identical training runs of a deep learning model that resulted in test accuracy varying from 9% to 99%. The authors also presented a survey of 900 participants where 84% were unsure or unaware about variance caused by how an experiment is implemented. Gundersen and Kjensmo [2018] found that experiments in AI presented at top conferences had incomplete documentation. The recognition of a reproducibility crisis in AI [Hutson 2018] has led to community-wide efforts to mitigate the crisis. The machine learning community is not alone in experiencing a reproducibility crisis [Baker 2016;Button et al. 2013;Open Science Collaboration 2015].
The machine learning and AI communities have introduced several mechanisms to improve the level of empirical rigor: 1) reproducibility checklists, 2) datasheets, 3) reproducibility challenges and 4) registered reports. Reproducibility checklists have been introduced at most top-level machine learning and AI conferences, such as NeurIPS, AAAI, ICML, IJCAI, and EMNLP. Journals have still not made reproducibility checklists a default part of their submission procedure, although this is to change for JAIR (Journal of Artificial Intelligence Research) [Gundersen et al. 2023]. Datasets introduced at the NeurIPS Dataset and Benchmark Track are required to be accompanied by a datasheet [Gebru et al. 2021] to ensure that they are properly documented. Reproducibility challenges have been introduced at both ICLR and NeurIPS [Pineau et al. 2021] to encourage third parties to try to confirm findings of articles published at top-level conferences. JAIR is to introduce reproducibility reports inspired by the reproducibility challenges [Gundersen et al. 2023]. Finally, registered reports, which have been missing in AI and ML [Gundersen 2021a], have been introduced at the journal ACM Transactions on Recommender Systems.
Figure 1: The scientific method is a systematic process for acquiring knowledge about the world: 1) The world is observed, 2) explanations are made and testable statements are formulated, 3) experiments are designed to test the hypotheses and documented as research protocols, 4) experiments are implemented as code, 5) structured observations are collected and stored digitally as data, 6) the implementation of the experiment is executed and outcomes are produced, 7) the outcomes are analyzed automatically by executing code, 8) the analysis is interpreted and a conclusion is reached to update beliefs [Gundersen 2021b].
The framework presented here complements those efforts as it enables applied data science researchers and practitioners to: i) understand which experiment design choices can lead to false findings by providing an overview of design choices that can lead to irreproducible results, ii) understand how these design choices can affect the conclusion of experiments by mapping them to which part of the result elicitation process they belong to, iii) discuss limitations of experiments by using the overview of design decision and their consequences as a starting point for discussion, and iv) conduct and analyze reproducibility experiments by furthering their ability to understand and pinpoint the potential causes of failed reproducibility experiments. Hence, the framework presented here could provide a methodological basis for reproducibility checklists and provides justifications of why items should be reported. The framework provides an opportunity to educate the community, as the items are not motivated. For example, checklists tend to focus on the software side, while non-determinism must be controlled at all levels of the technical stack [Zhuang et al. 2021]. However, why is not clear to most researchers and practitioners according to Zhuang et al. [2021]. Also, the design decisions presented in this paper extend existing reproducibility checklists significantly, by identifying many additional factors that need to be reported for a published experiment to be reproducible. Similarly, the framework could be useful when designing experiments for registered reports and as a justification for why data sheets are necessary.
The main contribution of this paper is the identification and categorization of 41 design choices documented in the literature that can lead to false conclusions. Another major contribution is a novel framework that enables applied data science researchers and practitioners to understand which experiment design choices can lead to false findings, understand how these design choices can affect the conclusion of experiments and conduct and analyze reproducibility experiments.
A REPRODUCIBILITY FRAMEWORK
We follow the definitions proposed by Gundersen [2021b]. Reproducibility is defined as the ability of independent investigators to draw the same conclusions from an experiment by following the documentation shared by the original investigators; a reproducibility experiment is an experiment conducted by independent researchers to confirm the findings of the original study using the documentation shared by the researchers that conducted the original study. The documentation of a machine learning experiment is not restricted to written text in the form of a scientific report, it could also include the code and data. However, additional documentation beyond a scientific report is not required for independent investigators to conduct a reproducibility experiment. Gundersen [2021b] specifies four different types of reproducibility studies based on which documentation is shared by the original researchers: R1 Description if only textual documentation is shared,
R2 Code if text and code are shared, R3 Data if text and data are shared, and R4 Experiment if text, data, and code are shared. Having access to code and data reduces the effort required to reproduce the results, while also leading to increased trust in the research [Gundersen 2019]. Drummond [2009] argues that the power of a reproducibility experiment is greater with increased difference from the original study, which can be enforced by providing less documentation. Still, code and data are commonly acknowledged as important for third parties to reproduce results [Haibe-Kains et al. 2020]. There are three degrees of reproducibility that are derived from the scientific method, which is illustrated in Figure 1. The idea will be exemplified by a model comparison study.
Progress in machine learning is to a large degree driven by empirical evidence, and model comparison is the standard method to identify the best performing machine learning model for a given task [Bouthillier et al. 2019;Dacrema et al. 2021;Melis et al. 2018;Sculley et al. 2018]. A model comparison study is an experiment of which the objective is to decide which of a set of models has better performance. Hence, a model comparison identifies the subset S of a given set of computer programs C that performs task T better according to measure P after learning from the same experience E. For a computer program, or model, to be considered a clear winner of a model comparison, it should be significantly better than the model that was previously considered state-of-the-art for the given task [Sculley et al. 2018]. The hypothesis that a model is significantly better than the other models in the comparison should be tested statistically [Cohen 1995]. Empirical evidence is given by conducting experiments where the competing models learn from the same experiences under the same conditions.
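As a concrete, hedged illustration of the statistical testing mentioned above, the sketch below compares two models using a paired one-sided Wilcoxon signed-rank test over per-run scores; the scores are made-up placeholder values, and the choice of test is one reasonable option rather than the method of any particular study.

```python
# Sketch: testing whether model A outperforms model B, assuming paired
# per-run (or per-fold) accuracy scores are available. Scores are made up.
import numpy as np
from scipy.stats import wilcoxon

scores_a = np.array([0.912, 0.905, 0.921, 0.899, 0.917, 0.908, 0.915, 0.902, 0.910, 0.919])
scores_b = np.array([0.903, 0.897, 0.915, 0.901, 0.906, 0.899, 0.904, 0.896, 0.905, 0.907])

# One-sided test: the alternative hypothesis is that A's scores are greater.
stat, p_value = wilcoxon(scores_a, scores_b, alternative="greater")

print(f"A: {scores_a.mean():.3f} +/- {scores_a.std(ddof=1):.3f}")
print(f"B: {scores_b.mean():.3f} +/- {scores_b.std(ddof=1):.3f}")
print(f"Wilcoxon signed-rank p-value: {p_value:.4f}")
```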
Example: A reproducibility experiment is conducted to re-test the hypothesis that a convolutional neural network performs better than a dense neural network on the MNIST classification task, as reported by LeCun et al. [1998]. The documentation of the original study contained the paper cited above as well as the MNIST dataset that was famously published. No code was published, so a reproducibility study requires re-implementing the convolutional neural network and dense neural networks used in the original study. Performance was measured using classification error and uncertainty was estimated, but exactly how was not described. As both the paper (text) and data (the MNIST dataset) are shared by the authors, and these resources are used for the reproducibility experiment, the reproducibility experiment is of type R3 Data.
Outcome reproducibility (O) is achieved if the reproducibility experiment produces the same outcome as the original experiment. The outcome of the image classification task is the set of labels given to each image in the test dataset. If the model in the reproducibility experiment classifies each image with the exact same labels as they were assigned in the original experiment, the reproducibility experiment is outcome reproducible given that the analysis and conclusion of the original investigation are sound.
An experiment can only be evaluated for outcome reproducibility if the outcome produced in the original experiment is shared, which is the case for only 4% of AI studies [Gundersen and Kjensmo 2018]. The outcome was not published by LeCun et al. [1998], so our example reproducibility experiment could not be outcome reproducible, but if it was, it would have been classified as OR3.
Analysis reproducibility (A) is achieved when the same analysis that was made by the original investigators leads to the same conclusion for the reproducibility experiment even if the outcomes differ. Evaluating an experiment for analysis reproducibility requires the methods that were used to analyze the outcome to be shared. Given our example, where code has to be re-implemented and executed in a different computing environment (we might not have access to an SGI server), the outcome will differ. However, as long as the convolutional neural network performs significantly better than the dense neural network in the reproducibility experiment when measuring the performance using error rate and uncertainty, the reproducibility experiment is analysis reproducible and would be classified as AR3.
Interpretation reproducibility (I) is achieved when a different analysis is done by independent investigators (on the same or different outcome) and their interpretation of the analysis supports the conclusion drawn in the original experiment. Hence, the conclusion of the original experiment is supported even though the reproducibility experiment produces a different outcome and the outcome is analyzed in a different way, i.e., by using the F1-score instead of the error rate, as long as the F1-score is significantly better for the convolutional neural network than for the dense neural network. Evaluating for interpretation reproducibility requires that the methods used for analyzing the outcome are shared. In our example, the reproducibility experiment would have been classified as IR3.
CATEGORIZING DESIGN DECISIONS
Many decisions about how to conduct and evaluate an experiment must be made before the experiment can be executed. Some of the decisions are made actively while others are made passively. In the end, all the decisions constitute the experiment, but some of these decisions can lead to changed outcomes, analyses and interpretations of the analyses and thus false conclusions. These decisions are independent variables on which the experiment's conclusion depends. Sometimes even seemingly small changes to these independent variables can lead to false conclusions. Hence, these design decisions can be interpreted as potential sources of irreproducibility.
In this paper, we provide an overview of the design decisions found in the literature that can lead to false conclusions. We have identified 41 such design decisions and organized them into six major categories, shown in Figure 2. These six categories, which we call factors in line with the terminology used by Pham et al. [2020], comprise a super set containing the sources of variation that they introduced. They group the sources of variation into algorithm-level and implementation-level factors, and they show how the sources of variation can change the outcome of a reproducibility experiment so much that the analyses lead to different conclusions if the variation is not controlled for. Our approach is to provide an overview and taxonomy of design decisions that can lead to false conclusions and describe how they can lead to false conclusions. We believe that this will not only be a valuable source for practitioners and researchers when designing experiments, but also when discussing limitations of conclusions and conducting reproducibility experiments.
STUDY DESIGN FACTORS
Study design factors capture the decisions that go into making the high-level plan for how to conduct and analyze an experiment in order to answer the stated hypothesis and research questions.
Unsuited experiment design These are experimental analyses that deviate from the claimed or implicit research goals . Motivating why particular performance metrics, datasets or data preprocessing techniques are used should be given explicitly in order to ensure that the experiment design is suited [Dacrema et al. 2021]. Doing analyses that do not support the research goals will likely lead to poor interpretations and thus the wrong conclusions.
p-hacking When decisions of which data is included and which analysis is used are made during the analysis instead of in advance, researchers may self-servingly select the data and analysis that produce statistically significant results [Simonsohn et al. 2014]. This will affect the interpretation and thus the conclusion.
p-fishing This term is used when seeking statistically significant results beyond the original hypothesis [Cockburn et al. 2020]. Changing the hypothesis based on the p-value will not change the outcome nor the analysis, but the interpretation of the results will go beyond the original intent.
HARKing (Hypotheses After Results are Known) is post-hoc reframing of experimental intentions to present a p-fished outcome as having been predicted from the start. HARKing is to execute the scientific method backwards and will change the interpretation of the results, like p-fishing.
Choice of baselines For many machine learning tasks, it is often not clear what comprises the state-of-the-art. Studies have found that many deep learning papers only compared against other deep learning algorithms, even though they were not performing better than simpler baselines. This could happen in cases where progress is shown by reusing experimental designs that propagate weak baselines without questioning them [Dacrema et al. 2021]. Choosing a baseline that is inferior to the state-of-the-art does not change the outcome of the target model, nor will it interfere with the analysis, except that the baseline used for comparison is poorer than it could be. The interpretation could change from the baseline performing better to the target model performing better.
Experiment initialization Differences in the setup or initialization of an experiment can lead to a difference of between 5% and 40% in the number of solved instances of a SAT solver when running on the same hardware [Fichte et al. 2021], so they must be reported [Henderson et al. 2018]. The setup can affect the outcome, so the same analysis could be interpreted in a different way and change the conclusion if the variation under different setups is significant.
Computational budget Researchers with large computational budgets can sometimes prevent meaningful comparisons of algorithm performance as they can spend the budget on intensive hyperparameter tuning of any given algorithm [Dodge et al. 2019; Melis et al. 2018; Zhang and Duh 2020]. Bouthillier and Varoquaux [2020] report that around 45% of hyperparameters were manually tuned at NeurIPS 2019 and ICLR 2020, so systematic search could give a huge advantage. Running algorithms for longer using more resources produces a different outcome.
Selective tuning of algorithms Researchers' favored algorithms are often fine-tuned to get the best possible result [Latifi et al. 2021] while baselines are often not properly tuned [Dacrema et al. 2021]. In some cases, old performance results are used to claim greater improvement than what could otherwise be claimed against the state-of-the-art [Crane 2018]. Tuning of algorithms in a selective, inconsistent manner could produce outcomes that do not reflect performance under the same conditions.
Study design factors can be controlled by designing a fair model comparison study where the hypothesis is tested genuinely and where the hypothesis is stated in such a way that the test properly answers the research question. A fair model comparison study assigns the same amount of resources, such as tuning and computational budget, to all models and sets up the experiment in a way that does not give advantages to a subset of the models or uses subpar baselines.
ALGORITHMIC FACTORS
Algorithmic factors are design decisions that introduce stochasticity into the learning algorithms and training processes, which leads to a different outcome for every experiment run.
Hyperparameter optimization Different hyperparameter optimization methods find different optimal hyperparameter values [Bouthillier et al. 2019, 2021; Henderson et al. 2018; Reimers and Gurevych 2017], so researchers should specify exactly which method is used to improve reproducibility [Cooper et al. 2021; Raff 2019]. Reimers and Gurevych [2017] evaluated the variation of three different methods: random search, grid search and Bayesian optimization, and found that the variation that they caused is significant compared to other sources of variation.
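The sketch below illustrates the point with scikit-learn: two search strategies applied to the same model, data, and search space can return different "best" configurations. The dataset, estimator, and parameter grid are arbitrary choices for illustration, not taken from the cited studies.

```python
# Sketch: grid search versus random search over the same hyperparameter space
# can select different configurations; the search method should be reported.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
space = {"C": [0.1, 1, 10, 100], "gamma": [1e-4, 1e-3, 1e-2, 1e-1]}

grid = GridSearchCV(SVC(), space, cv=5).fit(X, y)
rand = RandomizedSearchCV(SVC(), space, n_iter=6, cv=5, random_state=0).fit(X, y)

print("grid search best:  ", grid.best_params_, round(grid.best_score_, 3))
print("random search best:", rand.best_params_, round(rand.best_score_, 3))
```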
Random weights initialization The initialization of weights in neural networks affects their performance [Pham et al. 2020;Zhuang et al. 2021]. Different initial weights might lead the hyperparameter optimization method to converge to local minima.
Stochastic Layers Dropout, variational dropout, and noisy activations intended to make deep neural networks more robust end up affecting their performance [Pham et al. 2020;Reimers and Gurevych 2017;Zhuang et al. 2021].
Random feature selection Many learning algorithms rely on selecting features at random during training such as Random Forests [Breiman 2001]. The exact set of features selected will affect the outcome, and some selections might perform better than others [Pouchard et al. 2020].
Data Shuffling Data samples are often shuffled randomly so that learning converges faster, which results in differences in outcome [Pham et al. 2020;Reimers and Gurevych 2017;Zhuang et al. 2021].
Batch ordering Because of memory limitations, data samples are fed into deep learning algorithms in batches. Randomizing batch order between epochs results in different outcomes between training runs [Bouthillier et al. 2021;Pham et al. 2020].
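One way to keep the batch order fixed between runs in PyTorch is to pass an explicitly seeded generator to the DataLoader, as in the sketch below; the tiny tensor dataset is purely illustrative.

```python
# Sketch: pinning the shuffled batch order with a seeded generator. Without
# the generator (or with a different seed), the batch order changes per run.
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.arange(10).float().unsqueeze(1))

g = torch.Generator()
g.manual_seed(0)
loader = DataLoader(dataset, batch_size=4, shuffle=True, generator=g)

for (batch,) in loader:
    print(batch.flatten().tolist())  # same batch order on every run
```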
Relying on stochasticity causes outcomes to differ between experiment runs unless explicitly controlled for. Different combinations of initialization, training algorithm and dataset will lead to different outcomes that will perform differently. If particularly lucky or unlucky, one might encounter single runs that perform very differently, even to such a degree that they affect the findings. Algorithmic factors can be controlled by setting the pseudo-random number generator initialization seeds so that the outcome will be the exact same for each experiment run if everything else remains the same. However, producing the same outcome over all runs does not mean that a finding is robust and generalizable. Hence, the variation in the performance measured for the outcome produced over several experiment runs must be reported. As pointed out by Miller and Miller [2018]: "No quantitative results are of any value unless they are accompanied by some estimate of the errors inherent in them."
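A minimal sketch of such reporting is shown below: the same model is retrained with several seeds and the mean and standard deviation of the test score are reported instead of a single number. The dataset and model are arbitrary stand-ins.

```python
# Sketch: characterizing run-to-run variation by repeating training under
# different seeds while holding the data split fixed.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

scores = []
for seed in range(10):  # the seed only controls the algorithmic randomness
    model = RandomForestClassifier(n_estimators=100, random_state=seed)
    model.fit(X_tr, y_tr)
    scores.append(accuracy_score(y_te, model.predict(X_te)))

print(f"accuracy: {np.mean(scores):.3f} +/- {np.std(scores, ddof=1):.3f} over {len(scores)} runs")
```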
IMPLEMENTATION FACTORS
Implementation factors are design choices related to the software and hardware that are used to execute the experiment. These factors mirror the variations in physical sciences experiments that are introduced by conducting the same experiment in different laboratories.
Initialization seeds Differences in the seeds used to initialize the pseudorandom number generator lead to differences in outcome [Bouthillier et al. 2019; Melis et al. 2018]. Reimers and Gurevych [2017] show that the seed value for the random number generator can result in statistically significant (p < 10⁻⁴) differences in results for different state-of-the-art systems. The same seed on different platforms produces different results [Gundersen et al. 2022; Nagarajan et al. 2019; Pouchard et al. 2020].
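A common, though not sufficient, mitigation is to pin every pseudo-random number generator the experiment touches, as sketched below for a typical PyTorch stack; even with all seeds fixed, results can still differ across hardware, library versions, and platforms, as noted above.

```python
# Sketch: fixing the seeds of the RNGs a typical PyTorch experiment depends on.
import os
import random

import numpy as np
import torch

def set_all_seeds(seed: int) -> None:
    random.seed(seed)                 # Python's built-in RNG
    np.random.seed(seed)              # NumPy's global RNG
    torch.manual_seed(seed)           # PyTorch CPU RNG
    torch.cuda.manual_seed_all(seed)  # PyTorch GPU RNGs
    # Only affects hash randomization if set before the interpreter starts.
    os.environ["PYTHONHASHSEED"] = str(seed)
    # Trade speed for determinism in cuDNN algorithm selection.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_all_seeds(42)
```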
Software Outcomes across implementations of similar algorithms can vary significantly, e.g., TensorFlow vs. PyTorch [Henderson et al. 2018;Pouchard et al. 2020]. Hong et al. [2013] showed that a difference in operating systems affected the result. Software versions [Crane 2018;Gundersen et al. 2022;Shahriari et al. 2022], bugs in either one's own implementation or libraries, frameworks or operating systems might affect the outcomes [Crane 2018;Gundersen et al. 2022;Pham et al. 2020;Pineau et al. 2021].
Parallel execution Random completion order of parallel tasks introduces variation [Pham et al. 2020]. Increased parallelism is a driver for noise [Zhuang et al. 2021]. Truncation error of floating point calculations introduces variability, as a + b + c ≠ c + b + a when calculated in parallel [Gundersen et al. 2022; Pham et al. 2020]. Truncation error can be reduced but not completely removed by changing from single precision (32 bits) to double precision (64 bits) at a cost of doubling memory requirements and tripling the training time [Pinto et al. 2021].
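The non-associativity itself is easy to demonstrate, as in the sketch below; the float32 chunked sum mimics a parallel reduction in which partial sums are combined in a different order than a serial pass.

```python
# Sketch: floating-point addition is not associative, so changing the order
# in which (partial) sums are combined can change the result.
import numpy as np

print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))   # False

values = np.random.default_rng(0).standard_normal(1_000_000).astype(np.float32)
serial = np.sum(values)                                    # one reduction order
chunked = np.sum(values.reshape(1000, 1000).sum(axis=1))   # partial sums first
print(serial, chunked)   # typically differ in the last digits for float32

# Accumulating in float64 shrinks, but does not remove, such discrepancies.
print(np.sum(values, dtype=np.float64))
```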
Compiler settings Hong et al. [2013] found severe sensitivity to Intel compiler optimization levels for weather simulations that rely on huge amounts of floating point calculations, which is the case for machine learning too.
Auto-selection of primitive operations High level libraries implement deep learning algorithms using GPU-optimized deep learning primitives provided by low-level libraries such as cuDNN and CUDA [Pham et al. 2020]. Autotune in cuDNN automatically benchmarks several modes of operation for primitive functions in run-time, which might change between runs.
Processing unit Changing the processor can affect results [Gundersen et al. 2022;Hong et al. 2013]. Nagarajan et al. [2019] found that a deterministic GPU implementation repeatedly generated the same result when executed on the same GPU, but changed to a different, but deterministic result, when executed on another GPU.
Rounding errors Different hardware architectures and software implement the rounding of floating-point numbers in different ways, the rounding errors accumulate during long running calculations, particularly when using GPUs [Taufer et al. 2010].
Implementation factors can cause outcomes to differ if software, hardware or initialization seeds are changed between experiment runs or parallel processing is utilized. Deterministic implementations of primitive operations, the use of single thread processes and forcing execution in serial manner will guarantee deterministic results [Pham et al. 2020] -for a given setup.
OBSERVATION FACTORS
Observational factors are related to how data is generated, processed and augmented, but also to the properties of environments used for benchmarking, such as agent simulation environments.
Dataset bias The methods used to gather data (manual or automated) and the way data is captured (objects are often centered when photographed) introduce biases in datasets [Torralba and Efros 2011]. Recht et al. [2019] show that algorithms generalize poorly even on datasets that are replicated using the same source populations, so the lack of access to data used in the original study could lead to differences in data distribution for reproducibility experiments [Pineau et al. 2021]. In the social and environmental sciences, models might not generalize from one geographic area to another because of spatial dependence and heterogeneity [Goodchild and Li 2021]. Dataset shift is also an issue [Finlayson et al. 2021]. Different datasets lead to different outcomes.
Pre-processing Differences in data pre-processing will change data samples, so the applied pre-processing techniques must be well documented to facilitate reproducibility [Dacrema et al. 2021]. Differences in data preprocessing changes outcomes.
Data augmentation Stochastic data augmentation procedures are influenced by both algorithmic and implementation factors, which leads to differences in training data and thus different outcomes [Bouthillier et al. 2021;Pham et al. 2020;Zhuang et al. 2021].
Data splits Differences in data splits cause a difference in outcomes [Makridakis et al. 2018], which includes stochastic sampling from the training set instead of training on a static validation set [Bouthillier et al. 2019, 2021]. Also, random selection of samples during training is typically used in gradient boosted trees [Pouchard et al. 2020]. According to Gundersen and Kjensmo [2018], only 16% specify the validation set and 30% specify the test set, which means that for all practical purposes outcome reproducibility is impossible to achieve. A 2% to 12% variation has been shown in labeled attachment scores of Natural Language Processing experiments when comparing models trained using standard test splits versus random test splits, which suggests that the results reported by experiments that only use the standard test splits can be influenced by a bias in the standard test splits that favor certain types of parsers [Çöltekin 2020].
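The sketch below isolates this factor: the model's own randomness is held fixed while only the random train/test split changes, which is already enough to move the reported test score. The dataset and model are illustrative placeholders.

```python
# Sketch: the same (fixed-seed) model evaluated on different random splits of
# the same dataset reports different test scores.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

scores = []
for split_seed in range(20):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=split_seed)
    model = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)  # model seed fixed
    scores.append(model.score(X_te, y_te))

print(f"min={min(scores):.3f} max={max(scores):.3f} mean={np.mean(scores):.3f}")
```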
Environment properties Stochasticity and different dynamic properties of the testing environment could affect the outcome, especially in continuous control simulators such as those used in deep reinforcement learning [Henderson et al. 2018].
Annotation quality Differences in annotations made by humans will affect the target value and thus the outcome a model produces [Belz et al. 2021b].
Test data issues Data leakage results in models trained on data that should only be available at test time, leading to overestimating model performance [Dacrema et al. 2021]. Götz-Han et al. [2020] demonstrate five cases of reported performance gains well above the state-of-the-art that were the result of data leakage. The performance gains are far below the claims of the original researchers when the data leakage errors are corrected. Cases where metrics have been reported on training data instead of test data have also been found. Neither the outcome nor the analysis is changed, but the interpretation could lead to false conclusions.
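A textbook instance of leakage, sketched below with an illustrative dataset and model, is fitting a preprocessing step on the full dataset before cross-validation; wrapping the preprocessing in a pipeline keeps the held-out folds unseen. The numerical gap can be small on easy datasets, but the structural difference is what matters.

```python
# Sketch: leaky preprocessing (scaler fit on all data) versus a pipeline that
# refits the scaler inside each training fold only.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
cv = KFold(n_splits=5, shuffle=True, random_state=0)

# Leaky: the scaler has already seen the statistics of the held-out folds.
X_leaky = StandardScaler().fit_transform(X)
leaky = cross_val_score(SVC(), X_leaky, y, cv=cv)

# Correct: scaling is learned from the training folds only.
clean = cross_val_score(make_pipeline(StandardScaler(), SVC()), X, y, cv=cv)

print(f"leaky: {leaky.mean():.4f}  clean: {clean.mean():.4f}")
```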
Observation factors might affect the outcome and interpretation of an experiment. The effect of these factors can be reduced by setting the random seed, sharing details about pre-processing and data provenance [Gebru et al. 2021]. What is done with duplicate data, outliers and missing values can introduce biases [Stodden 2015]. As long as datasets are finite samples of an infinite population [Melis et al. 2018], they might not reflect the actual distribution at a given point in time and can also shift over time.
EVALUATION FACTORS
Evaluation factors relate to how the investigators reach the conclusions from doing an experiment.
Selective reporting Reporting results through careful selection of datasets, while ignoring the danger of adaptive over-fitting, could lead to the wrong conclusions [Pineau et al. 2021]. Selective reporting will affect the interpretation.
Over-claiming of results By drawing conclusions that go beyond the evidence presented (e.g. insufficient number of experiments, mismatch between hypothesis and claim) results are over-estimated [Pineau et al. 2021]. Over-claiming of results are errors in the interpretation.
Lack of naïve baselines Lack of comparison with simple statistical methods or naïve benchmarks such as linear regression and persistence in time-series forecasting could obscure results [Makridakis et al. 2018]. Naïve baselines help interpret the performance, but will not affect outcome nor analysis.
Sampled metrics Sampling from the test set is used sometimes when evaluations are computationally demanding. This could lead to sampled metrics being inconsistent with the exact versions and thus the wrong conclusions can be inferred . Sampled metrics will not change the outcome but can change the interpretation of the analysis.
Error estimation Machine learning methods must be able to specify certainty and confidence intervals around them [Makridakis et al. 2018] and preferably testing statistical significance taking the confidence intervals into account [Henderson et al. 2018]. Reporting single scores without any estimate of error or variation in performance is insufficient to compare nondeterministic approaches [Reimers and Gurevych 2017]. Error estimates are more prevalent in machine learning experiments reported in healthcare than for other domains [McDermott et al. 2021]. Error analyses are part of the analysis and could change the interpretation if done incorrectly or are lacking.
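One simple way to attach an uncertainty estimate to a single test-set score is a bootstrap confidence interval over test instances, sketched below with made-up predictions; it is one option among several (e.g., repeated runs or analytical intervals).

```python
# Sketch: bootstrap confidence interval for test-set accuracy, assuming
# y_true and y_pred hold the test labels and the model's predictions.
import numpy as np

def bootstrap_accuracy_ci(y_true, y_pred, n_boot=10_000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y_true)
    accs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)             # resample test instances
        accs[b] = np.mean(y_true[idx] == y_pred[idx])
    lo, hi = np.quantile(accs, [alpha / 2, 1 - alpha / 2])
    return np.mean(y_true == y_pred), (lo, hi)

y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1, 0, 1] * 50)   # made-up labels
y_pred = np.array([0, 1, 0, 0, 1, 0, 1, 1, 1, 1] * 50)   # made-up predictions
acc, (lo, hi) = bootstrap_accuracy_ci(y_true, y_pred)
print(f"accuracy {acc:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```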
Statistical analysis Improper use of statistics to analyze results, such as claiming significance without proper statistical testing or using the wrong statistics test lead to false conclusions [Card et al. 2020;Pineau et al. 2021]. Power analyses should be done prior to evaluation when comparing against baselines; the number of instances in the test will determine the effect size and should be chosen accordingly [Card et al. 2020]. The decision of which statistical analysis to do affects the analysis.
Evaluation factors affect the analysis and the interpretation of the analysis and thus the conclusion. Evaluation factors can only be controlled through validation and ensuring that one is doing the right experiment and evaluating it correctly. Sculley et al. [2018] list the following practices that should be included in empirical studies: 1) tuning methodology, 2) sliced analysis, 3) ablation studies, 4) sanity checks and counterfactuals and 5) at least one negative result. However, they note that a material increase in standards for empirical analysis or rigor has not been observed across the field.
DOCUMENTATION FACTORS
Documentation factors are related to how well an experiment is documented, which means ideally documenting all the choices mentioned above, which can be impractical.
Readability The readability of papers influences whether it is possible to reproduce results. Mathiness could lead to reduced readability [Lipton and Steinhardt 2019]. Gundersen and Kjensmo [2018] found that a large degree of papers only implicitly state what research questions they answer (94%), which problems that they seek to solve (53%), and what the objective (goal) of conducting the research is (78%). Only 5% of the papers explicitly states the hypothesis and 54% contain pseudo-code. Raff [2019] found that number of tables, readability, specification of hyperparameters, pseudo-code, number of equations and compute needed to run the experiment correlated strongly with reproducibility. Readability could affect the outcome, analysis and interpretation.
Experiment design details Under-specification of the metrics used to report results and misspecification or under-specification of the model or training procedure might lead to the wrong conclusions [Pineau et al. 2021]. Documentation that lacks experiment details could affect outcomes, analyses and interpretations.
Workflow The exact steps taken and their order when conducting empirical machine learning studies will affect the outcome [Gundersen 2021b], especially when they become more complex [Rupprecht et al. 2020]. Missing documentation of steps could also affect reproducibility; for example, data augmentation has been found to go unspecified even when the data had been augmented. Not specifying the workflow properly could lead to different outcomes.
Implementation details Details on how novel algorithms and baselines are implemented, especially details that can affect reproducibility, are important [Henderson et al. 2018]. Inconsistencies in the documentation and implementation of software can cause reproducibility experiments to fail when using different software. Alahmari et al. [2020] showed how a Keras documentation error, which stated that convolutional layer weights were initialized with the Glorot uniform while the actual implementation used a modified version known as the Xavier uniform, caused results not to be reproducible when the model was re-implemented in PyTorch. Lack of implementation details of the machine learning algorithms could lead to differing outcomes, while for performance metrics it could lead to a different analysis.
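One way to sidestep such documentation ambiguities is to make the initialization explicit in code rather than relying on framework defaults, as in the PyTorch sketch below; the small model is illustrative only.

```python
# Sketch: explicitly initializing convolutional weights instead of relying on
# (possibly misdocumented) framework defaults.
import torch.nn as nn

def init_conv_glorot(module: nn.Module) -> None:
    if isinstance(module, nn.Conv2d):
        nn.init.xavier_uniform_(module.weight)  # Xavier/Glorot uniform
        if module.bias is not None:
            nn.init.zeros_(module.bias)

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))
model.apply(init_conv_glorot)
```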
Access to data The availability of data is required for outcome reproducibility for non-trivial data, such as synthetic data that can be generated by rules. However, data cannot always be made publicly available [Pineau et al. 2021] and might not be easily available to collect, i.e. medical data is sensitive [McDermott et al. 2021]. Availability of data may affect the outcome.
Access to code Code describes implementation details perfectly and is required for outcome reproducibility for experiments of some complexity.
Reproducing the experiments will require more effort if the code that is necessary to run the experiments is not available [Gundersen 2019;McDermott et al. 2021;Pineau et al. 2021]. The version number or commit ID should be noted when referencing code in a git repository. Lack of code could both affect the outcome, analysis, and interpretation.
Stale URLs URLs in papers that link to software and data often stop working. Hennessey and Ge [2013] analyzed 14,489 unique web pages found in the abstracts of papers published between 1996 and 2010 and found that the median lifespan of these web pages was 9.3 years, with 62% of them being archived. Stale URLs can affect both the outcome and the analysis.
Documentation factors affect all reproducibility degrees, but can be alleviated by sharing code and data since the code and data themselves document so many aspects of the experiment. This is why code and data sharing is so effective for increasing reproducibility. The location and stability of the resource should be considered when posting data and code to enable reproducible results.
DISCUSSION
In recent years, several studies have sought to investigate potential sources of irreproducibility in machine learning. Most studies have investigated what are called implementation factors and algorithmic factors by Pham et al. [2020] and Zhuang et al. [2021], which both will lead to differing output between runs. Algorithmic factors are design choices related to randomness being introduced in different steps of machine learning algorithms, such as initialization and feature selection. Implementation factors are design choices related to, for example, which seeds are used when initializing the pseudo random number generators, which software and software versions the experiments require or the hardware the experiments are executed on. Other studies have investigated how the documentation of the experiments can be a source of irreproducible results [Gundersen and Kjensmo 2018;Raff 2019Raff , 2021. Improper design of a study can also make it irreproducible [Cockburn et al. 2020;Dacrema et al. 2021;Simonsohn et al. 2014]. Finally, studies have pointed out that the design and execution of the evaluation can affect results [Card et al. 2020;Makridakis et al. 2018;Pineau et al. 2021].
Despite reproducibility being a cornerstone of science, Plesser [2018] argues that reproducibility is a confused term. Many definitions exist in the literature, see both [Plesser 2018] and [Gundersen 2021b] for reviews, but none of these definitions are easy to operationalize; it is not straight forward for researchers or practitioners to let the definitions guide them when trying to reproduce research and analyze failures to reproduce. The definitions do not help in understanding what exactly is required of a reproducibility experiment nor how the results should be evaluated to be considered a success or a failure. We will exemplify this by reviewing some of the most relevant definitions of reproducibility.
The Association for Computing Machinery (ACM) [2020], who based their definition on the Joint Committee for Guides in Metrology [2012], defines reproducibility as "the main results of the paper have been obtained in a subsequent study by a person or team other than the authors, using, in part, artifacts provided by the author" while replication is defined as "the main results of the paper have been independently obtained in a subsequent study by a person or team other than the authors, without the use of authorsupplied artifacts." These definitions are open for interpretation. It is not clear what exactly is meant by "main results have been obtained". Because of this, concluding whether a reproducibility experiment is a success or not is subjective. Furthermore, except for in a replication where no artifacts should be used, it is not clear which artifacts should or can be used for a reproducibility study. However, the definition by ACM is not alone in this ambiguity.
The definitions proposed by The U.S. National Academy of Science [2019] are also ambiguous. They define reproducibility as "obtaining consistent computational results using the same input data, computational steps, methods, and code, and conditions of analysis" and replicability to mean "obtaining consistent results across studies aimed at answering the same scientific question, each of which has obtained its own data. " In a similar fashion as ACM, it is not clear how to interpret "obtaining consistent results" in a practical setting. However, this definition clearly states what is the input to reproducibility studies and replications. Replication differs in stating that only data needs to be different in a replication study, which means that computational steps, methods, code and conditions of analysis can be the same. Still, it is not clear exactly how computational steps differs from code and methods, so even when being more precise the definitions are still ambiguous.
Peng [2011] also distinguishes between replication and reproducibility. While replication requires new evidence (in the form of data) for scientific claims to be independently evaluated, reproducibility requires code and data to be published. According to Peng [2011], the least reproducible research requires code to be published so that independent researchers can review it. Next level requires also data to be published while the gold standard is to share linked and executable code and data. Peng [2011] mentions that all the papers that were reviewed for reproducibility published in the journal Biostatistics at the time were reproducible. However, it is not clear what was required of a paper to be evaluated as reproducible. The same is an issue with the definitions.
Goodman et al. [2016] do not distinguish between reproducibility and replication. They view these to be the same concept, and they define three different reproducibility levels: results reproducibility, method reproducibility and inference reproducibility. These definitions have similar issues with ambiguity with regard to the input of a reproducibility experiment and how to interpret the results. Gundersen [2021b] tries to solve this ambiguity by proposing reproducibility types that specify which documentation (paper, code and data) provided by the authors of the original study is utilized by independent researchers in the reproducibility experiment, as well as reproducibility degrees that specify how results can be interpreted to be considered a success. Further details are given in section 2.
Reproducibility definitions are not easy to operationalize because of their inherent ambiguity. No articles in literature, that we are aware of, provide an overview of experiment design choices that can lead to irreproducible results, nor do any articles try to relate the design choices to the interpretation of results. This article seeks to remedy these issues by providing a framework for reproducibility in machine learning that can easily be operationalized by researchers and practitioners to reduce failures of impossible tasks, engineering, post-deployment and communication [Raji et al. 2022].
According to Ioannidis [2005], the greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true. This holds true for machine learning, as "the rate of empirical advancement may not have been matched by consistent increase in the level of empirical rigor across the field as a whole" as Sculley et al. [2018] phrase it. Machine learning methodology is concerned with ensuring that models generalize well to unseen data. Information from the test set should not be used to optimize the performance of a model. This is why machine learning competitions, such as the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), restrict the number of submissions per week. Breaking this rule resulted in a ban at the 2015 challenge [Markoff 2015]. Less attention has been given in machine learning methodology to how information can flow in a similar way from the experiment conclusion to the experiment design. The conclusion should not change the protocol on how it is inferred. Information should not flow from the conclusion to the analysis and further change how the experiment is re-executed or analyzed. This is especially important in the sciences, including computer science [Denning et al. 1988] and AI in general and machine learning more specifically [Russell and Norvig 2020], where the design of an artifact is as important as its experimental evaluation. Typically, algorithm design and experimental evaluation are done iteratively until progress has been made, which is a process that can be prone to methodological mishaps unless proper care is taken.
The framework would be highly relevant for reproducibility studies. The detailed description of the reproducibility study conducted by Di Nunzio and Minzoni [2023] did not rely on a reproducibility framework such as the one proposed here. While thorough, some relevant details are lacking. The authors base the reproducibility study on the article, code, and data provided by the original authors, so it is clearly of type R4 Experiment. A difference in performance of almost 2 percentage points from the original study is reported. However, the study lacks a systematic and thorough discussion of the potential sources of this difference. Also, an otherwise good discussion of the improper evaluation of results in the reproduced study does not address the potential consequences of that evaluation. As performance differs, the reproducibility degree is not outcome reproducible (O), and the same methodology to evaluate the results is used, so it is not interpretation reproducible (I). However, it is not clear whether the reproducibility study supports the conclusion of the original study and hence confirms the original study. If this were the case, the study would have been classified as R4A.
LIMITATIONS
While we believe that the grouping of design decisions into six categories might be at the right level and exhaustive, the overview and taxonomy probably do not contain all design decisions that can lead to irreproducible results. Our goal is to capture as many as possible, but some might have eluded us. One way of being more certain about capturing as many design decisions as possible would be to perform a structured literature review. We did not do this, but not for a lack of trying: a search for "machine learning" AND "reproducibility" and similar terms returns too many irrelevant articles to be practically doable. Instead, we have relied on following the literature for several years. Also, this article does not contain any experiments; it is an overview with references. We do not quantify the importance of the different factors, nor the extent to which they can affect the outcome. The extent to which performance might vary has been evaluated by other studies cited here. Also, according to these studies, the difference in performance varies between machine learning methods and even between deep learning architectures, so there is no clear rule of thumb. Hence, what is important is to test for variation and characterize it using statistical methods, and through this increase rigor. Increasing rigor is exactly what we seek to support with this framework by showing how factors affect conclusions.
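A minimal sketch of what such a characterization could look like in practice (the training function, number of seeds, and reported statistics below are placeholders rather than recommendations from this article): repeat the experiment under different seeds and report a distribution of scores instead of a single number.

```python
# Illustrative sketch: characterize run-to-run variation across random seeds.
import numpy as np

def train_and_evaluate(seed: int) -> float:
    """Placeholder for a full training run; returns a test score for one seed."""
    rng = np.random.default_rng(seed)
    return 0.85 + 0.02 * rng.standard_normal()  # stand-in for a real experiment

scores = np.array([train_and_evaluate(seed) for seed in range(10)])
mean, std = scores.mean(), scores.std(ddof=1)
ci95 = 1.96 * std / np.sqrt(len(scores))  # normal-approximation 95% confidence interval
print(f"score = {mean:.3f} +/- {std:.3f} (95% CI +/- {ci95:.3f}, n={len(scores)})")
```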
CONCLUSION
The main contribution of this paper is the identification and categorization of 41 design choices documented in the literature that can lead to false conclusions. Another major contribution is a novel framework that enables applied data science researchers and practitioners to understand which experiment design choices can lead to false findings, to understand how these design choices can affect the conclusions of experiments, and to conduct and analyze reproducibility experiments. It is the first comprehensive framework for machine learning reproducibility that provides an overview and characterization of the factors affecting reproducibility in machine learning experiments, and it extends existing reproducibility checklists for authors by identifying additional factors that span all levels of the technical stack, including hardware. This is also the first work to describe how the reproducibility factors affect the conclusions that are drawn from experiments, by relating those factors to the scientific method and to different definitions and scopes of reproducibility studies. In an era where reproducibility is a priority in all areas of science, our goal is to shed light on the reproducibility challenges and needs in machine learning so that forward-looking solutions and methodologies can stem from this discipline and lead the way for other communities.
Figure 2: Taxonomy of 41 design decisions and how they affect results, grouped into six categories.
REFERENCES

Saeed S Alahmari, Dmitry B Goldgof, Peter R Mouton, and Lawrence O Hall. 2020. Challenges for the Repeatability of Deep Learning Models. IEEE Access 8 (2020), 211860-211868.
Association for Computing Machinery. 2020. Artifact Review and Badging - Version 1.0. https://www.acm.org/publications/policies/artifact-review-badging.
Monya Baker. 2016. Reproducibility crisis. Nature 533, 26 (2016), 353-66.
Anya Belz, Shubham Agarwal, Anastasia Shimorina, and Ehud Reiter. 2021a. A systematic review of reproducibility research in natural language processing. In 16th Conference of the European Chapter of the Association for Computational Linguistics 2021. Association for Computational Linguistics, 381-393.
Anya Belz, Anastasia Shimorina, Shubham Agarwal, and Ehud Reiter. 2021b. The ReproGen shared task on reproducibility of human evaluations in NLG: Overview and results. In The 14th International Conference on Natural Language Generation.
Xavier Bouthillier, Pierre Delaunay, Mirko Bronzi, Assya Trofimov, Brennan Nichyporuk, Justin Szeto, Nazanin Mohammadi Sepahvand, Edward Raff, Kanika Madan, Vikram Voleti, et al. 2021. Accounting for variance in machine learning benchmarks. Proceedings of Machine Learning and Systems 3 (2021).
Xavier Bouthillier, César Laurent, and Pascal Vincent. 2019. Unreproducible research is reproducible. In International Conference on Machine Learning. PMLR, 725-734.
Xavier Bouthillier and Gaël Varoquaux. 2020. Survey of machine-learning experimental methods at NeurIPS2019 and ICLR2020. Technical Report hal-02447823. Inria Saclay Ile de France.
Leo Breiman. 2001. Random forests. Machine Learning 45 (2001), 5-32.
Katherine S Button, John Ioannidis, Claire Mokrysz, Brian A Nosek, Jonathan Flint, Emma SJ Robinson, and Marcus R Munafò. 2013. Power failure: why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience 14, 5 (2013), 365-376.
Dallas Card, Peter Henderson, Urvashi Khandelwal, Robin Jia, Kyle Mahowald, and Dan Jurafsky. 2020. With Little Power Comes Great Responsibility. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 9263-9274.
Andy Cockburn, Pierre Dragicevic, Lonni Besançon, and Carl Gutwin. 2020. Threats of a replication crisis in empirical computer science. Commun. ACM 63, 8 (2020), 70-79.
Paul R Cohen. 1995. Empirical methods for artificial intelligence. Vol. 139. MIT Press, Cambridge.
Çağrı Çöltekin. 2020. Verification, reproduction and replication of NLP experiments: A case study on parsing Universal Dependencies. In Proceedings of the Fourth Workshop on Universal Dependencies (UDW 2020). 46-56.
A Feder Cooper, Yucheng Lu, Jessica Forde, and Christopher M De Sa. 2021. Hyperparameter Optimization Is Deceiving Us, and How to Stop It. Advances in Neural Information Processing Systems 34 (2021).
Matt Crane. 2018. Questionable answers in question answering research: Reproducibility and variability of published results. Transactions of the Association for Computational Linguistics 6 (2018), 241-252.
Paolo Cremonesi and Dietmar Jannach. 2021. Progress in recommender systems research: Crisis? What crisis? AI Magazine 42, 3 (2021), 43-54.
Maurizio Ferrari Dacrema, Simone Boglio, Paolo Cremonesi, and Dietmar Jannach. 2021. A troubling analysis of reproducibility and progress in recommender systems research. ACM Transactions on Information Systems (TOIS) 39, 2 (2021), 1-49.
Maurizio Ferrari Dacrema, Paolo Cremonesi, and Dietmar Jannach. 2019. Are we really making much progress? A worrying analysis of recent neural recommendation approaches. In Proceedings of the 13th ACM Conference on Recommender Systems. 101-109.
Peter J Denning, Douglas E Comer, David Gries, Michael C Mulder, Allen Tucker, A Joe Turner, and Paul R Young. 1988. Report of the ACM task force on the core of Computer Science. ACM.
Giorgio Maria Di Nunzio and Riccardo Minzoni. 2023. A Thorough Reproducibility Study on Sentiment Classification: Methodology, Experimental Setting, Results. Information 14, 2 (2023). https://doi.org/10.3390/info14020076
Jesse Dodge, Suchin Gururangan, Dallas Card, Roy Schwartz, and Noah A Smith. 2019. Show Your Work: Improved Reporting of Experimental Results. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). 2185-2194.
Chris Drummond. 2009. Replicability is not reproducibility: nor is it good science. In Proc. of the Evaluation Methods for Machine Learning Workshop at the 26th International Conference on Machine Learning, Montreal, Canada. http://cogprints.org/7691/
Johannes K Fichte, Markus Hecher, Ciaran McCreesh, and Anas Shahab. 2021. Complications for Computational Experiments from Modern Processors. In 27th International Conference on Principles and Practice of Constraint Programming (CP 2021). Schloss Dagstuhl-Leibniz-Zentrum für Informatik.
Samuel G Finlayson, Adarsh Subbaswamy, Karandeep Singh, John Bowers, Annabel Kupke, Jonathan Zittrain, Isaac S Kohane, and Suchi Saria. 2021. The clinician and dataset shift in artificial intelligence. New England Journal of Medicine 385, 3 (2021), 283-286.
Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. 2021. Datasheets for datasets. Commun. ACM 64, 12 (2021), 86-92.
Elizabeth Gibney. 2022. Could machine learning fuel a reproducibility crisis in science? Nature (2022).
Michael F Goodchild and Wenwen Li. 2021. Replication across space and time must be weak in the social and environmental sciences. Proceedings of the National Academy of Sciences 118, 35 (2021).
Steven N. Goodman, Daniele Fanelli, and John P. A. Ioannidis. 2016. What does research reproducibility mean? Science Translational Medicine 8, 341 (2016), 341ps12. https://doi.org/10.1126/scitranslmed.aaf5027
Franz Götz-Hahn, Vlad Hosu, and Dietmar Saupe. 2020. Critical analysis on the reproducibility of visual quality assessment using deep features. arXiv preprint arXiv:2009.05369 (2020).
Odd Erik Gundersen. 2019. Standing on the Feet of Giants - Reproducibility in AI. AI Magazine 40, 4 (2019), 9-23.
Odd Erik Gundersen. 2021a. The Case Against Registered Reports. AI Magazine 42, 1 (2021), 88-92.
Odd Erik Gundersen. 2021b. The fundamental principles of reproducibility. Philosophical Transactions of the Royal Society A 379, 2197 (2021), 20200210.
Odd Erik Gundersen, Malte Helmert, and Holger Hoos. 2023. Improving Reproducibility in AI Research: Four Mechanisms Adopted by JAIR. Journal of Artificial Intelligence Research, forthcoming (2023).
Odd Erik Gundersen and Sigbjørn Kjensmo. 2018. State of the art: Reproducibility in artificial intelligence. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 32.
Odd Erik Gundersen, Saeid Shamsaliei, and Richard Juul Isdahl. 2022. Do machine learning platforms provide out-of-the-box reproducibility? Future Generation Computer Systems 126 (2022), 34-47.
Benjamin Haibe-Kains, George Alexandru Adam, Ahmed Hosny, Farnoosh Khodakarami, Levi Waldron, Bo Wang, Chris McIntosh, Anna Goldenberg, Anshul Kundaje, Casey S Greene, et al. 2020. Transparency and reproducibility in artificial intelligence. Nature 586, 7829 (2020), E14-E16.
Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. 2018. Deep reinforcement learning that matters. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 32.
Jason Hennessey and Steven Xijin Ge. 2013. A cross disciplinary study of link decay and the effectiveness of mitigation techniques. In BMC Bioinformatics, Vol. 14. BioMed Central, 1-11.
Song-You Hong, Myung-Seo Koo, Jihyeon Jang, Jung-Eun Esther Kim, Hoon Park, Min-Su Joh, Ji-Hoon Kang, and Tae-Jin Oh. 2013. An evaluation of the software system dependency of a global atmospheric model. Monthly Weather Review 141, 11 (2013), 4165-4172.
Jessica Hullman, Sayash Kapoor, Priyanka Nanayakkara, Andrew Gelman, and Arvind Narayanan. 2022. The worst of both worlds: A comparative analysis of errors in learning from data in psychology and machine learning. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society. 335-348.
Matthew Hutson. 2018. Artificial intelligence faces reproducibility crisis. Science 359, 6377 (2018), 725-726.
John PA Ioannidis. 2005. Why most published research findings are false. PLoS Medicine 2, 8 (2005), e124.
Joint Committee for Guides in Metrology. 2012. International vocabulary of metrology - Basic and general concepts and associated terms - 3rd edition with minor corrections. https://www.bipm.org/utils/common/documents/jcgm/JCGM_200_2012.pdf.
Sayash Kapoor and Arvind Narayanan. 2022. Leakage and the Reproducibility Crisis in ML-based Science. arXiv preprint arXiv:2207.07048 (2022).
Karol Kurach, Mario Lucic, Xiaohua Zhai, Marcin Michalski, and Sylvain Gelly. 2018. The GAN landscape: Losses, architectures, regularization, and normalization. In ICML 2018 Workshop on Reproducibility in Machine Learning.
Sara Latifi, Noemi Mauro, and Dietmar Jannach. 2021. Session-aware recommendation: A surprising quest for the state-of-the-art. Information Sciences 573 (2021), 291-315.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient-based learning applied to document recognition. Proc. IEEE 86, 11 (1998), 2278-2324.
Zachary C. Lipton and Jacob Steinhardt. 2019. Troubling Trends in Machine Learning Scholarship: Some ML Papers Suffer from Flaws That Could Mislead the Public and Stymie Future Research. Queue 17, 1 (2019), 45-77. https://doi.org/10.1145/3317287.3328534
Mario Lucic, Karol Kurach, Marcin Michalski, Sylvain Gelly, and Olivier Bousquet. 2018. Are GANs Created Equal? A Large-Scale Study. In NeurIPS.
Spyros Makridakis, Evangelos Spiliotis, and Vassilios Assimakopoulos. 2018. Statistical and Machine Learning forecasting methods: Concerns and ways forward. PLoS ONE 13, 3 (2018), e0194889.
John Markoff. 2015. Computer Scientists Are Astir After Baidu Team Is Barred From A.I. Competition. New York Times, June 3, 2015.
Matthew BA McDermott, Shirly Wang, Nikki Marinsek, Rajesh Ranganath, Luca Foschini, and Marzyeh Ghassemi. 2021. Reproducibility in machine learning for health research: Still a ways to go. Science Translational Medicine 13, 586 (2021), eabb1655.
Gábor Melis, Chris Dyer, and Phil Blunsom. 2018. On the State of the Art of Evaluation in Neural Language Models. In International Conference on Learning Representations.
James Miller and Jane C Miller. 2018. Statistics and chemometrics for analytical chemistry. Pearson Education.
Prabhat Nagarajan, Garrett Warnell, and Peter Stone. 2019. The Impact of Nondeterminism on Reproducibility in Deep Reinforcement Learning. Presented at the AAAI 2019 Workshop on Reproducible AI, Honolulu, Hawaii (2019).
National Academies of Sciences, Engineering, and Medicine. 2019. Reproducibility and replicability in science. National Academies Press.
Open Science Collaboration. 2015. Estimating the reproducibility of psychological science. Science 349, 6251 (2015), aac4716.
Roger D. Peng. 2011. Reproducible research in computational science. Science 334, 6060 (2011), 1226-1227.
Hung Viet Pham, Shangshu Qian, Jiannan Wang, Thibaud Lutellier, Jonathan Rosenthal, Lin Tan, Yaoliang Yu, and Nachiappan Nagappan. 2020. Problems and opportunities in training deep learning software systems: An analysis of variance. In Proceedings of the 35th IEEE/ACM International Conference on Automated Software Engineering. 771-783.
Joelle Pineau, Philippe Vincent-Lamarre, Koustuv Sinha, Vincent Larivière, Alina Beygelzimer, Florence d'Alché Buc, Emily Fox, and Hugo Larochelle. 2021. Improving reproducibility in machine learning research: a report from the NeurIPS 2019 reproducibility program. Journal of Machine Learning Research 22 (2021).
Wagner Gonçalves Pinto, Antonio Alguacil, and Michaël Bauerheim. 2021. On the reproducibility of fully convolutional neural networks for modeling time-space evolving physical systems. arXiv preprint arXiv:2105.05482 (2021).
Hans E Plesser. 2018. Reproducibility vs. replicability: a brief history of a confused terminology. Frontiers in Neuroinformatics 11 (2018), 76.
Line Pouchard, Yuewei Lin, and Hubertus Van Dam. 2020. Replicating Machine Learning Experiments in Materials Science. In Parallel Computing: Technology Trends. IOS Press, 743-755.
Edward Raff. 2019. A step toward quantifying independently reproducible machine learning research. Advances in Neural Information Processing Systems 32 (2019).
Edward Raff. 2021. Research Reproducibility as a Survival Analysis. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35. 469-478.
Inioluwa Deborah Raji, I Elizabeth Kumar, Aaron Horowitz, and Andrew Selbst. 2022. The fallacy of AI functionality. In 2022 ACM Conference on Fairness, Accountability, and Transparency. 959-972.
Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. 2019. Do ImageNet classifiers generalize to ImageNet? In International Conference on Machine Learning. PMLR, 5389-5400.
Nils Reimers and Iryna Gurevych. 2017. Reporting Score Distributions Makes a Difference: Performance Study of LSTM-networks for Sequence Tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. 338-348.
Michael Roberts, Derek Driggs, Matthew Thorpe, Julian Gilbey, Michael Yeung, Stephan Ursprung, Angelica I Aviles-Rivero, Christian Etmann, Cathal McCague, Lucian Beer, et al. 2021. Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans. Nature Machine Intelligence 3, 3 (2021), 199-217.
Lukas Rupprecht, James C Davis, Constantine Arnold, Yaniv Gur, and Deepavali Bhagwat. 2020. Improving reproducibility of data science pipelines through transparent provenance capture. Proceedings of the VLDB Endowment 13, 12 (2020), 3354-3368.
Stuart Russell and Peter Norvig. 2020. Artificial Intelligence: A Modern Approach. Pearson, London.
David Sculley, Jasper Snoek, Alex Wiltschko, and Ali Rahimi. 2018. Winner's curse? On pace, progress, and empirical rigor. In ICLR 2018 Workshop Track.
Mostafa Shahriari, Rudolf Ramler, and Lukas Fischer. 2022. How Do Deep-Learning Framework Versions Affect the Reproducibility of Neural Network Models? Machine Learning and Knowledge Extraction 4, 4 (2022), 888-911.
Uri Simonsohn, Leif D Nelson, and Joseph P Simmons. 2014. P-curve: a key to the file-drawer. Journal of Experimental Psychology: General 143, 2 (2014), 534.
Victoria Stodden. 2015. Reproducing statistical results. Annual Review of Statistics and Its Application 2 (2015), 1-19.
Michela Taufer, Omar Padron, Philip Saponaro, and Sandeep Patel. 2010. Improving numerical reproducibility and stability in large-scale numerical simulations on GPUs. In 2010 IEEE International Symposium on Parallel & Distributed Processing (IPDPS). IEEE, 1-9.
Antonio Torralba and Alexei A Efros. 2011. Unbiased look at dataset bias. In CVPR 2011. IEEE, 1521-1528.
Gaël Varoquaux and Veronika Cheplygina. 2022. Machine learning for medical imaging: methodological failures and recommendations for the future. NPJ Digital Medicine 5, 1 (2022), 1-8.
Xuan Zhang and Kevin Duh. 2020. Reproducible and efficient benchmarks for hyperparameter optimization of neural machine translation systems. Transactions of the Association for Computational Linguistics 8 (2020), 393-408.
Donglin Zhuang, Xingyao Zhang, Shuaiwen Leon Song, and Sara Hooker. 2021. Randomness in neural network training: Characterizing the impact of tooling. arXiv preprint arXiv:2106.11872 (2021).
Dual-Sampling Attention Network for Diagnosis of COVID-19 from Community Acquired Pneumonia

Xi Ouyang, Jiayu Huo, Liming Xia, Fei Shan, Jun Liu, Zhanhao Mo, Fuhua Yan, Zhongxiang Ding, Qi Yang, Bin Song, Feng Shi, Huan Yuan, Ying Wei, Xiaohuan Cao, Yaozong Gao, Dijia Wu, Qian Wang, and Dinggang Shen

Affiliations: Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China; Department of Radiology, Shanghai Public Health Clinical Center, Fudan University, Shanghai, China; Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha, China; Department of Radiology Quality Control Center, Hunan Province, Changsha, China; Department of Radiology, China-Japan Union Hospital of Jilin University, Changchun, China; Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Department of Radiology, Affiliated Hangzhou First People's Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China; Department of Radiology, Sichuan University West China Hospital, Chengdu, China; Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China

DOI: 10.1109/tmi.2020.2995508. arXiv: 2005.02690.

X. Ouyang, J. Huo, L. Xia, F. Shan, J. Liu, Z. Mo, F. Yan, Z. Ding, Q. Yang, and B. Song contributed equally to this work. Corresponding authors: Q. Wang and D. Shen. X. Ouyang, J. Huo, and Q. Wang are with the Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China. X. Ouyang and J. Huo are interns at Shanghai United Imaging Intelligence Co. during this work. F. Shi, H. Yuan, Y. Wei, X. Cao, Y. Gao, D. Wu, and D. Shen are with the Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China. Q. Yang is with the Beijing Chaoyang Hospital, Capital Medical University.

Index Terms: COVID-19 Diagnosis, Online Attention, Explainability, Imbalanced Distribution, Dual Sampling Strategy
Abstract: The coronavirus disease is rapidly spreading all over the world, and has infected more than 1,436,000 people in more than 200 countries and territories as of April 9, 2020. Detecting COVID-19 at an early stage is essential to deliver proper healthcare to the patients and also to protect the uninfected population. To this end, we develop a dual-sampling attention network to automatically diagnose COVID-19 from the community acquired pneumonia (CAP) in chest computed tomography (CT). In particular, we propose a novel online attention module with a 3D convolutional network (CNN) to focus on the infection regions in lungs when making decisions of diagnoses. Note that there exists imbalanced distribution of the sizes of the infection regions between COVID-19 and CAP, partially due to fast progress of COVID-19 after symptom onset. Therefore, we develop a dual-sampling strategy to mitigate the imbalanced learning. Our method is evaluated (to our best knowledge) upon the largest multi-center CT data for COVID-19 from 8 hospitals. In the training-validation stage, we collect 2186 CT scans from 1588 patients for a 5-fold cross-validation. In the testing stage, we employ another independent large-scale testing dataset including 2796 CT scans from 2057 patients. Results show that our algorithm can identify the COVID-19 images with the area under the receiver operating characteristic curve (AUC) value of 0.944, accuracy of 87.5%, sensitivity of 86.9%, specificity of 90.1%, and F1-score of 82.0%. With this performance, the proposed algorithm could potentially aid radiologists with COVID-19 diagnosis from CAP, especially in the early stage of the COVID-19 outbreak.
I. INTRODUCTION
The disease caused by the novel coronavirus, Coronavirus Disease 2019 (COVID-19), is quickly spreading globally. It has infected more than 1,436,000 people in more than 200 countries and territories as of April 9, 2020 [1]. On February 12, 2020, the World Health Organization (WHO) officially named the disease caused by the novel coronavirus as Coronavirus Disease 2019 (COVID-19) [2]. Now, the number of COVID-19 patients is dramatically increasing every day around the world [3]. Compared with the prior Severe Acute Respiratory Syndrome (SARS) and Middle East Respiratory Syndrome (MERS), COVID-19 has spread to more places and caused more deaths, despite its relatively lower fatality rate [4], [5]. Considering the pandemic of COVID-19, it is important to detect COVID-19 early, which could facilitate the slowdown of viral transmission and thus disease containment.
In clinics, real-time reverse-transcription polymerase chain reaction (RT-PCR) is the gold standard for making a definitive diagnosis of COVID-19 infection [6]. However, the high false negative rate [7] and the unavailability of RT-PCR assays in the early stage of an outbreak may delay the identification of potential patients. Due to the highly contagious nature of the virus, this constitutes a high risk of infecting a larger population. At the same time, thoracic computed tomography (CT) is relatively easy to perform and can produce a fast diagnosis [8]. For example, almost all COVID-19 patients have some typical radiographic features in chest CT, including ground-glass opacities (GGO), multifocal patchy consolidation, and/or interstitial changes with a peripheral distribution [9]. Thus, chest CT has been recommended as a major tool for clinical diagnosis, especially in hard-hit regions such as Hubei, China [6]. Considering the need for high-throughput screening by chest CT and the workload for radiologists, especially in the outbreak, we design a deep-learning-based method to automatically distinguish COVID-19 infection from community acquired pneumonia (CAP) infection.

With the development of deep learning [11], [12], [13], [14], [15], the technology has a wide range of applications in medical image processing, including disease diagnosis [16] and organ segmentation [17], etc. The convolutional neural network (CNN) [18], one of the most representative deep learning technologies, has been applied to reading and analyzing CT images in many recent studies [19], [20]. For example, Koichiro et al. use a CNN for differentiation of liver masses on dynamic contrast agent-enhanced CT images [21]. Also, some studies focus on the diagnosis of lung diseases in chest CT, e.g., pulmonary nodules [22], [23] and pulmonary tuberculosis [24]. Although deep learning has achieved remarkable performance for abnormality diagnosis in medical images [16], [25], [26], physicians have concerns, especially about the lack of model interpretability and understanding [27], which is important for the diagnosis of COVID-19. To provide more insight into model decisions, the class activation mapping (CAM) [28] and gradient-weighted class activation mapping (Grad-CAM) [29] methods have been proposed to produce localization heatmaps highlighting important regions that are closely associated with the predicted results.
In this study, we propose a dual-sampling attention network to classify the COVID-19 and CAP infection. To focus on the lung, our method leverages a lung mask to suppress the image context of non-lung regions in chest CT. At the same time, we refine the attention of the deep learning model through an online mechanism, in order to better focus on the infection regions in the lung. In this way, the model facilitates interpreting and explaining the evidence for the automatic diagnosis of COVID-19. The experimental results also demonstrate that the proposed online attention refinement can effectively improve classification performance.
In our work, an important observation is that COVID-19 cases usually have more severe infection than CAP cases [30], although some COVID-19 cases and CAP cases do have similar infection sizes. To illustrate this, we use an established VB-Net toolkit [10] to automatically segment the lungs and pneumonia infection regions for all the cases in our training-validation (TV) set (with details of our TV set provided in Section IV), and show the distribution of the ratios between the infection regions and the lungs in Fig. 1. We can see the imbalanced distribution of the infection size ratios in both COVID-19 and CAP data. In this situation, the conventional uniform sampling over the entire dataset to train the network could lead to unsatisfactory diagnosis performance, especially concerning the limited cases of COVID-19 with small infections and also the limited cases of CAP with large infections. To this end, we train a second network with a size-balanced sampling strategy, by sampling more cases of COVID-19 with small infections and more cases of CAP with large infections within mini-batches. Finally, we apply ensemble learning to integrate the networks of uniform sampling and size-balanced sampling to get the final diagnosis results, following the dual-sampling strategy.
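A minimal sketch of how the infection-to-lung ratio behind Fig. 1 can be computed from the two segmentation masks (the function name is ours and illustrative; the authors' exact ratio computation is not spelled out beyond this description):

```python
# Illustrative sketch: infection-to-lung volume ratio for one CT scan, given the
# binary infection and lung masks produced by the VB-Net toolkit.
import numpy as np

def infection_ratio(infection_mask: np.ndarray, lung_mask: np.ndarray) -> float:
    lung_voxels = float((lung_mask > 0).sum())
    infected_voxels = float((infection_mask > 0).sum())
    return infected_voxels / lung_voxels if lung_voxels > 0 else 0.0
```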
As a summary, the contributions of our work are threefold:

• We propose an online module that utilizes the segmented pneumonia infection regions to refine the attention of the network. This ensures that the network focuses on the infection regions and increases the adoption of visual attention for model interpretability and explainability.

• We propose a dual-sampling strategy to train the network, which further alleviates the imbalanced distribution of the sizes of pneumonia infection regions.

• To our knowledge, we have used the largest multi-center CT data in the world for evaluating automatic COVID-19 diagnosis. In particular, we conduct extensive cross-validations on a TV dataset of 2186 CT scans from 1588 patients. Moreover, to better evaluate the performance and generalization ability of the proposed method, a large independent testing set of 2796 CT scans from 2057 patients is also used. Experimental results demonstrate that our algorithm is able to identify the COVID-19 images with an area under the receiver operating characteristic curve (AUC) value of 0.944, accuracy of 87.5%, sensitivity of 86.9%, specificity of 90.1%, and F1-score of 82.0%.
II. RELATED WORKS
A. Computer-Assisted Pneumonia Diagnosis
Chest X-ray (CXR) is one of the first-line imaging modalities to diagnose pneumonia, which manifests as increased opacity [31]. CNN networks have been successfully applied to pneumonia diagnosis in CXR images [16], [32]. With the release of the Radiological Society of North America (RSNA) pneumonia detection challenge dataset [33], object detection methods (i.e., RetinaNet [34] and Mask R-CNN [35]) have been used for pneumonia localization in CXR images. At the same time, CT has been used as a standard procedure in the diagnosis of lung diseases [36]. An automated classification method using regional volumetric texture analysis has been proposed for usual interstitial pneumonia diagnosis in high-resolution CT [37]. For COVID-19, GGO and consolidation along the subpleural area of the lung are the typical radiographic features of COVID-19 patients [9]. Chest CT, especially high-resolution CT, can detect small areas of ground-glass opacity (GGO) [38].
Some recent works have focused on distinguishing COVID-19 from other pneumonia in CT images [39], [40], [41]. This requires chest CT images to identify some typical features, including GGO, multifocal patchy consolidation, and/or interstitial changes with a peripheral distribution [9]. Wang et al. [39] propose a 2D CNN network to classify between COVID-19 and other viral pneumonia based on manually delineated regions. Xu et al. [40] use a V-Net model to segment the infection region and apply a ResNet18 network for the classification. Ying et al. [41] use a ResNet50 network to process all the slices of each 3D chest CT image to form the final prediction for each CT image. However, all these methods are evaluated on small datasets. In this paper, we have collected 4982 CT scans from 3645 patients, provided by 8 collaborative hospitals. To our best knowledge, it is the largest multi-center dataset for COVID-19 till now, which can demonstrate the effectiveness of the method.
Note that, in the context of pneumonia diagnosis, lung segmentation is often an essential preprocessing step in analyzing chest CT images. In the literature, Alom et al. [42] utilize U-net, residual networks, and recurrent CNNs for lung lesion segmentation. A convolutional-deconvolutional capsule network has also been proposed for pathological lung segmentation in CT images. In this paper, we use an established VB-Net toolkit for lung segmentation, which has been reported to achieve a high Dice similarity coefficient of > 98% in evaluation [10]. Also, this VB-Net toolkit achieves a Dice similarity coefficient of 92% between automatically and manually delineated pneumonia infection regions, showing state-of-the-art performance [43]. For more related works, a recent review of automatic segmentation methods for COVID-19 can be found in [43].
B. Class Re-sampling Strategies
For network training on datasets with a long-tailed data distribution, there are problems with the universal paradigm of sampling the entire dataset uniformly [45]. In such datasets, some classes contain relatively few samples, and the information in these cases may be ignored by the network if uniform sampling is applied. To address this, some class re-sampling strategies have been proposed in the literature [46], [47], [48], [49], [50]. The aim of these methods is to adjust the numbers of examples from different classes within mini-batches, which achieves better performance on long-tailed datasets. Generally, class re-sampling strategies can be categorized into two groups, i.e., over-sampling by repeating data for minority classes [46], [47], [48] and under-sampling by randomly removing samples to make the number of each class equal [47], [49], [50]. The COVID-19 data is hard to collect and precious, so abandoning data is not a good choice. In this study, we adapt the over-sampling strategy [46] to the COVID-19 cases with small infections and the CAP cases with large infections to form a size-balanced sampling method, which can better balance the distribution of the infection regions of COVID-19 and CAP cases within mini-batches. However, over-sampling may lead to over-fitting on these minority classes [51], [52]. We thus propose the dual-sampling strategy to integrate results from the two networks trained with uniform sampling and size-balanced sampling, respectively.
C. Attention Mechanism
Attention mechanisms have been widely used in many deep networks and can be roughly divided into two types: 1) activation-based attention [53], [54], [55] and 2) gradient-based attention [28], [29]. Activation-based attention usually serves as an inserted module to refine the hidden feature maps during training, which can make the network focus on the important regions. For activation-based attention, channel-wise attention assigns weights to each channel in the feature maps [55], while position-wise attention produces heatmaps of importance for each pixel of the feature maps [53], [54]. The most common gradient-based attention methods are CAM [28] and Grad-CAM [29], which reveal the important regions influencing the network prediction. These methods are normally conducted offline and provide a pattern of model interpretability during the inference stage. Recently, some studies [56], [57] argue that the gradient-based methods can be developed as an online module during training for better localization. In this study, we extend gradient-based attention into an online trainable component for the scenario of 3D input. The proposed attention module utilizes the segmented pneumonia infection regions to ensure that the network makes decisions based on these infection regions.
III. METHOD
The overall framework is shown in Fig. 2. The input to the network is the 3D CT image masked to the lungs only. We use an established VB-Net toolkit [10] to segment the lungs for all CT images and to perform auto-contouring of possible infection regions, as shown in Fig. 3. The VB-Net toolkit is a modified network that combines V-Net [58] with bottleneck layers to reduce and integrate feature map channels. The toolkit is capable of segmenting the infected regions as well as the lung fields, achieving a Dice similarity coefficient of 92% between automatically and manually delineated infection regions [10]. By labeling all voxels within the segmented regions as 1 and the rest as 0, we obtain the corresponding lung mask, and we then form the input image by masking the original CT image with this lung mask.
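A minimal sketch of this masking step (the function name and the assumption that the masks are binary volumes aligned voxel-wise with the CT are ours; the actual preprocessing may include additional resampling and intensity normalization not shown here):

```python
# Illustrative sketch: suppress non-lung voxels so the network only sees the lungs.
import numpy as np

def mask_lung_region(ct_volume: np.ndarray, lung_mask: np.ndarray) -> np.ndarray:
    """Zero out all voxels outside the segmented lungs; both arrays share shape (D, H, W)."""
    return ct_volume * (lung_mask > 0)
```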
As shown in Fig. 2, the training pipeline of our method consists of two stages: 1) using different sampling strategies to train two 3D ResNet34 models [44] with the online attention module; 2) training an ensemble learning layer to integrate the predictions from the two models. The details of our method are introduced in the following sections.

Fig. 2. Illustration of the pipeline of the proposed method, including two steps. 1) We train two 3D ResNet34 networks [44] with different sampling strategies. Also, the online attention mechanism generates attention maps during training, which refer to the segmented infection regions to refine the attention localization. 2) We use the ensemble learning to integrate predictions from the two trained networks. In this figure, "Attention RN34 + US" means the 3D ResNet34 (RN34) with attention module and uniform sampling (US) strategy, while "Attention RN34 + SS" means the 3D ResNet34 with attention module and size-balanced sampling (SS) strategy. "GAP" indicates the global average pooling layer, and "FC" indicates the fully connected layer. "1 × 1 × 1 Conv" refers to the convolutional layer with 1 × 1 × 1 kernel, and takes the parameters from the fully connected layer as the kernel weights. "MSE Loss" refers to the mean square error function.
[Fig. 3 (diagram): the input chest CT image is processed by the VB-Net toolkit to produce the infection mask and the lung mask.]
A. Network
We use the 3D ResNet34 architecture [44] as the backbone network. It is the 3D extension of the residual network [13], which uses 3D kernels in all convolutional layers. In 3D ResNet34, we set the stride of each dimension to 1 in the last residual block instead of 2. This makes the resolution of the feature maps before the global average pooling (GAP) [59] operation 1/16 of the input CT image in each dimension. Compared with downsampling the input image by a factor of 32 in each dimension, as in the original 3D ResNet34, this greatly improves the quality of the generated attention maps, which are based on higher-resolution feature maps.
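A hedged sketch of this modification (the block and stage definitions below are our simplification of a 3D ResNet34, not the authors' code; only the stride schedule is the point):

```python
# Illustrative sketch: a 3D basic residual block plus the stride schedule that keeps
# the last stage at stride 1, so the overall downsampling factor is 16 instead of 32.
import torch
import torch.nn as nn

class BasicBlock3D(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.conv1 = nn.Conv3d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm3d(out_ch)
        self.conv2 = nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm3d(out_ch)
        self.down = None
        if stride != 1 or in_ch != out_ch:
            self.down = nn.Sequential(
                nn.Conv3d(in_ch, out_ch, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm3d(out_ch),
            )

    def forward(self, x):
        identity = x if self.down is None else self.down(x)
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + identity)

# With a stem that downsamples by 4, stage strides [1, 2, 2, 1] give a factor of 16;
# the original 3D ResNet34 would use [1, 2, 2, 2] and downsample by 32.
stage_strides = [1, 2, 2, 1]
```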
B. Online attention module
To exhaustively learn all features that are important for classification, and also to produce the corresponding attention maps, we use an online attention mechanism of 3D class activation mapping (CAM). The key idea of CAM [28], [29], [56] is to back-propagate weights of the fully-connected layer onto the convolutional feature maps for generating the attention maps. In this study, we extend this offline operation to become an online trainable component for the scenario of 3D input. Let f denote the feature maps before the GAP operation and also w denote the weight matrix of the fully-connected layer. To make our attention generation procedure trainable, we use w as the kernel of a 1 × 1 × 1 convolution layer and apply a ReLU layer [60] to generate the attention feature map A as:
A = ReLU(conv(f, w)),    (1)
where A has the shape X × Y × Z, and X, Y, Z are 1/16 of the corresponding sizes of the input CT image. Given the attention feature map A, we first upsample it to the input image size, then normalize it to have intensity values between 0 and 1, and finally apply a sigmoid for soft masking [57], as follows:
T(A) = 1 / (1 + exp(−α(A − β))),    (2)
where values of α and β are set to 100 and 0.4 respectively. T (A) is the generated attention map of this online attention module, where A is defined in Eq. 1. During the training, the parameters in the 1×1×1 convolution layer are always copied from the fully-connected layer and only updated by the binary cross entropy (BCE) loss for the classification task.
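A possible PyTorch realisation of this online attention module is sketched below. It assumes a binary classifier whose fully-connected layer has a single output row; the function name and tensor shapes are our own choices rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def online_attention_map(feature_maps, fc_weight, class_idx, input_size,
                         alpha=100.0, beta=0.4):
    """Sketch of the online 3D CAM described above.

    feature_maps : (B, C, X, Y, Z) tensor before global average pooling
    fc_weight    : (num_classes, C) weight matrix of the fully connected layer
    class_idx    : class whose attention map is generated
    input_size   : (D, H, W) size of the input CT volume
    """
    # Use the FC weights of the chosen class as a 1x1x1 convolution kernel (Eq. 1).
    kernel = fc_weight[class_idx].view(1, -1, 1, 1, 1)
    a = F.relu(F.conv3d(feature_maps, kernel))            # (B, 1, X, Y, Z)

    # Upsample to the input resolution and normalize to [0, 1].
    a = F.interpolate(a, size=input_size, mode="trilinear", align_corners=False)
    a_min = a.amin(dim=(2, 3, 4), keepdim=True)
    a_max = a.amax(dim=(2, 3, 4), keepdim=True)
    a = (a - a_min) / (a_max - a_min + 1e-8)

    # Soft masking (Eq. 2).
    return torch.sigmoid(alpha * (a - beta))

# Toy usage with random tensors standing in for real activations.
feats = torch.randn(2, 512, 9, 16, 16)
w_fc = torch.randn(1, 512)            # single logit for binary classification
att = online_attention_map(feats, w_fc, class_idx=0, input_size=(138, 256, 256))
```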
C. Size-balanced Sampling
The main idea of size-balanced sampling is to repeat the data sampling for the COVID-19 cases with small infections and the CAP cases with large infections in each mini-batch during training. Normally, we use uniform sampling over the entire dataset for network training (i.e., the "Attention RN34 + US" branch in Fig. 2). Specifically, each sample in the training dataset is fed into the network only once, with equal probability, within one epoch. Thus, the model can review the entire dataset while maintaining the intrinsic data distribution. Due to the imbalance of the distribution of infection size, we train a second network via the size-balanced sampling strategy (i.e., the "Attention RN34 + SS" branch). It aims to boost the sampling probability of the small-infection-area COVID-19 and large-infection-area CAP cases in each mini-batch. To this end, we split the data into 4 groups according to the volume ratio of the pneumonia infection regions and the lung: 1) small-infection-area COVID-19, 2) large-infection-area COVID-19, 3) small-infection-area CAP, and 4) large-infection-area CAP. For COVID-19, we define the cases that meet the criterion < 0.030 as small-infection-area COVID-19, and the rest as large-infection-area COVID-19. For CAP, we define the cases with the ratio > 0.001 as large-infection-area CAP and the rest as small-infection-area CAP. We count the numbers of samples in the 4 groups, assign each group a sampling weight that up-weights the two minority groups (the explicit weight and probability definitions are given further below), and in each mini-batch randomly select a group according to the resulting probabilities and uniformly pick up a sample from the selected group, as sketched below. This strategy ensures a higher probability of sampling cases from the two groups of 1) COVID-19 with small infections and 2) CAP with large infections. We conduct the size-balanced sampling strategy for all mini-batches when training the "Attention RN34 + SS" model.
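A minimal sketch of this sampling scheme is given below, following the group weights and probabilities spelled out in the detailed definitions later in the text; the group names and data layout are assumptions made for illustration only.

```python
import random

def size_balanced_batch(groups, batch_size):
    """Sketch of the size-balanced sampling described above.

    `groups` maps the four group names to lists of sample indices:
    covid_small, covid_large, cap_small, cap_large.
    """
    n = {k: len(v) for k, v in groups.items()}
    # Weights follow the class-resampling idea in [46]: up-weight the two
    # minority groups (COVID-19 with small infections, CAP with large infections).
    weights = {
        "covid_small": n["covid_large"] / n["covid_small"],
        "covid_large": 1.0,
        "cap_small": 1.0,
        "cap_large": n["cap_small"] / n["cap_large"],
    }
    w_sum = sum(weights.values())
    probs = {k: w / w_sum for k, w in weights.items()}

    batch, names = [], list(groups.keys())
    for _ in range(batch_size):
        # First pick a group according to the refined probabilities,
        # then pick a sample uniformly inside that group.
        g = random.choices(names, weights=[probs[k] for k in names], k=1)[0]
        batch.append(random.choice(groups[g]))
    return batch

# Toy example with made-up group sizes.
toy_groups = {
    "covid_small": list(range(100)),
    "covid_large": list(range(100, 250)),
    "cap_small": list(range(250, 400)),
    "cap_large": list(range(400, 500)),
}
print(size_balanced_batch(toy_groups, batch_size=20))
```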
D. Objective Function
Two losses are used to train the "Attention RN34 + US" and "Attention RN34 + SS" models, i.e., the classification loss L_c and an extra attention loss L_ex for COVID-19 cases. We adopt the binary cross entropy as the COVID-19/CAP classification loss L_c. For the COVID-19 cases, given the pneumonia infection segmentation mask M, we can use it to directly refine the attention maps from our model, and L_ex is thus formulated as:
L_ex = Σ_{ijk} (T(A_{ijk}) − M_{ijk})² / (Σ_{ijk} T(A_{ijk}) + Σ_{ijk} M_{ijk}),    (3)
where T (A ijk ) is the attention map generated from our online attention module (Eq. 2), and i, j and k represent the (i, j, k) th voxel in the attention map. The proposed L ex is modified from the traditional mean square error (MSE) loss, using the sum of regions of attention map T (A ijk ) and the corresponding mask M ijk as an adaptive normalization factor. It can adjust the loss value dynamically according to the sizes of pneumonia infection regions. Then, the overall objective function for training "Attention RN34 + US" and "Attention RN34 + SS" models is expressed as:
L_total = L_c + λ L_ex,    (4)
where λ is a weight factor for the attention loss. It is set to 0.5 in our experiments. For the CAP cases, only the classification loss L c is used for model training.
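The two loss terms can be sketched as follows. The snippet assumes binary labels with COVID-19 encoded as 1 and applies the attention term only to those cases; this is one possible reading of the description above, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def attention_loss(att_map, infection_mask, eps=1e-8):
    """Adaptively normalized MSE between attention map and infection mask (Eq. 3)."""
    num = ((att_map - infection_mask) ** 2).sum()
    den = att_map.sum() + infection_mask.sum() + eps
    return num / den

def total_loss(logits, labels, att_map, infection_mask, lam=0.5):
    """Classification BCE plus weighted attention loss (Eq. 4).

    The attention term is only applied to COVID-19 cases (label == 1 here,
    an assumption on how the labels are encoded).
    """
    l_c = F.binary_cross_entropy_with_logits(logits, labels.float())
    covid = labels.bool()
    if covid.any():
        l_ex = attention_loss(att_map[covid], infection_mask[covid])
    else:
        l_ex = torch.zeros((), device=logits.device)
    return l_c + lam * l_ex

# Toy usage with random tensors in place of network outputs and masks.
logits = torch.randn(4)
labels = torch.tensor([1, 0, 1, 0])
att = torch.rand(4, 1, 8, 16, 16)
mask = (torch.rand(4, 1, 8, 16, 16) > 0.8).float()
print(total_loss(logits, labels, att, mask))
```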
E. Ensemble Learning
The size-balanced sampling method places more attention on the minority classes and remedies the infection-area bias between COVID-19 and CAP patients. A drawback is that it may suffer from possible over-fitting to these minority classes. In contrast, the uniform sampling method learns feature representations from the original data distribution in a relatively robust way. Taking advantage of both sampling methods, we propose a dual-sampling method via an ensemble learning layer, which gauges the weights for the prediction results produced by the two models.
After training the two models with different sampling strategies, we use an ensemble learning layer to integrate the predictions from two models into the final diagnosis result. We combine the prediction scores with different weights for different ratios of the pneumonia infection regions and the lung:
P_final = w · P_US + (1 − w) · P_SS,    (5)
where w is the weight factor. In our experiments, it is set to 0.35 for the cases where the ratio meets the criterion < 0.001 or > 0.030, and to 0.96 for the remaining cases. The factor values are determined with a hyperparameter search on the TV set.
Then, P_final is the final prediction result of the dual-sampling model. As presented in Eq. 5, the dual-sampling strategy combines the characteristics of uniform sampling and size-balanced sampling. For the minority classes, i.e., COVID-19 with small infections as well as CAP with large infections, we assign extra weight to the "Attention RN34 + SS" model. For the remaining cases, more weight is assigned to the "Attention RN34 + US" model.
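Eq. 5 together with the ratio-dependent choice of w amounts to a few lines of code. The sketch below uses the weight values quoted above and assumes the infection ratio has already been computed from the segmentation.

```python
def dual_sampling_prediction(p_us, p_ss, infection_ratio):
    """Combine the two models' scores as in Eq. 5.

    p_us, p_ss       : prediction scores from the uniform-sampling and
                       size-balanced-sampling models
    infection_ratio  : volume ratio of the segmented infection regions over the lung
    """
    # More weight on the size-balanced model for the extreme groups
    # (very small or very large infection ratios), otherwise trust the
    # uniform-sampling model more.
    w = 0.35 if (infection_ratio < 0.001 or infection_ratio > 0.030) else 0.96
    return w * p_us + (1.0 - w) * p_ss

# Example: a case with a very small infection ratio.
print(dual_sampling_prediction(p_us=0.62, p_ss=0.81, infection_ratio=0.0005))
```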
IV. EXPERIMENTAL RESULTS
A. Dataset
In this study, we use a large multi-center CT dataset for evaluating the proposed method in the diagnosis of COVID-19. In particular, we have collected a total of 4982 chest CT images (slice thickness < 2mm) from 3645 patients, including 3389 COVID-19 CT images and 1593 CAP CT images. All recruited COVID-19 patients were confirmed by RT-PCR test. The images were provided by the Tongji Hospital of Huazhong University of Science and Technology, Shanghai Public Health Clinical Center of Fudan University, the Second Xiangya Hospital of Central South University, China-Japan Union Hospital of Jilin University, Ruijin Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Affiliated Hangzhou First People's Hospital of Zhejiang University, the Beijing Chaoyang Hospital of Capital Medical University, and Sichuan University West China Hospital. According to the data collection dates, we separate them into two datasets. The first dataset (TV dataset) is used for training and cross-validation, and includes 1094 COVID-19 images and 1092 CAP images. The second dataset serves for independent testing, including 2295 COVID-19 images and 501 CAP images. Note that the split is done at the patient level, which means that images of the same subject are kept in the same group of training or testing. More details are shown in Table I. Thin-slice chest CT images are used in this study, with the slice thickness ranging from 0.625 to 1.5mm. CT scanners include uCT 780 from UIH, Optima CT520, Discovery CT750, and LightSpeed 16 from GE, Aquilion ONE from Toshiba, SOMATOM Force from Siemens, and SCENARIA from Hitachi. The scanning protocol includes 120 kV, with breath hold at full inspiration. All CT images are anonymized before being used for this research project. The study is approved by the Institutional Review Boards of the participating institutes. Written informed consent is waived due to the retrospective nature of the study.
B. Image pre-processing
Data are pre-processed in the following steps before feeding them into the network. First, we resample all CT images and the corresponding masks of lungs and infection regions to the same spacing (0.7168mm, 0.7168mm, 1.25mm for the x, y, and z axes, respectively) for normalization to the same voxel size. Second, we down-sample the CT images and segmentation masks to approximately half size for computational efficiency. To avoid morphological change in down-sampling, we use the same scale factor in all three dimensions and pad zeros to ensure the final size of 138 × 256 × 256. We should emphasize that our method is capable of handling full-size images. Third, we conduct "window/level" (window: 1500, level: -600) scaling of the CT images for contrast enhancement. We truncate the CT image into the window [-1350, 150], which sets intensity values above 150 to 150, and those below -1350 to -1350. Finally, following the standard protocol of data pre-processing, we normalize the voxel-wise intensities in the CT images to the interval [0, 1].
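A rough sketch of the intensity truncation, normalization, and zero-padding steps is given below. It assumes the resampling to a common voxel spacing has already been done; the centered-padding detail is our own assumption.

```python
import numpy as np

def preprocess_ct(volume, target_shape=(138, 256, 256),
                  window_low=-1350.0, window_high=150.0):
    """Sketch of the intensity pre-processing and padding described above."""
    # "Window/level" truncation: clip intensities to [-1350, 150].
    vol = np.clip(volume.astype(np.float32), window_low, window_high)
    # Normalize voxel intensities to [0, 1].
    vol = (vol - window_low) / (window_high - window_low)
    # Zero-pad (centered) to the fixed network input size.
    padded = np.zeros(target_shape, dtype=np.float32)
    slices_src, slices_dst = [], []
    for s, t in zip(vol.shape, target_shape):
        copy = min(s, t)
        src_start = (s - copy) // 2
        dst_start = (t - copy) // 2
        slices_src.append(slice(src_start, src_start + copy))
        slices_dst.append(slice(dst_start, dst_start + copy))
    padded[tuple(slices_dst)] = vol[tuple(slices_src)]
    return padded

# Toy example: a random volume slightly smaller than the target shape.
toy = np.random.uniform(-2000, 500, size=(120, 240, 240))
print(preprocess_ct(toy).shape)   # (138, 256, 256)
```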
C. Training Details and Evaluation Methods
We implement the networks in PyTorch [61], and use NVIDIA Apex for lower memory consumption and faster computation. We use the Adam [62] optimizer with momentum set to 0.9, a weight decay of 0.0001, and a learning rate of 0.0002 that is reduced by a factor of 10 after every 5 epochs. We set the batch size to 20 during training. In our experiments, all the models are trained from scratch. On the TV set, we conduct 5-fold cross-validation. In each fold, the model is evaluated on the validation set at the end of each training epoch. The checkpoint with the best validation performance within 20 epochs is used as the final model and then evaluated on the test set. All the models are trained on 4 NVIDIA TITAN RTX graphics processing units, and the inference time for one sample is approximately 4.6s on one NVIDIA TITAN RTX GPU. For evaluation, we use five different metrics to measure the classification results of the model: area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, and F1-score. AUC represents the degree of separability. In this study, we calculate the accuracy, sensitivity, specificity, and F1-score at the threshold of 0.5.
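For reference, the five metrics at the fixed threshold of 0.5 can be computed as sketched below. This assumes binary ground-truth labels with COVID-19 as the positive class and uses scikit-learn only for the AUC.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def classification_metrics(y_true, y_score, threshold=0.5):
    """Compute the five evaluation metrics used above at a fixed threshold."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    return {
        "AUC": roc_auc_score(y_true, y_score),
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": sensitivity,
        "specificity": specificity,
        "F1-score": 2 * precision * sensitivity / (precision + sensitivity),
    }

# Toy example with made-up scores.
print(classification_metrics([1, 1, 0, 0, 1], [0.9, 0.4, 0.2, 0.6, 0.8]))
```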
D. Results
First, we conduct 5-fold cross-validation on the TV set. The experimental results are shown in Table II, which combines the results of all 5 validation sets. The receiver operating characteristic (ROC) curve is also shown in Fig. 4(A). We can see that the models with the proposed attention refinement technique improve the AUC and sensitivity scores. At the same time, we can see that "Attention RN34 + DS" achieves the highest performance in AUC, accuracy, sensitivity, and F1-score when combining the two models with different sampling strategies. As for the specificity, the performance of the dual-sampling method is slightly lower than that of ResNet34 with uniform sampling.
We further investigate the generalization capability of the model by deploying the five trained models of the five individual folds on the independent testing dataset. From Fig. 4(B-F), we can see that the trained model of each fold achieves similar performance, implying consistent performance with different training data. Compared with the results on the TV set in Fig. 4(A), the AUC score of the models with the proposed attention module ("Attention RN34 + DS") on the independent test set drops from 0.988 to 0.944, while the AUC score of "RN34 + US" drops from 0.984 to 0.934. This indicates the strong robustness of our model, trained with our attention module, against possible over-fitting. The proposed attention module also ensures that the decisions made by the model depend mainly on the infection regions, suppressing contributions from non-related parts of the images. All 501 CAP images in the test set are from a single site that was not included in the TV set. The "Attention RN34 + US" and "Attention RN34 + DS" models achieve ≥ 90.0% specificity for these images. We can see that our algorithm maintains a good performance on data acquired from different centers. In the next section, the effects of different sampling strategies are presented. In order to confirm whether there exists a significant difference when using the proposed attention module or not, paired t-tests are applied. The p-values between "RN34 + US" and the three proposed methods are calculated. All the p-values are smaller than 0.01, implying that the proposed methods have significant improvements compared with "RN34 + US".
E. Detailed Analysis
To demonstrate the effectiveness in diagnosing pneumonia of different severity, we use the VB-Net toolkit [10] to get the lung mask and the pneumonia infection regions for all CT images. Based on the quantified volume ratio of pneumonia infection regions over the lung, we roughly divide the data into 3 groups in both the TV set and the test set, according to the ratios, i.e., 1) < 0.005, 2) 0.005 − 0.030, and 3) > 0.030. As shown in Table III, most of the COVID-19 images have high ratios (higher than 0.030), while most CAP images are lower than 0.005, which may indicate that the severity of COVID-19 is usually higher than that of CAP in our collected dataset. Furthermore, the classification results of COVID-19 are highly related to the ratio. In Table III, we can see that the sensitivity scores are relatively high for the large-infection-region group (> 0.030), while the specificity scores are relatively low for the small-infection-region group (< 0.005). This performance matches the nature of COVID-19 and CAP in the collected dataset.
As the size-balanced sampling strategy ("Attention RN34 + SS") is applied in the training procedure, we find that the sensitivity of the small-infected-region group (< 0.005) increases from 0.534 to 0.569, compared with the case of using the uniform sampling strategy ("Attention RN34 + US"). The specificity of the large-infected-region group (> 0.030) also increases from 0.642 to 0.667. These results demonstrate that the size-balanced sampling strategy can effectively improve the classification robustness when the bias of the pneumonia area exists. However, if we only utilize the size-balanced sampling strategy in the training process, the sensitivity of the large-infected-region group (> 0.030) decreases from 0.965 to 0.955, and the specificity of the small-infected-region group (< 0.005) decreases from 0.933 to 0.896. This reflects that some advantages of the network may be sacrificed in order to achieve specific requirements. To achieve a dynamic balance between the two extreme conditions, we present the results using the ensemble learning with the dual-sampling model (i.e., "Attention RN34 + DS"). From the sensitivity and specificity in both the small and large infected region groups, the dual-sampling strategy can preserve the classification ability obtained by uniform sampling, and slightly improve the classification performance for the COVID-19 cases in the small-infected-region group and the CAP cases in the large-infected-region group. Furthermore, the p-values between "Attention RN34 + US" and "Attention RN34 + DS" in both the small-infected-region group (< 0.005) and the large-infected-region group (> 0.030) are calculated. All the p-values are smaller than 0.01, which also proves the effectiveness and necessity of the dual-sampling strategy.
Finally, we show typical attention maps obtained by our models (Fig. 5) trained in one fold. For comparison, we show the attention results of the naive ResNet34 ("RN34 + US") in the same fold, without the online attention module or the infection mask refinement, and apply the model explanation technique Grad-CAM [29] to get the heatmaps for classification. We can see that the output of Grad-CAM roughly indicates the infection localization, yet sometimes appears far outside of the lung. However, the attention maps from our models ("Attention RN34 + US" and "Attention RN34 + SS") can reveal the precise locations of the infection. These conspicuous areas in the attention maps are similar to the infection segmentation results, which demonstrates that the final classification results determined by our model are reliable and interpretable. The attention maps can thus possibly be used as the basis to derive the COVID-19 diagnosis in clinical practice.
F. Failure Analysis
We also show two failure cases in Fig. 6, where the COVID-19 cases are classified as CAP by mistake by all the models. As can be observed from the results shown in Fig. 6, the attention maps from all the models incorrectly get activated on many areas unrelated to pneumonia. The "RN34 + US" model even generates many highlighted areas in the non-lung region instead of focusing on the lungs. With the proposed attention constraint, the attention maps of "Attention RN34 + US" and "Attention RN34 + SS" partially alleviate this problem, but the visual evidence is still insufficient to reach a correct final prediction.
V. DISCUSSION AND CONCLUSION
For COVID-19, it is important to get the diagnosis result as soon as possible. Although RT-PCR is the current ground truth for diagnosing COVID-19, it may take up to days to get the final results, and the capacity of the tests is also limited in many places, especially in the early outbreak [8]. CT is a powerful tool that can provide chest scan results within several minutes. It is therefore beneficial to develop an automatic diagnosis method based on chest CT to assist COVID-19 screening. In this study, we explore a deep-learning-based method to automatically distinguish COVID-19 from CAP in chest CT images. We evaluate our method on the largest multi-center CT dataset in the world, to the best of our knowledge. To further evaluate the generalization ability of the model, we use independent data from different hospitals (not included in the TV set), achieving an AUC of 0.944, accuracy of 87.5%, sensitivity of 86.9%, specificity of 90.1%, and F1-score of 82.0%. At the same time, to better understand the decisions of the deep learning model, we also refine the attention module and show the visual evidence, which is able to reveal the important regions used by the model for diagnosis. Our proposed method could be further extended for differential diagnosis of pneumonia, which could greatly assist physicians.
There also exist several limitations in this study. First, when longitudinal data becomes available, the proposed model should be tested for its consistency in tracking the development of COVID-19 during treatment, as considered in [63]. Second, although the proposed online attention module could largely improve the interpretability and explainability in COVID-19 diagnosis, in comparison to conventional methods such as Grad-CAM, future work is still needed to analyze the correlation between these attention localizations and the specific imaging signs that are frequently used in clinical diagnosis. There also exist some failure cases in which the visualization results do not appear correctly at the pneumonia infection regions, as shown in Fig. 6. This motivates us to further improve the attention module to better focus on the related regions and reduce the distortion from confounding visual information in the classification task in future research. Third, we also notice that the accuracy for small-infection-area COVID-19 is not quite satisfactory. This indicates the necessity of combining CT images with clinical assessment and laboratory tests for precise diagnosis of early COVID-19, which will also be covered by our future work. Last but not least, the CAP cases used in this study do not include subtype information, i.e., bacterial, fungal, and non-COVID-19 viral pneumonia. Assisting the clinical diagnosis of pneumonia subtypes would also be beneficial.
To conclude, we have developed a 3D CNN network with both online attention refinement and dual-sampling strategy to distinguish COVID-19 from the CAP in the chest CT images. The generalization performance of this algorithm is also verified by the largest multi-center CT data in the world, to our best knowledge.
Fig. 1. Examples of CT images and infection segmentations of two COVID-19 patients (upper left) and two CAP patients (bottom left), and the size distribution of the infection regions of COVID-19 and CAP in our training-validation set (right). The segmentation results of the lungs and infection regions are obtained from an established VB-Net toolkit [10]. The sizes of the infection regions are denoted by the volume ratio of the segmented infection regions and the whole lung. Compared with CAP, the COVID-19 cases tend to have more severe infections in terms of the infection region sizes.
Fig. 3. The pneumonia infection region (upper right) and the lung segmentation (bottom right) from the VB-Net toolkit [10].
Fig. 4. ROC curves of the TV set and the test set. (A) ROC curves of the TV set for 5 folds. (B) ROC curve of the test set using the model from TV set fold 1. (C) ROC curve of the test set using the model from TV set fold 2. (D) ROC curve of the test set using the model from TV set fold 3. (E) ROC curve of the test set using the model from TV set fold 4. (F) ROC curve of the test set using the model from TV set fold 5.
Fig. 5. Visualization results of our methods on three COVID-19 cases from the small-infection group (< 0.005), median-infection group (0.005 − 0.030), and large-infection group (> 0.030) of the test set, shown from left to right, respectively. For each case, we show the visualization results in both axial and coronal views. We show the original images (first row), the segmentation results of the lung and pneumonia infection regions (2nd and 3rd rows) by the VB-Net toolkit [10], the Grad-CAM results of "RN34 + US" (4th row), and the attention maps obtained by our proposed attention module of the "Attention RN34 + US" and "Attention RN34 + SS" models (5th and 6th rows).
Fig. 6. Visualization results of two failure cases.
We define the numbers of samples for the 4 groups as [N^covid_small, N^covid_large, N^cap_small, N^cap_large]. Then, inspired by the class-resampling strategy in [46], we define the weights [W^covid_small, W^covid_large, W^cap_small, W^cap_large] for the 4 groups as [N^covid_large / N^covid_small, 1, 1, N^cap_small / N^cap_large]. Since the numbers of small-infection-area COVID-19 and large-infection-area CAP cases are relatively small, the weights W^covid_small and W^cap_large are higher than 1. The values of these two weights are approximately 1.5 in each training fold. The sampling probabilities for the 4 groups are then calculated by dividing the weight of each group by the sum of all weights, W_sum. In a mini-batch, we randomly select a group according to the refined probabilities [W^covid_small / W_sum, 1 / W_sum, 1 / W_sum, W^cap_large / W_sum], and uniformly pick up a sample from the selected group.
TABLE I
DEMOGRAPHICS OF THE TRAINING-VALIDATION (TV) DATASET AND TEST DATASET. THE RESULTS OF "AGE" ARE PRESENTED AS MEDIAN VALUES (RANGE).

Characteristics              TV set          Test set
No. (images (patients))
  COVID-19                   1094 (960)      2295 (1605)
  CAP                        1092 (628)      501 (452)
  Total                      2186 (1588)     2796 (2057)
Age (years)
  COVID-19                   50.0 (14-89)    50.0 (8-95)
  CAP                        57.0 (12-94)    42.0 (15-98)
  Total                      53.0 (12-94)    49.0 (8-98)
Female/Male
  COVID-19                   479/481         800/805
  CAP                        322/306         255/197
  Total                      801/787         1055/1002
TABLE II
COMPARISON OF CLASSIFICATION RESULTS OF DIFFERENT MODELS ON THE TV SET AND TEST SET (RN34: 3D RESNET34; US: UNIFORM SAMPLING; SS: SIZE-BALANCED SAMPLING; DS: DUAL-SAMPLING). THE RESULTS OF AUC, ACCURACY, SENSITIVITY, SPECIFICITY AND F1-SCORE ARE PRESENTED IN THIS TABLE. THE RESULTS ON THE TV SET ARE THE COMBINED RESULTS OF THE 5 VALIDATION SETS. FOR RESULTS ON THE TEST SET, WE SHOW MEAN±STD (STANDARD DEVIATION) SCORES OF THE FIVE TRAINED MODELS, ONE FROM EACH TRAINING-VALIDATION FOLD.

Results                          TV set    Test set
AUC
  RN34 + US                      0.984     0.934±0.011
  Attention RN34 + US            0.986     0.948±0.003
  Attention RN34 + SS            0.987     0.938±0.002
  Attention RN34 + DS            0.988     0.944±0.003
Accuracy
  RN34 + US                      0.945     0.859±0.013
  Attention RN34 + US            0.947     0.879±0.012
  Attention RN34 + SS            0.951     0.869±0.008
  Attention RN34 + DS            0.954     0.875±0.009
Sensitivity
  RN34 + US                      0.931     0.856±0.029
  Attention RN34 + US            0.941     0.872±0.018
  Attention RN34 + SS            0.953     0.868±0.020
  Attention RN34 + DS            0.954     0.869±0.016
Specificity
  RN34 + US                      0.959     0.870±0.071
  Attention RN34 + US            0.953     0.907±0.029
  Attention RN34 + SS            0.948     0.876±0.048
  Attention RN34 + DS            0.954     0.901±0.025
F1-score
  RN34 + US                      0.945     0.798±0.011
  Attention RN34 + US            0.947     0.825±0.013
  Attention RN34 + SS            0.951     0.811±0.004
  Attention RN34 + DS            0.954     0.820±0.008
TABLE III
GROUP-WISE RESULTS ON THE TV SET AND TEST SET. BASED ON THE VOLUME RATIO OF PNEUMONIA REGIONS AND THE LUNG, THE DATA IS DIVIDED INTO 3 GROUPS: THE VOLUME RATIOS THAT MEET THE CRITERIA OF < 0.005, 0.005−0.030, AND > 0.030.
Coronavirus disease 2019 (covid-19): situation report, 80. WHO, "Coronavirus disease 2019 (covid-19): situation report, 80," 2020.

Who director-general's remarks at the media briefing on 2019-ncov on 11 february 2020. WHO, "Who director-general's remarks at the media briefing on 2019-ncov on 11 february 2020," 2020.

Coronavirus disease (covid-2019) situation reports. WHO, "Coronavirus disease (covid-2019) situation reports," 2020.
Characteristics of and important lessons from the coronavirus disease 2019 (covid-19) outbreak in china: summary of a report of 72 314 cases from the chinese center for disease control and prevention. Z Wu, J M Mcgoogan, Jama. Z. Wu and J. M. McGoogan, "Characteristics of and important lessons from the coronavirus disease 2019 (covid-19) outbreak in china: sum- mary of a report of 72 314 cases from the chinese center for disease control and prevention," Jama, 2020.
Coronavirus: covid-19 has killed more people than sars and mers combined, despite lower case fatality rate. E Mahase, E. Mahase, "Coronavirus: covid-19 has killed more people than sars and mers combined, despite lower case fatality rate," 2020.
Coronavirus disease 2019 (covid-19): A perspective from china. Z Y Zu, M D Jiang, P P Xu, W Chen, Q Q Ni, G M Lu, L J Zhang, Radiology. 200490Z. Y. Zu, M. D. Jiang, P. P. Xu, W. Chen, Q. Q. Ni, G. M. Lu, and L. J. Zhang, "Coronavirus disease 2019 (covid-19): A perspective from china," Radiology, p. 200490, 2020.
A familial cluster of pneumonia associated with the 2019 novel coronavirus indicating person-to-person transmission: a study of a family cluster. J F Chan, S Yuan, K.-H Kok, K K W To, H Chu, J Yang, F Xing, J Liu, C C Yip, R W , .-S Poon, The Lancet. 39510223J. F.-W. Chan, S. Yuan, K.-H. Kok, K. K.-W. To, H. Chu, J. Yang, F. Xing, J. Liu, C. C.-Y. Yip, R. W.-S. Poon et al., "A familial cluster of pneumonia associated with the 2019 novel coronavirus indicating person-to-person transmission: a study of a family cluster," The Lancet, vol. 395, no. 10223, pp. 514-523, 2020.
Correlation of chest ct and rt-pcr testing in coronavirus disease 2019 (covid-19) in china: a report of 1014 cases. T Ai, Z Yang, H Hou, C Zhan, C Chen, W Lv, Q Tao, Z Sun, L Xia, Radiology. 200642T. Ai, Z. Yang, H. Hou, C. Zhan, C. Chen, W. Lv, Q. Tao, Z. Sun, and L. Xia, "Correlation of chest ct and rt-pcr testing in coronavirus disease 2019 (covid-19) in china: a report of 1014 cases," Radiology, p. 200642, 2020.
M Chung, A Bernheim, X Mei, N Zhang, M Huang, X Zeng, J Cui, W Xu, Y Yang, Z A Fayad, Ct imaging features of 2019 novel coronavirus. 200230M. Chung, A. Bernheim, X. Mei, N. Zhang, M. Huang, X. Zeng, J. Cui, W. Xu, Y. Yang, Z. A. Fayad et al., "Ct imaging features of 2019 novel coronavirus (2019-ncov)," Radiology, p. 200230, 2020.
Lung infection quantification of covid-19 in ct images with deep learning. F Shan, Y Gao, J Wang, W Shi, N Shi, M Han, Z Xue, D Shen, Y Shi, arXiv:2003.04655arXiv preprintF. Shan, Y. Gao, J. Wang, W. Shi, N. Shi, M. Han, Z. Xue, D. Shen, and Y. Shi, "Lung infection quantification of covid-19 in ct images with deep learning," arXiv preprint arXiv:2003.04655, 2020.
Deep learning. Y Lecun, Y Bengio, G Hinton, nature. 5217553Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," nature, vol. 521, no. 7553, pp. 436-444, 2015.
Imagenet classification with deep convolutional neural networks. A Krizhevsky, I Sutskever, G E Hinton, Advances in neural information processing systems. A. Krizhevsky, I. Sutskever, and G. E. Hinton, "Imagenet classification with deep convolutional neural networks," in Advances in neural infor- mation processing systems, 2012, pp. 1097-1105.
Deep residual learning for image recognition. K He, X Zhang, S Ren, J Sun, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionK. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770-778.
Densely connected convolutional networks. G Huang, Z Liu, L Van Der Maaten, K Q Weinberger, Proceedings of the IEEE confer. the IEEE conferG. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, "Densely connected convolutional networks," in Proceedings of the IEEE confer- ence on computer vision and pattern recognition, 2017, pp. 4700-4708.
Estimating ct image from mri data using 3d fully convolutional networks. D Nie, X Cao, Y Gao, L Wang, D Shen, Deep Learning and Data Labeling for Medical Applications. SpringerD. Nie, X. Cao, Y. Gao, L. Wang, and D. Shen, "Estimating ct image from mri data using 3d fully convolutional networks," in Deep Learning and Data Labeling for Medical Applications. Springer, 2016, pp. 170- 178.
Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. X Wang, Y Peng, L Lu, Z Lu, M Bagheri, R M Summers, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionX. Wang, Y. Peng, L. Lu, Z. Lu, M. Bagheri, and R. M. Summers, "Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 2097-2106.
U-net: Convolutional networks for biomedical image segmentation. O Ronneberger, P Fischer, T Brox, International Conference on Medical image computing and computer-assisted intervention. SpringerO. Ronneberger, P. Fischer, and T. Brox, "U-net: Convolutional networks for biomedical image segmentation," in International Conference on Medical image computing and computer-assisted intervention. Springer, 2015, pp. 234-241.
Backpropagation applied to handwritten zip code recognition. Y Lecun, B Boser, J S Denker, D Henderson, R E Howard, W Hubbard, L D , Neural computation. 14Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, "Backpropagation applied to handwritten zip code recognition," Neural computation, vol. 1, no. 4, pp. 541-551, 1989.
Automatic lung segmentation based on texture and deep features of hrct images with interstitial lung disease. T Pang, S Guo, X Zhang, L Zhao, BioMed Research International. 2019T. Pang, S. Guo, X. Zhang, and L. Zhao, "Automatic lung segmentation based on texture and deep features of hrct images with interstitial lung disease," BioMed Research International, vol. 2019, 2019.
Lung segmentation on hrct and volumetric ct for diffuse interstitial lung disease using deep convolutional neural networks. B Park, H Park, S M Lee, J B Seo, N Kim, Journal of Digital Imaging. 326B. Park, H. Park, S. M. Lee, J. B. Seo, and N. Kim, "Lung segmentation on hrct and volumetric ct for diffuse interstitial lung disease using deep convolutional neural networks," Journal of Digital Imaging, vol. 32, no. 6, pp. 1019-1026, 2019.
Deep learning with convolutional neural network for differentiation of liver masses at dynamic contrast-enhanced ct: a preliminary study. K Yasaka, H Akai, O Abe, S Kiryu, Radiology. 2863K. Yasaka, H. Akai, O. Abe, and S. Kiryu, "Deep learning with con- volutional neural network for differentiation of liver masses at dynamic contrast-enhanced ct: a preliminary study," Radiology, vol. 286, no. 3, pp. 887-896, 2018.
Added value of computer-aided ct image features for early lung cancer diagnosis with small pulmonary nodules: a matched case-control study. P Huang, S Park, R Yan, J Lee, L C Chu, C T Lin, A Hussien, J Rathmell, B Thomas, C Chen, Radiology. 2861P. Huang, S. Park, R. Yan, J. Lee, L. C. Chu, C. T. Lin, A. Hussien, J. Rathmell, B. Thomas, C. Chen et al., "Added value of computer-aided ct image features for early lung cancer diagnosis with small pulmonary nodules: a matched case-control study," Radiology, vol. 286, no. 1, pp. 286-295, 2018.
End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. D Ardila, A P Kiraly, S Bharadwaj, B Choi, J J Reicher, L Peng, D Tse, M Etemadi, W Ye, G Corrado, Nature medicine. 256D. Ardila, A. P. Kiraly, S. Bharadwaj, B. Choi, J. J. Reicher, L. Peng, D. Tse, M. Etemadi, W. Ye, G. Corrado et al., "End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography," Nature medicine, vol. 25, no. 6, pp. 954-961, 2019.
Deep learning at chest radiography: automated classification of pulmonary tuberculosis by using convolutional neural networks. P Lakhani, B Sundaram, Radiology. 2842P. Lakhani and B. Sundaram, "Deep learning at chest radiography: au- tomated classification of pulmonary tuberculosis by using convolutional neural networks," Radiology, vol. 284, no. 2, pp. 574-582, 2017.
Chexpert: A large chest radiograph dataset with uncertainty labels and expert comparison. J Irvin, P Rajpurkar, M Ko, Y Yu, S Ciurea-Ilcus, C Chute, H Marklund, B Haghgoo, R Ball, K Shpanskaya, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence33J. Irvin, P. Rajpurkar, M. Ko, Y. Yu, S. Ciurea-Ilcus, C. Chute, H. Mark- lund, B. Haghgoo, R. Ball, K. Shpanskaya et al., "Chexpert: A large chest radiograph dataset with uncertainty labels and expert comparison," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, 2019, pp. 590-597.
A deep learning architecture for image representation, visual interpretability and automated basal-cell carcinoma cancer detection. A A Cruz-Roa, J E A Ovalle, A Madabhushi, F A G Osorio, International Conference on Medical Image Computing and Computer-Assisted Intervention. SpringerA. A. Cruz-Roa, J. E. A. Ovalle, A. Madabhushi, and F. A. G. Osorio, "A deep learning architecture for image representation, visual interpretability and automated basal-cell carcinoma cancer detection," in International Conference on Medical Image Computing and Computer- Assisted Intervention. Springer, 2013, pp. 403-410.
Visual interpretability for deep learning: a survey. Q.-S Zhang, S.-C Zhu, Frontiers of Information Technology & Electronic Engineering. 191Q.-s. Zhang and S.-C. Zhu, "Visual interpretability for deep learning: a survey," Frontiers of Information Technology & Electronic Engineering, vol. 19, no. 1, pp. 27-39, 2018.
Learning deep features for discriminative localization. B Zhou, A Khosla, A Lapedriza, A Oliva, A Torralba, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionB. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba, "Learning deep features for discriminative localization," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 2921- 2929.
Grad-cam: Visual explanations from deep networks via gradient-based localization. R R Selvaraju, M Cogswell, A Das, R Vedantam, D Parikh, D Batra, Proceedings of the IEEE international conference on computer vision. the IEEE international conference on computer visionR. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, "Grad-cam: Visual explanations from deep networks via gradient-based localization," in Proceedings of the IEEE international conference on computer vision, 2017, pp. 618-626.
Large-scale screening of covid-19 from community acquired pneumonia using infection size-aware classification. F Shi, L Xia, F Shan, D Wu, Y Wei, H Yuan, H Jiang, Y Gao, H Sui, D Shen, arXiv:2003.09860arXiv preprintF. Shi, L. Xia, F. Shan, D. Wu, Y. Wei, H. Yuan, H. Jiang, Y. Gao, H. Sui, and D. Shen, "Large-scale screening of covid-19 from community acquired pneumonia using infection size-aware classification," arXiv preprint arXiv:2003.09860, 2020.
Imaging of community-acquired pneumonia. T Franquet, Journal of thoracic imaging. 335T. Franquet, "Imaging of community-acquired pneumonia," Journal of thoracic imaging, vol. 33, no. 5, pp. 282-294, 2018.
Chexnet: Radiologistlevel pneumonia detection on chest x-rays with deep learning. P Rajpurkar, J Irvin, K Zhu, B Yang, H Mehta, T Duan, D Ding, A Bagul, C Langlotz, K Shpanskaya, arXiv:1711.05225arXiv preprintP. Rajpurkar, J. Irvin, K. Zhu, B. Yang, H. Mehta, T. Duan, D. Ding, A. Bagul, C. Langlotz, K. Shpanskaya et al., "Chexnet: Radiologist- level pneumonia detection on chest x-rays with deep learning," arXiv preprint arXiv:1711.05225, 2017.
Radiological society of north america. R P D Challenge, R. P. D. Challenge, "Radiological society of north america," 2018.
Focal loss for dense object detection. T.-Y Lin, P Goyal, R Girshick, K He, P Dollár, Proceedings of the IEEE international conference on computer vision. the IEEE international conference on computer visionT.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, "Focal loss for dense object detection," in Proceedings of the IEEE international conference on computer vision, 2017, pp. 2980-2988.
Mask r-cnn. K He, G Gkioxari, P Dollár, R Girshick, Proceedings of the IEEE international conference on computer vision. the IEEE international conference on computer visionK. He, G. Gkioxari, P. Dollár, and R. Girshick, "Mask r-cnn," in Proceedings of the IEEE international conference on computer vision, 2017, pp. 2961-2969.
Radiological diagnosis in lung disease: factoring treatment options into the choice of diagnostic modality. M O Wielpütz, C P Heußel, F J Herth, H.-U Kauczor, DeutschesÄrzteblatt International. 11111181M. O. Wielpütz, C. P. Heußel, F. J. Herth, and H.-U. Kauczor, "Radi- ological diagnosis in lung disease: factoring treatment options into the choice of diagnostic modality," DeutschesÄrzteblatt International, vol. 111, no. 11, p. 181, 2014.
Automated classification of usual interstitial pneumonia using regional volumetric texture analysis in high-resolution ct. A Depeursinge, A S Chin, A N Leung, D Terrone, M Bristow, G Rosen, D L Rubin, Investigative radiology. 504261A. Depeursinge, A. S. Chin, A. N. Leung, D. Terrone, M. Bristow, G. Rosen, and D. L. Rubin, "Automated classification of usual interstitial pneumonia using regional volumetric texture analysis in high-resolution ct," Investigative radiology, vol. 50, no. 4, p. 261, 2015.
Guidelines for management of incidental pulmonary nodules detected on ct images: from the fleischner society. H Macmahon, D P Naidich, J M Goo, K S Lee, A N Leung, J R Mayo, A C Mehta, Y Ohno, C A Powell, M Prokop, Radiology. 2841H. MacMahon, D. P. Naidich, J. M. Goo, K. S. Lee, A. N. Leung, J. R. Mayo, A. C. Mehta, Y. Ohno, C. A. Powell, M. Prokop et al., "Guidelines for management of incidental pulmonary nodules detected on ct images: from the fleischner society 2017," Radiology, vol. 284, no. 1, pp. 228-243, 2017.
A deep learning algorithm using ct images to screen for corona virus disease. S Wang, B Kang, J Ma, X Zeng, M Xiao, J Guo, M Cai, J Yang, Y Li, X Meng, covid-19)," medRxivS. Wang, B. Kang, J. Ma, X. Zeng, M. Xiao, J. Guo, M. Cai, J. Yang, Y. Li, X. Meng et al., "A deep learning algorithm using ct images to screen for corona virus disease (covid-19)," medRxiv, 2020.
X Xu, X Jiang, C Ma, P Du, X Li, S Lv, L Yu, Y Chen, J Su, G Lang, arXiv:2002.09334Deep learning system to screen coronavirus disease 2019 pneumonia. arXiv preprintX. Xu, X. Jiang, C. Ma, P. Du, X. Li, S. Lv, L. Yu, Y. Chen, J. Su, G. Lang et al., "Deep learning system to screen coronavirus disease 2019 pneumonia," arXiv preprint arXiv:2002.09334, 2020.
Deep learning enables accurate diagnosis of novel coronavirus (covid-19) with ct images. Y Song, S Zheng, L Li, X Zhang, X Zhang, Z Huang, J Chen, H Zhao, Y Jie, R Wang, medRxivY. Song, S. Zheng, L. Li, X. Zhang, X. Zhang, Z. Huang, J. Chen, H. Zhao, Y. Jie, R. Wang et al., "Deep learning enables accurate diagnosis of novel coronavirus (covid-19) with ct images," medRxiv, 2020.
Recurrent residual convolutional neural network based on u-net (r2u-net) for medical image segmentation. M Z Alom, M Hasan, C Yakopcic, T M Taha, V K Asari, arXiv:1802.06955arXiv preprintM. Z. Alom, M. Hasan, C. Yakopcic, T. M. Taha, and V. K. Asari, "Recurrent residual convolutional neural network based on u-net (r2u- net) for medical image segmentation," arXiv preprint arXiv:1802.06955, 2018.
Review of artificial intelligence techniques in imaging data acquisition, segmentation and diagnosis for covid-19. F Shi, J Wang, J Shi, Z Wu, Q Wang, Z Tang, K He, Y Shi, D Shen, IEEE Reviews in Biomedical Engineering. F. Shi, J. Wang, J. Shi, Z. Wu, Q. Wang, Z. Tang, K. He, Y. Shi, and D. Shen, "Review of artificial intelligence techniques in imaging data acquisition, segmentation and diagnosis for covid-19," IEEE Reviews in Biomedical Engineering, 2020.
Can spatiotemporal 3d cnns retrace the history of 2d cnns and imagenet. K Hara, H Kataoka, Y Satoh, Proceedings of the IEEE conference on Computer Vision and Pattern Recognition. the IEEE conference on Computer Vision and Pattern RecognitionK. Hara, H. Kataoka, and Y. Satoh, "Can spatiotemporal 3d cnns retrace the history of 2d cnns and imagenet?" in Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, 2018, pp. 6546-6555.
The devil is in the tails: Fine-grained classification in the wild. G Van Horn, P Perona, arXiv:1709.01450arXiv preprintG. Van Horn and P. Perona, "The devil is in the tails: Fine-grained classification in the wild," arXiv preprint arXiv:1709.01450, 2017.
Bbn: Bilateral-branch network with cumulative learning for long-tailed visual recognition. B Zhou, Q Cui, X.-S Wei, Z.-M Chen, arXiv:1912.02413arXiv preprintB. Zhou, Q. Cui, X.-S. Wei, and Z.-M. Chen, "Bbn: Bilateral-branch network with cumulative learning for long-tailed visual recognition," arXiv preprint arXiv:1912.02413, 2019.
A systematic study of the class imbalance problem in convolutional neural networks. M Buda, A Maki, M A Mazurowski, Neural Networks. 106M. Buda, A. Maki, and M. A. Mazurowski, "A systematic study of the class imbalance problem in convolutional neural networks," Neural Networks, vol. 106, pp. 249-259, 2018.
Relay backpropagation for effective learning of deep convolutional neural networks. L Shen, Z Lin, Q Huang, SpringerL. Shen, Z. Lin, and Q. Huang, "Relay backpropagation for effective learning of deep convolutional neural networks," in European conference on computer vision. Springer, 2016, pp. 467-482.
Learning from imbalanced data. H He, E A Garcia, IEEE Transactions on knowledge and data engineering. 219H. He and E. A. Garcia, "Learning from imbalanced data," IEEE Transactions on knowledge and data engineering, vol. 21, no. 9, pp. 1263-1284, 2009.
The class imbalance problem: A systematic study. N Japkowicz, S Stephen, 6Intelligent data analysisN. Japkowicz and S. Stephen, "The class imbalance problem: A system- atic study," Intelligent data analysis, vol. 6, no. 5, pp. 429-449, 2002.
Class-balanced loss based on effective number of samples. Y Cui, M Jia, T.-Y Lin, Y Song, S Belongie, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionY. Cui, M. Jia, T.-Y. Lin, Y. Song, and S. Belongie, "Class-balanced loss based on effective number of samples," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 9268-9277.
Smote: synthetic minority over-sampling technique. N V Chawla, K W Bowyer, L O Hall, W P Kegelmeyer, Journal of artificial intelligence research. 16N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer, "Smote: synthetic minority over-sampling technique," Journal of artificial intel- ligence research, vol. 16, pp. 321-357, 2002.
Non-local neural networks. X Wang, R Girshick, A Gupta, K He, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionX. Wang, R. Girshick, A. Gupta, and K. He, "Non-local neural net- works," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 7794-7803.
Dual attention network for scene segmentation. J Fu, J Liu, H Tian, Y Li, Y Bao, Z Fang, H Lu, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionJ. Fu, J. Liu, H. Tian, Y. Li, Y. Bao, Z. Fang, and H. Lu, "Dual attention network for scene segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 3146-3154.
Squeeze-and-excitation networks. J Hu, L Shen, G Sun, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionJ. Hu, L. Shen, and G. Sun, "Squeeze-and-excitation networks," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 7132-7141.
Attention branch network: Learning of attention mechanism for visual explanation. H Fukui, T Hirakawa, T Yamashita, H Fujiyoshi, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern Recognition10H. Fukui, T. Hirakawa, T. Yamashita, and H. Fujiyoshi, "Attention branch network: Learning of attention mechanism for visual explana- tion," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 10 705-10 714.
Tell me where to look: Guided attention inference network. K Li, Z Wu, K.-C Peng, J Ernst, Y Fu, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionK. Li, Z. Wu, K.-C. Peng, J. Ernst, and Y. Fu, "Tell me where to look: Guided attention inference network," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 9215-9223.
V-net: Fully convolutional neural networks for volumetric medical image segmentation. F Milletari, N Navab, S.-A Ahmadi, 2016 Fourth International Conference on 3D Vision (3DV). IEEEF. Milletari, N. Navab, and S.-A. Ahmadi, "V-net: Fully convolutional neural networks for volumetric medical image segmentation," in 2016 Fourth International Conference on 3D Vision (3DV). IEEE, 2016, pp. 565-571.
Network in network. M Lin, Q Chen, S Yan, arXiv:1312.4400arXiv preprintM. Lin, Q. Chen, and S. Yan, "Network in network," arXiv preprint arXiv:1312.4400, 2013.
Rectified linear units improve restricted boltzmann machines. V Nair, G E Hinton, Proceedings of the 27th international conference on machine learning (ICML-10). the 27th international conference on machine learning (ICML-10)V. Nair and G. E. Hinton, "Rectified linear units improve restricted boltz- mann machines," in Proceedings of the 27th international conference on machine learning (ICML-10), 2010, pp. 807-814.
Pytorch: An imperative style, high-performance deep learning library. A Paszke, S Gross, F Massa, A Lerer, J Bradbury, G Chanan, T Killeen, Z Lin, N Gimelshein, L Antiga, Advances in Neural Information Processing Systems. A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga et al., "Pytorch: An imperative style, high-performance deep learning library," in Advances in Neural Information Processing Systems, 2019, pp. 8024-8035.
Adam: A method for stochastic optimization. D P Kingma, J Ba, arXiv:1412.6980arXiv preprintD. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
Classic: consistent longitudinal alignment and segmentation for serial image computing. Z Xue, D Shen, C Davatzikos, NeuroImage. 302Z. Xue, D. Shen, and C. Davatzikos, "Classic: consistent longitudinal alignment and segmentation for serial image computing," NeuroImage, vol. 30, no. 2, pp. 388-399, 2006.
The simple-cubic structure of elemental Polonium and its relation to combined charge and orbital order in other elemental chalcogens
A. Silva and J. van Wezel
Institute for Theoretical Physics, Institute of Physics, University of Amsterdam, 1090 GL Amsterdam, The Netherlands
DOI: 10.21468/SciPostPhys.4.6.028 | arXiv: 1712.09533 | https://pure.uva.nl/ws/files/34415705/SciPostPhys_4_6_028.pdf
UvA-DARE (Digital Academic Repository) is a service provided by the library of the University of Amsterdam (https://dare.uva.nl).
Introduction
Polonium is unique in the periodic table, being the only element to crystallise into a simple cubic lattice structure under ambient conditions. Besides it being remarkable that such a loosely packed configuration is favoured in any material, this is also surprising given that Tellurium (Te) and Selenium (Se), the two isoelectronic elements directly above Po in the periodic table, adopt a trigonal spiral lattice structure [1] (Sulphur and Oxygen in the same column form molecules rather than crystals, and will be ignored from here on). The trigonal arrangement in Te and Se can be understood as arising from a Peierls instability of a hypothetical simple cubic parent structure [2], in which the short bonds in three simultaneous charge density waves connect in a pattern that spirals around the body diagonal of the cube (see inset in figure 1).
Looking more closely, the spiral structure in Se and Te is in fact a combined charge and orbital ordered state, in which a spiral pattern of preferential occupation of different p-orbitals necessarily accompanies the charge order [2]. Polonium however, is considerably heavier than Se and Te, and relativistic effects may be expected to play a role in determining its ground state. Heuristically, it is clear that the presence of strong spin-orbit coupling, eliminating orbitals as individual degrees of freedom, is at odds with the formation of orbital order. In fact, ab initio calculations of the phonon dispersion in elemental chalcogens indicate that inclusion of relativistic effects suppresses a softening of the phonons, and possibly a related structural instability, which would otherwise be present [3][4][5][6]. The mechanism by which this is accomplished, as well as the identification of the dominant relativistic effect, being either a Darwin term, mass-velocity term, or atomic spin-orbit interaction, is still an unsettled and controversial issue [3][4][5][6][7][8]. In this paper, we construct a minimal microscopic model for elemental chalcogens, in which the evolution of the lattice structure can be studied as a function of the strength of spin-orbit coupling.
We show that at weak coupling, the simple cubic structure is unstable towards the formation of combined charge and orbital order, which results in the spiral trigonal lattice structure observed in Se and Te. Upon raising the strength of the spin-orbit coupling, the instability is suppressed, and the simple cubic structure observed in Po is realised instead. Moreover, we show that, taking into account thermal expansion of the lattice, the structural instability is suppressed at elevated temperatures. That is, using parameter values that are realistic for Po, the phonon structure is softened to such an extent as to effectively weaken the role of spin-orbit coupling at high temperatures. As a result, we find a transition between the two known allotropes of Polonium, the simple cubic α−Po and the trigonal β−Po. We argue that this corresponds to the experimentally observed transition at approximately 348 K [7,9], and conclude that like Se and Te, β−Po has a combined charge and orbital ordered structure (as indicated in the phase diagram of figure 2). The unusual lowering of the crystal symmetry upon raising temperature, and the peculiar phase diagram connecting the structure of Po to that of Se and Te, are thus found to be due to the intricate interplay between spins, orbitals, charges, and lattice deformations in the elemental chalcogens, where none of these degrees of freedom can be neglected.
Minimal microscopic model
The starting point for constructing a minimal microscopic model capable of describing the lattice instabilities in the entire family of elemental chalcogens, is a simple cubic arrangement of atoms. All chalcogens have four electrons in the outer shell of p-orbitals, so we will consider a tight-binding model taking into account only p_x, p_y, and p_z orbitals on each site. For convenience, we choose the quantisation axes for the orbitals to coincide with the lattice directions.

Figure 1: The points are self-consistent numerical solutions of the set of equations in Eq. (4), while the solid line connecting them is a guide to the eye only. The unordered state at high λ_SOC/t corresponds to the simple cubic lattice structure of α−Po, as shown in the right inset. In the ordered state, the planes perpendicular to the cube's body diagonal move closer together, by means of a contraction of the thick bonds shown in blue in the left inset. The result is the spiral trigonal lattice structure known to be realised in Se and Te. Because the structural transition is the result of three simultaneous density wave instabilities, each occurring in chains of distinct orbitals, the trigonal state necessarily is also an orbital ordered state. The least occupied orbitals in each trigonal plane are highlighted in the left inset.
The strongest orbital overlaps then occur in one-dimensional chains of p-orbitals aligned in a head-to-toe fashion along their long axis. In other words, the overlaps of for example neighbouring p x orbitals on the x-axis are much larger than those between neighbouring p x orbitals on the y or z axes, or between any two p orbitals of different type.
A minimal model for the bare electronic structure may thus be constructed by taking into account a hopping integral t along chains in all three directions, but neglecting all other orbital overlaps, and in particular any inter-chain hopping. Interactions between one-dimensional chains in different directions can then be taken into account by including the Coulomb interaction V between electrons in different p-orbitals on the same site. The resulting model is known to qualitatively capture the instability in the electronic structure which underlies the formation of combined charge and orbital order in Se and Te [2]. The electronic Hamiltonian for this minimal model can be written as the sum of tight binding and Coulomb terms:
H_{TB} = t \sum_{r,n,\sigma} \hat{c}^\dagger_{r,n,\sigma} \hat{c}_{r+n,n,\sigma} + \text{H.c.},
H_{Coul} = V \sum_{r,n,\sigma,\sigma'} \hat{c}^\dagger_{r,n,\sigma} \hat{c}_{r,n,\sigma} \hat{c}^\dagger_{r,n+1,\sigma'} \hat{c}_{r,n+1,\sigma'},    (1)
where \hat{c}^\dagger_{r,n,\sigma} creates an electron on position r, with spin σ, in a p_n-orbital, with n ∈ {x, y, z}. The lattice vectors a_n are written using the shorthand notation n. In our simulations, we use the parameter values t = 2.0 eV and V = 39 meV.
We additionally allow atoms to be displaced by introducing phonons. Since the phonon dispersion is approximately flat in the momentum-space region of interest, we employ an Einstein mode of constant energy ω = 3.5 meV. There are two different ways in which electrons couple to the phonons. On the one hand, atomic displacements alter the interatomic distances, which affects the hopping of electrons between them. On the other hand, atomic displacements also alter the local density of ions surrounding a particular site, which influences the on-site potential energy of electrons. We take into account both the kinetic and potential energy contributions of phonons:
H^{kin}_{el\text{-}ph} = g^{(1)} \sum_{r,n,\sigma} \left( \hat{u}_{r,n} - \hat{u}_{r+n,n} \right) \hat{c}^\dagger_{r,n,\sigma} \hat{c}_{r+n,n,\sigma} + \text{H.c.},
H^{pot}_{el\text{-}ph} = g^{(2)} \sum_{r,n,\sigma} \left( \hat{u}_{r+n,n} - \hat{u}_{r-n,n} \right) \hat{c}^\dagger_{r,n,\sigma} \hat{c}_{r,n,\sigma}.    (2)
Hereû r,n is the operator corresponding to the n-component of displacement for the atom on position r. The relative strength of the two types of electron-phonon coupling g (1) and g (2) determines whether a spiral trigonal structure consisting of site-centered or bondcentered charge density waves is formed in Se and Te. For simplicity, we assume equal values g (1) = g (2) = 0.04 eV for these couplings, resulting in a bond-centered spiral state consistent with experimental observations. The minimal model consisting of the terms considered so far gives rise to three sets of mutually parallel Fermi surface sheets. This situation is extremely well-nested, and, together with the electron-phonon coupling, renders the simple cubic phase unstable towards the formation of three simultaneous charge density waves, connected to the three sets of planes. In fact, a single, common nesting vector Q = 2π/3a(1, 1, 1) can be chosen such that every point on a Fermi surface sheet is connected to a corresponding point on a parallel sheet. The on-site Coulomb interaction provides a coupling between the density waves, resulting in an overall spiral trigonal structure. Because each charge density wave resides in chains of a particular type of orbital, the trigonal structure is automatically orbital ordered as well as charge ordered [2].
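The nesting condition can be made concrete with a minimal one-dimensional sketch: each chain carries a band ε(k) = 2t cos(ka), and with four electrons distributed over the six p-states per site each chain band is filled to 2/3, so the Fermi points of a single chain should be spanned by 2π/3a up to a reciprocal lattice vector. The snippet below checks this numerically; the values and the sign convention for t are illustrative only (a different sign merely shifts the Fermi points by a reciprocal lattice vector).

```python
import numpy as np

# Sanity check of the nesting argument (a 1D sketch, not the full 3D calculation):
# a chain band eps(k) = 2 t cos(k a) at 2/3 filling should have Fermi points spanned
# by Q = 2*pi/(3a), matching the common nesting vector quoted in the text.
t, a = 2.0, 1.0
k = np.linspace(-np.pi / a, np.pi / a, 200001)
eps = 2.0 * t * np.cos(k * a)

mu = np.quantile(eps, 2.0 / 3.0)                          # Fermi level at 2/3 band filling
kF = k[np.where(np.diff(np.sign(eps - mu)))[0]]           # Fermi points of the chain
span = np.mod(abs(kF[-1] - kF[0]), 2 * np.pi / a)         # spanning vector folded into the BZ

print("Fermi points (units of pi/a):", np.round(kF * a / np.pi, 3))
print("spanning vector / (2*pi/3a) :", round(span / (2 * np.pi / (3 * a)), 3))
```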
In Polonium, we expect relativistic effects to suppress the trigonal β-Po phase at low temperatures, and instead stabilise the simple cubic α-Po allotrope. This is made possible in the minimal model by including spin-orbit coupling:
H_{SOC} = \lambda_{SOC} \sum_{r,n,n',\sigma,\sigma'} M^{nn'}_{\sigma\sigma'}\, \hat{c}^\dagger_{r,n,\sigma} \hat{c}_{r,n',\sigma'},    (3)
where λ_{SOC} is the overall strength of the spin-orbit coupling, while M contains the matrix elements of the operator \hat{L} \cdot \hat{S} in the basis of states labelled by orbital index n and spin σ.
The full Hamiltonian, combining all terms from equations (1), (2), and (3), and taking arbitrary but realistic values for all model parameters, can be solved numerically within the mean field approximation. This is done by introducing mean field averages corresponding to charge density, bond density, and displacement waves in each of the three lattice directions:
\sum_\sigma \langle \hat{c}^\dagger_{r,n,\sigma} \hat{c}_{r,n,\sigma} \rangle = \rho_0 + A \cos(Q \cdot r + \varphi_n),
\sum_\sigma \langle \hat{c}^\dagger_{r,n,\sigma} \hat{c}_{r+n,n,\sigma} \rangle = \sigma_0 + B \cos(Q \cdot (r + n)/2 + \varphi_n),
\langle \hat{u}_{r,n} \rangle = \tilde{u} \sin(Q \cdot r + \varphi_n).    (4)
Here, A is the mean-field amplitude for the on-site charge density variations, while B corresponds to modulations of the bond densities. The atomic displacement field is given by \tilde{u}. The wave vector Q is equal for all instabilities and is determined by the strongly nested Fermi surface, but the phases ϕ_n differ between density waves in different lattice directions n. Taking ϕ_n = n · 2π/3, the known spiral trigonal lattice structure of Te and Se is recovered for vanishing spin-orbit coupling. This relation can be understood as an optimisation of the competition between Coulomb and electron-phonon interactions [2], and is assumed to hold also for finite values of the spin-orbit coupling. The phonon part of the mean field Hamiltonian can be solved analytically using a Bogoliubov transformation [10], which shows the atomic displacements in the presence of given electronic order parameters A and B to be \tilde{u} = 2\sqrt{3}/(\hbar\omega)\, (2B g^{(1)} - A g^{(2)}). This expression relates the displacement field \tilde{u} to the amplitudes of site-centered and bond-centered charge modulations. Notice that the size of displacements is inversely proportional to the bare phonon frequency. The fermionic part of the mean field Hamiltonian can be written in matrix form and diagonalised numerically for any given value of \tilde{u}. Iterating this procedure eventually yields self-consistent solutions for the displacement \tilde{u} and the density modulations A and B.
Without spin-orbit coupling and at zero temperature, the mean field ground state has a non-zero expectation value for the displacements, and is hence in the spiral trigonal lattice configuration. As the strength of spin-orbit coupling, λ SOC , is increased, a critical point is encountered, beyond which no non-trivial self-consistent solutions exist, as shown in figure 1. Intuitively, the disappearance of the trigonal state at large λ SOC can be understood by realising that it corresponds to a state of combined charge and orbital order. The strong coupling between spin and orbitals destroys the independent orbital degree of freedom, and hence prevents the onset of orbital order. As a result, the simple cubic lattice remains the ground state configuration. Alternatively, the competition between spin orbit coupling and density wave order may be phrased in terms of energetics. Large spin orbit coupling causes the Fermi surface to deform and gaps to open up. This obstructs the formation of charge and orbital order, which depends on having sufficiently nested Fermi surface available for a charge ordering gap to lower the overall electronic energy.
Turning up the temperature
It is known experimentally that polonium undergoes an unusual structural phase transition at about 348 K, where the low temperature simple cubic α−Po lattice structure is reduced in symmetry and becomes the high temperature trigonal β−Po phase [9,11,12]. In order to describe this effect in our minimal model, we include the effect of temperature in two places. First, the mean field expectation values all become thermal expectation values, written for the electronic part of the Hamiltonian in terms of Fermi-Dirac distributions. Secondly, and more importantly, we take into account the fact that thermal expansion of the lattice will cause a lowering of the bare phonon energy. Owing to the relative softness of the material, the change in phonon energy in Po is significant, and cannot be neglected [6].
To describe the dependence of phonon energy on temperature, we first approximate the thermal expansion to be linear, so that the lattice constant at temperature T can be written as a(T ) = a 0 (1 + α∆T ). Here a 0 is the lattice constant at some reference temperature (∆T = 0), and α is the linear thermal expansion coefficient, which we take to be the experimentally determined value α = 23.5 × 10 −6 K −1 , obtained at 298K [13]. Taking α to be fixed while varying the temperature is seen to be a reasonable approximation in the region of interest by comparing it to the volumetric thermal expansion in Po as obtained by first principle calculations [5]. Assuming the phonon energy to depend on the interatomic distance, the expansion of the lattice will cause the bare phonon energies to soften, which we describe by the linear dependence:
\omega = \omega_0 + \gamma\, (a(T) - a_0)/a_0,    (5)
where ω_0 is the energy of the bare phonon at the reference temperature where ∆T = 0 and a(T) = a_0. Fitting equation (5) to experimental data in order to establish the value of γ is prevented by the fact that polonium's strong radioactivity leads to a scarcity of relevant experimental data. A rough estimate of γ ≈ −172 meV can nonetheless be obtained by fitting equation (5) to ab initio studies of phonon energy versus lattice constant, reported in reference [6]. The lattice expansion affects the fermionic part of the mean field calculations through the inverse proportionality of the displacement \tilde{u} on the bare phonon energy. Looking for self-consistent solutions as a function of both temperature and spin-orbit coupling then leads to the phase diagram shown in figure 2. At zero temperature, sufficiently large values of spin-orbit coupling are seen to effectively prevent the simple cubic structure from distorting into a trigonal phase. Raising the temperature lowers the bare phonon energy however, which makes the simple cubic structure more unstable, and hence requires ever larger spin-orbit coupling to prevent it from breaking down. As a result, for any fixed value of the spin-orbit coupling, the lattice may undergo a charge ordering transition into the trigonal charge and orbital ordered state, even if the low temperature phase was simple cubic. This effect is shown once more in figure 3 in terms of the thermal evolution of the order parameter for fixed values of the spin-orbit coupling. Notice that the predicted chirality of the combined charge and orbital ordered phase can in principle be observed in x-ray diffraction or optical activity experiments, while the orbital order itself should yield observable signatures in dedicated STM experiments.

Figure 3: The value of the order parameter B as a function of temperature, at various fixed values of the spin-orbit coupling strength. At low temperature the lattice is simple cubic and order is exponentially suppressed (as indicated by the exponential fits to the data), while high temperatures favour the formation of combined charge and orbital order within a trigonal lattice structure (as shown by the linear fits). The transition into the ordered state, qualitatively indicated by the dotted lines, shifts to progressively higher temperatures for increasing strength of the spin-orbit coupling.
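To get a feeling for the magnitude of the softening implied by equation (5), the values quoted above (ω_0 = 3.5 meV, γ ≈ −172 meV, α = 23.5 × 10⁻⁶ K⁻¹) can be combined directly. The snippet below does so, with the simplifying assumption that ∆T = 0 coincides with the low-temperature reference point.

```python
# Back-of-the-envelope evaluation of Eq. (5) with the parameter values quoted in the text.
# Taking Delta_T = 0 at the low-temperature reference point is an assumption of this sketch.
omega0 = 3.5          # bare phonon energy at the reference temperature (meV)
gamma = -172.0        # slope of phonon energy versus relative lattice expansion (meV)
alpha = 23.5e-6       # linear thermal expansion coefficient (1/K)

for dT in (0.0, 100.0, 200.0, 300.0, 348.0):
    omega = omega0 + gamma * alpha * dT        # Eq. (5) with a(T) = a0 * (1 + alpha * dT)
    print(f"Delta_T = {dT:5.0f} K :  omega = {omega:4.2f} meV "
          f"({100 * (omega - omega0) / omega0:+5.1f} %)")
```

Since the displacement amplitude is inversely proportional to the bare phonon energy, a softening of this order appreciably shifts the balance towards the trigonal distortion as the experimentally observed transition temperature is approached.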
The phase diagram of figure 2 agrees qualitatively with the evolution of lattice structures throughout the family of elemental chalcogens. The spin-orbit coupling in elemental Se and Te is weak enough to place them to the left of the zero-temperature transition point, as indicated schematically by the dashed lines in figure 2. Notice that at extremely high temperatures, the combined charge and orbital order in these crystals may be expected to be destroyed by thermal fluctuations. There is no guarantee however that this will happen below the melting temperature of the material. In fact, there are experimental indications that the short-range coordination in molten elemental Te changes from trigonal to cubic just above its melting temperature [14][15][16]. In contrast, polonium has strong spin-orbit coupling, placing it to the right of the zero-temperature transition, where the thermal evolution going from zero to high temperatures includes a transition from simple cubic to the less symmetric trigonal phase before the melting point is reached. Although probably impractical, further experimental exploration of the phase diagram of figure 2 could in principle be achieved by considering different isotopes of Po, in which the change in atomic mass affects the strength of the spin-orbit coupling.
Conclusions
The unique simple cubic lattice structure of elemental α−Po at ambient conditions, as well as its unusual symmetry-lowering structural transition towards β−Po at elevated temperatures, can be qualitatively understood in terms of the minimal microscopic model presented here. That the lattice structures and phase diagrams of the isoelectronic elements Se and Te can be understood within the same model without any additional assumptions, firmly establishes the fact that it captures the essential physics in the description of crystalline elemental chalcogens.
The simple cubic ground state of polonium is found in this model to be of a deceptive simplicity. The electronic structure consists of well-nested pieces of Fermi surface, which in the presence of electron-phonon coupling inevitably lead to large peaks in the electronic susceptibility and hence an incipient structural instability. The fact that three separate instabilities loom in three distinct orbital sectors, coupled together by Coulomb interactions, yields a preferred trigonal configuration of the lattice, corresponding to a combined charge and orbital ordered state. This novel type of order is in fact realised in Se and Te, which have spiral trigonal lattice structures at all temperatures. In polonium however, the additional presence of strong spin-orbit coupling competes with the onset of charge and orbital order, which can be understood either in terms of the orbital degree of freedom becoming obsolete, or in terms of decreased nesting due to gaps opening up at the Fermi energy. The spin-orbit coupling thus prevents the simple cubic lattice from becoming unstable. At elevated temperatures, the balance is once again shifted in favour of the structural instability, by the softening of phonon energies as the lattice expands. The result is a re-emergence of the spiral trigonal state, but now at high temperatures, sitting above a more symmetric low-temperature simple cubic phase.
The elements Selenium, Tellurium, and Polonium, thus emerge as crystals in which an intricate balance between all possible degrees of freedom, orbitals, charge, spin, and atomic displacements, determines the structure of the atomic lattice. The fact that multiple degrees of freedom cooperate and compete with each other profoundly affects the physics of these deceptively simple materials, as can be clearly seen from the phase diagram across the family of chalcogens. Spin-orbit coupling competes with the onset of a cooperative charge and orbital ordered phase. This can be undone at high temperatures, but rather than thermal fluctuations determining the evolution of the phase diagram along the temperature axis, it is the indirect effect coming from the softening of phonons upon thermal expansion that shifts the balance of power between competing ingredients. That such a complex interplay can nonetheless be understood in terms of a simple minimal model, puts forward the family of chalcogens as a textbook case for understanding the possible effects of competition, co-existence, and cooperation among spin, charge, orbital, and lattice degrees of freedom.
Funding information J.v.W. acknowledges support from VIDI grant 680-47-528, financed by the Netherlands Organisation for Scientific Research (NWO).
A Appendix: mean-field Hamiltonian
For completeness we present the mean-field Hamiltonian as obtained from equations (1)- (4).
A momentum space basis may be defined as (p_{x\uparrow}(k), p_{x\downarrow}(k), p_{y\uparrow}(k), \dots, p_{x\uparrow}(k + Q), p_{x\downarrow}(k + Q), \dots, p_{x\uparrow}(k - Q), \dots). Here, the first index runs over the three types of p-orbitals, the second is a spin index, and the momentum is taken to lie within the reduced Brillouin zone. The Hamiltonian then has diagonal elements equal to 2t(cos(k) − µ) for the first six elements, 2t(cos(k + Q) − µ) for the next six, and 2t(cos(k − Q) − µ) for the final ones. The Coulomb interaction appears as elements of the form V A\,(e^{\pm i\varphi_a} + e^{\pm i\varphi_b}), connecting states with like orbitals and spins in different momentum sectors. The indices a and b correspond to the two orbital orientations different from the one of the states connected by this element. The electron-phonon coupling may be written as −4 e^{\pm i\varphi_a}\,[g^{(2)} A \sin(Q) − 2 g^{(1)} B \sin(Q/2)]\,[g^{(2)} \sin(Q) + g^{(1)}(\sin(k) − \sin(k'))]/(\hbar\omega). This term also connects states with like orbitals and spins, but different momenta. The index a corresponds to the orbital index of the element under consideration, and the momenta k and k' are the momenta being connected. Finally, the spin orbit coupling acts within a momentum sector, and is of the form:
\hat{H}_{SOC} = \lambda_{SOC}
\begin{pmatrix}
0 & 0 & -i & 0 & 0 & 1 \\
0 & 0 & 0 & i & -1 & 0 \\
i & 0 & 0 & 0 & 0 & -i \\
0 & -i & 0 & 0 & -i & 0 \\
0 & -1 & 0 & i & 0 & 0 \\
1 & 0 & i & 0 & 0 & 0
\end{pmatrix}.
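As a consistency check of the block quoted above, note that in the basis (p_x↑, p_x↓, p_y↑, p_y↓, p_z↑, p_z↓) it coincides with Σ_a L_a ⊗ σ_a, i.e. 2 L·S with S = σ/2 and ℏ = 1, so its spectrum consists of a fourfold j = 3/2 level and a twofold j = 1/2 level. The short script below verifies this numerically.

```python
import numpy as np

# Check that the displayed spin-orbit block equals sum_a L_a (x) sigma_a in the basis
# (p_x up, p_x dn, p_y up, p_y dn, p_z up, p_z dn), and inspect its eigenvalues.
M = np.array([[ 0,   0, -1j,  0,   0,   1],
              [ 0,   0,  0,   1j, -1,   0],
              [ 1j,  0,  0,   0,   0,  -1j],
              [ 0,  -1j, 0,   0,  -1j,  0],
              [ 0,  -1,  0,   1j,  0,   0],
              [ 1,   0,  1j,  0,   0,   0]], dtype=complex)

# Orbital angular momentum l = 1 in the real {p_x, p_y, p_z} basis: (L_a)_{bc} = -i * eps_{abc}
eps = np.zeros((3, 3, 3))
for (i, j, k), s in [((0, 1, 2), 1), ((1, 2, 0), 1), ((2, 0, 1), 1),
                     ((0, 2, 1), -1), ((2, 1, 0), -1), ((1, 0, 2), -1)]:
    eps[i, j, k] = s
L = [-1j * eps[a] for a in range(3)]
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

two_LS = sum(np.kron(L[a], sigma[a]) for a in range(3))     # 2 L.S with S = sigma/2, hbar = 1

print("matches sum_a L_a x sigma_a :", np.allclose(M, two_LS))
print("Hermitian                   :", np.allclose(M, M.conj().T))
print("eigenvalues                 :", np.round(np.linalg.eigvalsh(M), 3))
```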
Figure 1: The value of the bond density order parameter B, as a function of the strength of spin-orbit coupling λ_SOC, which is measured in units of the bandwidth t. The points are self-consistent numerical solutions of the set of equations in Eq. (4), while the solid line connecting them is a guide to the eye only. The unordered state at high λ_SOC/t corresponds to the simple cubic lattice structure of α−Po, as shown in the right inset. In the ordered state, the planes perpendicular to the cube's body diagonal move closer together, by means of a contraction of the thick bonds shown in blue in the left inset. The result is the spiral trigonal lattice structure known to be realised in Se and Te. Because the structural transition is the result of three simultaneous density wave instabilities, each occurring in chains of distinct orbitals, the trigonal state necessarily is also an orbital ordered state. The least occupied orbitals in each trigonal plane are highlighted in the left inset.
Figure 2: Phase diagram for elemental chalcogens as a function of temperature and strength of spin-orbit coupling. A. Schematic phase diagram. At low temperatures, increasing the strength of spin orbit coupling leads to a suppression of the trigonal instability, and hence a stabilisation of the simple cubic lattice. Polonium is expected to fall just to the right of the transition point, and thus to have a simple cubic ground state, while Selenium and Tellurium have low spin-orbit interaction, and thus a trigonal ground state structure. At fixed spin-orbit coupling, starting from the trigonal phase, the melting point (schematically indicated by the red dashed line) is encountered before charge and orbital order is destroyed and the local structure becomes cubic. Starting instead from the simple cubic phase, thermal expansion of the lattice lowers the bare phonon energy and thus shifts the balance of competing interactions in favour of the trigonal phase. B. The transition temperatures between cubic (lower right) and trigonal (upper left) phases, found by self-consistently solving the mean field equations. The error bars indicate the uncertainty in assigning the transition point within the precision of our numerical routine, and the solid line is a guide to the eye.
Structure and Conductivity in the VIb Group of the Periodic System. A , 10.1063/1.1746893J. Chem. Physs. 16372A. von Hippel, Structure and Conductivity in the VIb Group of the Periodic System, J. Chem. Physs 16, 372 (1948), doi:10.1063/1.1746893.
Elemental chalcogens as a minimal model for chiral charge and orbital order. A Silva, J Henke, J Van Wezel, 10.1103/PhysRevB.97.045151Phys. Rev. B. 9745151A. Silva, J. Henke and J. van Wezel, Elemental chalcogens as a minimal model for chiral charge and orbital order, Phys. Rev. B 97, 045151 (2018), doi:10.1103/PhysRevB.97.045151.
Origin of the stabilized simple-cubic structure in polonium: Spin-orbit interaction versus Peierls instability. B I Min, J H Shim, M Park, K Kim, S K Kwon, S J Youn, 10.1103/PhysRevB.73.132102Phys. Rev. B. 73132102B. I. Min, J. H. Shim, M. Sik Park, K. Kim, S. K. Kwon and S. J. Youn, Origin of the stabilized simple-cubic structure in polonium: Spin-orbit interaction versus Peierls instability, Phys. Rev. B 73, 132102 (2006), doi:10.1103/PhysRevB.73.132102.
Why Is Polonium Simple Cubic and So Highly Anisotropic?. D Legut, M Friák, M Šob, 10.1103/PhysRevLett.99.016402Phys. Rev. Lett. 9916402D. Legut, M. Friák and M. Šob, Why Is Polonium Simple Cubic and So Highly Anisotropic?, Phys. Rev. Lett. 99, 016402 (2007), doi:10.1103/PhysRevLett.99.016402.
Phases of Polonium via Density Functional Theory. M J Verstraete, 10.1103/PhysRevLett.104.035501Phys. Rev. Lett. 10435501M. J. Verstraete, Phases of Polonium via Density Functional Theory, Phys. Rev. Lett. 104, 035501 (2010), doi:10.1103/PhysRevLett.104.035501.
Phonon softening and superconductivity triggered by spin-orbit coupling in simple-cubic α-polonium crystals. C Kang, K Kim, B I Min, 10.1103/PhysRevB.86.054115Phys. Rev. B. 8654115C. Kang, K. Kim and B. I. Min, Phonon softening and superconductivity triggered by spin-orbit coupling in simple-cubic α-polonium crystals, Phys. Rev. B 86, 054115 (2012), doi:10.1103/PhysRevB.86.054115.
Comment on "why is polonium simple cubic and so highly anisotropic?. K Kim, H C Choi, B I Min, 10.1103/PhysRevLett.102.079701Phys. Rev. Lett. 10279701K. Kim, H. C. Choi and B. I. Min., Comment on "why is polonium sim- ple cubic and so highly anisotropic?", Phys. Rev. Lett. 102, 079701 (2007), doi:10.1103/PhysRevLett.102.079701.
Relativistically parametrized extended Hueckel calculations. 11. Energy bands for elemental tellurium and polonium. L L Lohr, 10.1021/ic00259a039Inorg. Chem. 26L. L. Lohr, Relativistically parametrized extended Hueckel calculations. 11. Energy bands for elemental tellurium and polonium, Inorg. Chem. 26, 2005 (1987), doi:10.1021/ic00259a039.
Physical Properties of Polonium. II. X-Ray Studies and Crystal Structure. W H Beamer, C R Maxwell, 10.1063/1.1747155J. Chem. Phys. 171293W. H. Beamer and C. R. Maxwell, Physical Properties of Polonium. II. X-Ray Studies and Crystal Structure, J. Chem. Phys 17, 1293 (1949), doi:10.1063/1.1747155.
Exciton-phonon-driven charge density wave in TiSe 2. J Van Wezel, P Nahai-Williamson, S S Saxena, 10.1103/PhysRevB.81.165109Phys. Rev. B. 81165109J. van Wezel, P. Nahai-Williamson and S. S. Saxena, Exciton-phonon-driven charge density wave in TiSe 2 , Phys. Rev. B 81, 165109 (2010), doi:10.1103/PhysRevB.81.165109.
Physical Properties of Polonium. I. Melting Point, Electrical Resistance, Density, and Allotropy. C R Maxwell, 10.1063/1.1747154J. Chem. Phys. 171288C. R. Maxwell, Physical Properties of Polonium. I. Melting Point, Electrical Resistance, Den- sity, and Allotropy, J. Chem. Phys 17, 1288 (1949), doi:10.1063/1.1747154.
The structures of polonium and its compounds-I α and β polonium metal. R J Desando, R C Lange, 10.1016/0022-1902(66)80270-1J. Inorg. Nucl. Chem. 2866R. J. DeSando and R. C. Lange, The structures of polonium and its compounds-I α and β polonium metal, J. Inorg. Nucl. Chem. 28, 1837 (1966), doi:10.1016/0022- 1902(66)80270-1.
D Lide, CRC, Handbook of Chemistry and Physics. Boca Raton, FL86D. Lide, ed., CRC, Handbook of Chemistry and Physics, CRC, Boca Raton, FL, 86 edn. (2005).
Short range order in amorphous and liquid Se 1−x Te x systems. R Bellissent, G Tourand, 10.1016/0022-3093(80)90364-6J. Non-Crys. Sol. 35R. Bellissent and G. Tourand, Short range order in amorphous and liquid Se 1−x Te x systems, J. Non-Crys. Sol. 35, 1221 (1980), doi:10.1016/0022-3093(80)90364-6.
X-ray diffraction measurements for expanded fluid selenium up to the metallic region. M Inui, T Noda, K Tamura, 10.1016/S0022-3093(96)00378-XJ. Non-Crys. Sol. 26196378M. Inui, T. Noda and K. Tamura, X-ray diffraction measurements for expanded fluid sele- nium up to the metallic region, J. Non-Crys. Sol. 261, 205 (1996), doi:10.1016/S0022- 3093(96)00378-X.
Study of the structure of liquid tellurium by neutron diffraction near the melting temperature. G Tourand, 10.1016/0375-9601(75)90168-1Phys. Lett. A. 54G. Tourand, Study of the structure of liquid tellurium by neutron diffraction near the melting temperature, Phys. Lett. A 54, 209 (1975), doi:10.1016/0375-9601(75)90168-1.
| []
|
[
"Magnetic tunnel junctions with impurities",
"Magnetic tunnel junctions with impurities"
]
| [
"F Kanjouri \nDepartment of Physics\nMoscow Lomonosov University\n119899MoscowRussia\n\nDepartment of Physics\nYazd University\nYazdIran\n",
"N Ryzhanova \nDepartment of Physics\nMoscow Lomonosov University\n119899MoscowRussia\n\nD?partement de Recherche Fondamentale sur la Mati?re Condens?e\nSPINTEC, Unit? de Recherche Associ?e 2512\nCEA/CNRS\nCEA/Grenoble\n38054Grenoble CedexFrance\n",
"B Dieny \nD?partement de Recherche Fondamentale sur la Mati?re Condens?e\nSPINTEC, Unit? de Recherche Associ?e 2512\nCEA/CNRS\nCEA/Grenoble\n38054Grenoble CedexFrance\n",
"N Strelkov \nDepartment of Physics\nMoscow Lomonosov University\n119899MoscowRussia\n\nD?partement de Recherche Fondamentale sur la Mati?re Condens?e\nSPINTEC, Unit? de Recherche Associ?e 2512\nCEA/CNRS\nCEA/Grenoble\n38054Grenoble CedexFrance\n",
"A Vedyayev \nDepartment of Physics\nMoscow Lomonosov University\n119899MoscowRussia\n\nD?partement de Recherche Fondamentale sur la Mati?re Condens?e\nSPINTEC, Unit? de Recherche Associ?e 2512\nCEA/CNRS\nCEA/Grenoble\n38054Grenoble CedexFrance\n"
]
| [
"Department of Physics\nMoscow Lomonosov University\n119899MoscowRussia",
"Department of Physics\nYazd University\nYazdIran",
"Department of Physics\nMoscow Lomonosov University\n119899MoscowRussia",
"D?partement de Recherche Fondamentale sur la Mati?re Condens?e\nSPINTEC, Unit? de Recherche Associ?e 2512\nCEA/CNRS\nCEA/Grenoble\n38054Grenoble CedexFrance",
"D?partement de Recherche Fondamentale sur la Mati?re Condens?e\nSPINTEC, Unit? de Recherche Associ?e 2512\nCEA/CNRS\nCEA/Grenoble\n38054Grenoble CedexFrance",
"Department of Physics\nMoscow Lomonosov University\n119899MoscowRussia",
"D?partement de Recherche Fondamentale sur la Mati?re Condens?e\nSPINTEC, Unit? de Recherche Associ?e 2512\nCEA/CNRS\nCEA/Grenoble\n38054Grenoble CedexFrance",
"Department of Physics\nMoscow Lomonosov University\n119899MoscowRussia",
"D?partement de Recherche Fondamentale sur la Mati?re Condens?e\nSPINTEC, Unit? de Recherche Associ?e 2512\nCEA/CNRS\nCEA/Grenoble\n38054Grenoble CedexFrance"
]
| []
| The influence on the I-V characteristics and tunnel magnetoresistance (TMR), of impurities embedded into the insulating barrier (I)separating the two ferromagnetic electrodes (F) of a magnetic tunnel junction, was theoretically investigated. When the energy of the electron's bound state at the impurity site is close to the Fermi energy, it is shown that the current and TMR are strongly enhanced in the vicinity of the impurity. If the position of the impurity inside the barrier is asymmetric, e.g. closer to one of the interfaces F/I, the I-V characteristic exhibits a quasidiode behavior. The case of a single impurity and of a random distribution of impurities within a plane were both studied. | 10.1063/1.1997294 | [
"https://arxiv.org/pdf/cond-mat/0412351v1.pdf"
]
| 119,011,359 | cond-mat/0412351 | e6c71cc347e6b1e6c8a1dbad99ebd280dd83594e |
Magnetic tunnel junctions with impurities
14 Dec 2004
F Kanjouri
Department of Physics
Moscow Lomonosov University
119899MoscowRussia
Department of Physics
Yazd University
YazdIran
N Ryzhanova
Department of Physics
Moscow Lomonosov University
119899MoscowRussia
Département de Recherche Fondamentale sur la Matière Condensée
SPINTEC, Unité de Recherche Associée 2512
CEA/CNRS
CEA/Grenoble
38054Grenoble CedexFrance
B Dieny
D?partement de Recherche Fondamentale sur la Mati?re Condens?e
SPINTEC, Unit? de Recherche Associ?e 2512
CEA/CNRS
CEA/Grenoble
38054Grenoble CedexFrance
N Strelkov
Department of Physics
Moscow Lomonosov University
119899MoscowRussia
Département de Recherche Fondamentale sur la Matière Condensée
SPINTEC, Unité de Recherche Associée 2512
CEA/CNRS
CEA/Grenoble
38054Grenoble CedexFrance
A Vedyayev
Department of Physics
Moscow Lomonosov University
119899MoscowRussia
Département de Recherche Fondamentale sur la Matière Condensée
SPINTEC, Unité de Recherche Associée 2512
CEA/CNRS
CEA/Grenoble
38054Grenoble CedexFrance
Magnetic tunnel junctions with impurities
14 Dec 2004
The influence on the I-V characteristics and tunnel magnetoresistance (TMR), of impurities embedded into the insulating barrier (I) separating the two ferromagnetic electrodes (F) of a magnetic tunnel junction, was theoretically investigated. When the energy of the electron's bound state at the impurity site is close to the Fermi energy, it is shown that the current and TMR are strongly enhanced in the vicinity of the impurity. If the position of the impurity inside the barrier is asymmetric, e.g. closer to one of the interfaces F/I, the I-V characteristic exhibits a quasidiode behavior. The case of a single impurity and of a random distribution of impurities within a plane were both studied.
Magnetic tunnel junctions (MTJ) consist of two metallic ferromagnetic electrodes separated by an insulating barrier. They typically exhibit tunnel magnetoresistance (TMR) of the order of 50% associated with a change in the relative orientation of the magnetization in the two ferromagnetic electrodes. They attract a lot of attention [1,2,3], especially due to their applications in several spin-electronic devices, in particular in non-volatile MRAM (Magnetic Random Access Memory). In a pioneering paper [4], a theory of TMR for an ideal MTJ (without defects) was developed. Later on, it has been shown [5,6] that the presence of different types of defects within the barrier can dramatically affect the I-V characteristics and TMR amplitude. In these papers, the current, averaged over the cross-section of the system, was calculated. However, it is also interesting to investigate the local current density and TMR in the vicinity of the impurity. From an experimental point of view, this is achievable by using a conductive Atomic Force Microscopy approach as realized for instance in the following reference [? ] where the authors mapped out the spatial variations of the I(V) characteristics through a tunnel barrier. From a theoretical point of view, a theory of local impurity assisted tunnelling in MTJ was recently developed [8]. A tight-binding model and the Kubo formalism were used to calculate the spin-dependent tunnel current through the MTJ.
In this earlier paper, the I-V characteristics were not investigated in detail. Furthermore, the dependence of spin-dependent current on the position of cross section plane relative to the position of impurity was not calculated.
In the present paper, we report on a theoretical study of the spatial distribution of spin-dependent current across the plane of a magnetic tunnel junction. The local I-V characteristics as well as the local TMR amplitude are calculated for a single impurity and for a random planar distribution of impurities inside the barrier. In this theory, we adopted the free electron model with exchange splitting for the ferromagnetic electrodes and used the nonequilibrium Keldysh technique [9] to calculate the transport properties which are nonlinear functions of the applied voltage.
The MTJ is described as a three layers system, consisting of two thick ferromagnetic electrodes F separated by an insulating layer, I. Inside the barrier, a single nonmagnetic impurity with attracting potential is located at a given distance from the F/I interface.
The two cases of parallel and antiparallel orientations of the F-layers magnetization were investigated.
The F-electrodes are connected to the reservoirs with chemical potentials µ 1 and µ 2 so
that µ 2 − µ 1 = eV , where V is the applied voltage.
To calculate the current through the system, the Keldysh Green function G −+ and advanced and retarded Green functions G A and G R must be calculated. By solving the Dyson equation, we found that
G^{-+}(r, r') = G_0^{-+}(r, r') + \frac{G_0^R(r, r_0)\, W\, G_0^{-+}(r_0, r')}{1 - W G_0^R(r_0, r_0)} + \frac{G_0^{-+}(r, r_0)\, W\, G_0^A(r_0, r')}{1 - W G_0^A(r_0, r_0)} + \frac{G_0^R(r, r_0)\, W\, G_0^{-+}(r_0, r_0)\, W\, G_0^A(r_0, r')}{\left(1 - W G_0^R(r_0, r_0)\right)\left(1 - W G_0^A(r_0, r_0)\right)}    (1)
where G −+ 0 (r, r ′ ), G A 0 (r, r ′ ) and G R 0 (r, r ′ ) are the Green's functions for the system in the absence of the impurity and the potential of the impurity V was represented as a δ-function:
V(r) = W a_0^3\, \delta(z - z_0)\, \delta(\rho - \rho_0), \qquad r_0 = (\rho_0, z_0)
is the position of the impurity, a 0 is its effective radius, W is its amplitude. The explicit expressions for G A , G R , G −+ have the following forms:
G_0^R(r, r') = \int d^2\kappa\, \frac{(-1)\, e^{-i\kappa\cdot(\rho - \rho')}}{2\sqrt{q(z)\, q(z')}\, \mathrm{den}} \left\{ E(z_2, z)\left[q(z_2) + i k_2\right] + E^{-1}(z_2, z)\left[q(z_2) - i k_2\right] \right\} \times \left\{ E(z', z_1)\left[q(z_1) + i k_1\right] + E^{-1}(z', z_1)\left[q(z_1) - i k_1\right] \right\},    (2)
G_0^A(r, r') = \int d^2\kappa\, \frac{(-1)\, e^{i\kappa\cdot(\rho - \rho')}}{2\sqrt{q(z)\, q(z')}\, \mathrm{den}^*} \left\{ E(z_2, z)\left[q(z_2) - i k_2\right] + E^{-1}(z_2, z)\left[q(z_2) + i k_2\right] \right\} \times \left\{ E(z', z_1)\left[q(z_1) - i k_1\right] + E^{-1}(z', z_1)\left[q(z_1) + i k_1\right] \right\},    (3)
G_0^{-+}(r, r') = \int d^2\kappa\, \frac{i\, 4 k_1\, q(z_1)\, n_L\, e^{-i\kappa\cdot(\rho - \rho')}}{\sqrt{q(z)\, q(z')}\, |\mathrm{den}|^2} \left\{ E(z', z_2)\left[q(z_2) + i k_2\right] + E^{-1}(z', z_2)\left[q(z_2) - i k_2\right] \right\} \times \left\{ E(z, z_2)\left[q(z_2) - i k_2\right] + E^{-1}(z, z_2)\left[q(z_2) + i k_2\right] \right\}
+ \int d^2\kappa\, \frac{i\, 4 k_2\, q(z_2)\, n_R\, e^{-i\kappa\cdot(\rho - \rho')}}{\sqrt{q(z)\, q(z')}\, |\mathrm{den}|^2} \left\{ E(z_1, z')\left[q(z_1) + i k_1\right] + E^{-1}(z_1, z')\left[q(z_1) - i k_1\right] \right\} \times \left\{ E(z_1, z)\left[q(z_1) - i k_1\right] + E^{-1}(z_1, z)\left[q(z_1) + i k_1\right] \right\},    (4)
where
q(z) = \sqrt{\,q_0^2 + \kappa^2 - \frac{2m}{\hbar^2}\,\frac{(z - z_1)}{(z_2 - z_1)}\, eV\,}, \qquad k_1 = \sqrt{\frac{2m}{\hbar^2}(\varepsilon - \Delta_1) - \kappa^2}, \qquad k_2 = \sqrt{\frac{2m}{\hbar^2}(\varepsilon - \Delta_2 + eV) - \kappa^2},
\mathrm{den} = E(z_1, z_2)\,[q(z_2) - i k_2]\,[q(z_1) - i k_1] - E^{-1}(z_1, z_2)\,[q(z_2) + i k_2]\,[q(z_1) + i k_1], \qquad E(z_1, z_2) \equiv e^{\int_{z_1}^{z_2} q(\tau)\, d\tau},
κ is the in-plane electron momentum, ε is the energy, z_1 and z_2 are the positions of the F/I interfaces, and ∆_1 and ∆_2 denote the positions of the bottom of the energy band for the spin-up and spin-down subbands. n_L = f_0(ε) and n_R = f_0(ε + eV) are the Fermi distribution functions in the left and right reservoirs, and \hbar^2 q_0^2/2m is the height of the potential barrier above the Fermi level. In (1), (2), (3) and (4), ρ and z are the in-plane and perpendicular-to-the-plane coordinates, and we consider that z and z_0 are situated within the barrier. We have to take into account that all Green functions are matrices in spin space; k^{\uparrow}_{1F}, k^{\downarrow}_{1F}, k^{\uparrow}_{2F}, k^{\downarrow}_{2F} are the Fermi wave vectors of electrons with spin ↑ (↓) in the left and right F-electrodes. The current was calculated using the following expression:
j_z(\rho, z) = \frac{e\hbar}{2m} \int d\varepsilon \left[ \frac{\partial G^{-+}(z, \rho; z', \rho)}{\partial z'} - \frac{\partial G^{-+}(z, \rho; z', \rho)}{\partial z} \right]_{z = z'}    (5)
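Before turning to the results, it is worth noting that Eq. (1) is the familiar single-site (T-matrix) resummation, which can be verified on any toy Hamiltonian perturbed by one on-site potential. In the check below, the chain size, impurity strength and energy are arbitrary illustrative choices and have nothing to do with the barrier Green's functions of Eqs. (2)-(4).

```python
import numpy as np

# Toy check of the single-impurity (T-matrix) resummation underlying Eq. (1):
# for H = H0 + W |j0><j0| the exact resolvent equals
#   G = G0 + G0[:, j0] * W * G0[j0, :] / (1 - W * G0[j0, j0]).
N, W, j0 = 12, 1.7, 4
H0 = np.diag(np.ones(N - 1), 1)
H0 = H0 + H0.T                                   # nearest-neighbour toy chain
E = 0.3 + 1e-3j                                  # energy with a small broadening

G0 = np.linalg.inv(E * np.eye(N) - H0)
Vimp = np.zeros((N, N)); Vimp[j0, j0] = W
G_exact = np.linalg.inv(E * np.eye(N) - H0 - Vimp)
G_resum = G0 + np.outer(G0[:, j0], G0[j0, :]) * W / (1.0 - W * G0[j0, j0])

print("resummed expression reproduces exact G:", np.allclose(G_exact, G_resum))
```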
In Fig.1 and Fig.2, the dependencies of the currents in different channels (up and down spin) on coordinate ρ − ρ 0 at one interface I/F (z 2 = 15Å) (another interface is at z 1 = 0) and inside the barrier at z = 10 are shown. In this calculation, the impurity is assumed to be positioned at ρ 0 = 0 and z 0 = 5Å.
In the vicinity of the impurity, a hot spot of radius approximately equal to 6 Å may be observed. The current density in the center of the hot spot exceeds the value of the background current by several orders of magnitude. In Fig. 3, the TMR dependence on the distance from the impurity at different z is shown. It is interesting that the value of TMR in the vicinity of the impurity exceeds its background value (TMR for the ideal structure is equal to 0.013) by more than an order of magnitude. Furthermore, in some cases, regions of ρ − ρ_0 exist in which the TMR becomes negative. Fig. 4 shows the I-V characteristics for positive and negative applied voltage. These curves are quite asymmetric with respect to the sign of the voltage. This asymmetry is related to the asymmetric positioning of the impurity inside of the barrier. It is particularly pronounced if the potential of the impurity is chosen so that the bound (resonance) state of electrons with spin up is located near the Fermi energy for the positive applied voltage of 1.2 V, and if this bound state lies below the Fermi energy for negative voltage. This diode behavior was demonstrated so far in the case of a single impurity. We next investigate the case of a finite concentration of impurities.
In this case, we consider the same magnetic tunnel barrier structure with a monolayer of impurities of finite atomic concentration x, situated closer to one of the F/I interfaces.
To solve the problem, as a first step, we have to find the coherent potential and effective Keldysh Green function G −+ eff . By solving the Dyson equation in the Keldysh space, the following expression was obtained for G −+AP ↑↑ :
G^{-+}(z, z') = G_0^{-+}(z, z') + \frac{G_0^{-+}(z, z_0)\, \Sigma^A\, G_0^A(z_0, z')}{1 - G_0^A(z_0, z_0)\, \Sigma^A} + \frac{G_0^R(z, z_0)\, \Sigma^R\, G_0^{-+}(z_0, z')}{1 - G_0^R(z_0, z_0)\, \Sigma^R} - \frac{G_0^R(z, z_0)\, \Sigma^{-+}\, G_0^A(z_0, z')}{\left(1 - G_0^A(z_0, z_0)\, \Sigma^A\right)\left(1 - G_0^R(z_0, z_0)\, \Sigma^R\right)} + \frac{G_0^R(z, z_0)\, \Sigma^R\, G_0^{-+}(z_0, z_0)\, \Sigma^A\, G_0^A(z_0, z')}{\left(1 - G_0^A(z_0, z_0)\, \Sigma^A\right)\left(1 - G_0^R(z_0, z_0)\, \Sigma^R\right)}    (6)
where Σ R(A) are the coherent potentials (C.P.) for the retarded and advanced Green functions, which have to be found from the C.P.A equation:
t = (1 - x)\, \frac{\varepsilon_A - \Sigma}{1 - (\varepsilon_A - \Sigma)\, G_{\mathrm{eff}}(z_0, \rho_0; z_0, \rho_0)} + x\, \frac{\varepsilon_B - \Sigma}{1 - (\varepsilon_B - \Sigma)\, G_{\mathrm{eff}}(z_0, \rho_0; z_0, \rho_0)} = 0    (7)
where ε_A and ε_B are the onsite energies of the host (Al_2O_3) and the impurity (Al), and \Sigma^{-+} = \frac{i}{2}(n_R + n_L)(\Sigma^R - \Sigma^A). Now to calculate the I-V characteristic, we can use the previously found expression for G^{-+,P(AP)}_{\alpha\alpha} and substitute it into the expression (5).
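Equation (7) has to be solved self-consistently for Σ. The sketch below shows how such a CPA condition is typically iterated numerically for a binary alloy; the host Green's function used here is that of a simple one-dimensional tight-binding band and merely stands in for the actual barrier Green's function G_eff of Eq. (6), and all parameter values are illustrative.

```python
import numpy as np

# Toy CPA iteration for a binary alloy A_{1-x}B_x. The condition t = 0 of Eq. (7) is
# equivalent to Sigma = (1-x)*eps_A + x*eps_B - (eps_A - Sigma) * G(Sigma) * (eps_B - Sigma),
# which is iterated with simple mixing. G(Sigma) is a stand-in host Green's function.
eps_A, eps_B, x = 0.0, 0.8, 0.5
t_hop, eta, E = 0.5, 1e-2, 0.3
k = np.linspace(-np.pi, np.pi, 4001, endpoint=False)
band = 2.0 * t_hop * np.cos(k)

def G_loc(sigma):
    """Local Green's function of the effective medium at energy E."""
    return np.mean(1.0 / (E + 1j * eta - band - sigma))

sigma = (1 - x) * eps_A + x * eps_B               # virtual-crystal starting guess
for _ in range(300):
    G = G_loc(sigma)
    sigma_new = (1 - x) * eps_A + x * eps_B - (eps_A - sigma) * G * (eps_B - sigma)
    sigma = 0.5 * sigma + 0.5 * sigma_new         # damped update

# verify that the averaged single-site t-matrix indeed vanishes at the fixed point
G = G_loc(sigma)
t_avg = ((1 - x) * (eps_A - sigma) / (1 - (eps_A - sigma) * G)
         + x * (eps_B - sigma) / (1 - (eps_B - sigma) * G))
print("Sigma =", np.round(sigma, 5), "   t_avg =", np.round(t_avg, 8))
```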
In Fig.5, the I-V characteristic in the AP configuration is shown. An asymmetry of the curve on the sign of the applied voltage is clearly visible.
Such a structure may be prepared for instance by sputtering a thin layer of Al on the bottom F-electrode, which is then oxidised into alumina. Thereafter, a second thicker layer of Al is sputtered on the already formed alumina barrier, but this second layer is subsequently underoxidized so that a thin layer of the random alloy Al_x(Al_2O_3)_{1−x} remains inside the more or less ideal insulator Al_2O_3 at an asymmetric location within the barrier.
The work was partly supported by the Russian fund of fundamental research (grant N 04-02-16688a). AV, NR and NS thank CEA for financial support during their stay.
FIG. 1: Dependence of the current for different spin channels and P and AP configurations on the distance from the impurity in the plane of the structure at z = 15 Å. k^↑_F = 1.1 Å⁻¹, k^↓_F = 0.6 Å⁻¹, q_0 = 1.0 Å⁻¹.
FIG. 3: Dependence of TMR on the distance from the impurity in the plane of the structure at different z. For parameters see Fig. 1.

FIG. 4: Local I-V curve at ρ = ρ_0 and z = 15 Å for the case of a single impurity.

FIG. 5: I-V curve in the case of the layer of impurities at z_0 = 3 Å and x = 0.5.
. J S Moodera, L R Kinder, T M Wong, R Meservey, Phs. Rev. Lett. 743273J. S. Moodera, L. R. Kinder, T. M. Wong, and R. Meservey, Phs. Rev. Lett. 74, 3273 (1995).
S S P Parkin, Spin Dependent Transport in Magnetic Nanostrctures. Taylor & FrancisS. S. P. Parkin, Spin Dependent Transport in Magnetic Nanostrctures (Taylor & Francis, 2002).
. J S Moodera, G Mathon, JMMM. 200248J. S. Moodera and G. Mathon, JMMM 200, 248 (1999).
. J C Slonczewski, Phys. Rev. B. 396995J. C. Slonczewski, Phys. Rev. B 39, 6995 (1989).
. E Y Tsymbal, D G Pettifor, Phys. Rev. B. 58432E. Y. Tsymbal and D. G. Pettifor, Phys. Rev. B 58, 432 (1998).
. A Vedyayev, D Bagrets, A Bagrets, B Dieny, Phys. Rev. B. 63A. Vedyayev, D. Bagrets, A. Bagrets, and B. Dieny, Phys. Rev. B 63, 064429.1 (2001).
. V D Costa, Y Herry, F Bordon, M Romeo, K Ounadjela, Euro. Phys. J. 13297V. D. Costa, Y. Herry, F. Bordon, M. Romeo, and K. Ounadjela, Euro. Phys. J. B13, 297 (2000).
. E Y Tsymbal, D G Pettifor, Phys. Rev. B. 64212401E. Y. Tsymbal and D. G. Pettifor, Phys. Rev. B. 64, 212401 (2001).
. L V Keldysh, JETP. 201018L. V. Keldysh, JETP 20, 1018 (1965).
| []
|
[
"A duality framework for generalization analysis of random feature models and two-layer neural networks",
"A duality framework for generalization analysis of random feature models and two-layer neural networks"
]
| [
"Hongrui Chen ",
"Jihao Long ",
"Lei Wu "
]
| []
| []
| We consider the problem of learning functions in the F p,π and Barron spaces, which are natural function spaces that arise in the high-dimensional analysis of random feature models (RFMs) and two-layer neural networks. Through a duality analysis, we reveal that the approximation and estimation of these spaces can be considered equivalent in a certain sense. This enables us to focus on the easier problem of approximation and estimation when studying the generalization of both models. The dual equivalence is established by defining an information-based complexity that can effectively control estimation errors. Additionally, we demonstrate the flexibility of our duality framework through comprehensive analyses of two concrete applications.• The first application is to study learning functions in F p,π with RFMs. We prove that the learning does not suffer from the curse of dimensionality as long as p > 1, implying RFMs can work beyond the kernel regime. Our analysis extends existing results [CMM21] to the noisy case and removes the requirement of overparameterization.• The second application is to investigate the learnability of reproducing kernel Hilbert space (RKHS) under the L ∞ metric. We derive both lower and upper bounds of the minimax estimation error by using the spectrum of the associated kernel. We then apply these bounds to dot-product kernels and analyze how they scale with the input dimension. Our results suggest that learning with ReLU (random) features is generally intractable in terms of reaching high uniform accuracy. | null | [
"https://export.arxiv.org/pdf/2305.05642v1.pdf"
]
| 258,564,231 | 2305.05642 | 03e1c9d451da45d1714dc27872e78dc6fc2a066f |
A duality framework for generalization analysis of random feature models and two-layer neural networks
9 May 2023 May 10, 2023
Hongrui Chen
Jihao Long
Lei Wu
A duality framework for generalization analysis of random feature models and two-layer neural networks
9 May 2023 May 10, 2023
We consider the problem of learning functions in the F p,π and Barron spaces, which are natural function spaces that arise in the high-dimensional analysis of random feature models (RFMs) and two-layer neural networks. Through a duality analysis, we reveal that the approximation and estimation of these spaces can be considered equivalent in a certain sense. This enables us to focus on the easier problem of approximation and estimation when studying the generalization of both models. The dual equivalence is established by defining an information-based complexity that can effectively control estimation errors. Additionally, we demonstrate the flexibility of our duality framework through comprehensive analyses of two concrete applications.• The first application is to study learning functions in F p,π with RFMs. We prove that the learning does not suffer from the curse of dimensionality as long as p > 1, implying RFMs can work beyond the kernel regime. Our analysis extends existing results [CMM21] to the noisy case and removes the requirement of overparameterization.• The second application is to investigate the learnability of reproducing kernel Hilbert space (RKHS) under the L ∞ metric. We derive both lower and upper bounds of the minimax estimation error by using the spectrum of the associated kernel. We then apply these bounds to dot-product kernels and analyze how they scale with the input dimension. Our results suggest that learning with ReLU (random) features is generally intractable in terms of reaching high uniform accuracy.
Introduction
One of the fundamental problems in theoretical machine learning is to understand how certain highdimensional functions can be learned efficiently using machine learning models [EMWW20, Bac17a] such as neural networks and random feature models. Denote by X ⊂ R d the input domain and F the function class of interest. We say that F can be learned efficiently if both the approximation error and estimation error scale polynomially with the input dimension d. Otherwise, the learning is said to suffer from or exhibit the curse of dimensionality (CoD) [Bel66].
It is well-known that learning traditional function spaces such as Sobolev and Besov spaces suffers from the CoD, regardless of the machine learning models used [Nov06,Wai19]. Analyses with these spaces cannot explain the success of machine learning in solving high-dimensional problems. To address this, it is crucial to identify the appropriate function spaces for a given machine learning model such that the functions can be learned efficiently using that model [EMW21,EMWW20]. This can provide insight into the model's strengths and limitations when dealing with highdimensional data. This paper focuses on this issue for three popular machine learning models: kernel methods, random feature models, and two-layer neural networks.
Kernel methods are a class of methods using the hypothesis class \{\sum_{i=1}^n \alpha_i k(x_i, \cdot) : \alpha \in \mathbb{R}^n\}, where {x_i}_{i=1}^n are training data and k : X × X → R is a kernel function. The popular function space used in kernel method analysis is the reproducing kernel Hilbert space (RKHS) [Aro50], denoted by H_k. RKHS is favored for two main reasons. First, it can be learned efficiently with kernel methods in high dimensions, as demonstrated in studies such as [GBSTW99, Zha05]. Second, the Hilbert structure and reproducing property of RKHS provide a rich set of mathematical tools that make analysis easier. For instance, the use of RKHS allows for the representation of functions as inner products with respect to the kernel, i.e., f(x) = \langle f, k(x, \cdot)\rangle_{H_k}, enabling the application of techniques from functional analysis.
Neural networks are another class of models which have achieved remarkable success in solving high-dimensional problems [LBH15]. However, it remains unclear which high-dimensional functions can be efficiently learned using them. In this paper, we focus on two-layer neural networks:
f(x; \theta) = \sum_{j=1}^{m} a_j\, \varphi(x, v_j),    (1)
where φ : X × V → R is a feature function and θ = {(a_j, v_j)}_{j=1}^m are the parameters to be learned. The feature function typically takes the form φ(x, v) = σ(v^T x), where σ : R → R is a nonlinear activation function.
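For concreteness, a finite-width instance of (1) with the ReLU feature φ(x, v) = max(v^T x, 0) can be written in a few lines; the width, input dimension and initialisation below are illustrative choices only.

```python
import numpy as np

# A minimal numpy version of the two-layer model (1) with ReLU features.
def two_layer(x, a, V):
    """x: (d,) input, a: (m,) outer coefficients, V: (m, d) inner weights."""
    return a @ np.maximum(V @ x, 0.0)

rng = np.random.default_rng(0)
d, m = 8, 64
a = rng.normal(size=m) / m
V = rng.normal(size=(m, d)) / np.sqrt(d)
print(two_layer(rng.normal(size=d), a, V))
```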
The high-dimensional analysis of two-layer neural networks dates back to the pioneering works by Andrew Barron [Bar93, Bar94]. In these papers, Barron defined the spectral Barron space [SX20], which consists of functions that satisfy C_f = \int (1 + \|\xi\|)\, |\hat{f}(\xi)|\, d\xi < \infty, and proved that these functions can be efficiently learned using two-layer neural networks. Following Barron's work, [Bar92, DeV98, KS01] defined variation spaces and [Bac17a] reformulated these spaces using integral representations, which were denoted by F_1. More recently, [EMW21] provided a probabilistic interpretation of Barron's work and defined the Barron spaces, which are an infinite union of a family of RKHSs. These spaces played a critical role in understanding the capabilities and limitations of two-layer neural networks in high dimensions.
Random feature models (RFMs) are another type of closely related models that have the same form as Equation (1), but with the weights {v_j}_{j=1}^m being i.i.d. samples drawn from a fixed distribution π ∈ P(V). In RFMs, the features are predetermined, and only the outer coefficients are learnable. When the ℓ^2 norm of the coefficients is penalized, RFMs are equivalent to kernel methods with kernel \tilde{k}_m(x, x') = \frac{1}{m}\sum_{j=1}^m φ(x, v_j)\, φ(x', v_j) according to the representer theorem [SHS01]. It is important to note that as m → ∞, the law of large numbers (LLN) implies that
\tilde{k}_m(x, x') \to k_\pi(x, x') := \int_V φ(x, v)\, φ(x', v)\, d\pi(v).    (2)
Hence RFMs are often viewed as a Monte-Carlo approximation of kernel methods with kernel k π [RR07], and consequently, most theoretical analyses of RFMs only consider target functions in the associated RKHS H kπ [CRR18, RR08, Bac17b]. However, it should be stressed that RFMs are not kernel methods if the ℓ p norm of coefficients is penalized with p < 2 [CMM21, XSSW22, HSS + 21]. Recently, RFMs have been also found useful for understanding neural networks [Dan17, EMW20, ADH + 19, CG19, JGH18, YS19, WL22].
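The Monte-Carlo picture behind (2) is easy to check numerically. The sketch below uses the ReLU feature φ(x, v) = max(v^T x, 0) with v ∼ Unif(S^{d−1}), and estimates k_π itself with a large reference sample, so all quantities shown are illustrative.

```python
import numpy as np

# Numerical illustration of Eq. (2): the random-feature kernel k_m converges to k_pi
# as the number of features m grows. k_pi is approximated with a large reference sample.
rng = np.random.default_rng(0)
d = 10
x, xp = rng.normal(size=d), rng.normal(size=d)

def k_m(m):
    V = rng.normal(size=(m, d))
    V /= np.linalg.norm(V, axis=1, keepdims=True)       # v_j ~ Unif(S^{d-1})
    return np.mean(np.maximum(V @ x, 0.0) * np.maximum(V @ xp, 0.0))

k_pi = k_m(500_000)                                     # reference value standing in for k_pi
for m in (10, 100, 1_000, 10_000):
    print(f"m = {m:6d}   |k_m - k_pi| ~ {abs(k_m(m) - k_pi):.2e}")
```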
Our contributions
In this paper, we study the learning of the F_{p,π} and Barron spaces, whose definitions are motivated by the high-dimensional analysis of RFMs and two-layer neural networks, respectively. Consider an RFM with infinitely many features: f_a = \int_V a(v)\, φ(\cdot, v)\, d\pi(v), where the parameters are the coefficient function a(·). For any p ≥ 1, F_{p,π} is defined by
F_{p,\pi} := \{ f_a : \|a\|_{L^p(\pi)} < \infty \}, \qquad \|f\|_{F_{p,\pi}} = \inf_{f_a = f} \|a\|_{L^p(\pi)}.    (3)
The Barron space B is given by
B = ∪ π∈P(V) F 2,π .(4)
These spaces have been widely adopted to analyze RFMs and two-layer neural networks [Bar92, DeV98, KS01, EMW21, Bac17a] in high dimensions. In particular, [RR08] proved F 2,π = H kπ , where the kernel k π is given by (2); therefore, studying F p,π is also highly relevant for understanding kernel methods.
In this paper, we take a duality perspective to provide a unified analysis of the F p,π and B spaces. This approach enables us to gain a more general and comprehensive understanding of the properties of these spaces, and gain insights into their relevance for understanding kernel methods, two-layer neural networks, and RFMs.
The duality property. By exploiting the Banach structure of F p,π and B, we establish a dual equivalence between the approximation and estimation for learning functions in F p,π and B. To put it simply, our major result for the case of F p,π can be informally stated as follows
The L q (ρ) estimation of F p,π is equivalent to the L p ′ (π) approximation of F q ′ ,ρ , where p ′ , q ′ denote the Hölder conjugates of p, q, satisfying 1/p + 1/p ′ = 1, 1/q + 1/q ′ = 1. An analogous dual equivalence also holds for the Barron space B. This duality property enables us to concentrate on analyzing the easier problem between approximation and estimation, leading to a unified analysis of learning in these spaces under various metrics.
The information-based complexity. To establish the aforementioned dual equivalence, we introduce an information-based complexity(I-complexity) that can effectively control the (minimax) estimation errors in various settings. A similar complexity has been utilized in approximation theory to study optimal interpolations on deterministically obtained clean data [Nov06]. However, in statistical learning settings, data are typically randomly sampled from a distribution, and the model may not necessarily interpolate data due to the presence of noise. To address this issue, we modify the definition of I-complexity to make it more appropriate for this setting. The I-complexity might be of independent interest and could potentially be applied beyond the analysis of the F p,π and B spaces. Nonetheless, we leave the exploration of its potential applications for future work.
Applications. Our duality framework allows us to simplify the proofs and strengthen the conclusions of many existing results. We illustrate this by providing two specific applications:
• Random feature learning beyond kernel regime. First, we extend the result in [Bac17b, Proposition 1] by showing that the unit ball of F p,π can be uniformly approximated by random features, where uniformity means that the choice of random weights v 1 , . . . , v m is independent of target functions. In contrast to [Bac17b], our result is not restricted to the case of F 2,π and the proof is also much simpler. Next, we provide a comprehensive analysis of learning F p,π functions using RFMs. Our analysis shows that both the sample and parameter complexities scale with the input dimension d polynomially, as long as p > 1. This result suggests that RFMs can efficiently learn functions that are not necessarily contained in RKHS.
• L ∞ learnability of RKHS. We consider the learning of functions in RKHS under the L ∞ metric: \|\hat{f} - f^*\|_\infty = \sup_{x \in X} |\hat{f}(x) - f^*(x)|, where \hat{f}, f^* denote the learned model and the target function, respectively. This L ∞ learnability is crucial for understanding the performance of kernel methods and neural networks in safety- and security-critical scenarios.
By exploiting the dual equivalence, we show that the L ∞ estimation of a RKHS (i.e., F 2,π ) is equivalent to the L 2 approximation of the Barron space B with random features. To bound the error of the latter, we adopt the spectral-based approach developed in [WL22]. We derive both lower and upper bounds on the L ∞ minimax errors based on the kernel spectrum.
In particular, we examine how the L ∞ learnability depends on the input dimension d for dot-product kernels of the form k(x,
x ′ ) = E v∼τ d−1 [σ(v T x)σ(v T x ′ )]
. Specifically, we prove -For non-smooth activation functions, such as ReLU, the minimax errors grow exponentially with the input dimension d;
-For sufficiently smooth activation functions, the error scales with d only polynomially.
Note that dot-produce kernels arise naturally in studying RFMs [RR07] and neural networks [JGH18] and therefore, the above results not only apply to kernel methods but also provide insights into RFMs and neural networks. One immediate implication is that L ∞ learning with ReLU random features/neural networks is subject to the CoD, although the L 2 learning is not [Bac17a, EMW19].
The above examples demonstrate the versatility of our duality framework, and we believe it has the potential to be applied in other contexts and settings beyond these specific examples.
Related works
Duality of RFMs and neural networks. In [DEBG + 21], a dual formulation for training energy-based models with overparameterized two-layer neural networks was provided. In contrast, we focus on supervised learning and generalization analysis. We note that there is a concurrent work [SHB22], which provides fine-grained structure characterizations of the F p,π and B spaces under the framework of reproducing kernel Banach space (RKBS) [ZXZ09]. We instead establish the dual equivalence between approximation and estimation of these spaces.
Random feature learning. The early works [RR07, RR08] focused on the approximation of RFMs for functions within the associated RKHS.
[Bac17b] provided a fine-grained analysis of the approximation error by using the corresponding covariance operator. In contrast, we offer a duality analysis, which provides simpler proofs and applies to target functions beyond the RKHS. Note that [CMM21] also considered the problem of learning F p,π . However, their analysis only considered minimum-norm estimators under the noiseless case and is limited to a highly overparameterized regime with m ≫ (n log n) 2 , where m is the number of features and n is the sample size. In contrast, our analysis in Section 6 does not require overparameterization and is applicable to both noisy and noiseless cases. This is made possible by our duality framework.
L ∞ learnability of RKHS and neural networks. The L ∞ learning of RKHS was first studied in [KWW08] for the case of deterministic samples, where both upper and lower bounds are derived by using the kernel spectrum.
[PU22] improved the upper bound by allowing samples to be randomly drawn from an input distribution. In this paper, we show that similar bounds can be easily derived from our duality framework and, in particular, we extend the lower bound to the noisy case, which is more common in machine learning. Additionally, we improve the upper bound for dot-product kernels in two aspects. First, our upper bound (Theorem 22) is obtained by using RFMs, while [KWW08, PU22] used the eigenfunctions of the associated kernel as the fixed feature. Second, [KWW08, PU22] required the uniform boundedness of eigenfunctions, but the eigenfunctions of dot-product kernels, which are spherical harmonics, do not satisfy this condition.
More recently, [BGV22] showed that the L ∞ estimation of deep neural networks suffers from the CoD. Our analysis shows a stronger result: actually, the CoD occurs for the much simpler RFMs if the activation function is non-smooth, such as ReLU.
Connection with [LH22]
We acknowledge that [LH22] applied a similar approach to analyze reinforcement learning by defining a quantity called perturbation complexity. Specifically, they analyzed the case where the reward function comes from a RKHS and as a result, their analysis mainly focused on the F 2,π space. In contrast, we focus on supervised learning setup and we provide a comprehensive analysis of the F p,π spaces for all p ≥ 1 and the Barron spaces.
Organization
In Section 2, we clarify notations and preliminaries. In Section 3, we define the information-based complexity and show how it controls various estimation errors of learning a function class. In Section 4, we define the F p,π and Barron spaces, which will play critical roles in our duality analysis. In Section 5, we present the dual equivalence between estimation and approximation for learning F p,π and Barron spaces. Section 6 and 7 present two applications of our duality framework: Random feature learning beyond kernel regime and L ∞ learning of RKHS.
Preliminaries
Notations. Let Ω be a subset of a Euclidean space. We denote by P(Ω) the set of probability measures on Ω and M(Ω) the space of signed Radon measures equipped with the total variation norm \|\mu\|_{M(\Omega)} = \|\mu\|_{TV}. Given z_1, \dots, z_n \in \Omega, denote by \hat{\rho}_n = \frac{1}{n}\sum_{i=1}^n \delta_{z_i} the empirical distribution. For any ρ ∈ P(Ω), let \|\cdot\|_{p,\rho} be the L^p(ρ) norm. When p = 2, we write \|\cdot\|_\rho = \|\cdot\|_{2,\rho} for convenience and let \langle f, g\rangle_\rho = \int f(x)\, g(x)\, d\rho(x) for any f, g ∈ L^2(ρ). Let C_0(Ω) be the space of continuous functions vanishing at infinity, equipped with the uniform norm (L ∞ norm) \|g\|_{C_0(X)} := \|g\|_\infty = \sup_{x \in X} |g(x)|. We shall occasionally use L ∞ and \|\cdot\|_\infty to denote \|\cdot\|_{C_0(X)}, which is different from L ∞ (ρ). One should not confuse L ∞ and L ∞ (ρ). For any vector v, denote by \|v\|_p = (\sum_i |v_i|^p)^{1/p} the ℓ^p norm. When p = 2, we drop the subscript for simplicity.
For any p ∈ [1, ∞), denote by p' the Hölder conjugate satisfying 1/p + 1/p' = 1. For a normed vector space F, let F(r) := \{f \in F : \|f\|_F \le r\} be the ball of radius r. Let S^{d−1} = \{x \in \mathbb{R}^d : \|x\|_2 = 1\} and τ_{d−1} = Unif(S^{d−1}). We use a ≲ b to mean a ≤ Cb for an absolute constant C > 0, and a ≳ b is defined analogously. We use a ≍ b if there exist absolute constants C_1, C_2 > 0 such that C_1 b ≤ a ≤ C_2 b.
Rademacher complexity. Given x 1 , x 2 , . . . , x n ∈ X , the (empirical) Rademacher complexity of a function class F is defined by
Rad n (F) = E ξ 1 ,...,ξn [sup f ∈F 1 n n i=1 f (x i )ξ i ],(5)
where ξ 1 , . . . , ξ n are i.i.d. Rademacher random variables, i.e., P(ξ i = 1) = P(ξ i = −1) = 1/2. In particular, we will use classical results of this type (see Lemma 29 in Appendix D) to bound the gap between an empirical quantity and its population counterpart.
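As a concrete illustration (not part of the original analysis), the following minimal Python sketch Monte Carlo-estimates the empirical Rademacher complexity (5) for the toy class F = {x → w⊤x : ‖w‖2 ≤ 1}; for this class the inner supremum has the closed form (1/n)‖Σ_i ξ_i x_i‖2, so only the expectation over the signs needs to be sampled. The class, the data, and all sample sizes are illustrative assumptions.

```python
import numpy as np

def rademacher_complexity_linear(X, num_draws=2000, seed=0):
    """Monte Carlo estimate of Rad_n(F) for F = {x -> w^T x : ||w||_2 <= 1}."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    vals = []
    for _ in range(num_draws):
        xi = rng.choice([-1.0, 1.0], size=n)      # i.i.d. Rademacher signs
        vals.append(np.linalg.norm(xi @ X) / n)   # closed-form inner supremum over ||w|| <= 1
    return float(np.mean(vals))

X = np.random.default_rng(1).normal(size=(200, 5)) / np.sqrt(5)
print(rademacher_complexity_linear(X))            # decays roughly like 1/sqrt(n)
```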
Mercer decomposition. Before presenting our results, we first recall some basic facts about the eigendecomposition of a kernel. For any kernel k : X × X → R, the associated integral operator T k : L 2 (γ) → L 2 (γ) is given by T k f = k(·, x)f (x) dγ(x). When k is continuous and X is compact, Mercer's theorem guarantees the existence of the eigendecomposition of k:
k(x, x ′ ) = ∞ i=1 µ i e i (x)e i (x ′ ). Here {µ i } ∞ i=1
are the eigenvalues in a decreasing order and
{e i } ∞ i=1
are the orthonormal eigenfunctions satisfying e i (x)e j (x) dγ(x) = δ i,j . Note that the decomposition depends on the input distribution γ and when needed, we will denote by µ k,γ i the i-th eigenvalue to explicitly emphasize the influence of k and γ. We are also interested in the following quantity:
Λ k,γ (m) = ∞ i=m+1 µ k,γ i ,(6)
which will be used in Section 7 to bound the L ∞ learnability of RKHS. We refer to [Wai19, Section 12.3] for more details about Mercer's decomposition.
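As an illustrative aside, the tail sum (6) can be estimated numerically: the eigenvalues of the empirical Gram matrix K/n computed on samples from γ approximate the Mercer eigenvalues (a standard Nyström-type approximation; this, together with the RBF kernel and Gaussian input below, is an assumption of this sketch rather than a statement from the paper).

```python
import numpy as np

def kernel_tail_sum(k, sample, m):
    """Estimate Lambda_{k,gamma}(m) from samples drawn from gamma."""
    K = k(sample, sample)                      # n x n Gram matrix
    mu = np.linalg.eigvalsh(K / len(sample))   # Nystrom estimates of the Mercer spectrum
    mu = np.sort(np.clip(mu, 0.0, None))[::-1]
    return float(mu[m:].sum())

rbf = lambda X, Y: np.exp(-0.5 * ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1))
sample = np.random.default_rng(0).normal(size=(500, 2))
for m in (1, 5, 20, 80):
    print(m, kernel_tail_sum(rbf, sample, m))  # tail decays quickly for a smooth kernel
```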
The Information-based complexity
Our generalization analysis relies on the information-based complexity (I-complexity) proposed in [Nov06]. The I-complexity of a function class F defined in [Nov06, Remark 1.3.4] is
I n (F) = inf x 1 ,...,xn∈X sup f ∈F ,f (x i )=0 f ∞ .(7)
Intuitively speaking, I n (F) quantifies the complexity of a function class with the minimax L ∞ norm of functions in F that interpolate the zero function at n points.
[Nov06] showed that I n (F) can control the minimax error of approximating F with the information of only n data points. However, the definition (7) cannot be directly applied to analyze ML models. First, the input data in machine learning are often randomly sampled from an input distribution, while I n (F) measures the complexity via the worst case over any n data points. Second, the definition only considers the interpolation regime, whereas in machine learning it is often preferred that our models do not interpolate the data, as interpolation may cause overfitting. This is particularly the case when data are noisy. To resolve these issues, we define the following modified I-complexity.
Definition 1 (I-complexity). Let F be a set of functions, ν ∈ P(X ) be a probability distribution over X , and · M be a norm used to measure prediction errors. The I-complexity of F with respect to the distribution ν is defined as
I ν (F, M, ǫ) = sup f ∈F , f ν ≤ǫ f M .(8)
The above modification makes the I-complexity useful for generalization analysis in machine learning, as it incorporates the input distribution and allows for non-interpolating estimators. Next, we show how this complexity bounds estimation errors.
Remark 2. It should be noted that Definition 1 allows ν to be a general distribution for measuring fitting errors and M to be a general norm for measuring prediction errors. In particular, one may choose ν = ρ̂ n = 1 n n i=1 δ x i , which corresponds to measuring the fitting error on the training data. One can also choose ν = ρ, yielding a distribution-dependent complexity I ρ (F, M, ǫ). We will show that this quantity provides a lower bound of the minimax error of estimating F in the noisy case. It is worth noting that the flexibility of measuring fitting and prediction errors with different metrics may also be useful for analyzing out-of-distribution generalization [YXC + 21] and reinforcement learning [LH22, LH23].
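For intuition, the I-complexity can be computed numerically in one special case: F = H_k(1) an RKHS ball, M the uniform norm, ν the empirical measure, and ǫ = 0. It is classical in kernel interpolation that sup{|f(x)| : ‖f‖_{H_k} ≤ 1, f(x_i) = 0} equals the power function P(x) = (k(x, x) − k_n(x)⊤K^{−1}k_n(x))^{1/2}, so the complexity is the maximum of P over the domain. The sketch below (an RBF kernel and a grid maximum, both illustrative assumptions) estimates this quantity.

```python
import numpy as np

def power_function(k, X_train, X_grid, jitter=1e-10):
    """P(x) = sqrt(k(x,x) - k_n(x)^T K^{-1} k_n(x)) evaluated on a grid of query points."""
    K = k(X_train, X_train) + jitter * np.eye(len(X_train))
    k_nx = k(X_train, X_grid)                                   # n_train x n_grid
    diag = np.diag(k(X_grid, X_grid))
    p2 = diag - np.sum(k_nx * np.linalg.solve(K, k_nx), axis=0)
    return np.sqrt(np.clip(p2, 0.0, None))

rbf = lambda X, Y: np.exp(-0.5 * ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1))
rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(20, 1))
X_grid = np.linspace(-1, 1, 400)[:, None]
print(power_function(rbf, X_train, X_grid).max())   # I-complexity estimate; shrinks as n grows
```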
Bounding estimation errors.
Consider supervised learning with data
{(x i , y i = f (x i ))} n i=1 .
Suppose that F is a Banach space of functions over X and f ∈ F(1). Consider an estimator f̂ ∈ F(1) with the empirical error satisfying
( 1 n n i=1 (f̂ (x i ) − y i ) 2 ) 1/2 = f̂ − f ρ̂n ≤ ǫ.
Then, the population error of f̂ can be bounded by the I-complexity:
f − f M ≤ sup g F ≤2, g ρn ≤ǫ g M = 2Iρ n (F(1), M, ǫ/2),(9)
where the first step is because f̂ − f F ≤ f̂ F + f F ≤ 2.
Next, we further show that the I-complexity can also bound the minimax estimation errors. We call each measurable map from (X × R) n to F an estimator and denote by A n the set of all estimators: (X × R) n → F. For an estimator T n , the estimation error of T n is given by
T n ({(x i , y i )} n i=1 ) − f M .
The following two propositions show that the I-complexity defined in Definition 1 can control the minimax error of learning a set of functions. Here we only state the results; the proofs can be found in Appendix A.
The sample-dependent minimax error. For fixed input data x 1 , · · · , x n and noiseless output, the sample-dependent minimax estimation error is given by
inf Tn∈An sup f F ≤1 T n ({(x i , f (x i ))} n i=1 ) − f M .(10)
In this case, the worst-case error may depend on the samples x 1 , x 2 , . . . , x n and this error measures how much we can extract from the specific n samples in the minimax sense. The following proposition shows that this minimax estimation error can be quantified by the I-complexity Iρ n :
Proposition 3. For any x 1 , · · · , x n ∈ X , we have
Iρ n (F(1), M, 0) ≤ inf Tn∈An sup f F ≤1 T ({(x i , f (x i ))} n i=1 ) − f M ≤ 2Iρ n (F(1), M, 0).
It is worth noting that the above result holds for any x 1 , x 2 , . . . , x n ∈ X , which need not be i.i.d. samples drawn from an input distribution.
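For the RKHS case F = H_k, the minimum-norm interpolation estimator used in the proof of Proposition 3 (see (39) in Appendix A.1) has, by the representer theorem, the closed form f̂(x) = k_n(x)⊤K^{−1}y. The following hedged sketch implements it; the RBF kernel and the target function are illustrative assumptions, not choices made in the paper.

```python
import numpy as np

rbf = lambda X, Y: np.exp(-0.5 * ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1))

def min_norm_interpolant(X_train, y, X_query, k=rbf, jitter=1e-10):
    """Minimum RKHS-norm interpolant: f_hat(x) = sum_i alpha_i k(x, x_i) with K alpha = y."""
    K = k(X_train, X_train) + jitter * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y)
    return k(X_query, X_train) @ alpha

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(30, 1))
g = lambda X: np.sin(3 * X[:, 0])                    # illustrative target in (or near) H_k
X_grid = np.linspace(-1, 1, 200)[:, None]
err = np.max(np.abs(min_norm_interpolant(X_train, g(X_train), X_grid) - g(X_grid)))
print(err)   # uniform error on a grid; Proposition 3 relates it to the I-complexity
```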
The distribution-dependent minimax error. Suppose that the training data S = {(x i , y i )} n i=1 are independently generated by x i ∼ ρ, y i = f (x i ) + ξ i with ξ i ∼ N (0, ς 2 ) being the Gaussian noise. Consider the following minimax error
inf Tn∈An sup f F ≤1 E T n ({(x i , f (x i ) + ξ i )} n i=1 ) − f M ,(11)
where the expectation is taken with respect to the sampling of S. This minimax error is the common choice in the statistical learning setup (see [Wai19, Section 15]). In this definition, the worst case depends only on the distribution instead of specific samples. The following proposition shows that the complexity I ρ with ǫ = ς/ √ n gives a lower bound of the minimax error (11):
Proposition 4. We have inf Tn∈An sup f F ≤1 E T n ({(x i , f (x i ) + ξ i )} n i=1 ) − f M ≥ I ρ F(1), M, ς √ n
Note that in some specific cases, the I-complexity also provides an upper bound of the distribution-dependent minimax error. We refer to Section 7 for details.
Remark 5. The discussion above only considers the estimation part, since the output of the estimator lies in the target space F. In other words, for any estimator in A n , the approximation error is exactly zero. However, in practice, an estimate f̂ may live in a hypothesis space H, which can be different from F. In such a case, there may be an approximation error because of the difference between H and F. In Section 5, we establish that for various function spaces of interest in studying RFMs and two-layer neural networks, the I-complexity also controls the approximation error.
The F p,π and Barron spaces
Let φ : X × V → R be a general parametric feature map and X and V denote the input and weight space, respectively. A typical example of feature map is φ(x, v) = σ(v T x), which arises naturally in analyzing neural networks and random feature models. Here σ : R → R is a nonlinear activation function. In this section, we define some function spaces induced by φ, which will be utilized in our subsequent duality analysis.
We first define the function spaces for random feature models.
Definition 6. Let π ∈ P(V) be a probability distribution over the weight space. For 1 ≤ p ≤ ∞, the F p,π space on X is defined as
F p,π := f = V a(v)φ(·, v) d π(v) : a ∈ L p (π) ,
equipped with the norm
f Fp,π = inf a∈A f a p,π , where A f = a ∈ L p (π) : f = V a(v)φ(·, v) d π(v) .(12)
In this definition, we consider functions admitting an integral representation with respect to the feature map φ. For a given f , the associated a(·) will be called a representation of f . Note that representations may not be unique. Hence, taking the infimum in (12) ensures that the norm of f is measured using the optimal representation. In addition, this makes the F p,π norm well-defined in the sense that it is independent of the choice of representation.
Some observations go as follows.
• It follows trivially from the Hölder's inequality that
F ∞,π ⊂ F p,π ⊂ F q,π ⊂ F 1,π , if p ≥ q.(13)
• The F p,π space provides a natural setting to study the approximation power of RFMs. Specifically, if v j iid ∼ π for j = 1, . . . , m, then the law of large numbers (LLN) implies that as m → ∞,
1 m m j=1 a j φ(x, v j ) → V a(v)φ(x, v) dπ(v)
and the convergence/approximation rate is determined by the L p (π) norm of a(·). In other words, the F p,π norm of f controls the rate of approximating f with RFMs (a numerical sketch is given after this list).
While existing works on RFMs have focused on the case of p = 2, it is possible to obtain similar approximation rates for all p ∈ (1, ∞] by using the Marcinkiewicz-Zygmund-type LLN [Dur19, Theorem 2.5.8]. More details can be found in Section 6.
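The numerical sketch below illustrates the LLN argument from the preceding list. All concrete choices, namely the feature φ(x, v) = max(v⊤x, 0), π = N(0, I_d), and the representation a(v) = v_1, are illustrative assumptions; the Monte Carlo average (1/m) Σ_j a(v_j)φ(x, v_j) approaches f(x) as m grows.

```python
import numpy as np

rng = np.random.default_rng(0)
d, x = 5, np.array([0.3, -0.2, 0.5, 0.1, 0.4])
a = lambda V: V[:, 0]                              # a representation a(v) (assumed, illustrative)
phi = lambda V: np.maximum(V @ x, 0.0)             # phi(x, v) = ReLU(v^T x) evaluated at fixed x

V_ref = rng.normal(size=(1_000_000, d))
f_x = np.mean(a(V_ref) * phi(V_ref))               # high-accuracy reference for f(x)

for m in (10, 100, 1000, 10000):
    V = rng.normal(size=(m, d))                    # v_j ~ pi, i.i.d.
    approx = np.mean(a(V) * phi(V))                # (1/m) sum_j a(v_j) phi(x, v_j)
    print(m, abs(approx - f_x))                    # error shrinks as m grows
```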
We draw attention to the case where p = 2, for which [RR08] proved F 2,π is an RKHS:
F 2,π = H kπ , with k π (x, x ′ ) := V φ(x, v)φ(x ′ , v) d π(v).(14)
Furthermore, the converse is also true: any RKHS can be represented as F 2,π , as shown in the following lemma.
Lemma 7. If a kernel k has a finite trace, i.e., X k(x, x) d ρ(x) < ∞, then there exists a weight probability space (V, π) and feature function φ :
X × V → R such that k(x, x ′ ) = k π (x, x ′ ) := V φ(x, v)φ(x ′ , v) d π(v). Proof. By Mercer's decomposition, k(x, x ′ ) = ∞ j=1 λ j e j (x)e j (x ′ ), where (e j ) j≥1 form an orthonormal basis of L 2 (ρ). Since ∞ j=1 λ j = X k(x, x) d ρ(x) < ∞, we can choose V = N + and define a probability measure on N + by π(j) = λ j / i λ i . Let φ(x, j) = ( i λ i ) 1/2 e j (x) for j ∈ N + . Then, k(x, x ′ ) = N + φ(x, j)φ(x ′ , j) d π(j).
We now turn to define the function space for studying two-layer neural networks.
Definition 8 (Barron space). Given a feature map φ : X × V → R such that φ(x, ·) ∈ C 0 (V) for any x ∈ X , we define the Barron space B on X as B := f = V φ(·, v) d µ(v) : µ ∈ M(V) ,
equipped with the norm
f B = inf µ∈M f µ TV , where M f = µ ∈ M(V) : f = V φ(·, v) d µ(v) .(15)
The definition above, originally proposed in [Bac17a], is equivalent to the variation-based definition used in [Bar92, DeV98, KS01, SX21]. When φ(x, v) = max(v ⊤ x, 0), it is also equivalent to the moment-based definition proposed in [EMW19, EMW21]. All of these definitions are essentially equivalent [EW22, SX21]. In our duality analysis, we specifically adopt the Radon measure-based definition above, as we rely on the fact that M(X ) is the dual space of C 0 (X ). However, to better understand the connection between the Barron space and the F p,π space, it is preferable to take the moment-based definition [EMW19, EMW21]. Specifically, by extending [EMW21, Proposition 3], we have the following lemma, whose proof can be found in Appendix B.
Lemma 9 (An alternative definition of Barron spaces). For any 1 ≤ p ≤ ∞, we have
B = ∪ π∈P(V) F p,π , f B = inf π∈P(V) f Fp,π .(16)
This shows that B is the union of all F p,π spaces for a fixed p ∈ [1, ∞]. Surprisingly, the union is independent of the value of p. One can also treat (16) as an alternative definition of the Barron space B, which extends the moment-based definition in [EMW19, EMW21] to general feature maps.
By choosing p = 2 and noting F 2,π = H kπ , we have f B = inf π∈P(V) f H kπ . This implies that two-layer neural networks can be viewed as adaptive kernel methods [EMW19]. Additionally, by setting p = 1, we obtain B = ∪ π∈P(V) F 1,π , suggesting that B is much larger than the L 1 -type space F 1,π . In particular, if π admits a density on V, then F 1,π fails to include all the functions implemented by finite-neuron neural networks. All these imply that the Barron space B should not be naively interpreted as an L 1 -type space, and that feature adaptivity plays a critical role in B.
Remark 10. We are aware that Definitions 6 and 8 can be unified by the RKBS framework [ZXZ09, BdVRV21]. However, adopting such an approach would make the definitions, and particularly the statements of our key results in Section 5, overly abstract and difficult to comprehend. Therefore, we adhere to the concrete definitions presented above and refer interested readers to the concurrent work [SHB22], where the RKBS structures of these spaces are discussed in detail.
The dual equivalence between approximation and estimation
In this section, we establish a duality framework that connects approximation and estimation for two-layer neural networks and random feature models. To achieve this, we need to define the following conjugate spaces.
Definition 11 (Conjugate space). For any γ ∈ P(X ), let F̃ q,γ be a function space over the weight space V given by
F̃ q,γ = { g = X b(x)φ(x, ·) d γ(x) : b ∈ L q (γ) },
equipped with the norm g F̃ q,γ = inf b∈Bg b q,γ , where B g = { b ∈ L q (γ) : g = X b(x)φ(x, ·) d γ(x) }.
Similarly, we define the B̃ space on V:
B = g = X φ(x, ·) d µ(x) : µ ∈ M(X )
and the norm is defined in the same way as (15). It is worth noting that the function spaces F̃ q,γ and B̃ are defined over the weight domain V, while F p,π and B are defined over the input domain X .
Main result
For clarity, we will begin by presenting our main result for the interpolation regime.
Theorem 12 (Interpolation regime). Let x 1 , · · · , x n ∈ X .
1. For 1 < p ≤ ∞, 1 ≤ q < ∞, we have
sup f Fp,π ≤1, f (x i )=0 f q,ρ = sup g F̃ q ′ ,ρ ≤1 inf c 1 ,··· ,cn g − n i=1 c i φ(x i , ·) p ′ ,π .   (17)
2. For 1 ≤ q < ∞, we have
sup f B ≤1, f (x i )=0 f q,ρ = sup g F̃ q ′ ,ρ ≤1 inf c 1 ,··· ,cn g − n i=1 c i φ(x i , ·) C 0 (V) .   (18)
3. For 1 < p ≤ ∞, we have
sup f Fp,π ≤1, f (x i )=0 f C 0 (X ) = sup g B̃ ≤1 inf c 1 ,··· ,cn g − n i=1 c i φ(x i , ·) p ′ ,π .   (19)
For each equality stated above, the left-hand side is exactly the I-complexity that governs the estimation error when learning the corresponding function space, as stated in Proposition 3, while the right-hand side is exactly the worst-case error of approximating the corresponding conjugate space with the fixed features {φ(x i , ·)} n i=1 . Importantly, these equalities hold for any x 1 , . . . , x n ∈ X , regardless of whether they are sampled from a specific distribution.
Thus, Theorem 12 establishes a form of duality between the estimation and approximation of the F p,π and B spaces. In simpler terms, we can informally summarize the theorem as follows.
• The L q (ρ) estimation of F p,π is equivalent to the L p ′ (π) approximation of F̃ q ′ ,ρ .
• The L q (ρ) estimation of B is equivalent to the L ∞ approximation of F̃ q ′ ,ρ .
• The L ∞ estimation of F p,π is equivalent to the L p ′ (π) approximation of B̃.
Here the L ∞ should be understood as the uniform metric · C 0 (X ) .
Remark 13. Regarding Theorem 12, it is worth noting that we conjecture that (17) and (19) do not hold when p = 1. This is due to our proof relying on the property L p (π) = (L p ′ (π)) * for 1 < p < ∞. Nevertheless, this property does not hold in general for p = 1, i.e., L 1 (π) is not the dual of L ∞ (π).
To better understand the dual equivalence, we examine some concrete cases below.
Case 1. Consider the case of q = 2. For p = 2, we have
sup f F 2,π ≤1, f (x i )=0 f ρ = sup g F 2,ρ ≤1 inf c 1 ,··· ,cn g − n i=1 c i φ(x i , ·) π ,(20)
implying that the L 2 estimation of an RKHS is equivalent to the L 2 approximation of a conjugate RKHS. For the Barron space, we have
sup f B ≤1, f (x i )=0 f ρ = sup g F 2,ρ ≤1 inf c 1 ,··· ,cn g − n i=1 c i φ(x i , ·) C 0 (X ) ,(21)
implying that estimating the Barron space is equivalent to the uniform approximation of an associated RKHS. By comparing (20) and (21), we observe that estimation with two-layer neural networks and with random feature models is equivalent to approximation of the same RKHS, but under different norms for measuring the approximation error (L 2 vs. L ∞ ). This can also be interpreted as a precise characterization of how much larger the space B is compared to F 2,π .
Case 2. When p = 2, q = ∞, we have
sup f F 2,π ≤1, f (x i )=0 f C 0 (X ) = sup g B ≤1 inf c 1 ,··· ,cn g − n i=1 c i φ(x i , ·) π
The left hand side corresponds to the I-complexity that governs the L ∞ estimation error of learning F 2,π . This implies that we can obtain the L ∞ estimation error through the L 2 approximation of the corresponding Barron space. By utilizing this approach, we conduct a comprehensive analysis of when the L ∞ estimation of RKHS exhibits CoD or not in Section 7.
Case 3. For 1 < p ≤ 2, q = 2, we have
sup f Fp,π ≤1 inf c 1 ,··· ,cm f − m j=1 c j φ(·, v j ) ρ = sup g F̃ 2,ρ ≤1, g(v j )=0 g p ′ ,π ,
The left-hand side is the worst-case error of approximating F p,π with random features. When 1 < p < 2, F p,π is larger than the RKHS F 2,π . Therefore, this duality allows us to study random feature approximation beyond the RKHS. In Section 6, we explore this observation in detail.
The non-interpolation regime. We now present the general form of the duality equivalence, which applies to the non-interpolation regime.
Theorem 14. Let ν ∈ P(X ) be a probability distribution and 1 < r ≤ ∞.
1. For 1 < p ≤ ∞, 1 ≤ q < ∞, suppose that sup f Fp,π ≤1 f r,ν < ∞. Then we have
sup f Fp,π ≤1, f r,ν ≤ǫ f q,ρ = sup g F̃ q ′ ,ρ ≤1 inf c∈L r ′ (ν) g − X c(x)φ(x, ·) d ν(x) p ′ ,π + ǫ c r ′ ,ν .   (22)
2. For 1 ≤ q < ∞, suppose that sup f B ≤1 f r,ν < ∞. Then we have
sup f B ≤1, f r,ν ≤ǫ f q,ρ = sup g F̃ q ′ ,ρ ≤1 inf c∈L r ′ (ν) g − X c(x)φ(x, ·) d ν(x) C 0 (V) + ǫ c r ′ ,ν .   (23)
3. For 1 < p ≤ ∞, suppose that sup f Fp,π ≤1 f r,ν < ∞. Then we have
sup f Fp,π ≤1, f r,ν ≤ǫ f C 0 (X ) = sup g B̃ ≤1 inf c∈L r ′ (ν) g − X c(x)φ(x, ·) d ν(x) p ′ ,π + ǫ c r ′ ,ν .   (24)
Note that taking ν = 1 n n i=1 δ x i and ǫ = 0 recovers Theorem 12. Comparing with Theorem 12, we introduce another duality pairing: the L r (ν) constraint on fitting errors ( f r,ν ≤ ǫ) in estimation and the L r ′ (ν) regularization of the coefficient function (ǫ c r ′ ,ν ) in approximation. Specifically, Theorem 14 generalizes Theorem 12 in the following ways:
• First, the right-hand side represents the error of approximation with the norm of the coefficients regularized. This becomes more intuitive by taking ν = ρ̂ n , in which case the right-hand side of (22) becomes
sup g∈F̃ q ′ ,ρ inf c 1 ,...,cn g − n i=1 c i φ(x i , ·) p ′ ,π + ǫ ( 1 n n i=1 |c i | p ) 1/p .
This allows us to study regularized estimators whose coefficient norms can be well controlled. For more details, see Section 6.
• Second, the choice of ν is flexible and not limited to the empirical distribution ρ̂ n . For instance, by taking ν = ρ, the left-hand side becomes the I-complexity that provides a lower bound for the distribution-dependent minimax error, as shown in Proposition 4.
We remark that the generality of Theorem 14 allows us to tailor the duality equivalence to different scenarios and provides a more comprehensive understanding of the trade-off between approximation and estimation errors.
An intuitive proof of Theorem 14
Here we provide a proof for the special case of (22) (the proofs for (23) and (24) are similar) in Theorem 14 to illustrate the role of duality in our analysis. Assuming strong duality, we show that the optimization problems on both sides of (22) are the Lagrangian dual of each other. For a formal proof of Theorem 14, we refer to Appendix C.
Proof. (Informal) For g ∈ F̃ q ′ ,ρ , let e = g − X c(x)φ(x, ·) d ν(x).
Then, the minimization problem on the right-hand side of (22) can be written in the following constrained form:
inf c∈L r ′ (ν),e∈L p ′ (π) e p ′ ,π + ǫ c r ′ ,ν s.t. e = g − X c(x)φ(x, ·) d ν(x),
The associated Lagrangian:
J g : L r ′ (ν) × L p ′ (π) × L p (π) → R is given by J g (c, e, λ) = e p ′ ,π + V λ(v) ( g(v) − X c(x)φ(x, v) d ν(x) − e(v) ) d π(v) + ǫ c r ′ ,ν .
The dual objective D g (λ) = inf c,e J(c, e, λ) is given by
D g (λ) = V λ(v)g(v) d π(v), if V λ(v)φ(·, v) d π(v) r,ν ≤ ǫ, λ p,π ≤ 1, −∞, otherwise,
Hence, the dual problem is
sup λ∈S V λ(v)g(v) d π(v), where S = λ ∈ L p (π) : V λ(v)φ(·, v) d π(v) r,ν ≤ ǫ, λ p,π ≤ 1 .(25)
By strong duality we arrive at
inf c∈L r ′ (ν) g − X c(x)φ(x, ·) d ν(x) p ′ ,π + ǫ c r ′ ,ν = sup λ∈S λ, g .(26)
Now we take the supremum over the unit ball of F̃ q ′ ,ρ on both sides of (26):
sup g F̃ q ′ ,ρ ≤1 sup λ∈S λ, g = sup b q ′ ,ρ ≤1 sup λ∈S V λ(v) X b(x)φ(x, v) d ρ(x) d π(v)
= sup b q ′ ,ρ ≤1 sup λ∈S X b(x) V λ(v)φ(x, v) d π(v) d ρ(x)
= sup b q ′ ,ρ ≤1 sup f Fp,π ≤1, f r,ν ≤ǫ X b(x)f (x) d ρ(x)
= sup f Fp,π ≤1, f r,ν ≤ǫ sup b q ′ ,ρ ≤1 X b(x)f (x) d ρ(x)
= sup f Fp,π ≤1, f r,ν ≤ǫ f q,ρ ,
where the third step follows from the definition of S in (25). Hence, we complete the proof.
6 Random feature learning beyond kernel regime
In this section, we employ the duality framework to investigate the learnability of functions in F p,π using RFMs. While prior analyses of RFMs have primarily focused on the case where p = 2, our duality framework enables us to examine the entire range of p ∈ (1, ∞). The crux of our approach is a (local) Rademacher complexity-based bound in the dual space, where the problem is significantly simplified. We first introduce the assumptions about the feature function φ that are required for our duality analysis, starting with a boundedness assumption.
Assumption 15. In the case of 2 ≤ q < ∞, we assume that there exists a constant M q such that φ(·, v) q,ρ ≤ M q for any v ∈ V. In the case of q = ∞, we assume φ(·, v) ∈ C 0 (X ) and φ(·, v) ∞ ≤ M ∞ for any v ∈ V.
We also assume that the Rademacher complexities of the corresponding conjugate spaces are well-controlled.
Assumption 16. We assume a Rademacher complexity bound for the space F̃ q ′ ,ρ or the space B̃: there exists a constant R q such that for any v 1 , · · · , v n ∈ V,
Rad n (F̃ q ′ ,ρ (1)) ≤ R q / √ n for the case 2 ≤ q < ∞,   (27)
or Rad n (B̃(1)) ≤ R ∞ / √ n for the case q = ∞.   (28)
Note that O(n −1/2 ) is the natural scaling of the Rademacher complexity of most function classes of interest. The following lemma demonstrates that the Rademacher complexity bounds specified in Assumption 16 are applicable to many natural choices of feature function φ. The proof is provided in Appendix D.1.
Lemma 17.
1. For 2 ≤ q < ∞, suppose that the feature function φ satisfies φ(·, v) ρ ≤ R for any v ∈ V. Then (27) in Assumption 16 holds with R q ≲ √ q R.
2. For q = ∞, suppose that X and V are both supported on {x : x 2 ≤ R} and φ(x, v) = σ(x ⊤ v), where σ : R → R is L-Lipschitz and σ(0) = 0. Then (28) in Assumption 16 holds with R ∞ ≤ LR 2 .
Next, we present the random feature approximation bound for functions in F p,π :
Theorem 18. Suppose 1 < p ≤ 2 and v j iid ∼ π for j ∈ [m]. If Assumptions 15 and 16 hold, then w.p. at least 1 − δ over the sampling of {v j } m j=1 , there exists an absolute constant C 1 such that for any C ≥ C 1 , we have
sup f Fp,π ≤1 inf c p ≤Cm 1/p f − (1/m) m j=1 c j φ(·, v j ) q,ρ ≲ ( (M q^{p′−2} R q^2 log^3 m + M q log(1/δ)) / m )^{1/p′} , for 2 ≤ q < ∞,
sup f Fp,π ≤1 inf c p ≤Cm 1/p f − (1/m) m j=1 c j φ(·, v j ) C 0 (X ) ≲ ( (M ∞^{p′−2} R ∞^2 log^3 m + M ∞ log(1/δ)) / m )^{1/p′} , for q = ∞.   (29)
The proof is deferred to Appendix D.2. This theorem establishes the uniform approximability of F p,π by RFMs. In other words, upon sampling the random features φ(·, v 1 ), . . . , φ(·, v m ), any function in F p,π (1) can be approximated effectively using these features. This finding is in agreement with the common use of RFMs in practice, where a single set of random features is repeatedly used to learn multiple functions by optimizing the outer coefficients.
Note that the rate of approximation error scales as O(m −(p−1)/p ). As p approaches 1, this rate deteriorates, consistent with the observation that F p,π becomes larger as p decreases towards 1, as shown in (13). It should be noted that our bound blows up when p = 1, but this does not imply that F 1,π cannot be approximated by RFMs with a rate. In fact, F 1,π is a subset of B and B can be approximated with a rate by RFMs. However, in general, the rate scales like O(m −1/d ) [WL22], which suffers from the CoD if no further conditions on the feature function and weight distribution are imposed. These observations suggest that our bound is not tight when p is close to 1.
It is noteworthy that the approximation rate is independent of the value of q for all cases. Specifically, the L 2 approximation and the L ∞ approximation have the same rate. This contrasts with the estimation problem, where L ∞ estimation suffers from the CoD, while L 2 estimation does not. More detailed discussion on this issue can be found in Section 7.
Comparison with [Bac17b]. The special case of p = q = 2 (i.e., the L 2 (π) approximation of the RKHS F 2,π ) has been proved in [Bac17b]. However, the proof heavily relies on the Hilbert structure of F 2,π and exploits the corresponding covariance operator. In contrast, our result holds for a general choice of p and q and is derived naturally from our duality framework. Notably, our proof is substantially simpler and more concise.
Theorem 18 provides bounds solely on the approximation error. We next study learning in the finite-sample case.
Theorem 19. Suppose that f * ∈ F p,π (1) and
x i iid ∼ ρ, y i = f * (x i ) + ξ i , where the noise is distributed as ξ i ∼ N (0, ς 2 ). Let v 1 , · · · , v m be i.i.d. random weights sampled from π. Consider the estimator f̂ = 1 m m j=1 ĉ j φ(·, v j ) with ĉ given by
ĉ := argmin c p ≤λ 1 n n i=1 ( y i − 1 m m j=1 c j φ(x i , v j ) ) 2 .   (30)
Assume sup x∈X ,v∈V |φ(x, v)| ≤ M . Then, with an appropriate choice of λ, for any δ ∈ (0, 1), it holds w.p. at least 1 − δ over the sampling of data {(
x i , y i )} n i=1 and features {v j } m j=1 that f − f * 2 ρ M ς p ′ + log(1/δ) n + p ′ M 2 log 3 n + M log(1/δ) n + M p ′ log 3 m + M log(1/δ) m 2/p ′ .
This theorem provides an upper bound on the total error of learning functions in F p,π ; the proof is deferred to Appendix D.3. Note that here we focus on norm-constrained estimators, but similar arguments can be straightforwardly extended to penalized estimators. The upper bound on the total error is comprised of two components: the estimation error and the approximation error. Notably, the estimation error rate is independent of the value of p, and exhibits scaling of O(n −1 ) in the noiseless case and O(n −1/2 ) in the presence of noise. This stands in contrast to the approximation error, whose rate deteriorates as p decreases to 1.
In summary, we have proved that RFMs can effectively learn functions in F p,π as long as p > 1, with rates independent of d. This demonstrates the applicability of RFMs beyond the kernel regime where p = 2.
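To make the estimator (30) concrete, here is a hedged numerical sketch for the case p = 2 only, where the constraint set is an ℓ2 ball and projected gradient descent suffices. The feature map, target function, noise level, and constraint radius are all illustrative assumptions rather than the paper's choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, m, lam, noise = 5, 500, 200, 50.0, 0.1
X = rng.normal(size=(n, d)) / np.sqrt(d)
V = rng.normal(size=(m, d))                        # random weights v_j ~ pi (assumed Gaussian)
f_star = lambda X: np.sin(X.sum(axis=1))           # stand-in target function
y = f_star(X) + noise * rng.normal(size=n)

Phi = np.maximum(X @ V.T, 0.0) / m                 # design matrix, rows (1/m) * phi(x_i, v_j)
c = np.zeros(m)
step = 1.0 / np.linalg.norm(Phi, 2) ** 2           # conservative step size for gradient descent
for _ in range(2000):                              # projected gradient descent on the square loss
    c -= step * (2.0 / n) * Phi.T @ (Phi @ c - y)
    if np.linalg.norm(c) > lam:                    # project back onto {||c||_2 <= lam}
        c *= lam / np.linalg.norm(c)

X_test = rng.normal(size=(2000, d)) / np.sqrt(d)
print(np.mean((np.maximum(X_test @ V.T, 0.0) / m @ c - f_star(X_test)) ** 2))
```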
L ∞ learnability of RKHS
It is well-known that functions in an RKHS can be learned efficiently under the L 2 metric, but this may not be sufficient when the L ∞ metric is more relevant, e.g., in security- and safety-critical applications. In this section, we consider the problem of learning functions in an RKHS under the L ∞ norm:
f − f * ∞ = f − f * C 0 (X ) ,
where f̂ is an estimator and f * is the target function. By Lemma 7, for any kernel k, there exist φ : X × V → R and π ∈ P(V) such that k(x,
x ′ ) = k π (x, x ′ ) = V φ(x, v)φ(x ′ , v) dπ(v) and H k = F 2,π .
Hence it suffices to consider the F 2,π space, for which we can apply our dual equivalence. Specifically, Theorem 12 implies that the L ∞ estimation of F 2,π is equivalent to the L 2 (π)-approximation of B̃ with random features; the latter has been systematically investigated in [WL22, SX22]. In particular, we shall utilize the spectral-based approach developed in [WL22] in our analysis.
Lower bounds
To present our results, we need to introduce the dual kernel. For any γ ∈ P(X ), define the dual kernel k̃ γ : V × V → R as
k̃ γ (v, v ′ ) := X φ(x, v)φ(x, v ′ ) d γ(x).   (31)
Theorem 20. Recall that Λ k̃ γ ,π (m) = ∞ i=m+1 µ k̃ γ ,π i , where {µ k̃ γ ,π i } ∞ i=1 denotes the spectrum of the dual kernel k̃ γ with respect to π in decreasing order. Let Λ̃ π (m) = sup γ∈P(X ) Λ k̃ γ ,π (m). Recall that we use A n to denote the set of all measurable estimators.
1. For any input data x 1 , · · · , x n ∈ X . We have
inf Tn∈An sup f F 2,π ≤1 T n ({(x i , f (x i ))} n i=1 ) − f C 0 (X ) ≳ ( Λ̃ π (n) )^{1/2} .
2. Suppose that x i iid ∼ ρ and ξ i iid ∼ N (0, ς 2 ). Let s̃ ρ = V k̃ ρ (v, v) d π(v) be the trace of k̃ ρ . We have
inf Tn∈An sup f F 2,π ≤1 E T n ({(x i , f (x i ) + ξ i )} n i=1 ) − f C 0 (X ) ≳ min{ 1, ς/ √ s̃ ρ } ( Λ̃ π (n) )^{1/2} .
The proof is deferred to Appendix E.1. This theorem shows that the error of L ∞ estimation can be lower bounded using the spectrum of dual kernels in both the sample-dependent case and the distribution-dependent case. Next, we further show that lower bounds can also be controlled by the primal kernel k.
Corollary 21. Let H k ⊂ C 0 (X ) be the RKHS associated with the kernel k.
1. For any input data x 1 , · · · , x n ∈ X , we have
inf Tn∈An sup f H k ≤1 T n ({(x i , f (x i ))} n i=1 ) − f C 0 (X ) ≳ ( sup γ∈P(X ) Λ k,γ (n) )^{1/2} .
2. Suppose that x i iid ∼ ρ and ξ i iid ∼ N (0, ς 2 ). Let s ρ = X k(x, x) d ρ(x) be the trace of k. We have
inf Tn∈An sup f H k ≤1 E T n ({(x i , f (x i ) + ξ i )} n i=1 ) − f C 0 (X ) ≳ min{ 1, ς/ √ s ρ } ( sup γ∈P(X ) Λ k,γ (n) )^{1/2} .
The above result relies on the F 2,π representation of the RKHS, which is established in Lemma 7. Specifically, we show that for any kernel k, there exist a feature function φ : X × V → R and a probability measure π ∈ P(V) such that H k = F 2,π and the spectrum of the corresponding dual kernel is the same as that of k. This provides a crucial link between the primal and dual representations of the RKHS, and enables us to leverage the duality framework to derive the minimax estimation rate. For a detailed proof, we refer to Appendix E.2.
Upper bounds
We now turn to establishing similar upper bounds on the uniform estimation error. Specifically, we focus on dot-product kernels. Assume X = V = S d−1 , π = ρ = τ d−1 , and that the feature function is given by a single neuron without bias: φ(x, v) = σ(x ⊤ v), where σ : R → R is a nonlinear activation function. In this case, F 2,π and F̃ 2,ρ are essentially the same space and we will use F 2,τ d−1 to denote them without specifying the input domain for simplicity.
The kernel associated with F 2,τ d−1 is a dot-product kernel:
k(x, x ′ ) = S d−1 σ(v ⊤ x)σ(v ⊤ x ′ ) d τ d−1 (v) = κ(x ⊤ x ′ ),(32)
where κ : [−1, 1] → R. Let {λ j } ∞ j=1 be the eigenvalues of k on L 2 (τ d−1 ) in decreasing order. The spectral decomposition of k is given by
κ x T x ′ = ∞ k=0 N (d,k) j=1 t k Y k,j (x)Y k,j x ′ ,(33)
where t k is the eigenvalue and the spherical harmonic Y k,j is the corresponding eigenfunction, satisfying E x ′ ∼τ d−1 [κ(x ⊤ x ′ )Y k,j (x ′ )] = t k Y k,j (x)
. Note that {λ j } j are the eigenvalues counted with multiplicity, while {t k } k are the eigenvalues counted without multiplicity. We refer to [WL22, Section 2.1] for more details about the eigendecomposition of dot-product kernels.
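As an illustrative numerical check (not from the paper), the spectrum {λ_j} of a dot-product kernel of the form (32) can be estimated from the Gram matrix of uniform samples on the sphere; the kernel itself is approximated by averaging over many random features, which is an additional assumption of this sketch. Comparing ReLU with tanh gives a rough feel for the spectral decay discussed in the Examples below.

```python
import numpy as np

def sphere(n, d, rng):
    x = rng.normal(size=(n, d))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def tail_sums(sigma, d=3, n=400, m_feat=20000, seed=0, ms=(5, 20, 80)):
    """Estimate the eigenvalue tail sums Lambda_{k,tau}(m) of kappa(x^T x') for activation sigma."""
    rng = np.random.default_rng(seed)
    X, V = sphere(n, d, rng), sphere(m_feat, d, rng)
    F = sigma(X @ V.T)                                 # features phi(x_i, v_j) = sigma(x_i^T v_j)
    K = F @ F.T / m_feat                               # Monte Carlo estimate of the kernel matrix
    lam = np.sort(np.clip(np.linalg.eigvalsh(K / n), 0.0, None))[::-1]
    return {m: float(lam[m:].sum()) for m in ms}

print("ReLU :", tail_sums(lambda t: np.maximum(t, 0.0)))
print("tanh :", tail_sums(np.tanh))                    # prints estimated tail sums for each m
```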
Theorem 22. Suppose k : S d−1 × S d−1 → R is a dot-product kernel taking the form of (32).
For any non-increasing function L :
N + → R + that satisfies Λ k,τ d−1 (m) ≤ L(m), let q L (d) = sup k≥1 L(k) / L((d+1)k). Let {(x i , y i )} n i=1 be n samples independently drawn from x i ∼ τ d−1 , y i = f * (x i ) + ξ i ,
where the noise ξ i 's are mean-zero and ς-subgaussian and the target function f * ∈ H k (1). Consider the estimator f̂ = argmin
f H k ≤1 1 n n i=1 (f (x i ) − y i ) 2 .
Then with probability at least 1 − δ over the sampling of {(
x i , y i )} n i=1 , we have
f̂ − f * C 0 (S d−1 ) ≲ inf m≥1 { ( q L (d) L(m) )^{1/2} + √ m ( ǫ(n, ς, δ) + e(n, δ) ) } ,   (34)
where ǫ(n, ς, δ) = ( ς 2 κ(1)(1 + log(1/δ)) / n )^{1/4} and e(n, δ) = ( κ(1) 2 log 3 n + κ(1) log(1/δ) ) / n .
This theorem presents a spectral-based upper bound for the L ∞ estimation error and the proof can be found in Appendix E.3. To obtain the tightest bound, one can choose L(m) = Λ k,τ d−1 (m). However, since the exact value of Λ k,τ d−1 (m) is often unknown, the introduction of L(m) is mainly for the convenience of calculating the constant q L (d).
Take L(m) = Λ k,τ d−1 (m) = ∞ j=m+1 λ j . If λ j ∼ j −(1+2β) , then we roughly have that L(m) ∼ m −2β and q L (d) ∼ d 2β . Plugging them into (34) yields
f̂ − f * C 0 (X ) ≲ β,ς,δ inf m≥1 { d β m −β + m 1/2 n −1/4 } ≲ β,ς,δ d^{β/(2β+1)} n^{−β/(2(2β+1))} ,   (35)
where we hide constants that depend on β, ς and δ. It is evident that if β does not depend on d, then the error rate does not exhibit the CoD.
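As a quick sanity check of the balancing step leading to (35), the small script below numerically minimizes m ↦ d^β m^{−β} + m^{1/2} n^{−1/4} and compares it with the claimed rate d^{β/(2β+1)} n^{−β/(2(2β+1))}; the values of β, d, and n are arbitrary illustrative choices.

```python
import numpy as np

beta, d = 2.0, 10
for n in (10**4, 10**6, 10**8):
    m = np.arange(1, 200_000)
    bound = d**beta * m**(-beta) + np.sqrt(m) * n**(-0.25)   # objective inside (35)
    rate = d**(beta / (2*beta + 1)) * n**(-beta / (2 * (2*beta + 1)))
    print(n, bound.min(), rate)                              # agree up to a bounded factor
```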
Remark 23. The rotational invariance assumption plays a critical role in the analysis presented above, and the result may potentially be extended to settings where the densities of π and ρ are strictly positive. In addition, by utilizing localization techniques (see, e.g., [Wai19, Chapter 13]) to tackle noise-induced errors, one may be able to obtain tighter bounds. However, our main focus here is to understand how the error rate depends on the kernel spectrum, rather than to pursue optimal rates.
Examples
We now turn to instantiate the lower and upper bounds established above for concrete examples. Specifically, we focus on dot-product kernels that take the form of (32) and discuss how the smoothness of σ(·) affects the L ∞ learnability. These kernels are of particular interest in understanding RFMs and neural networks [MM19, WL22, JGH18]. To present our results, we will restate some results from [WL22, Section 4] when needed.
Non-smooth activations. Consider the ReLU α activation function σ(t) = max(0, t) α with α ∈ N + . [WL22, Proposition 5] shows that there exists a constant C α,d depending on 1/d polynomially such that
Λ k,τ d−1 (m) ≥ C α,d m − 2α+1 d−1 .(36)
Combining (36) with Corollary 21 yields the following:
Corollary 24. For σ(t) = max(0, t) α with α ∈ N + , there exists a constant C α,d that depends on 1/d polynomially such that 1. For any input data x 1 , · · · , x n ∈ X , we have
inf Tn∈An sup f H k ≤1 T n ({(x i , f (x i ))} n i=1 ) − f C 0 (S d−1 ) ≥ C α,d n^{−(2α+1)/(2(d−1))} .
2. Suppose x i iid ∼ ρ and ξ i iid ∼ N (0, ς 2 ) for i = 1, . . . , n. We have
inf Tn∈An sup f H k ≤1 E T n ({(x i , f (x i ) + ξ i )} n i=1 ) − f C 0 (S d−1 ) ≥ min(1, ς) C α,d n^{−(2α+1)/(2(d−1))} .
The lower bound given in Corollary 24 suggests that kernel methods induced by the ReLU α activation functions suffer from the CoD. This immediately implies that L ∞ learning with popular ReLU neural networks also suffers from the CoD, since the Barron space B contains the RKHS F 2,τ d−1 as a subset (according to Lemma 7).
Smooth activations. We now turn to smooth activation functions, such as sigmoid, softplus, arctan, GELU, and Swish/SiLU. All popular smooth activation functions satisfy the following assumption:
Assumption 25. Assume that B k := max t∈R |σ (k) (t)| ≲ Γ(k − 1).
See [WL22] for the verification of this assumption. Under this assumption, [WL22, Proposition 9] shows that
Λ k,τ d−1 (m) ≲ 1/m.   (37)
Setting L(m) = C/m in Theorem 22 yields the following:
Corollary 26. Suppose k : S d−1 × S d−1 → R is a dot-product kernel taking the form of (32) and the activation function σ satisfies Assumption 25. Let {(x i , y i )} n i=1 be n samples independently drawn from x i ∼ τ d−1 , y i = f * (x i ) + ξ i , where the noise ξ i 's are mean-zero and ς-subgaussian and the target function f * ∈ H k (1). Consider the estimator
f̂ = argmin f H k ≤1 1 n n i=1 (f (x i ) − y i ) 2 .
Then with probability at least 1 − δ over the sampling of {(
x i , y i )} n i=1 , we have f̂ − f * C 0 (S d−1 ) ≲ ς,δ d 1/4 n −1/8 .   (38)
Corollary 26 shows that the L ∞ error scales as O(d 1/4 n −1/8 ). Thus, we can conclude that in this case L ∞ learning is tractable.
Conclusion
In this paper, we proposed a duality framework to analyze the approximation and estimation of the F p,π and Barron spaces, which are relevant to understanding kernel methods, RFMs, and neural networks. Specifically, we establish a dual equivalence between approximation and estimation for learning functions in these spaces, which allows us to convert an approximation problem to an estimation problem and vice versa. Therefore, in the analysis, one only needs to focus on the easier of the two. To demonstrate the power of our duality framework, we provide comprehensive analyses of two specific problems: random feature learning beyond RKHS and the L ∞ estimation of RKHS. Our duality analysis recovers existing results with much simpler proofs and stronger conclusions. To establish the dual equivalence, we introduce an information-based complexity to measure the capacity of a function class and show how it controls the minimax error of estimating that function class.
For future work, it is promising to apply our duality framework to different learning settings, e.g., unsupervised learning, out-of-distribution learning, and reinforcement learning. In this paper, we only consider the supervised learning setting; in fact, a similar analysis has been conducted in [LH22, LH23] to understand reinforcement learning. In addition, the proposed information-based complexity can also be useful for studying the statistical properties of other function spaces. For instance, [Nov06] already adopted similar ideas to study the optimal approximation of various traditional function spaces, such as Sobolev spaces.
[Aro50] Nachman Aronszajn, Theory of reproducing kernels, Transactions of the American Mathematical Society 68 (1950).
[EMW20] Weinan E, Chao Ma, and Lei Wu, A comparative analysis of optimization and generalization properties of two-layer neural network and random feature models under gradient descent dynamics, Science China Mathematics (2020), 1-24.
[EMW21] Weinan E, Chao Ma, and Lei Wu, The Barron space and the flow-induced function spaces for neural network models, Constructive Approximation (2021).
[RR07] Ali Rahimi and Benjamin Recht, Random features for large-scale kernel machines, Advances in Neural Information Processing Systems 20 (2007).
[RR08] Ali Rahimi and Benjamin Recht, Uniform approximation of functions with random bases, 2008 46th Annual Allerton Conference on Communication, Control, and Computing, IEEE, 2008, pp. 555-561.
[SHB22] Len Spek, Tjeerd Jan Heeringa, and Christoph Brune, Duality for neural networks through reproducing kernel Banach spaces, arXiv preprint arXiv:2211.05020 (2022).
[SHS01] Bernhard Schölkopf, Ralf Herbrich, and Alex J. Smola, A generalized representer theorem, International Conference on Computational Learning Theory, Springer, 2001, pp. 416-426.
[SSBD14] Shai Shalev-Shwartz and Shai Ben-David, Understanding machine learning: From theory to algorithms, Cambridge University Press, 2014.
[SST10] Nathan Srebro, Karthik Sridharan, and Ambuj Tewari, Smoothness, low noise and fast rates, Advances in Neural Information Processing Systems, vol. 23, 2010.
[SX20] Jonathan W. Siegel and Jinchao Xu, Approximation rates for neural networks with general activation functions, Neural Networks 128 (2020), 313-321.
[SX21] Jonathan W. Siegel and Jinchao Xu, Characterization of the variation spaces corresponding to shallow neural networks, arXiv preprint arXiv:2106.15002 (2021).
[SX22] Jonathan W. Siegel and Jinchao Xu, High-order approximation rates for shallow neural networks with cosine and ReLU k activation functions, Applied and Computational Harmonic Analysis 58 (2022), 1-26.
[Ver18] Roman Vershynin, High-dimensional probability: An introduction with applications in data science, vol. 47, Cambridge University Press, 2018.
A Proofs in Section 3
A.1 Proof of Proposition 3
Upper bound. Consider the minimum-norm interpolation estimator S n ∈ A n defined as
S n ({(x i , y i )} n i=1 ) = argmin h∈F , h(x i )=y i h F .(39)
Suppose y i = g(x i ) for some g ∈ F(1). Obviously, g itself satisfies the constraint of the minimization problem (39) and thus, we have
S n ({(x i , g(x i ))} n i=1 ) F ≤ g F and S n ({(x i , g(x i ))} n i=1 )(x i ) = g(x i ) for all i ∈ [n], implying
S n ({(x i , g(x i ))} n i=1 ) − g ∈ {f ∈ F : f F ≤ 2, f (x i ) = 0, i = 1, · · · , n}.   (40)
Therefore,
inf Tn∈An sup g F ≤1 T ({(x i , g(x i ))} n i=1 ) − g M ≤ sup g F ≤1 S n ({(x i , g(x i ))} n i=1 ) − g M ≤ sup f F ≤2, f (x i )=0 f M = 2Î n (F(1), M, 0),
where the second step follows from (40). Lower bound. By the definition of the I-complexity, for any ǫ > 0, there must exist a h ∈ F such that h ǫ F ≤ 1, h(x i ) = 0 for i = 1, · · · , n and h ǫ M ≥Î n (F(1), M, 0) − ǫ. Note that any estimator T n ∈ A n is unable to distinguish h ǫ and −h ǫ since
T n ({(x i , h ǫ (x i ))} n i=1 ) = T n ({(x i , −h ǫ (x i ))} n i=1 ).
Therefore, for any T n ∈ A n , we have
sup g F ≤1 T ({(x i ,g(x i ))} n i=1 ) − g M ≥ 1 2 T ({(x i , h ǫ (x i ))} n i=1 ) − h ǫ M + T ({(x i , −h ǫ (x i ))} n i=1 ) − (−h ǫ ) M ≥ h ǫ M ≥Î n (F(1), M, 0) − ǫ,
where the second inequality follows from the triangle inequality of · M . Taking ǫ → 0 we complete the proof.
A.2 Proof of Proposition 4
For any function f ∈ F, let D n f denote the law of
{(x i , f (x i ) + ξ i )} n i=1
. For a given estimator T n ∈ A n and a target function f ∈ F, the performance of T n is measured by
d(T n , f ) := T n ({(x i , f (x i ) + ξ i )} n i=1 ) − f M .
By the two-point Le Cam's method [Wai19, Chapter 15.2], we have for any
f 1 , f 2 ∈ F(1) that inf Tn∈An sup f F ≤1 E[d(T n , f )] ≥ inf Tn∈An max {E[d(T n , f 1 )], E[d(T n , f 2 )]} ≥ f 1 − f 2 M 2 1 − TV(D n f 1 , D n f 2 ) .
Applying Pinsker's inequality (for two probability distributions P, Q on the same domain, P − Q TV ≤ √(KL(P ||Q)/2)) [Wai19, Lemma 15.2], we obtain
inf
Tn∈An sup f F ≤1 E[d(T n , f )] ≥ f 1 − f 2 M 2 1 − KL(D n f 1 D n f 2 ) 2 .(41)
The KL divergence between D n f 1 and D n f 2 can be computed by
KL(D n f 1 D n f 2 ) = E D f n 1 log d D n f 1 d D n f 2 = E D n f 1 log n i=1 ρ(x i ) exp − ξ i 2 2ς 2 n i=1 ρ(x i ) exp − f 2 (x i )+ξ i −f 1 (x i ) 2 2ς 2 = E x i ∼ρ, ξ i ∼N (0,ς 2 I d ) n i=1 f 2 (x i ) − f 1 (x i ) + ξ i 2 − ξ i 2 2ς 2 = n 2ς 2 f 2 − f 1 2 ρ .(42)
Combining (41) and (42), we arrive at
inf Tn∈An sup f F ≤1 E[d(T n , f )] ≥ f 1 − f 2 M 2 1 − √ n f 2 − f 1 ρ 2ς (43)
Noticing that
I ρ (F(1), M, ς/ √ n) = sup f F ≤1, f ρ ≤ς/ √ n f M ,
there must exist a function f̄ ∈ F(1) such that
f̄ ρ ≤ ς/ √ n and f̄ M ≥ (2/3) I ρ (F(1), M, ς/ √ n). Substituting f 1 = 0, f 2 = f̄ into (43) yields inf Tn∈An sup f F ≤1 E[d(T n , f )] ≥ (1/6) I ρ (F(1), M, ς/ √ n).
We complete the proof.
B Proof of Lemma 9
First, we show that B = ∪ π∈P(V) F p,π is a consequence of
f B = inf π f Fp,π .(44)
• If f ∈ ∪ π∈P(V) F p,π , there must exist π f ∈ P(V) such that f Fp,π f < ∞. By (44), we have
f B = inf π∈P(V) f Fp,π ≤ f Fp,π f < ∞. Thus, f ∈ B. This implies that ∪ π∈P(V) F p,π ⊂ B. • If f ∈ B, then f B = inf π f Fp,π < ∞. This implies that there must exist π f ∈ P(V) such that f Fp,π f < ∞, i.e., f ∈ F p,π f . Thus, B ⊂ ∪ π∈P(V) F p,π .
Now, what remains is to prove (44).
• By Definition 8, for any f ∈ B, there must exist µ ∈ M(V) satisfying f B = µ TV < ∞ and f = V φ(·, v) d µ(v). Let µ = µ + −µ − be the Jordan decomposition of µ with A + = supp(µ + ) and A − = supp(µ − ). Let |µ| = µ + + µ − , π = |µ| µ TV , and
a(v) = µ TV , if v ∈ A + , − µ TV , if v ∈ A − .
Then, a(·) L∞,π = µ TV < ∞ and f = a(v)ϕ(·, v) dπ(v). This implies that f F∞,π = µ TV . Therefore, for any p ∈ [1, ∞], we have
inf π ′ ∈P(V) f p,π ′ ≤ inf π ′ ∈P(V) f ∞,π ′ ≤ f ∞,π = f B ,(45)
where the first inequality follows from the Hölder's inequality.
• For any f , suppose that π f such that f 1,π f = inf π∈P(V) f 1,π . By the definition of F p,π space, there must exist a(·) such that f
= a(v)ϕ(·, v) dπ f (v) and |a(v)| dπ f (v) = f F 1,π f . By choosing µ f ∈ M(V) such that d µ d π f = a, we have µ f TV = V d |µ f |(v) = V |a(v)| d π f (v) = f F 1,π f and V φ(·, v) d µ f (v) = V a(v)φ(·, v) d π f (v) = f . Thus, we have for any p ∈ [1, ∞] that f B ≤ f F 1,π f = inf π∈P(V) f 1,π ≤ inf π∈P(V) f p,π ,(46)
where the last inequality is due to the Hölder's inequality.
By combining (45) and (46), we complete the proof of (44).
C The rigorous proof of Theorem 14
Lemma 27. Let ν ∈ P(X ). For p ≥ 1, r > 1, define
A p,r = a ∈ L p (π) : a p,π ≤ 1, V a(v)φ(·, v) d π(v) r,ν ≤ ǫ , for p > 1 A 1,r = µ ∈ M(V) : µ TV ≤ 1, V φ(·, v) d µ(v) r,ν ≤ ǫ .
1. For any function g ∈ L p ′ (π), we have
sup a∈Ap,r V a(v)g(v) d π(v) = inf h∈L r ′ (ν) g − X h(x)φ(x, ·) d ν(x) p ′ ,π + ǫ h r ′ ,ν .(47)
2. For any function g ∈ C 0 (V), we have
sup µ∈A 1,r V g(v) d µ(v) = inf h∈L r ′ (ν) g − X h(x)φ(x, ·) d ν(x) C 0 (V) + ǫ h r ′ ,ν .(48)
Proof. We first prove (47).
1. On the one hand, using the duality, we have for any h ∈ L r ′ (ν) that
g − X h(x)φ(x, ·) d ν(x) p ′ ,π + ǫ h r ′ ,ν = sup a p,π ≤1 a, g − X h(x)φ(x, ·) d ν(x) + ǫ h r ′ ,ν = sup a p,π ≤1 V a(v)g(v)π(v) − V h(x) X a(v)φ(x, v) d π(v) d ν(x) + ǫ h r ′ ,ν ≥ sup a∈Ap,r V a(v)g(v)π(v) − h r ′ ,ν X a(v)φ(·, v) d π(v) r,ν + ǫ h r ′ ,ν ≥ sup a∈Ap,r V a(v)g(v)π(v),
where the last step uses the property that X a(v)φ(·, v) d π(v) r,ν ≤ ǫ for a ∈ A p,r . Hence,
sup a∈Ap,r V a(v)g(v) d π(v) ≤ inf h∈L r ′ (ν) g − X h(x)φ(x, ·) d ν(x) p ′ ,π + ǫ h r ′ ,ν .(49)
2. On the other hand, consider the functional T : L p ′ (π) → R given by
T (l) := inf h∈L r ′ (ν) l − X h(x)φ(x, ·) d ν(x) p ′ ,π + ǫ h r ′ ,ν .
It is not hard to verify that • T is sublinear on L p ′ (π), i.e., T (l 1 + l 2 ) ≤ T (l 1 ) + T (l 2 ),
• T (λl) = λT (l) for any λ ≥ 0 and l ∈ L p ′ (π).
For a given g ∈ L p ′ (π), let G = span{g}. By the Hahn-Banach Theorem [Fol99, Section 5], there exists a linear functionalT on L p ′ (π) such that •T (g) = T (g), i.e.,
T (g) = inf h∈L r ′ (ν) g − X h(x)φ(x, ·) d ν(x) p ′ ,π + ǫ h r ′ ,ν ;
•T (l) ≤ T (l) for any l ∈ L p ′ (π), i.e.,
T (l) ≤ inf h∈L r ′ (ν) l − X h(x)φ(x, ·) d ν(x) p ′ ,π + ǫ h r ′ ,ν , for any l ∈ L p ′ (π).(50)
Taking h = 0 in (50) givesT (l) ≤ l p ′ ,π for any l ∈ L p ′ (π)
Hence, T op ≤ 1. By using the fact that L p ′ (π) * = L p (π) and corresponding Riesz representation theorem, there exists a g ∈ L p (π) with a p,π ≤ 1 such that
T (l) = V a g (v)l(v) d π(v), for any l ∈ L p ′ (π),(51)
where a g (·) depends on g asT depends on g.
Noticing that for any h ∈ L r ′ (ν),
X h(x) V a g (v)φ(x, v) d π(v) d ν(x) =T X h(x)φ(x, ·) d ν(x) ≤ ǫ h r ′ ,ν , we can conclude that a g (·) satisfies V a g (v)φ(x, v) d π(v) r,ν ≤ ǫ, implying a g ∈ A p,r .
Thus,
sup a∈Ap,r V a(v)g(v) d π(v) ≥ V a g (v)g(v) d π(v) =T (g) = T (g) = inf h∈L r (ν) g − X h(x)φ(x, ·) d ν(x) p ′ ,π + ǫ h r ′ ,ν(52)
By combining (49) and (52), we complete the proof of (47). The proof for (48) follows the same procedure. The only difference is the L p ′ (π) space is replaced by C 0 (V) equipped with the uniform norm and the L p (π) space is replaced by M(V) equipped with TV norm.
Proof of Theorem 14
We decompose the results of Theorem 14 into the following arguments:
1. For 1 < p, q, r < ∞, we have
sup f Fp,π ≤1, f r,ν ≤ǫ f q,ρ = sup b q ′ ,π ≤1 inf h∈L r ′ (ν) X b(x)φ(x, ·) d ρ(x) − X h(x)φ(x, ·) d ν(x) p ′ ,π + ǫ h r ′ ,ν . 2. For 1 < p, r < ∞, q = ∞, we have sup f Fp,π ≤1, f r,ν ≤ǫ f ∞ = sup µ∈M(V), µ TV ≤1 inf h∈L r ′ (ν) X φ(x, ·) d µ(x) − X h(x)φ(x, ·) d ν(x) p ′ ,π + ǫ h r ′ ,ν .
3. For 1 < q, r < ∞ and p = 1 (the Barron case), we have
sup f B ≤1, f r,ν ≤ǫ f q,ρ = sup b q ′ ,π ≤1 inf h∈L r ′ (ν) X b(x)φ(x, ·) d ρ(x) − X h(x)φ(x, ·) d ν(x) C 0 (V) + ǫ h r ′ ,ν ,
We only prove (1) and the proofs of (2),(3) are analogous. Note that
sup f Fp,π ≤1, f r,ν ≤ǫ f q,ρ = sup f Fp,π ≤1, f r,ν ≤ǫ sup b q ′ ,ρ ≤1 f (x)b(x) d ρ(x)
= sup b q ′ ,ρ ≤1 sup a∈Ap,r X b(x) V a(v)φ(x, v) d π(v) d ρ(x)
= sup b q ′ ,ρ ≤1 sup a∈Ap,r V a(v) X b(x)φ(x, v) d ρ(x) d π(v)
= sup b q ′ ,ρ ≤1 inf h∈L r ′ (ν) X b(x)φ(x, ·) d ρ(x) − X h(x)φ(x, ·) d ν(x) p ′ ,π + ǫ h r ′ ,ν ,
where the last equality follows from Lemma 27. We complete the proof.
D Proofs in Section 6
We first need the following lemmas.
Lemma 28 (Khintchine inequality, [Ver18, Exercise 2.6.5]). Let X 1 , · · · , X N be independent ς-subgaussian random variables with zero means and unit variances, and let a = (a 1 , . . . , a N ) ⊤ ∈ R N .
Then for p ≥ 2 we have N i=1 a i X i L p ≲ ς √ p ( N i=1 a i 2 ) 1/2 .
The next lemma provides a local Rademacher complexity-based generalization bound.
Lemma 29.
[SST10] (local Rademacher complexity-based bound) Let F denote a class of functions from the input space X ⊂ R d to the output space Y ⊂ R, ℓ : X × Y → R denote a loss function. Let µ denote the population distribution on X × Y and (x i , y i ) iid ∼ µ(i = 1, · · · , n) denote the observed data. Suppose that:
• The worst-case Rademacher complexity of F is bounded by R n , i.e., for any x 1 , · · · , x n ∈ X R n (F) := sup
x 1 ,··· ,xn Rad(F) ≤ R n .
• The loss function ℓ is H-smooth with respect to the input, i.e.,
|∂ 2
x,x ℓ(x, y)| ≤ H µ − a.e.
• The loss function ℓ is bounded by b and non-negative, i.e.,
0 ≤ |ℓ(x, y)| ≤ b µ − a.e.
Consider the empirical lossR
(h) = 1 n n i=1 ℓ(h(x i ), y i ),
and the generalization error
R(h) = E µ ℓ(h(x), y).
With probability at least 1 − δ over samples {x i , y i } n i=1 , we have for any h ∈ F,
R(h) ≤R(h) + K R (h) √ H log 3/2 nR n + b log(1/δ) n + H log 3 nR 2 n + b log(1/δ) n ,
where K is an absolute constant.
D.1 Proof of Lemma 17
1. If 1 < q ′ ≤ 2, for any v 1 , · · · , v n ∈ V, we have
E ξ sup g F q ′ ,ρ ≤1 1 n n i=1 ξ i g(v i ) = E ξ sup b q ′ ,ρ ≤1 1 n n i=1 ξ i b(x)φ(x, v i ) d ρ(x) = E ξ 1 n sup b q ′ ,ρ ≤1 X b(x) n i=1 ξ i φ(x, v i ) d ρ(x) = 1 n E ξ n i=1 ξ i φ(·, v i ) q,ρ (a) ≤ 1 n E x∼ρ E ξ n i=1 ξ i φ(x, v i ) q 1/q (b) √ q n E x∼ρ n i=1 φ(x, v i ) 2 1/2 (c) ≤ √ q n E x∼ρ n i=1 φ(x, v i ) 2 1/2 (d) R √ q √ n
where (a) and (c) use Jensen's inequality and the concavity of the function z → z 1/q for q ∈ (1, 2], (b) follows from the Khintchine inequality (Lemma 28), and (d) uses the assumption φ(·, v) ρ ≤ R for any v ∈ V.
2. If q ′ = 1, for any v 1 , · · · , v n ∈ V, we have
E ξ sup g B ≤1 1 n n i=1 ξ i g(v i ) = E ξ sup µ∈M (X ), µ TV ≤1 1 n n i=1 ξ i σ(x ⊤ v i ) d µ(x) = E ξ 1 n sup µ∈M (X ), µ TV ≤1 X n i=1 ξ i σ(x ⊤ v i ) d µ(x) = 1 n E ξ sup x ≤R n i=1 ξ i σ(x ⊤ v i ) (a) L n E ξ sup x ≤R n i=1 (ξ i v i ) ⊤ x = LR n E ξ n i=1 ξ i v i 2 ≤ LR n E ξ n i=1 ξ i v i 2 2 = LR n n i=1 v i 2
where (a) uses the contraction property of Rademacher complexity [SSBD14, Lemma 26.9].
D.2 Proof of Theorem 18
Proof. We only consider the case 2 ≤ q < ∞; the case q = ∞ is analogous. Using the dual equivalence stated in Theorem 14, we write the random feature approximation error as
sup f Fp,π ≤1 inf c 1 ,··· ,cm f − 1 m m j=1 c j φ(·, v j ) q,ρ + ǫ 1 m m j=1 |c j | p 1/p = sup g F q ′ ,ρ ≤1, g p ′ ,πm ≤ǫ g p ′ ,π(53)
where π̂ m = 1 m m j=1 δ v j is the empirical weight measure. Next, we apply Lemma 29 to bound the right-hand side. Note that Assumption 15 implies that functions in F̃ q ′ ,ρ (1) are uniformly bounded by M q . The loss function (x, y) → |x − y| p ′ is p ′ (p ′ − 1)(2M q ) p ′ −2 -smooth. By Lemma 29, with probability at least 1 − δ, we have sup g F̃ q ′ ,ρ ≤1, g p ′ ,π̂m ≤ǫ
g p ′ p ′ ,π ǫ p ′ + p ′ (p ′ − 1)(2M q ) p ′ −2 R 2 q log 3 m + M q log(1/δ) m .
Taking the 1/p ′ power on both sides, we obtain sup g F̃ q ′ ,ρ ≤1, g p ′ ,π̂m ≤ǫ
g p ′ ,π ǫ + M p ′ −2 q R 2 q log 3 m + M q log(1/δ) m 1/p ′ .(54)
Taking
ǫ ≍ M p ′ −2 q R 2 q log 3 m + M q log(1/δ) m 1/p ′ ,(54)
implies that for any f ∈ F p,π (1), there exists c 1 , · · · , c m ∈ R such that c p m 1/p and
f − 1 m m j=1 c j φ(·, v j ) q,ρ M p ′ −2 q R 2 q log 3 m + M q log(1/δ) m 1/p ′ .
We complete the proof.
D.3 Proof of Theorem 19
Lemma 30. Consider the function class of random feature models with m neurons under the ℓ p coefficient constraint:
F̂ p,m (C) := { 1 m m j=1 c j φ(·, v j ) : c p ≤ Cm 1/p }.
Under Assumption 15, for any x 1 , · · · , x n ∈ X , the empirical Rademacher complexity and Gaussian complexity ofF p,m is bounded by
Rad(F̂ p,m (C)) := E ξ∼Unif({±1} n ) sup f ∈F̂p,m(C) 1 n | n i=1 ξ i f (x i )| ≲ CM √(p ′ /n),
G n (F̂ p,m (C)) := E z∼N (0,In) sup f ∈F̂p,m(C) 1 n | n i=1 z i f (x i )| ≲ CM √(p ′ /n).
Proof. Let η 1 , . . . , η n be i.i.d. Rademacher or standard Gaussian random variable. For 1 ≤ i ≤ n, let
y i = (φ(x i , v 1 ), · · · , φ(x i , v m )) ⊤ ∈ R m .
Then the empirical Rademacher/Gaussian complexity is bounded by
1 n E η sup c p≤r n i=1 η i c ⊤ y i = r n E η sup c p ≤1 c ⊤ n i=1 η i y i = r n E η n i=1 η i y i p ′ (a) ≤ r n m j=1 E η n i=1 η i y ij p ′ 1/p ′ (b) r √ p ′ n m j=1 n i=1 y 2 ij p ′ /2 1/p ′ ,
where (a) and (b) are due to the Jensen's and Khintchine inequalities (Lemma 28), respectively. Notice that |y ij | ≤ M for any 1 ≤ i ≤ n and 1 ≤ j ≤ m, we have
r √ p ′ n m j=1 n i=1 y 2 ij p ′ /2 1/p ′ ≤ rM √ p ′ m 1/p ′ √ n .
Taking r = C m 1/p ′ we complete the proof. Based on the approximation bound given in Theorem 18 and the Rademacher complexity bound given in Lemma 30, the proof is standard (see, e.g., [Wai19, Chapter 13.3]).
Lemma 31. Suppose that sup x∈X , v∈V |φ(x, v)| ≤ M . For any x 1 , · · · , x n ∈ X , let ρ̂ n = 1 n n i=1 δ x i be the empirical measure. Then, w.p. at least 1 − δ over the sampling of {v j } m j=1 , there exist coefficients c̃ 1 , · · · , c̃ m ∈ R with c̃ p ≲ m 1/p such that the random feature model
f̃ = 1 m m j=1 c̃ j φ(·, v j ) satisfies f̃ − f * 2 ρ̂n ≲ ( (M p ′ log 3 m + M log(1/δ)) / m )^{2/p ′} .
Proof. The assumption sup x∈X ,v∈V |φ(x, v)| ≤ M implies that Assumption 15 holds for ρ = ρ̂ n with M q ≲ M and R q ≲ M . The result is then obtained by applying Theorem 18 with ρ = ρ̂ n .
Lemma 32. Consider the linear class F̂ p,m (C). Let {ξ i } n i=1 be independent noise distributed as N (0, ς 2 ), and suppose sup x∈X , v∈V |φ(x, v)| ≤ M . Then, for any x 1 , · · · , x n ∈ X and v 1 , · · · , v m ∈ V, with probability at least 1 − δ over
{ξ i } n i=1 , we have sup f ∈Fp,m(C) 1 n n i=1 ξ i f (x i ) CM ς p ′ + log(1/δ) n .
Proof. Note that any function in F̂ p,m (C) is uniformly bounded by CM . Thus the map (ξ 1 , · · · , ξ n ) → sup f ∈F̂p,m(C) 1 n n i=1 ξ i f (x i ) is (CM/ √ n)-Lipschitz.
The proof is completed by the Gaussian concentration inequality (see, e.g., [Ver18, Theorem 5.2.2]) and the Gaussian complexity bound stated in Lemma 30.
Proof of Theorem 19. From Lemma 31, with probability at least 1 − δ, there exists a random feature function f̃ = 1 m m j=1 c̃ j φ(·, v j ) such that c̃ p ≤ C 1 m 1/p and
1 n f (x i ) − f * (x i ) 2 M p ′ log 3 m + M log(1/δ) m 2/p ′(55)
where C 1 is an absolute constant. We choose the constraint parameter as λ = C 1 m 1/p . Since f̂ is the minimizer of the empirical loss, we have
1 n n i=1 ( f * (x i ) + ξ i − f̂ (x i ) ) 2 ≤ 1 n n i=1 ( f * (x i ) + ξ i − f̃ (x i ) ) 2 .   (56)
Equivalently, (56) can be rewritten as
1 n n i=1 ( f̂ (x i ) − f * (x i ) ) 2 ≤ 1 n n i=1 ( f̃ (x i ) − f * (x i ) ) 2 + 2 n n i=1 ξ i ( f̂ (x i ) − f̃ (x i ) ) .   (57)
By applying Lemma 32 to bound the second term in the right hand side of (57) and combining (55) we arrive at
1 n n i=1 f (x i ) − f * (x i ) 2 M ς p ′ + log(1/δ) n + M p ′ log 3 m + M log(1/δ) m 2/p ′ .
Finally, noting that ĉ p ≲ m 1/p , we combine Lemma 29 and the Rademacher complexity bound of the linear class stated in Lemma 30 to complete the proof.
E Proofs in Section 7
E.1 Proof of Theorem 20
Let us first briefly recall some notations and definitions. For any feature function φ : X × V → R and π ∈ P(V), γ ∈ P(X ), the primal kernel is given by k π (x, x ′ ) := φ(x, v)φ(x ′ , v) dπ(v) and the dual kernel is given by k̃ γ (v, v ′ ) := φ(x, v)φ(x, v ′ ) dγ(x). Consider the spectral decomposition of k π with respect to γ ∈ P(X ):
k π (z, z ′ ) = ∞ j=1 µ π,γ j e j (z)e j (z ′ ),
where {µ π,γ j } j≥1 are eigenvalues in decreasing order and {e j } are corresponding eigenfunctions which form an orthonormal basis of L 2 (γ). Analogously, let {μ γ,π j } j≥1 be the eigenvalues of the dual kernelk γ with respect to the L 2 (π) inner product.
Our proof needs the following two lemmas related to the random feature approximation of RKHS and Barron space.
Lemma 33. (A restatement of [WL22, Proposition 1]) Given a feature function φ : X × V → R and ρ ∈ P(X ). For any basis function ϕ 1 , · · · , ϕ n ∈ L 2 (ρ), consider the corresponding Barron space B given in Definition 8, we have
sup g B ≤1 inf c 1 ,··· ,cn∈R g − n i=1 c i ϕ i ρ ≥ sup π∈P(V) ( ∞ j=n+1 µ π,ρ j )^{1/2} ,
Lemma 34. Let {e i } i≥1 be the eigenfunctions corresponding to the kernel k π with respect to L 2 (ρ). Then, we have
sup h F 2,π ≤R inf c 1 ,··· ,cn h − n i=1 c i e i ρ ≤ R ( µ π,ρ n )^{1/2} .
Proof. Since e 1 , · · · , e n are orthonormal in L 2 (ρ), for any h ∈ F 2,ρ (R), the optimal approximation is given by c i = h, e i ρ . Thus
inf c 1 ,··· ,cn h − n i=1 c i e i 2 ρ = h − n i=1 h, e i ρ e i 2 ρ = ∞ j=n+1 h, e j 2 ρ .(58)
Notice that F 2,π = H kπ and by the definition of RKHS norm, we have
h 2 F 2,π = h 2 H kπ = ∞ j=1 1 µ π,ρ j h, φ j 2 ρ ≤ R 2 ,(59)
the right hand side of (58) is bounded by
∞ j=n+1 h, e i 2 ρ ≤ µ π,ρ n ∞ j=n+1 1 µ π,ρ j h, e j 2 ρ ≤ µ π,ρ n ∞ j=1 1 µ π,ρ j h, e j 2 π ≤ R 2 µ π,ρ n ,
where the last step is due to (59). Thus, we complete the proof.
Proof of Theorem 20. We will prove the sample-dependent (Part I) and distribution-dependent (Part II) cases separately.
Proof of Part I. By Proposition 3, we know that inf Tn∈An sup f F 2,π ≤1
T n ({(x i , f (x i ))} n i=1 ) − f C 0 (X ) ≍ sup f F 2,π ≤1, f (x i )=0
f C 0 (X ) .
Using the dual formulation in Theorem 14, the L ∞ generalization gap is equivalent to the random feature approximation ofB space:
sup f F 2,π ≤1, f (x i )=0 f C 0 (X ) = sup g B ≤1 inf c 1 ,··· ,cn∈R g − n i=1 c i φ(x i , ·) π ≥ inf ϕ 1 ,··· ,ϕn∈L 2 (π) sup g B ≤1 inf c 1 ,··· ,cn∈R g − n i=1 c i ϕ i π .
By applying Lemma 33 on the dual kernelk ρ with respect to the measure π, we complete the proof.
Proof of Part II. By Lemma 4, the minimax error is lower bounded by
$$\sup_{\|f\|_{\mathcal{F}_{2,\pi}}\le1,\ \|f\|_{\rho}\le\varsigma n^{-1/2}}\|f\|_{C^0(\mathcal{X})}.\tag{60}$$
Now we give a lower bound for the quantity (60). By the dual equivalence in Theorem 14, we have for any $\epsilon>0$ that
$$\sup_{\|f\|_{\mathcal{F}_{2,\pi}}\le1,\ \|f\|_{\rho}\le\epsilon}\|f\|_{C^0(\mathcal{X})} = \sup_{\|g\|_{\tilde{\mathcal{B}}}\le1}\ \inf_{c\in L^2(\rho)}\Big\{\Big\|g-\int_{\mathcal{X}} c(x)\phi(x,\cdot)\,d\rho(x)\Big\|_{\pi} + \epsilon\|c\|_{\rho}\Big\} = \sup_{\|g\|_{\tilde{\mathcal{B}}}\le1}\ \inf_{h\in\mathcal{F}_{2,\rho}}\Big\{\|g-h\|_{\pi} + \epsilon\|h\|_{\mathcal{F}_{2,\rho}}\Big\},\quad h=\int_{\mathcal{X}} c(x)\phi(x,\cdot)\,d\rho(x),\tag{61}$$
where the last step uses the definitions of the $\mathcal{F}_{2,\pi}$ and Barron spaces. Let $\{\tilde e_j\}_{j\ge1}$ be the eigenfunctions of the dual kernel $\tilde k_\rho$ on $L^2(\pi)$. Then, for any $g\in\tilde{\mathcal{B}}$, $h\in\mathcal{F}_{2,\rho}$, and any $c_1,\dots,c_n\in\mathbb{R}$,
$$\|g-h\|_{\pi}+\epsilon\|h\|_{\mathcal{F}_{2,\rho}} \ge \Big\|g-\sum_{j=1}^n c_j\tilde e_j\Big\|_{\pi} - \Big\|h-\sum_{j=1}^n c_j\tilde e_j\Big\|_{\pi} + \epsilon\|h\|_{\mathcal{F}_{2,\rho}} \ge \Big\|g-\sum_{j=1}^n c_j\tilde e_j\Big\|_{\pi} - \sqrt{\tilde\mu_n^{\rho,\pi}}\,\|h\|_{\mathcal{F}_{2,\rho}} + \epsilon\|h\|_{\mathcal{F}_{2,\rho}},\tag{62}$$
where the second step follows from Lemma 34. Thus, taking $\epsilon=\sqrt{\tilde\mu_n^{\rho,\pi}}$, we have
$$\sup_{\|f\|_{\mathcal{F}_{2,\pi}}\le1,\ \|f\|_{\rho}\le\sqrt{\tilde\mu_n^{\rho,\pi}}}\|f\|_{C^0(\mathcal{X})}\ \ge\ \sup_{\|g\|_{\tilde{\mathcal{B}}}\le1}\ \inf_{c_1,\cdots,c_n}\Big\|g-\sum_{j=1}^n c_j\tilde e_j\Big\|_{\pi},$$
where the second step follows from (62) and the third step is due to Lemma 33. Hence,
$$\sup_{\|f\|_{\mathcal{F}_{2,\pi}}\le1,\ \|f\|_{\rho}\le\varsigma n^{-1/2}}\|f\|_{C^0(\mathcal{X})} = \frac{\varsigma}{\sqrt{n\tilde\mu_n^{\rho,\pi}}}\ \sup_{\|f\|_{\mathcal{F}_{2,\pi}}\le\sqrt{n\tilde\mu_n^{\rho,\pi}}/\varsigma^2,\ \|f\|_{\rho}\le\sqrt{\tilde\mu_n^{\rho,\pi}}}\|f\|_{C^0(\mathcal{X})}\ \ge\ \min\Big\{1,\ \frac{\varsigma}{\sqrt{n\tilde\mu_n^{\rho,\pi}}}\Big\}\ \sup_{\|f\|_{\mathcal{F}_{2,\pi}}\le1,\ \|f\|_{\rho}\le\sqrt{\tilde\mu_n^{\rho,\pi}}}\|f\|_{C^0(\mathcal{X})},$$
where $\pi_\gamma$ is given in Lemma 35 and the last step is due to Theorem 20. Using the arbitrariness of $\gamma$, we complete the proof of the first part. Next, we prove the second part. For any $\gamma\in\mathcal{P}(\mathcal{X})$, consider the constructed weight probability space $(\mathbb{N}_+,\pi_\gamma)$ and feature map $\phi(x,j)=\sqrt{S_\gamma}\,e_j^{\gamma}(x)$ in Lemma 35. Then, the dual kernel $\tilde k_\rho$ is given by $\tilde k_\rho(j,j')=S_\gamma\int_{\mathcal{X}} e_j(x)e_{j'}(x)\,d\rho(x)$.
Let $\tilde s_\rho = \int_{\mathbb{N}_+}\tilde k_\rho(j,j)\,d\pi_\gamma(j)$. By Theorem 20 we have
$$\inf_{T_n\in\mathcal{A}_n}\ \sup_{\|f\|_{\mathcal{H}_k}\le1}\ \mathbb{E}\,\big\|T_n(\{(x_i,f(x_i)+\xi_i)\}_{i=1}^n)-f\big\|_{C^0(\mathcal{X})} = \inf_{T_n\in\mathcal{A}_n}\ \sup_{\|f\|_{\mathcal{F}_{2,\pi_\gamma}}\le1}\ \mathbb{E}\,\big\|T_n(\{(x_i,f(x_i)+\xi_i)\}_{i=1}^n)-f\big\|_{C^0(\mathcal{X})}\ \gtrsim\ \min\Big\{1,\ \frac{\varsigma}{\sqrt{\tilde s_\rho}}\Big\}\sqrt{\sum_{j=n+1}^{\infty}\lambda_j^{\gamma}}.$$
By noticing that
$$\tilde s_\rho = \int_{\mathbb{N}_+}\tilde k_\rho(j,j)\,d\pi_\gamma(j) = \int_{\mathcal{X}}\sum_{j}\lambda_j^{\gamma}\,e_j^{\gamma}(x)^2\,d\rho(x) = \int_{\mathcal{X}} k(x,x)\,d\rho(x) = s_\rho,$$
we complete the proof of the second part.
E.3 Proof of Theorem 22
Lemma 36. For any $x\in\mathbb{S}^{d-1}$, denote by $\sigma_x:\mathbb{R}^d\to\mathbb{R}$ the single neuron $v\mapsto\sigma(v^\top x)$. Then, we have
$$\inf_{\{c_{i,j}\}_{1\le i\le k,\,1\le j\le N(d,i)}}\Big\|\sigma_x-\sum_{i=1}^{k}\sum_{j=1}^{N(d,i)}c_{i,j}Y_{i,j}\Big\|_{\tau_{d-1}}^2=\sum_{i=k+1}^{\infty}N(d,i)\,t_i,\tag{64}$$
and the optimal approximation satisfies
$$\Big\|\sum_{i=1}^{k}\sum_{j=1}^{N(d,i)}c_{i,j}Y_{i,j}\Big\|_{\mathcal{H}_k}^2=\sum_{i=1}^{k}N(d,i).\tag{65}$$
Proof. The spherical harmonics are related to the Legendre polynomials: denote by $P_k$ the associated Legendre polynomial of degree $k$ in $d$ dimensions; then for any $v,v'\in\mathbb{S}^{d-1}$ we have
$$\sum_{j=1}^{N(d,k)}Y_{k,j}(v)Y_{k,j}(v') = N(d,k)\,P_k(v^\top v').$$
Since $\{Y_{i,j}\}_{0\le i\le k,\,1\le j\le N(d,i)}$ is orthonormal in $L^2(\tau_{d-1})$, we have
$$\inf_{\{c_{i,j}\}_{0\le i\le k,\,1\le j\le N(d,i)}}\Big\|\sigma_x-\sum_{i=0}^{k}\sum_{j=1}^{N(d,i)}c_{i,j}Y_{i,j}\Big\|_{\tau_{d-1}}^2 = \|\sigma_x\|_{\tau_{d-1}}^2-\sum_{i=0}^{k}\sum_{j=1}^{N(d,i)}\langle Y_{i,j},\sigma_x\rangle_{\tau_{d-1}}^2.$$
Hence,
$$\|\sigma_x\|_{\tau_{d-1}}^2-\sum_{i=0}^{k}\sum_{j=1}^{N(d,i)}\langle Y_{i,j},\sigma_x\rangle_{\tau_{d-1}}^2 = \int_{\mathbb{S}^{d-1}}|\sigma(x^\top v)|^2\,d\tau_{d-1}(v) - \sum_{i=0}^{k}\int_{\mathbb{S}^{d-1}}\!\int_{\mathbb{S}^{d-1}}\sigma(x^\top v)\sigma(x^\top v')\sum_{j=1}^{N(d,i)}Y_{i,j}(v)Y_{i,j}(v')\,d\tau_{d-1}(v)\,d\tau_{d-1}(v') = \int_{\mathbb{S}^{d-1}}|\sigma(x^\top v)|^2\,d\tau_{d-1}(v) - \sum_{i=0}^{k}N(d,i)\int_{\mathbb{S}^{d-1}}\!\int_{\mathbb{S}^{d-1}}\sigma(x^\top v)\sigma(x^\top v')\,P_i(v^\top v')\,d\tau_{d-1}(v)\,d\tau_{d-1}(v').$$
By rotation invariance, we have
$$\int_{\mathbb{S}^{d-1}}|\sigma(x^\top v)|^2\,d\tau_{d-1}(v) = \int_{\mathbb{S}^{d-1}}\!\int_{\mathbb{S}^{d-1}}|\sigma(x^\top v)|^2\,d\tau_{d-1}(x)\,d\tau_{d-1}(v) = \int_{\mathbb{S}^{d-1}}\kappa(v^\top v)\,d\tau_{d-1}(v) = \sum_{i=0}^{\infty}N(d,i)\,t_i,\tag{66}$$
and
$$\sum_{i=0}^{k}N(d,i)\int_{\mathbb{S}^{d-1}}\!\int_{\mathbb{S}^{d-1}}\sigma(x^\top v)\sigma(x^\top v')\,P_i(v^\top v')\,d\tau_{d-1}(v)\,d\tau_{d-1}(v') = \sum_{i=0}^{k}N(d,i)\int_{\mathbb{S}^{d-1}}\!\int_{\mathbb{S}^{d-1}}\kappa(v^\top v')\,P_i(v^\top v')\,d\tau_{d-1}(v)\,d\tau_{d-1}(v') = \sum_{i=0}^{k}\sum_{j=1}^{N(d,i)}\int_{\mathbb{S}^{d-1}}\!\int_{\mathbb{S}^{d-1}}\kappa(v^\top v')\,Y_{i,j}(v)Y_{i,j}(v')\,d\tau_{d-1}(v)\,d\tau_{d-1}(v') = \sum_{i=0}^{k}N(d,i)\,t_i,\tag{67}$$
where the last equation is because $\{Y_{i,j}\}$ are the eigenfunctions of the kernel operator:
$$\int_{\mathbb{S}^{d-1}}\kappa(v^\top v')\,Y_{i,j}(v')\,d\tau_{d-1}(v') = t_i\,Y_{i,j}(v).$$
Combining (66) and (67), we complete the proof of (64). Next, we prove (65). Note that the optimal approximation is given by
$$c_{i,j} = \langle\sigma_x, Y_{i,j}\rangle_{\tau_{d-1}}.$$
Similar to the proof of (64), we have
$$\Big\|\sum_{i=1}^{k}\sum_{j=1}^{N(d,i)}c_{i,j}Y_{i,j}\Big\|_{\mathcal{H}_k}^2 = \sum_{i=0}^{k}\frac{1}{t_i}\sum_{j=1}^{N(d,i)}\langle Y_{i,j},\sigma_x\rangle_{\tau_{d-1}}^2 = \sum_{i=0}^{k}\frac{N(d,i)}{t_i}\int_{\mathbb{S}^{d-1}}\!\int_{\mathbb{S}^{d-1}}\kappa(x^\top x')\,P_i(x^\top x')\,d\tau_{d-1}(x)\,d\tau_{d-1}(x') = \sum_{i=1}^{k}N(d,i).$$
Lemma 37. For any non-increasing function $L:\mathbb{N}_+\to\mathbb{R}_+$ that satisfies $\sum_{j=m+1}^{\infty}\lambda_j\le L(m)$, let $q(d,L)=\sup_{k\ge1}\frac{L(k)}{L((d+1)k)}$. Consider the $\mathcal{B}$ ball
$$\mathcal{B}(1)=\Big\{\int_{\mathbb{S}^{d-1}}\sigma_x\,d\mu(x)\ :\ \mu\in\mathcal{M}(\mathbb{S}^{d-1}),\ \|\mu\|_{TV}\le1\Big\},$$
where $\sigma_x:\mathbb{R}^d\to\mathbb{R}$ is given by $v\mapsto\sigma(x^\top v)$. Then for any function $f\in\mathcal{B}(1)$ and $m\in\mathbb{N}_+$, there exists a function $g\in\mathcal{H}_k$ such that $\|g\|_{\mathcal{H}_k}\le m$ and
$$\|f-g\|_{\tau_{d-1}}^2\ \lesssim\ q(d,L)\,L(m).$$
Proof. Let $m_k=\sum_{i=0}^{k}N(d,i)$ and assume that $m\in[m_k,m_{k+1}-1]$. By Lemma 36, there exists a function $g_x\in\mathcal{H}_k$ such that $\|g_x\|_{\mathcal{H}_k}\le m_k\le m$ and
$$\|\sigma_x-g_x\|_{\tau_{d-1}}^2\ \le\ \sum_{j=m_k+1}^{\infty}\lambda_j\ \le\ L(m_k).$$
For a $\mathcal{B}$ function $f=\int_{\mathbb{S}^{d-1}}\sigma_x\,d\mu(x)$ with $\|\mu\|_{TV}\le1$, let $g=\int_{\mathbb{S}^{d-1}}g_x\,d\mu(x)$; then by Jensen's inequality we have $\|g\|_{\mathcal{H}_k}\le\int_{\mathbb{S}^{d-1}}\|g_x\|_{\mathcal{H}_k}\,d|\mu|(x)\le m$ and
$$\|f-g\|_{\tau_{d-1}}^2\ \le\ \int_{\mathbb{S}^{d-1}}\|\sigma_x-g_x\|_{\tau_{d-1}}^2\,d|\mu|(x)\ \le\ L(m_k).$$
Note that $m_{k+1}/m_k\le d+1$ (see [WL22, Proposition 3] for details), so we have $L(m_k)\le\frac{L(m_k)}{L(m_{k+1})}L(m)\le q(d,L)\,L(m)$, which completes the proof.
Lemma 38. For any non-increasing function $L:\mathbb{N}_+\to\mathbb{R}_+$ that satisfies $\sum_{j=m+1}^{\infty}\lambda_j\le L(m)$, let $q(d,L)=\sup_{k\ge1}\frac{L(k)}{L((d+1)k)}$. Let $x_1,\cdots,x_n$ be input data sampled from $\tau_{d-1}$ and $\hat\rho_n=\frac{1}{n}\sum_{i=1}^n\delta_{x_i}$. With probability at least $1-\delta$ over the samples $\{x_i\}_{i=1}^n$, for any $\epsilon>0$ we have
$$\sup_{\|f\|_{\mathcal{H}_k}\le1,\ \|f\|_{\hat\rho_n}\le\epsilon}\|f\|_{\infty}\ \lesssim\ \inf_{m\ge1}\Big\{\sqrt{q(d,L)\,L(m)}+\sqrt{m}\,\big(\epsilon+e(n,\delta)\big)\Big\},$$
where $e(n,\delta)=\sqrt{\frac{\kappa(1)^2\log^3 n+\kappa(1)\log(1/\delta)}{n}}$.
where $K\in\mathbb{R}^{n\times n}$ is the kernel matrix given by $K_{ij}=k(x_i,x_j)$. Note that
$$\mathbb{E}[z^\top Kz]=\sum_{i=1}^{n}\mathbb{E}[\xi_i^2]\,k(x_i,x_i)\le n\varsigma^2\kappa(1).$$
By the Hanson-Wright inequality [Ver18, Theorem 6.2.1], with probability at least $1-\delta$ over the noise $\{\xi_i\}_{i=1}^n$, we have $z^\top Kz\lesssim n\varsigma^2\kappa(1)(1+\log(1/\delta))$ and thus
$$\frac{1}{n}\sum_{i=1}^{n}\big(\hat f(x_i)-f^*(x_i)\big)^2\ \lesssim\ \epsilon(n,\varsigma,\delta):=\frac{\sqrt{\varsigma^2\kappa(1)\,(1+\log(1/\delta))}}{n^{1/4}}.$$
Using Lemma 38 with $\epsilon=\epsilon(n,\varsigma,\delta)$, we complete the proof.
By noticing that $\tilde s_\rho=\int_{\mathcal{V}}\tilde k_\rho(v,v)\,d\pi(v)$, we complete the proof.
Acknowledgements

Lei Wu is supported in part by a startup fund from Peking University. Hongrui Chen is partially supported by the elite undergraduate training program of the School of Mathematical Sciences at Peking University.

E.2 Proof of Corollary 21

We first need the following lemma.

Lemma 35. For any kernel $k:\mathcal{X}\times\mathcal{X}\to\mathbb{R}$ and $\gamma\in\mathcal{P}(\mathcal{X})$, let $\{\lambda^{\gamma}_j\}_{j\ge1}$ denote the spectrum of $k$ with respect to the $L^2(\gamma)$ inner product. Then there exists a weight probability space $(\mathcal{V},\pi_\gamma)$ and a feature map $\phi:\mathcal{X}\times\mathcal{V}\to\mathbb{R}$ such that $\mathcal{H}_k=\mathcal{F}_{2,\pi_\gamma}$ and the spectrum of the dual kernel $\tilde k_\gamma$ with respect to $L^2(\pi_\gamma)$ is again $\{\lambda^{\gamma}_j\}_{j\ge1}$.

Proof. Let $k(x,x')=\sum_{j\ge1}\lambda^{\gamma}_j e^{\gamma}_j(x)e^{\gamma}_j(x')$ be the eigendecomposition of $k$ with respect to the $L^2(\gamma)$ inner product. According to Lemma 7, let $\mathcal{V}=\mathbb{N}_+$, $\pi_\gamma(j)=\lambda^{\gamma}_j/S_\gamma$, and $\phi(x,j)=\sqrt{S_\gamma}\,e^{\gamma}_j(x)$, where $S_\gamma=\sum_{j=1}^{\infty}\lambda^{\gamma}_j$. Then $\pi_\gamma$ is a probability measure on $\mathbb{N}_+$ and $\mathcal{H}_k$ can be represented by random features over $(\mathbb{N}_+,\pi_\gamma)$. Consider the dual kernel on $\mathbb{N}_+$ given by $\tilde k_\gamma(j,j')=\int_{\mathcal{X}}\phi(x,j)\phi(x,j')\,d\gamma(x)$. Since $\{e^{\gamma}_j\}_j$ are an orthonormal basis of $L^2(\gamma)$, we have for any $j,j'\in\mathbb{N}_+$ that $\tilde k_\gamma(j,j')=S_\gamma\,\delta_{j,j'}$, where $\delta_{j,j'}=1$ for $j=j'$ and otherwise $\delta_{j,j'}=0$. For any $j\in\mathbb{N}_+$, define $f_j(j')=\mathbb{1}_{\{j'=j\}}/\sqrt{\pi_\gamma(j)}$; then $\{f_j\}_{j\ge1}$ forms an orthonormal basis of $L^2(\pi_\gamma)$. Moreover, for any $j\in\mathbb{N}_+$ we have $\int \tilde k_\gamma(\cdot,j')f_j(j')\,d\pi_\gamma(j')=\lambda^{\gamma}_j f_j$. This implies that $f_j$ is the eigenfunction and $\lambda^{\gamma}_j$ is the corresponding eigenvalue of $\tilde k_\gamma$. Thus, we complete the proof.

Proof of Corollary 21. For any $\gamma\in\mathcal{P}(\mathcal{X})$, we have

Proof. By the dual formulation in Theorem 14, we rewrite the $L^\infty$ generalization gap as (68). For any $h\in\mathcal{B}(1)$ and $m\in\mathbb{N}_+$, by Lemma 37 there exists a function $g\in\mathcal{H}_k$ such that (69) holds. By Theorem 18, with probability at least $1-\delta$ over the sampled data $x_1,\cdots,x_n$, there exist coefficients $c_1,\cdots,c_n$ such that (70) holds. Combining (69) and (70), and using the arbitrariness of $m$, we can bound the right-hand side of (68), which completes the proof.

Proof of Theorem 22. First, we bound the empirical loss $\frac{1}{n}\sum_{i=1}^{n}\big(\hat f(x_i)-f^*(x_i)\big)^2$. By the optimality of $\hat f$, we have
| []
|
[
"Million-scale Object Detection with Large Vision Model",
"Million-scale Object Detection with Large Vision Model"
]
| [
"Feng Lin \nIntellifusion Inc\nShenzhenChina\n\nHarbin Institute of Technology\nShenzhenChina\n",
"Wenze Hu \nIntellifusion Inc\nShenzhenChina\n",
"Yaowei Wang [email protected] \nPeng Cheng Laboratory\nShenzhenChina\n",
"Yonghong Tian [email protected] \nPeng Cheng Laboratory\nShenzhenChina\n",
"Guangming Lu \nHarbin Institute of Technology\nShenzhenChina\n",
"Fanglin Chen [email protected] \nHarbin Institute of Technology\nShenzhenChina\n",
"Yong Xu \nSouth China University of Technology\nGuangzhouChina\n",
"Xiaoyu Wang \nIntellifusion Inc\nShenzhenChina\n"
]
| [
"Intellifusion Inc\nShenzhenChina",
"Harbin Institute of Technology\nShenzhenChina",
"Intellifusion Inc\nShenzhenChina",
"Peng Cheng Laboratory\nShenzhenChina",
"Peng Cheng Laboratory\nShenzhenChina",
"Harbin Institute of Technology\nShenzhenChina",
"Harbin Institute of Technology\nShenzhenChina",
"South China University of Technology\nGuangzhouChina",
"Intellifusion Inc\nShenzhenChina"
]
| []
| Over the past few years, there has been growing interest in developing a broad, universal, and general-purpose computer vision system. Such a system would have the potential to solve a wide range of vision tasks simultaneously, without being restricted to a specific problem or data domain. This is crucial for practical, real-world computer vision applications. In this study, we focus on the million-scale multi-domain universal object detection problem, which presents several challenges, including cross-dataset category label duplication, label conflicts, and the need to handle hierarchical taxonomies. Furthermore, there is an ongoing challenge in the field to find a resource-efficient way to leverage large pre-trained vision models for million-scale cross-dataset object detection. To address these challenges, we introduce our approach to label handling, hierarchy-aware loss design, and resource-efficient model training using a pre-trained large model. Our method was ranked second in the object detection track of the Robust Vision Challenge 2022 (RVC 2022). We hope that our detailed study will serve as a useful reference and alternative approach for similar problems in the computer vision community. The code is available at https://github.com/linfeng93/Large-UniDet. | 10.48550/arxiv.2212.09408 | [
"https://export.arxiv.org/pdf/2212.09408v2.pdf"
]
| 254,854,644 | 2212.09408 | 5b08d5c63d80ba097a8aa050006b2db1d2b259a7 |
Million-scale Object Detection with Large Vision Model
Feng Lin
Intellifusion Inc
ShenzhenChina
Harbin Institute of Technology
ShenzhenChina
Wenze Hu
Intellifusion Inc
ShenzhenChina
Yaowei Wang [email protected]
Peng Cheng Laboratory
ShenzhenChina
Yonghong Tian [email protected]
Peng Cheng Laboratory
ShenzhenChina
Guangming Lu
Harbin Institute of Technology
ShenzhenChina
Fanglin Chen [email protected]
Harbin Institute of Technology
ShenzhenChina
Yong Xu
South China University of Technology
GuangzhouChina
Xiaoyu Wang
Intellifusion Inc
ShenzhenChina
Million-scale Object Detection with Large Vision Model
Keywords: universal object detection, large vision model, resource-efficient, hierarchical taxonomy
Over the past few years, there has been growing interest in developing a broad, universal, and general-purpose computer vision system. Such a system would have the potential to solve a wide range of vision tasks simultaneously, without being restricted to a specific problem or data domain. This is crucial for practical, real-world computer vision applications. In this study, we focus on the million-scale multi-domain universal object detection problem, which presents several challenges, including cross-dataset category label duplication, label conflicts, and the need to handle hierarchical taxonomies. Furthermore, there is an ongoing challenge in the field to find a resource-efficient way to leverage large pre-trained vision models for million-scale cross-dataset object detection. To address these challenges, we introduce our approach to label handling, hierarchy-aware loss design, and resource-efficient model training using a pre-trained large model. Our method was ranked second in the object detection track of the Robust Vision Challenge 2022 (RVC 2022). We hope that our detailed study will serve as a useful reference and alternative approach for similar problems in the computer vision community. The code is available at https://github.com/linfeng93/Large-UniDet.
Introduction
A universal, general-purpose computer vision system has become a trend in the development of computer vision technology (Gong, Dai, Chen, Li, & Van Gool, 2021; Hasan, Liao, Li, Akram, & Shao, 2021; Y. He et al., 2022; Wang, Cai, Gao, & Vasconcelos, 2019; X. Zhou, Koltun, & Krähenbühl, 2022). This universal vision model is a multi-talented agent that can simultaneously solve a wide range of vision tasks with minimal human intervention. Researchers no longer need to train separate models for each individual vision task or fine-tune an existing model for a specific data domain. Instead, they can achieve all tasks with a single effort. The universality of this model is a promising direction towards human-like AI and has important implications for real-world computer vision applications (Yuan et al., 2021).

Fig. 1 Overview. The design of Large-UniDet is based on a two-stage RCNN-style object detection network. The frozen backbone is a RegNet architecture initialized with the weights of SEER models. The NAS-FPN blocks can be stacked N times for a better accuracy-cost trade-off. The classification branch of Cascade R-CNN outputs 541 class scores including the background, as the cardinality of the unified label space is 540.
In this study, we aim to contribute to the development of universal vision technology. Specifically, we focus on the million-scale universal object detection problem across different domains. The goal is to have a single object detector that can perform the inference process once and generate unified detection results across all datasets, regardless of their differences.
The challenge of developing a million-scale multi-domain universal object detection system lies in two areas: (a) curating a large-scale and diverse training dataset, and (b) creating a robust visual representation approach. The training dataset must cover a wide range of data domains in order to achieve satisfactory results across domains. However, such a dataset is currently not available. Furthermore, building a unified large-scale dataset with dense annotations for object detection (X. Zhou et al., 2022) and similar fine-grained computer vision tasks (Bevandić & Šegvić, 2022; Lambert, Liu, Sener, Hays, & Koltun, 2020; Ranftl, Lasinger, Hafner, Schindler, & Koltun, 2020) is cost-prohibitive. In terms of robust visual representations, it is challenging to ensure the common object detector is robust to the million-scale and diverse source data, as objects of interest can vary greatly in different images.
Fortunately, with the growing number of available detection datasets, researchers can now implement universal object detectors by reusing these datasets. We take advantage of these datasets by unifying their independent label spaces, allowing us to handle multi-domain object detection with different label vocabularies. However, using multiple diverse datasets can result in annotation inconsistencies, such as label duplication, conflicts, and incomplete hierarchical taxonomy. To address these issues, we have designed a comprehensive loss formulation in the unified label space.
To tackle the robustness issue for the million-scale multi-domain object detection problem, we utilize large pre-trained vision models. Recent studies have demonstrated the superiority of larger models in capturing higher-quality visual representations compared to smaller models (Bello et al., 2021; Kolesnikov, Zhai, & Beyer, 2019; Radosavovic, Kosaraju, Girshick, He, & Dollár, 2020). These high-quality representations lead to better generalization both in-domain and out-of-domain (Goyal et al., 2022). Thus, we believe that the use of large well-trained vision models would significantly improve the performance of universal object detection for million-scale diverse datasets. Our experiments indeed show a noticeable improvement in performance by using larger vision models in the universal object detection task.
As we acquire feature robustness by taking advantage of large pre-trained vision models, computational resources become a critical demand because of both computational and memory costs. With limited computational resources at hand, we introduce a resource-efficient training formulation for large vision models inspired by a recent work (Vasconcelos, Birodkar, & Dumoulin, 2022), which saves considerable computational resources, especially GPU memory, during the training procedures.
This paper discusses our approach to the challenge of multi-domain universal object detection at a scale of millions of diverse datasets. We utilize the power of large pre-trained vision models and present an efficient training formulation that saves computational resources. Our method, Large-UniDet, has achieved remarkable results and won the second prize in the object detection track of the Robust Vision Challenge 2022 1 . The success of our approach is attributed to the efficient formulation design, careful label handling, and knowledge transfer from large-scale pre-training.
Our contributions are summarized as follows.
• Our approach explores the use of large vision models for the challenging task of million-scale multi-domain universal object detection.
• We present a resource-efficient training formulation for large vision models in universal object detection, which saves computational resources during training procedures.
• With the unified label space, we handle multi-domain object detection with different label vocabularies and overcome cross-dataset label duplication and semantic hierarchy problems.
• The proposed method, Large-UniDet, achieved the 2nd prize in the object detection track of RVC 2022, demonstrating its impressive performance and robustness.
2 Related Works
Universal Object Detector
Recent years have seen a growing interest in universal object detection. Wang et al. propose a universal detector with a domain attention module that leverages shared knowledge across different data domains. The design consists of multiple dataset-specific detectors that share most network parameters while keeping the categories of each dataset separate. Universal-RCNN (Xu, Fang, Liang, Kang, & Li, 2020), on the other hand, tries to incorporate graph transfer learning to model the intra-domain and inter-domain semantics of categories from multiple datasets (Krishna et al., 2017;T.-Y. Lin et al., 2014;B. Zhou et al., 2017). Unlike these methods, Zhao et al. build a unified label space by manually merging multiple label spaces of different datasets, and their framework is dedicated to managing partial annotations through the use of pseudo-labeling. UniDet (X. Zhou et al., 2022), in contrast, presents an automatic method to unify label spaces based on visual concepts generated by a partitioned object detector with three separate branches. Cai et al. (L. Cai et al., 2022) construct a unified label space by extracting category embeddings from each dataset using a language model. Recently, Meng et al. (Meng et al., 2022) leveraged pre-trained language embeddings to generate adapted queries for each category embedding across different datasets, modeling object classification as a region-word alignment problem without a merged label space. Similar to the methods mentioned above based on the unified label spaces, we propose a solution to improve the unified label space in universal object detection by modifying the manuallycrafted taxonomy used in the RVC challenge. Our proposed method also addresses the challenges posed by label duplication and semantic hierarchy issues across multiple datasets.
Pre-training for Vision Tasks
Pre-training is a widespread technique in computer vision (Azizi et al., 2021;L. Cai et al., 2022;Caron et al., 2020;Joulin, Maaten, Jabri, & Vasilache, 2016;Kornblith, Shlens, & Le, 2019;Sun, Shrivastava, Singh, & Gupta, 2017) that enhances performance by using backbones models trained on large-scale datasets such as ImageNet (Deng et al., 2009), JFT-300M , Open-Images (Kuznetsova et al., 2020), or web-collected data (Goyal et al., 2022). The backbone generates robust visual representations that can benefit various downstream vision tasks (Goyal et al., 2022). For object detection, the choice of the pre-trained backbone is crucial for determining performance (Y. Liu et al., 2020). Typically, the strength of a pre-trained backbone comes from its (a) powerful architecture, (b) broad training data, and (c) sophisticated pre-training methods.
Stronger architecture. To better understand the impact of backbone architectures on object detection performance, Huang et al. (Huang et al., 2017) studied the relationship between backbone capacities and performance. Liu et al. (Y. Liu et al., 2020) took a different approach and improved the backbone's strength by combining multiple identical backbones. Furthermore, Liu et al. (Z. Liu et al., 2022) leveraged the power of vision transformers to create an extremely large object detector by using an expanded Swin transformer as the feature extractor. While many works in the field aim to improve performance through innovative model design, space constraints prevent further listing of related literature.
Training data. In terms of the training data, Sun et al. have demonstrated the impact of using a large-scale dataset JFT-300M on the robustness of representation learning. Bu et al. (Bu, Peng, Yan, Tan, & Zhang, 2021) have taken a different approach by combining various detection datasets (Dollar, Wojek, Schiele, & Perona, 2011;Kuznetsova et al., 2020;T.-Y. Lin et al., 2014;S. Shao et al., 2019;Zhang, Benenson, & Schiele, 2017) to attain better pre-trained weights for transfer learning in downstream tasks. Meanwhile, Cai et al. (L. Cai et al., 2022) have utilized existing detection datasets (Gupta, Dollar, & Girshick, 2019;Kuznetsova et al., 2020;S. Shao et al., 2019) to create a large pre-training dataset through careful curation based on welldefined principles. It should be noted that due to space constraints, additional related works are not discussed here.
Pre-training approach. The recent advancements in model pre-training (Caron et al., 2020;K. He et al., 2022;K. He, Fan, Wu, Xie, & Girshick, 2020;F. Lin, Xu, Li, Xiong, & Qi, 2021;Xu et al., 2022) have demonstrated the superiority of self-supervised methods over supervised approaches in computer vision tasks, such as object detection, semantic segmentation, and image classification. With the advantage of utilizing unlimited diverse image data from the web, self-supervised pre-training methods are capable of capturing more discriminative visual representations without relying on manual annotations (Goyal et al., 2022). Furthermore, the training of vision foundation models on large-scale image-text data (Jia et al., 2021;Radford et al., 2021;Yuan et al., 2021) highlights the significant impact of representation learning on both in-domain and out-of-domain downstream tasks.
To enhance the performance of our universal object detector, we have chosen to use large selfsupervised vision models known as SEER models (Goyal et al., 2022) as the backbone. As discussed earlier, robust backbones can be attributed to high-capacity architectures, a diverse training data, and cutting-edge pre-training techniques. The largest SEER model that we will be using boasts a massive 10 billion network parameters. The SEER models are trained on a self-supervised clustering-based method (Caron et al., 2020) utilizing 1 billion less biased uncurated images collected from the web. This results in robust visual representations that perform well on both indomain and out-of-domain benchmarks (Goyal et al., 2022). Our belief is that the SEER backbones will be capable of producing more discriminative features and provide better out-of-distribution generalization for the task of universal object detection across datasets with varying characteristics.
Method
Resource-efficient Detection with a Large Vision Model
In this section, we introduce our strong object detector that is built on large pre-trained backbone networks. The use of large vision models has been demonstrated to improve the performance of many computer vision tasks. However, the enormous computational and memory requirements for training these models limit their practical use (Dai, Liu, Le, & Tan, 2021;Radford et al., 2021;. To address this challenge, we propose a computationally & memory efficient training approach that freezes the parameters of the billions of pre-trained backbone neurons and fine-tunes the extracted visual representations on the subsequent detector components. This allows us to train our largest model on a limited number of GPUs, specifically 16 NVIDIA 3090 GPUs. Our resource-efficient approach leverages the recent advancements in knowledge transfer (Vasconcelos et al., 2022) and is specifically designed for large pre-trained vision models, providing a valuable resource for the computer vision community that is interested in object detection with limited computational resources. Fig. 1 illustrates the overall framework. Each detector component is described in detail in the remaining content of this section.
Frozen Backbone with Billions of Parameters
In view of the superior performance of the SEER model (Goyal et al., 2022) in terms of fairness and bias reduction across different domains, we have adopted it as the backbone of our object detection network to ensure robust visual representations across three distinct datasets: MS COCO (COCO), OpenImages Dataset (OID), and Mapillary Vistas Dataset (MVD). Typically, to optimize object detectors, both the initialized backbone and subsequent detector components are finetuned on detection datasets. However, fine-tuning the backbone on smaller detection datasets can result in the backbone parameters drifting away from their pre-trained initialization, which can negatively impact detection performance (Vasconcelos et al., 2022). Additionally, fine-tuning a heavy backbone can significantly increase computational complexity. To achieve superior detection performance while managing computational complexity, we have chosen to freeze the backbone parameters during the training process. This efficient formulation not only saves resources but also positively impacts the performance of long-tailed object categories through knowledge preservation (Vasconcelos et al., 2022), which is important in the RVC multi-domain scenario.
With billions of uncurated internet images, the SEER models are trained to achieve both indomain and out-of-domain generalization. Based on the observation that the generalization increases with the model size (Goyal et al., 2022), we carefully evaluate the trade-off between cost and performance to determine the best version for our experiments and final submission for the RVC competition. In the end, we opt to use both the lighter version (SEER-RegNet32gf) and the second largest version (SEER-RegNet256gf) for our extensive evaluations.
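To make the resource-efficient formulation concrete, below is a minimal PyTorch-style sketch of freezing the pre-trained backbone so that only the neck and detection heads receive gradient updates. The attribute name `backbone` and the `trainable_parameters` helper are illustrative placeholders, not the actual mmdetection API used in the paper.

```python
import torch
import torch.nn as nn

def freeze_backbone(detector: nn.Module) -> nn.Module:
    """Freeze the large pre-trained backbone; train only the neck and heads."""
    for param in detector.backbone.parameters():
        param.requires_grad = False   # no gradients -> no optimizer state, far less GPU memory
    detector.backbone.eval()          # keep (Sync)BN statistics of the backbone fixed as well
    return detector

def trainable_parameters(detector: nn.Module):
    # Only parameters of the neck / RPN / detection heads are handed to the optimizer.
    return (p for p in detector.parameters() if p.requires_grad)

# Usage sketch (assuming a detector object has been built elsewhere):
# detector = freeze_backbone(detector)
# optimizer = torch.optim.SGD(trainable_parameters(detector),
#                             lr=0.01, momentum=0.9, weight_decay=1e-4)
```

Besides saving memory, keeping the backbone in evaluation mode preserves the pre-trained SEER representations, which is the knowledge-preservation effect discussed above.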
Cascade Detection Heads
In order to enhance the performance of our object detector, we implemented a two-stage RCNNstyle detection framework with a frozen SEER backbone. Initial experiments using Faster R-CNN (Ren, He, Girshick, & Sun, 2015) did not produce satisfactory results, likely due to its limited number of learnable parameters making it difficult to handle the large-scale detection tasks across diverse datasets. Taking inspiration from recent advances in the field (Vasconcelos et al., 2022), we adopted high-capacity Cascade R-CNN (Z. Cai & Vasconcelos, 2018) as our detection heads, which greatly improved performance as discussed in Section 4.5.1.
Stacked Dense Neck
The Feature Pyramid Network (FPN) is a fundamental component in object detection frameworks, serving as an adaptive module that integrates and improves hierarchical features. It acts like a neck that connects the backbone and the subsequent detection heads. The original FPN design (T.-Y. Lin et al., 2017) transfers multi-level semantic information from the backbone through a top-down pathway and lateral connections, creating a simple and straightforward path for knowledge integration. Subsequent designs (Ghiasi, Lin, & Le, 2019;S. Liu, Qi, Qin, Shi, & Jia, 2018;Pang et al., 2019;Tan, Pang, & Le, 2020) have introduced cross-scale connections to reinforce visual representations with semantically important information and low-level details.
We employ a stacked, densely connected FPN, namely NAS-FPN (Ghiasi et al., 2019), as the neck of our object detector for the following four reasons.
• Since universal object detection must recognize hundreds of object categories from various datasets, the impressive ability of NAS-FPN to generate robust representations meets the challenges of million-scale multi-domain detection.
• As we freeze the backbone, the remaining detector components require higher model capacity (described in Section 3.1.2), while stacked NAS-FPN offers excellent flexibility in constructing a rich neck architecture.
• The released SEER models are trained on billions of uncurated web-scale images. Inevitably, there is some domain gap between the upstream pre-training dataset and the downstream detection datasets. As we do not finetune the SEER models on the downstream data, we believe the early NAS-FPN blocks can act as domain adaptors to align the domain gap.
• Last but not least, we observe that the multi-level side-outputs of SEER models have very different characteristics. Some shallow side-outputs are dense, while the deeper ones are generally sparse and weak. The rich connections of NAS-FPN offer more possible ways for better feature integration.
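The toy sketch below only illustrates the stacking idea behind such a neck: N identical fusion blocks are applied sequentially to a multi-level feature pyramid, so the early blocks can serve as domain adaptors while later blocks keep refining the representation. The simple top-down fusion used here is a stand-in; the actual NAS-FPN cell connectivity is considerably richer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleFusionBlock(nn.Module):
    """One pyramid-refinement block: top-down add followed by a 3x3 conv per level."""
    def __init__(self, channels: int, num_levels: int):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(num_levels)]
        )

    def forward(self, feats):
        # feats: list of tensors ordered from high resolution to low resolution
        outs = list(feats)
        for i in range(len(outs) - 2, -1, -1):  # top-down pathway
            up = F.interpolate(outs[i + 1], size=outs[i].shape[-2:], mode="nearest")
            outs[i] = outs[i] + up
        return [conv(x) for conv, x in zip(self.convs, outs)]

class StackedNeck(nn.Module):
    """Stack N fusion blocks (the detector in this paper stacks 7 NAS-FPN blocks)."""
    def __init__(self, channels: int = 256, num_levels: int = 5, num_blocks: int = 7):
        super().__init__()
        self.blocks = nn.ModuleList(
            [SimpleFusionBlock(channels, num_levels) for _ in range(num_blocks)]
        )

    def forward(self, feats):
        for block in self.blocks:
            feats = block(feats)
        return feats
```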
Adaptive RPN
In the context of multi-domain object detection, objects belonging to the same category can exhibit different characteristics across different domains. For example, a person in an autonomous driving dataset such as MVD is typically much smaller in the high-resolution street scenes, while a person in COCO images is usually much larger. This variation in object size highlights the need for an adaptive region proposal network (RPN) to generate high-quality proposals that can handle the diverse object sizes in each domain. The Cascade RPN (Vu, Jang, Pham, & Yoo, 2019) overcomes the limitations of traditional RPNs, which rely on heuristically determining appropriate scales and aspect ratios for pre-defined anchors. Additionally, having too many pre-defined anchors can slow down the training process. By incorporating the Cascade RPN into our network, we are able to improve the quality of proposals and increase the overall model capacity, providing the best of both worlds.
Cross-dataset Model Training
Label Space Unification Across Multiple Datasets
The goal of this section is to outline the creation of a unified label space for three datasets, addressing the issues of label duplication and semantic hierarchy across the datasets. The RVC organizers have provided a manually-crafted taxonomy 2 as a starting point. This taxonomy maps each category from COCO or MVD to a single category in the RVC official label space, as well as each leaf-node category from OID. However, the nonleaf categories from OID are not included in this label space. To complete the label space, we modify the official taxonomy by simply adding all of the non-leaf categories from OID, excluding the root entry. This results in a unified label space with a cardinality of 540. The OID has a semantic hierarchy where the superclasses, or non-leaf categories, are considered to be more general than other classes. However, this leads to inconsistencies in granularity and results in issues such as label duplication and problems with the semantic hierarchy. For example, the person (/m/01g317) superclass in OID and the person class in COCO are semantically the same, but are treated as separate categories. For another example, the cow class in COCO is semantically a child of the animal (/m/0jbk) superclass in OID, but OID's hierarchy does not reflect this parent-child relationship. This overlap in taxonomy can negatively impact the performance of universal object detection. To address these issues, we propose a unified hierarchical taxonomy and implement a hierarchy-aware loss suppression method, which will be explained in Section 3.2.2 and 3.2.3, respectively.
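A rough sketch of how such a 540-class unified label space could be assembled is given below. It assumes the RVC taxonomy has already been parsed into a mapping from source categories to merged categories, and that the OID hierarchy is available as a parent-to-children dictionary; all names, the data structures, and the root identifier are hypothetical.

```python
from typing import Dict, List, Set

def build_unified_label_space(
    rvc_mapping: Dict[str, str],          # source category -> merged RVC category (hypothetical schema)
    oid_hierarchy: Dict[str, List[str]],  # OID category -> list of child categories
    oid_root: str = "ENTITY_ROOT",        # placeholder for the OID root entry, which is excluded
) -> List[str]:
    unified: Set[str] = set(rvc_mapping.values())          # COCO / MVD / OID leaf categories, already merged
    non_leaf = {c for c, children in oid_hierarchy.items() if children}
    unified |= (non_leaf - {oid_root})                      # add every OID superclass except the root
    return sorted(unified)

# For the RVC 2022 setup described above, the resulting list should have 540 entries.
```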
Multi-label with Hierarchical Taxonomy Completion
To address the semantic hierarchy challenges in OID, we introduce a method of completing the hierarchical taxonomy by incorporating categories from the RVC official taxonomy. The resolution of the remaining cross-dataset semantic hierarchy issues is presented in Section 3.2.3.
We convert the one-hot category labels to multi-class labels by considering all parent categories as positives for OID images. This is similar to UniDet (X. Zhou et al., 2022) but with the added consideration of the semantic hierarchies of COCO and MVD using the OID semantic hierarchy. For each annotated box that has been merged with an OID leaf-node category according to the RVC official taxonomy, we treat it as its OID equivalent. For instance, if the COCO banana and the OID banana (/m/09qck) have been merged into a single mutual category, a bounding box annotated as banana from COCO would receive a positive label for the fruit category since banana belongs to the fruit superclass according to the OID semantic hierarchy. We employ a multi-label classifier in the detection heads and use sigmoid activation functions to obtain class confidence scores for each bounding box.

Table 2 The parent-child category names between OID superclasses and COCO / MVD classes in semantics.
COCO / MVD class (child)             OID superclass (parent)
animal-ground-animal                 animal (/m/0jbk)
object-vehicle-caravan               land vehicle (/m/01prls)
object-vehicle-other-vehicle         land vehicle (/m/01prls)
object-vehicle-trailer               land vehicle (/m/01prls)
object-vehicle-wheeled-slow          land vehicle (/m/01prls)
object-support-traffic-sign-frame    traffic sign (/m/01mqdt)
object-traffic-sign-back             traffic sign (/m/01mqdt)
object-traffic-sign-front            traffic sign (/m/01mqdt)

$$\mathcal{L}_{rpn} = \frac{1}{N}\sum_{i=0}^{N}\sum_{s=0}^{S_{rpn}}\Big(\alpha\cdot\big(1-IoU(p^i_s, y^i_{rloc})\big) + BCE(q^i_s, y^i_{rcls})\Big)\tag{1}$$

$$\mathcal{L}_{head} = \frac{1}{N}\sum_{i=0}^{N}\sum_{s=0}^{S_{head}}\Big(\beta\cdot SmoothL_1(r^i_s, y^i_{loc}) + \frac{\gamma}{C}\sum_{c=0}^{C}L^c_{cls}\Big)\tag{2}$$

$$L^c_{cls} = \big(1-\mathbb{1}_{D(y^i_{cls})}(c)\big)\cdot BCE\big(x^{ic}_s, \mathbb{1}_{P(y^i_{cls})}(c)\big)\tag{3}$$
It is important to note that this hierarchical taxonomy completion is not a complete solution. There are several annotated objects from COCO and MVD that do not match any OID leaf-node category but are semantically associated with a certain superclass from OID. Instead of activating the corresponding parent categories, we handle these semantic hierarchies through an intricate adaptation in the loss function, which is discussed in the following section.
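A minimal sketch of the multi-label target construction described above is given below: each annotated box keeps its own class and additionally activates all ancestors found in the completed hierarchy. The `parents` mapping and the class-index bookkeeping are assumed to be prepared elsewhere and are named here only for illustration.

```python
import torch
from typing import Dict, List

def ancestors(cls: str, parents: Dict[str, List[str]]) -> List[str]:
    """Collect all (transitive) parent categories of `cls` in the completed hierarchy."""
    seen, stack = set(), list(parents.get(cls, []))
    while stack:
        p = stack.pop()
        if p not in seen:
            seen.add(p)
            stack.extend(parents.get(p, []))
    return list(seen)

def multilabel_target(cls: str, class_to_idx: Dict[str, int],
                      parents: Dict[str, List[str]]) -> torch.Tensor:
    """One multi-label target vector over the unified label space."""
    target = torch.zeros(len(class_to_idx))
    target[class_to_idx[cls]] = 1.0
    for p in ancestors(cls, parents):
        if p in class_to_idx:
            target[class_to_idx[p]] = 1.0   # e.g. a COCO 'banana' box also activates 'fruit'
    return target
```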
Hierarchy-aware Cross-dataset Loss Suppression
To address both label duplication and unsolved semantic hierarchies described in Sections 3.2.1 and 3.2.2, we propose a loss adaptation strategy called Hierarchy-Aware Cross-Dataset Loss Suppression (HCLS). This strategy is based on the semantic hierarchy of OID and suppresses losses over categories involved in label duplication and semantic hierarchy between OID and COCO/MVD in the box classification branches. More specifically,
• For each category from OID, HCLS ignores the losses over all its child categories, as a common practice for hierarchical taxonomy (Kuznetsova et al., 2020).
• For each category from COCO / MVD, which is not merged with any OID leaf-node category in the RVC official taxonomy, HCLS searches all the superclasses from OID and performs one of the following adaptations to the loss: (a) [Label duplication] Suppose this category matches one of the superclasses from OID in semantics. In this case, HCLS ignores the loss between its OID equivalent and itself, in addition to the losses between its OID equivalent's parents/children and itself.
(b) [Cross-dataset semantic hierarchy] Suppose this category belongs to one of the superclasses from OID in semantics. HCLS ignores the losses between all its parent categories and itself.
(c) [Neither] Suppose this category is independent of any superclass of OID. HCLS does nothing about loss adaptation. In other words, we equally calculate losses over all the categories in the unified label space in loss functions.
In Fig. 2 and Fig. 3, two examples illustrate the loss adaptation process: (a) and (b). Note that we do not perform any tedious category merging but rather handle label duplication at the loss level. According to the RVC official taxonomy, there are less than 50 independent categories, so we manually search for cross-dataset label duplication and semantic hierarchy. For further details, please refer to Table 1, which lists the processed semantically duplicate categories, and Table 2, which lists the processed semantic hierarchies across datasets.
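To make the loss adaptation concrete, here is a hedged sketch of how per-class ignore masks and the masked sigmoid cross-entropy of Eq. (3) could be implemented. It assumes `ignore_classes[y]` holds the set D(y) built offline from the rules above (children of OID classes, duplicated equivalents and their parents/children, and cross-dataset parents), and `positive_classes[y]` holds P(y) together with y itself; the normalization is simplified relative to Eq. (2).

```python
import torch
import torch.nn.functional as F
from typing import Dict, Set

def hcls_classification_loss(
    logits: torch.Tensor,                   # (num_boxes, num_classes) raw class scores
    gt_labels: torch.Tensor,                # (num_boxes,) ground-truth class indices
    positive_classes: Dict[int, Set[int]],  # y -> P(y) plus y itself
    ignore_classes: Dict[int, Set[int]],    # y -> D(y), classes whose loss is suppressed
) -> torch.Tensor:
    num_boxes, _ = logits.shape
    targets = torch.zeros_like(logits)
    weights = torch.ones_like(logits)
    for i in range(num_boxes):
        y = int(gt_labels[i])
        targets[i, list(positive_classes[y])] = 1.0        # multi-label positives
        if ignore_classes.get(y):
            weights[i, list(ignore_classes[y])] = 0.0       # the (1 - 1_{D(y)}(c)) term of Eq. (3)
    loss = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    return (loss * weights).sum() / max(num_boxes, 1)
```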
Overall Formulation
The overall loss function can be formulated as the weighted sum of the RPN loss and the detector head loss, described as follows,
$$\mathcal{L} = \lambda\cdot\mathcal{L}_{rpn} + \mathcal{L}_{head}\tag{4}$$
where $\lambda$ is the weight factor set to 0.7, while $\mathcal{L}_{rpn}$ represents the RPN loss (1) and $\mathcal{L}_{head}$ represents the detector head loss (2). In the detector head loss $\mathcal{L}_{head}$, the classification loss $L_{cls}$ is given in (3).
In formulas (1), (2), and (3), the symbols $p$, $q$, $r$, and $x$ denote the respective outputs of the RPN regression branch, the RPN classification branch, the detector head regression branch, and the detector head classification branch, while $y$ denotes the corresponding ground truth. $N$ is the number of samples in each mini-batch. $C$ is the number of categories, including background, in the unified label space. $S_{rpn}$ is the number of stages of Cascade RPN and $S_{head}$ is the number of stages of Cascade R-CNN; we set $S_{rpn}$ to 2 and $S_{head}$ to 3 for a performance-cost trade-off. $IoU$ represents the IoU loss (Yu, Jiang, Wang, Cao, & Huang, 2016), $BCE$ the binary cross-entropy loss, and $SmoothL_1$ the smooth L1 loss. $\mathbb{1}_A(x)$ is the indicator function, which equals 1 when $x \in A$. Specifically, $D(y)$ denotes the union of the categories involved in HCLS for class $y$, as described in Section 3.2.3, and $P(y)$ denotes the union of the parent categories of class $y$, as described in Section 3.2.2. The loss weights $\alpha$, $\beta$, and $\gamma$ are set to 10.0, 1.0, and 1.5, respectively.

Experiments

Datasets

Table 3 provides a brief overview of the three datasets used in the experiments. The COCO dataset (T.-Y. Lin et al., 2014) consists of everyday images of objects and humans, annotated with 80 common object categories. The MVD dataset (Neuhold, Ollmann, Rota Bulo, & Kontschieder, 2017) is a high-resolution street-scene imagery dataset; version 1.2 is used in the experiments, including 37 object categories. Unlike COCO and MVD, the OID dataset (Kuznetsova et al., 2020) is annotated with a semantic hierarchy, and the images are diverse, often containing multiple objects and complex scenes. The annotated classes have a long-tailed distribution, and the training set with
Implementation Details
For our experiments, we adopt the mmdetection codebase for the implementation of the proposed method. Our method uses the frozen SEER-RegNet32gf and SEER-RegNet256gf as the backbone, providing a resource-efficient training formulation. In order to ensure synchronous computation across all GPU workers, we replace the traditional BatchNorm (BN) with synchronized BatchNorm (SyncBN). The hyperparameters for the NAS-FPN, Cascade RPN, and Cascade R-CNN components are kept as default, unless specified otherwise. We employ standard data augmentation techniques such as random flipping and random scaling of the short edge of the image within a range of [480, 960]. The optimization process is conducted using the Stochastic Gradient Descent (SGD) optimizer, with a base learning rate of 0.01, a weight decay of 0.0001, and a batch size of 16. To address the imbalanced class distribution and size disparities across the three datasets, we adopt both class-aware sampling and dataset-wise re-sampling. The re-sampling ratio is set to 1:4:8 for OID, COCO, and MVD, respectively. The model training is performed on 8 NVIDIA 3090 / A100 GPUs, with the mixed-precision technique (Micikevicius et al., 2018) being used to speed up the process. During the inference stage, the short edge of the images is resized to 800 while the long edge is restricted to 1333, keeping the aspect ratio unchanged, unless specified otherwise.
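One plausible way to realize the dataset-wise re-sampling is sketched below: every image is drawn with a weight proportional to its dataset's ratio (1:4:8 for OID, COCO, MVD), which can then be handed to a weighted sampler. The class-aware part, which would further adjust weights by category frequency, is omitted; function and variable names are illustrative.

```python
import torch
from torch.utils.data import WeightedRandomSampler

DATASET_RATIO = {"oid": 1.0, "coco": 4.0, "mvd": 8.0}   # re-sampling ratio used in the paper

def make_dataset_sampler(image_sources, num_samples=None):
    """image_sources: one entry per image, each in {'oid', 'coco', 'mvd'}."""
    weights = torch.tensor([DATASET_RATIO[s] for s in image_sources], dtype=torch.double)
    return WeightedRandomSampler(weights,
                                 num_samples=num_samples or len(image_sources),
                                 replacement=True)

# sampler = make_dataset_sampler(sources)
# loader = torch.utils.data.DataLoader(dataset, batch_size=16, sampler=sampler)
```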
RVC Submission
Our RVC final submission utilized a modified version of the proposed detector based on SEER-RegNet256gf. To meet the RVC deadline, certain simplifications were made to the model:
• Instead of using the default setting of {C2, C3, C4, C5} in NAS-FPN, the side-outputs {C3, C4, C5} were employed and a 2× downsampling was performed on C5 twice to create a 5-level feature pyramid. Although this simplification reduced the accuracy of detecting small objects, it significantly shortened the training time.
• The basic anchor scale in Cascade RPN was decreased to 5.04 (4 × 2^(1/3)) to align with the changes in NAS-FPN and to reduce missed detections of small objects.
• The model was trained for 720,000 iterations, with the learning rate dropped by a factor of 0.1 at 600,000 iterations.
During the dataset-agnostic inference procedure, the Soft-NMS (Bodla, Singh, Chellappa, & Davis, 2017) was performed with an IoU threshold of 0.6 and a score threshold of 0.001, then,
• for COCO, the max number of predictions per image was restricted to 100, and the short edge of the input image was resized to 800.
• for MVD, the max number of predictions per image was restricted to 300, and the short edge of the input image was resized to 2048.
• for OID, the max number of predictions per image was restricted to 300, and the short edge of the input image was resized to 800.
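The dataset-specific inference settings above can be captured in a small configuration table; the dictionary below simply restates those numbers (Soft-NMS IoU threshold 0.6, score threshold 0.001) in code form.

```python
INFERENCE_CFG = {
    "soft_nms": {"iou_threshold": 0.6, "score_threshold": 0.001},
    "coco": {"max_dets_per_image": 100, "short_edge": 800},
    "mvd":  {"max_dets_per_image": 300, "short_edge": 2048},
    "oid":  {"max_dets_per_image": 300, "short_edge": 800},
}
```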
We did not utilize any advanced inference technique, such as multi-scale test augmentation. The performance of our submission (IFFF RVC) on the three datasets is summarized in Table 4. For comparison with the results presented in this paper, we evaluate the model for our RVC submission on the validation sets with a maximum of 300 predictions per image, using the standard Non-Maximum Suppression (NMS) method. All other testing configurations are maintained as those used on the test sets.
Main Results
Comparisons on the RVC final submissions, as well as our new results, are summarized in Table 4. The first-ranked entry, MD RVC, employs a large transformer-based object detector (Carion et al., 2020) with an accelerated training strategy that increases the input size progressively. From another perspective, our method, named Large-UniDet, is devoted to building a computation- and memory-saving training formulation and generating robust multi-domain object detection predictions by taking advantage of large pre-trained vision models. Compared to our RVC submission IFFF RVC, Large-UniDet does not adopt the simplifications described in Section 4.3 and is further improved with a training practice where the model is adapted to high-resolution input data (detailed in Section 4.5.4). As we can see in Table 4, based on the lighter backbone (SEER-RegNet32gf), Large-UniDet achieves 48.8, 66.2, 25.9, 39.4, and 68.5 points in terms of mAP on the COCO val set, AP50 on the COCO val set, mAP on the MVD val set, AP50 on the MVD val set, and AP50 on the OID val set, respectively. The larger backbone (SEER-RegNet256gf) consistently improves universal object detection performance (+3.1 mAP / 3.8 AP50 on COCO, +1.8 mAP / 2.8 AP50 on MVD, +1.3 AP50 on OID), which demonstrates the effectiveness of the visual representations generated by larger vision models. Without the simplifications in our RVC submission, Large-UniDet [S] and [L] are trained for 1.15M iterations, longer than the IFFF RVC, with the base learning rate warmed up linearly over the first 4k iterations and decreased by a factor of 10 at 850k and 1.0M iterations. When testing, we generate no more than 300 predictions per image with the common NMS for a fair comparison with the IFFF RVC on validation sets.
After the universal object detection training, we conduct dataset-specific individual finetuning with high-resolution training images, detailed in Section 4.5.4. This practice further improves the performance on all three datasets, especially on MVD, which has fairly different characteristics compared with the other two datasets.

Table 6 Ablation analysis of detector components on MVD val set. For fast convergence, we initialize the models with the counterparts in Table 5, and train them for 12 epochs on 8 NVIDIA 3090 GPUs, with a base learning rate 0.01 which is divided by 10 after 8 and 11 epochs.
                   18.3 (+3.1)   3.0 (+1.0)
+ NAS-FPN (×5)     18.6 (+3.4)   3.2 (+1.2)
+ NAS-FPN (×7)     19.4 (+4.2)   3.5 (+1.5)
+ Cascade RPN      20.2 (+5.0)   4.2 (+2.2)
The detailed AP numbers are shown in Table 4, denoted as Large-UniDet with superscript † .
Ablation Study
Detector Components Analysis
We conduct the ablation analysis based on SEER-RegNet32gf. Table 5 and Table 6 report accuracy-cost comparisons of different detector configurations on COCO and MVD, respectively. With a frozen backbone, Cascade R-CNN outperforms the baseline Faster R-CNN by a significant margin, improving the mAP to 39.9 on COCO and 15.2 on MVD with a slightly increased training cost (+1 hour for COCO and +0.2 hour for MVD).
When integrating the high-capacity necks into Cascade R-CNN, we achieve higher accuracy but simultaneously suffer an increased computation burden. Three frequently used FPNs are compared in Tables 5 and 6. As we can see, PAFPN (S. Liu et al., 2018) yields 0.9 and 1.5 points improvement, and BiFPN (Tan et al., 2020) yields at most 1.6 and 1.9 points improvement on COCO and MVD, respectively. As a better choice, NAS-FPN yields 5.8 and 4.2 points improvement. Tables 5 and 6 show that the computational cost increases with the number of stacked neck blocks. At the same time, the performance growth gradually decelerates, and the performance even degrades for the stacked BiFPN. Consequently, we equip our detector with seven stacked NAS-FPN blocks, enjoying a good trade-off between accuracy and cost. Besides, Cascade RPN brings a consistent gain (+1.0 mAP at least on COCO and +0.2 mAP at least on MVD) regardless of the neck used, especially for NAS-FPN, without adding much extra computational cost.

Table 7 Comparison on loss strategies. The used object detector is Cascade R-CNN based on SEER-RegNet32gf. The five models are trained for 420k iterations, with a base learning rate 0.01 which is decayed by a factor of 0.1 at 280k iterations. Note that the metric of COCO and MVD is mAP and the metric of OID is AP50. The best and the second best results are highlighted in bold font and underline, respectively.
Hierarchical Loss Strategies
To demonstrate the effectiveness of our hierarchyaware cross-dataset loss suppression (HCLS), we compare a number of hierarchical loss strategies in Table 7 and Table 8.
• Baseline refers to a situation where the semantic hierarchy is not taken into consideration, and each annotated bounding box is assigned a single positive class label. As a result, there is no loss adaptation applied.
• Naive loss suppression denotes that the loss calculation for the classification task takes the semantic hierarchy of OID into account by ignoring the losses for the children and parent categories. This approach incorporates the semantic hierarchy by removing the impact of relationships between parent and child categories, but also leads to a loss of positive samples for the superclasses in OID, resulting in lower performance on OID.
• Unified hierarchy takes into account all parent-child relationships across datasets by considering the cross-dataset label duplication presented in Table 1, the cross-dataset semantic hierarchy presented in Table 2, and the original semantic hierarchy of OID for each category in the unified label space. It considers all parents and semantic equivalents as positive and eliminates the losses over all child categories, thereby increasing the training set for superclasses with more positive samples, resulting in a significant improvement in performance on OID (+3.1 AP50 in Table 7 and +1.8 AP50 in Table 8). However, this approach may negatively impact the performance on COCO and MVD, as categories from different datasets may match based on language cues but still be semantically different. For example, the bear category in COCO encompasses a wide range of carnivorous mammals of the Ursidae family, while its equivalent bear (/m/01dws) in OID includes not only these conventional bears but also teddy bears (/m/0kmg4), leading to taxonomy inconsistencies. This was observed in the severe performance decline of the bear category in COCO, with an AP of 41.5 using the Unified hierarchy, compared to APs over 65.1 under the other loss strategies in Table 7, with the best AP of 67.9 achieved by the HCLS.
• OID hierarchy only takes into account the semantic hierarchy of OID. It does not consider the relationships between categories from different datasets. This is a common approach when working with OID (X. Zhou et al., 2022), but it means that cross-dataset relationships are not incorporated into the loss adaptation.
• Our loss strategy, denoted as OID hierarchy + HCLS, takes into account the semantic hierarchy of OID, label duplication, and semantic hierarchy among the three datasets simultaneously. This loss adaptation approach results in the best AP on OID (+3.5 AP50 in Table 7 and +2.7 AP50 in Table 8), a slight improvement on COCO, and comparable accuracy on MVD compared to the Baseline.

Table 8 Comparison on loss strategies. The used object detector is Cascade R-CNN based on SEER-RegNet32gf with NAS-FPN and Cascade RPN. The five models are trained for 420k iterations, with a base learning rate 0.01 which is decayed by a factor of 0.1 at 280k iterations. Note that the metric of COCO and MVD is mAP and the metric of OID is AP50. The best and the second best results are highlighted in bold font and underline, respectively.

Training Strategies

Table 9 compares two training approaches: finetuning the entire object detector, which is denoted as finetune, and the approach used in this paper, where the backbone is frozen during training, denoted as freeze. We evaluate these two strategies by training models on either COCO or MVD, using either the lighter SEER backbone or the larger one, in terms of performance and GPU memory consumption.
- Performance: The results of finetune and freeze are shown in Table 9. freeze consistently improves performance across datasets, especially with a longer training schedule. On the other hand, finetune shows a decline in performance on COCO with a longer training schedule. This may be due to the backbone representations drifting away from the original SEER visual representations and over-fitting on the smaller downstream dataset, which weakens the performance of the object detector on COCO.
- GPU memory consumption: As we can see in Table 9, freezing the backbone during the training process requires significantly less GPU memory than finetuning the entire object detector, including the backbone. The SEER-RegNet32gf-based model requires 10 GB of memory per image during the training process with the freeze strategy, whereas the finetune strategy requires 16 GB of memory per image. Similarly, the SEER-RegNet256gf-based model requires 15 GB of memory per image with the freeze strategy, while the finetune strategy requires 60 GB of memory per image. As a result, freezing the backbone is a more feasible option for training models on memory-constrained computational resources, such as NVIDIA 3090.
Scaling Up and Finetuning
Scaling up during the inference procedure. As outlined in Section 4.3, the short edges of MVD images were resized to 2048 during the inference process to improve detection results due to the presence of many small objects in high-resolution images. This scaling up improved the mAP by about 3 points on MVD in our RVC submission and significantly benefited the models in this paper as well (Fig. 4). The best results and the 800-pixel short-edge baselines can be found in Table 10. Scaling up had a significant impact on MVD for both the small and large models, leading to an increase of +4.8 mAP and +4.4 mAP, respectively. It is important to note that scaling up the size of the testing images does not result in improved performance on COCO or OID. This is because while the inference scaling improves the detection of small objects, it negatively impacts the performance for larger objects, as previously reported in the literature (Gao, Yu, Li, Morariu, & Davis, 2018). With this in mind, we believe that adapting the object detector for high-resolution images has the potential to improve the detection of large objects while still maintaining the improvement for small objects.
Dataset-specific finetuning with scaling up. The acceleration practice of training with high-resolution images after low-resolution pre-training can achieve satisfactory performance while reducing the computational cost during the training process (Singh & Davis, 2018). After conducting universal object detection training, we carry out dataset-specific finetuning at a higher resolution, taking into account the performance, training cost, and significant differences among the three datasets.
We use the cosine learning rate annealing with warm restarts (Loshchilov & Hutter, 2016) during the finetuning procedure (a minimal schedule sketch follows the configuration list below). Without any alterations to the model design, the finetuning and inference configurations are specified as follows:
• for COCO, the model is trained for 24 epochs with six cyclical restarts, where training images are scaled in a range of [640, 1200], and is evaluated with the 800-pixel resized short edge of testing images;
• for MVD, the model is trained for 24 epochs with six cyclical restarts, where training images are scaled in a range of [1024, 2048], and is evaluated with the 2048-pixel resized short edge of testing images;
• for OID, the model is trained for 6 epochs with two cyclical restarts, where training images are scaled in a range of [640, 1200], and is evaluated with the 800-pixel resized short edge of testing images.

Table 10 summarizes the results of the finetuning. Based on the lighter backbone, the dataset-specific high-resolution finetuning increases the performance by 3.2 mAP on COCO, 10.9 mAP on MVD, and 0.7 mAP on OID. Based on the larger backbone, the finetuning still yields considerable improvements on the three datasets (+1.6 mAP on COCO, +9.9 mAP on MVD, and +0.7 mAP on OID, respectively).
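The cyclical schedule can be reproduced with PyTorch's built-in scheduler, as sketched below; a 24-epoch run with six restarts corresponds to a cycle length of 4 epochs. The optimizer, base learning rate, eta_min, and loader length are placeholders rather than the exact values used in our finetuning.

```python
import torch

model = torch.nn.Linear(8, 2)                      # stand-in for the detector
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

epochs, restarts = 24, 6                           # six cyclical restarts
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(
    optimizer, T_0=epochs // restarts, T_mult=1, eta_min=1e-5)

iters_per_epoch = 100                              # hypothetical loader length
for epoch in range(epochs):
    for it in range(iters_per_epoch):
        # ... forward / backward / optimizer.step() would go here ...
        scheduler.step(epoch + it / iters_per_epoch)   # fractional-epoch stepping
```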
Visualization
Figs. 5, 6 and 7 showcase the detection results of the Large-UniDet model which uses the SEER-RegNet256gf backbone and has not undergone further dataset-specific high-resolution finetuning. These results display up to five recognized categories per bounding box above a confidence threshold, highlighting the presence of category label duplication and semantic hierarchy across datasets in universal object detection. We delve into this phenomenon in greater detail and provide examples in what follows.
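For reference, the per-box label lists shown in these figures can be produced with a helper like the following hypothetical sketch, which keeps at most five categories whose scores exceed the threshold and sorts them by confidence; it is not the actual plotting code used for the figures.

```python
import numpy as np

def top_categories(class_scores, class_names, thresh=0.05, k=5):
    """Return up to k (name, score) pairs above `thresh`, best first."""
    order = np.argsort(class_scores)[::-1][:k]
    return [(class_names[i], float(class_scores[i]))
            for i in order if class_scores[i] > thresh]

names = ["person", "person_super", "woman", "man", "car"]
scores = np.array([0.94, 0.63, 0.07, 0.41, 0.01])
print(top_categories(scores, names, thresh=0.05, k=5))
# -> [('person', 0.94), ('person_super', 0.63), ('man', 0.41), ('woman', 0.07)]
```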
In the upper-left visualization result in Fig. 5, we can see that a significant number of the categories present in the original label space have been detected successfully. Additionally, some unannotated categories such as man, woman, girl, person super, face, suit, and dress are also transferred from the other two datasets, despite not being part of the original label space in COCO.
Label duplication. The categories person and person super are semantically duplicated, resulting in the high confidence scores for the two people appearing in this image. To be specific, the man is classified as person with a confidence score of 0.94 and as person super with a confidence score of 0.63; meanwhile, the woman is classified as both person and person super with a confidence score of 0.89.
Semantic hierarchy. As the categories woman, man, girl, boy are subclasses of person, the two people in this image are classified as one or more categories among these four subclasses.
Limitations In Fig. 5, we can see that while the majority of annotated categories in the original label space of COCO are detected successfully, some unannotated categories are also transferred from the other two datasets. However, the detection and classification of these unannotated categories can be challenging due to annotation inconsistencies across the three datasets. For example, in Fig. 5, the categories eye, nose, mouth, and hand from OID are not detected accurately in the COCO images. Additionally, categories such as person super, man, woman, and turkey that semantically match with objects in COCO images are either detected with low confidence scores or not detected at all. Figs. 6 and 7 show similar results for MVD and OID, where categories from other datasets are sometimes omitted.
Given our observations, it appears that our universal object detector is partially overfitting to the unique characteristics of each data domain. It seems that the model has discovered a way to "cheat" in universal object detection by memorizing something that distinguishes the testing images from different datasets. This may result in harm to practical applications.
Discussion
This raises an interesting question: will unifying the annotations in the unified label space improve the performance of the universal object detector on each individual dataset? According to the literature, the addition of pseudo-labels for unannotated objects has been shown to be effective in the setting of a fully-annotated mixed dataset. However, the impact of annotation inconsistencies on individual datasets has not been thoroughly explored. In Section 4.5.2, we tested a comparable label handling strategy, the Unified hierarchy, to unify the annotations for semantically duplicated categories, but we did not observe any improvement on COCO or MVD, as shown in Tables 7 and 8. Despite this, further exploration of this topic is left as an open question for future work.
Conclusion
With the aim of solving the million-scale multidomain universal object detection problem, we have proposed several resource-efficient techniques for using large vision models to obtain robust visual representations across diverse datasets. Our universal object detector incorporates three essential detector components with high capacity and freezes the parameters of the large vision models. To address the cross-dataset label duplication and semantic hierarchy issues, we have implemented hierarchical taxonomy completion and a loss adaptation strategy called hierarchy-aware cross-dataset loss suppression (HCLS) within the unified label space for multiple datasets. Our practices and findings offer a promising solution for real-world computer vision applications and demonstrate the potential for universal object detection.
Data Availability Statement
The authors confirm that the data supporting the findings of this work are available within the article or its supplementary materials.
Fig. 1 Overview. The design of Large-UniDet is based on a two-stage RCNN-style object detection network. The frozen backbone is a RegNet architecture initialized with the weights of SEER models. The NAS-FPN blocks can be stacked N times for a better accuracy-cost trade-off. The classification branch of Cascade R-CNN outputs 541 class scores including the background, as the cardinality of the unified label space is 540.

Fig. 2 An example showing the loss suppression for semantic label duplication between COCO and OID.

Fig. 3 An example showing the loss suppression for semantic hierarchy between MVD and OID.

Fig. 5 Qualitative results on COCO val set. On the top left corner of each visualized bounding box, the predicted categories whose confidence scores are greater than 0.01 are listed in descending order of confidence. The entry classname super indicates this class is a superclass named classname in the unified label space.

Fig. 6 Qualitative results on MVD val set. On the top left corner of each visualized bounding box, the predicted categories whose confidence scores are greater than 0.1 are listed in descending order of confidence. The entry classname super indicates this class is a superclass named classname in the unified label space.

Fig. 7 Qualitative results on OID val set. On the top left corner of each visualized bounding box, the predicted categories whose confidence scores are greater than 0.05 are listed in descending order of confidence. The entry classname super indicates this class is a superclass named classname in the unified label space.
Fig. 2 (diagram content) The COCO class bear matches the OID superclass Bear (/m/01dws), which sits under Carnivore (/m/01lrl) and Animal (/m/0jbk) in the OID hierarchy; losses over its OID child categories Brown Bear (/m/01dxs), Polar Bear (/m/0633h), and Teddy Bear (/m/0kmg4) are ignored.
Table 1 The duplicated category names between OID superclasses and COCO / MVD classes in semantics. Two categories in a row match in semantics.

COCO classes          OID superclasses
sports ball           ball (/m/018xm)
bear                  bear (/m/01dws)
bed                   bed (/m/03ssj5)
bird                  bird (/m/015p6)
boat                  boat (/m/019jd)
car                   car (/m/0k4j)
clock                 clock (/m/01x3z)
person                person (/m/01g317)

MVD classes           OID superclasses
object-vehicle-car    car (/m/0k4j)
human-person          person (/m/01g317)
Fig. 3 (diagram content) The MVD class Object-vehicle-trailer is related to the OID superclasses Land Vehicle (/m/01prls) and Vehicle (/m/07yv9); losses over OID child categories such as Ambulance (/m/012n7d) and Bicycle (/m/0199g) are ignored.
Table 3 The training datasets used in the experiments, which are provided by organizers of RVC 2022. A subset of 500 object categories is used, following the standard practice of Open Images Challenge 2019.

Table 5 Ablation analysis of detector components on COCO val set. The models are trained for 12 epochs on 8 NVIDIA 3090 GPUs, with a base learning rate of 0.01 which is divided by 10 after 8 and 11 epochs.
Table 9 The object detectors are Cascade R-CNN enhanced with NAS-FPN (×7) and Cascade RPN. The training time is measured in hours and the GPU memory consumption is measured in GB / image. The models are trained with a batch size of 16 on 16 NVIDIA 3090 / A100 GPUs. For comparison purposes, we convert the training time on different devices to the training time on 8 NVIDIA 3090 GPUs uniformly.
Fig. 4 The performance on MVD vs. the length of the short edge of the testing images. We obtain the best mAP of 25.9 with the 1600-pixel resized short edge based on the lighter backbone (Large-UniDet [S]), and the best mAP of 27.7 with the 1400-pixel resized short edge based on the larger one (Large-UniDet [L]). The initial mAP and highest mAP of both models are reported in Table 10.
Table 10 Comparison of the baseline, scaling up during the inference procedure (denoted as SU), and finetuning with higher-resolution training images (denoted as FT). The baseline is the universal object detection training without either scaling up the input size when testing or the following dataset-specific high-resolution finetuning. In the table, [S] and [L] represent Large-UniDet [S] and Large-UniDet [L], respectively. The metric of COCO and MVD is mAP and the metric of OID is AP50.

Model   SU   FT   COCO   MVD    OID
[S]               48.8   21.1   68.5
[S]     ✓         -      25.9   -
[S]     ✓    ✓    52.0   32.0   69.2
[L]               51.9   23.3   69.8
[L]     ✓         -      27.7   -
[L]     ✓    ✓    53.5   33.2   70.5
https://github.com/ozendelait/rvc_devkit/blob/master/objdet/obj_det_mapping.csv
Big self-supervised models advance medical image classification. S Azizi, B Mustafa, F Ryan, Z Beaver, J Freyberg, J Deaton, Proceedings of the ieee/cvf international conference on computer vision. the ieee/cvf international conference on computer visionothers (2021)Azizi, S., Mustafa, B., Ryan, F., Beaver, Z., Frey- berg, J., Deaton, J., . . . others (2021). Big self-supervised models advance medi- cal image classification. Proceedings of the ieee/cvf international conference on com- puter vision (pp. 3478-3488).
Revisiting resnets: Improved training and scaling strategies. I Bello, W Fedus, X Du, E D Cubuk, A Srinivas, T.-Y Lin, . . Zoph, B , Advances in Neural Information Processing Systems. 34Bello, I., Fedus, W., Du, X., Cubuk, E.D., Srini- vas, A., Lin, T.-Y., . . . Zoph, B. (2021). Revisiting resnets: Improved training and scaling strategies. Advances in Neural Information Processing Systems, 34 , 22614- 22627.
Bevandić, P., & Šegvić, S. (2022). Preprint, arXiv:2207.08445.
Soft-nms-improving object detection with one line of code. N Bodla, B Singh, R Chellappa, L S Davis, Proceedings of the ieee international conference on computer vision. the ieee international conference on computer visionBodla, N., Singh, B., Chellappa, R., Davis, L.S. (2017). Soft-nms-improving object detec- tion with one line of code. Proceedings of the ieee international conference on computer vision (pp. 5561-5569).
Gaia: A transfer learning system of object detection that fits your needs. X Bu, J Peng, J Yan, T Tan, Z Zhang, Proceedings of the ieee/cvf conference on computer vision and pattern recognition. the ieee/cvf conference on computer vision and pattern recognitionBu, X., Peng, J., Yan, J., Tan, T., Zhang, Z. (2021). Gaia: A transfer learning system of object detection that fits your needs. Proceedings of the ieee/cvf conference on computer vision and pattern recognition (pp. 274-283).
Bigdetection: A largescale benchmark for improved object detector pre-training. L Cai, Z Zhang, Y Zhu, L Zhang, M Li, X Xue, Proceedings of the ieee/cvf conference on computer vision and pattern recognition. the ieee/cvf conference on computer vision and pattern recognitionCai, L., Zhang, Z., Zhu, Y., Zhang, L., Li, M., Xue, X. (2022). Bigdetection: A large- scale benchmark for improved object detec- tor pre-training. Proceedings of the ieee/cvf conference on computer vision and pattern recognition (pp. 4777-4787).
Cascade r-cnn: Delving into high quality object detection. Z Cai, N Vasconcelos, Proceedings of the ieee conference on computer vision and pattern recognition. the ieee conference on computer vision and pattern recognitionCai, Z., & Vasconcelos, N. (2018). Cascade r-cnn: Delving into high quality object detection. Proceedings of the ieee conference on com- puter vision and pattern recognition (pp. 6154-6162).
Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S. (2020). End-to-end object detection with transformers. European conference on computer vision (pp. 213-229).
Unsupervised learning of visual features by contrasting cluster assignments. M Caron, I Misra, J Mairal, P Goyal, P Bojanowski, A Joulin, Advances in Neural Information Processing Systems. 33Caron, M., Misra, I., Mairal, J., Goyal, P., Bojanowski, P., Joulin, A. (2020). Unsu- pervised learning of visual features by con- trasting cluster assignments. Advances in Neural Information Processing Systems, 33 , 9912-9924.
Mmdetection: Open mmlab detection toolbox and benchmark. K Chen, J Wang, J Pang, Y Cao, Y Xiong, X Li, arXiv:1906.07155arXiv preprintChen, K., Wang, J., Pang, J., Cao, Y., Xiong, Y., Li, X., . . . others (2019). Mmdetection: Open mmlab detection toolbox and bench- mark. arXiv preprint arXiv:1906.07155 .
Coatnet: Marrying convolution and attention for all data sizes. Z Dai, H Liu, Q V Le, M Tan, Advances in Neural Information Processing Systems. 34Dai, Z., Liu, H., Le, Q.V., Tan, M. (2021). Coat- net: Marrying convolution and attention for all data sizes. Advances in Neural Informa- tion Processing Systems, 34 , 3965-3977.
Imagenet: A largescale hierarchical image database. J Deng, W Dong, R Socher, L.-J Li, K Li, L Fei-Fei, ieee conference on computer vision and pattern recognition. Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L. (2009). Imagenet: A large- scale hierarchical image database. 2009 ieee conference on computer vision and pattern recognition (pp. 248-255).
J Devlin, M.-W Chang, K Lee, K Toutanova, arXiv:1810.04805Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprintDevlin, J., Chang, M.-W., Lee, K., Toutanova, K. (2018). Bert: Pre-training of deep bidi- rectional transformers for language under- standing. arXiv preprint arXiv:1810.04805 .
Pedestrian detection: An evaluation of the state of the art. P Dollar, C Wojek, B Schiele, P Perona, IEEE transactions on pattern analysis and machine intelligence. 34Dollar, P., Wojek, C., Schiele, B., Perona, P. (2011). Pedestrian detection: An evaluation of the state of the art. IEEE transactions on pattern analysis and machine intelligence, 34 (4), 743-761.
Dynamic zoom-in network for fast object detection in large images. M Gao, R Yu, A Li, V I Morariu, L S Davis, Proceedings of the ieee conference on computer vision and pattern recognition. the ieee conference on computer vision and pattern recognitionGao, M., Yu, R., Li, A., Morariu, V.I., Davis, L.S. (2018). Dynamic zoom-in network for fast object detection in large images. Proceedings of the ieee conference on computer vision and pattern recognition (pp. 6926-6935).
Nas-fpn: Learning scalable feature pyramid architecture for object detection. G Ghiasi, T.-Y Lin, Q V Le, Proceedings of the ieee/cvf conference on computer vision and pattern recognition. the ieee/cvf conference on computer vision and pattern recognitionGhiasi, G., Lin, T.-Y., Le, Q.V. (2019). Nas-fpn: Learning scalable feature pyramid architec- ture for object detection. Proceedings of the ieee/cvf conference on computer vision and pattern recognition (pp. 7036-7045).
mdalu: Multi-source domain adaptation and label unification with partial datasets. R Gong, D Dai, Y Chen, W Li, L Van Gool, Proceedings of the ieee/cvf international conference on computer vision. the ieee/cvf international conference on computer visionGong, R., Dai, D., Chen, Y., Li, W., Van Gool, L. (2021). mdalu: Multi-source domain adaptation and label unification with partial datasets. Proceedings of the ieee/cvf inter- national conference on computer vision (pp. 8876-8885).
Vision models are more robust and fair when pretrained on uncurated images without supervision. P Goyal, Q Duval, I Seessel, M Caron, M Singh, I Misra, . . Bojanowski, P , arXiv:2202.08360arXiv preprintGoyal, P., Duval, Q., Seessel, I., Caron, M., Singh, M., Misra, I., . . . Bojanowski, P. (2022). Vision models are more robust and fair when pretrained on uncurated images without supervision. arXiv preprint arXiv:2202.08360 .
Lvis: A dataset for large vocabulary instance segmentation. A Gupta, P Dollar, R Girshick, Proceedings of the ieee/cvf conference on computer vision and pattern recognition. the ieee/cvf conference on computer vision and pattern recognitionGupta, A., Dollar, P., Girshick, R. (2019). Lvis: A dataset for large vocabulary instance segmentation. Proceedings of the ieee/cvf conference on computer vision and pattern recognition (pp. 5356-5364).
Generalizable pedestrian detection: The elephant in the room. I Hasan, S Liao, J Li, S U Akram, L Shao, Proceedings of the ieee/cvf conference on computer vision and pattern recognition. the ieee/cvf conference on computer vision and pattern recognitionHasan, I., Liao, S., Li, J., Akram, S.U., Shao, L. (2021). Generalizable pedestrian detection: The elephant in the room. Proceedings of the ieee/cvf conference on computer vision and pattern recognition (pp. 11328-11337).
Masked autoencoders are scalable vision learners. K He, X Chen, S Xie, Y Li, P Dollár, R Girshick, Proceedings of the ieee/cvf conference on computer vision and pattern recognition. the ieee/cvf conference on computer vision and pattern recognitionHe, K., Chen, X., Xie, S., Li, Y., Dollár, P., Gir- shick, R. (2022). Masked autoencoders are scalable vision learners. Proceedings of the ieee/cvf conference on computer vision and pattern recognition (pp. 16000-16009).
Momentum contrast for unsupervised visual representation learning. Proceedings of the ieee/cvf conference on computer vision and pattern recognition. K He, H Fan, Y Wu, S Xie, R Girshick, He, K., Fan, H., Wu, Y., Xie, S., Girshick, R. (2020). Momentum contrast for unsuper- vised visual representation learning. Pro- ceedings of the ieee/cvf conference on com- puter vision and pattern recognition (pp. 9729-9738).
He, Y., Huang, G., Chen, S., Teng, J., Wang, K., Yin, Z., . . . Shao, J. (2022). X-learner: Learning cross sources and tasks for universal visual representation. European conference on computer vision (pp. 509-528).
Huang, J., Rathod, V., Sun, C., Zhu, M., Korattikara, A., Fathi, A., . . . others (2017). Speed/accuracy trade-offs for modern convolutional object detectors. Proceedings of the ieee conference on computer vision and pattern recognition (pp. 7310-7311).
Scaling up visual and vision-language representation learning with noisy text supervision. C Jia, Y Yang, Y Xia, Y.-T Chen, Z Parekh, H Pham, . . Duerig, T , International conference on machine learningJia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., . . . Duerig, T. (2021). Scaling up visual and vision-language representa- tion learning with noisy text supervision. International conference on machine learn- ing (pp. 4904-4916).
Learning visual features from large weakly supervised data. A Joulin, L V D Maaten, A Jabri, N Vasilache, European conference on computer vision. Joulin, A., Maaten, L.v.d., Jabri, A., Vasilache, N. (2016). Learning visual features from large weakly supervised data. European conference on computer vision (pp. 67-84).
Kolesnikov, A., Zhai, X., Beyer, L. (2019). Revisiting self-supervised visual representation learning. Proceedings of the ieee/cvf conference on computer vision and pattern recognition (pp. 1920-1929).
S Kornblith, J Shlens, Q V Le, Do better imagenet models transfer better? Proceedings of the ieee/cvf conference on computer vision and pattern recognition. Kornblith, S., Shlens, J., Le, Q.V. (2019). Do bet- ter imagenet models transfer better? Pro- ceedings of the ieee/cvf conference on com- puter vision and pattern recognition (pp. 2661-2671).
Visual genome: Connecting language and vision using crowdsourced dense image annotations. R Krishna, Y Zhu, O Groth, J Johnson, K Hata, J Kravitz, International journal of computer vision. 1231Krishna, R., Zhu, Y., Groth, O., Johnson, J., Hata, K., Kravitz, J., . . . others (2017). Visual genome: Connecting language and vision using crowdsourced dense image annotations. International journal of com- puter vision, 123 (1), 32-73.
others (2020). The open images dataset v4. A Kuznetsova, H Rom, N Alldrin, J Uijlings, I Krasin, J Pont-Tuset, International Journal of Computer Vision. 1287Kuznetsova, A., Rom, H., Alldrin, N., Uijlings, J., Krasin, I., Pont-Tuset, J., . . . others (2020). The open images dataset v4. International Journal of Computer Vision, 128 (7), 1956- 1981.
Mseg: A composite dataset for multi-domain semantic segmentation. Proceedings of the ieee/cvf conference on computer vision and pattern recognition. J Lambert, Z Liu, O Sener, J Hays, V Koltun, Lambert, J., Liu, Z., Sener, O., Hays, J., Koltun, V. (2020). Mseg: A composite dataset for multi-domain semantic segmentation. Pro- ceedings of the ieee/cvf conference on com- puter vision and pattern recognition (pp. 2879-2888).
Auto-encoding transformations in reparameterized lie groups for unsupervised learning. F Lin, H Xu, H Li, H Xiong, G.-J Qi, Proceedings of the aaai conference on artificial intelligence. the aaai conference on artificial intelligence35Lin, F., Xu, H., Li, H., Xiong, H., Qi, G.-J. (2021). Auto-encoding transformations in reparameterized lie groups for unsupervised learning. Proceedings of the aaai confer- ence on artificial intelligence (Vol. 35, pp. 8610-8617).
Feature pyramid networks for object detection. T.-Y Lin, P Dollár, R Girshick, K He, B Hariharan, S Belongie, Proceedings of the ieee conference on computer vision and pattern recognition. the ieee conference on computer vision and pattern recognitionLin, T.-Y., Dollár, P., Girshick, R., He, K., Har- iharan, B., Belongie, S. (2017). Feature pyramid networks for object detection. Pro- ceedings of the ieee conference on computer vision and pattern recognition (pp. 2117- 2125).
T.-Y Lin, M Maire, S Belongie, J Hays, P Perona, D Ramanan, . . Zitnick, C L , Microsoft coco: Common objects in context. European conference on computer vision. Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., . . . Zitnick, C.L. (2014). Microsoft coco: Common objects in context. European conference on computer vision (pp. 740-755).
Path aggregation network for instance segmentation. S Liu, L Qi, H Qin, J Shi, J Jia, Proceedings of the ieee conference on computer vision and pattern recognition. the ieee conference on computer vision and pattern recognitionLiu, S., Qi, L., Qin, H., Shi, J., Jia, J. (2018). Path aggregation network for instance segmenta- tion. Proceedings of the ieee conference on computer vision and pattern recognition (pp. 8759-8768).
Cbnet: A novel composite backbone network architecture for object detection. Y Liu, Y Wang, S Wang, T Liang, Q Zhao, Z Tang, H Ling, Proceedings of the aaai conference on artificial intelligence. the aaai conference on artificial intelligence34Liu, Y., Wang, Y., Wang, S., Liang, T., Zhao, Q., Tang, Z., Ling, H. (2020). Cbnet: A novel composite backbone network architec- ture for object detection. Proceedings of the aaai conference on artificial intelligence (Vol. 34, pp. 11653-11660).
Swin transformer v2: Scaling up capacity and resolution. Z Liu, H Hu, Y Lin, Z Yao, Z Xie, Y Wei, Proceedings of the ieee/cvf conference on computer vision and pattern recognition. the ieee/cvf conference on computer vision and pattern recognitionothers (2022)Liu, Z., Hu, H., Lin, Y., Yao, Z., Xie, Z., Wei, Y., . . . others (2022). Swin transformer v2: Scal- ing up capacity and resolution. Proceedings of the ieee/cvf conference on computer vision and pattern recognition (pp. 12009-12019).
Sgdr: Stochastic gradient descent with warm restarts. I Loshchilov, F Hutter, arXiv:1608.03983arXiv preprintLoshchilov, I., & Hutter, F. (2016). Sgdr: Stochas- tic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983 .
Detection hub: Unifying object detection datasets via query adaptation on language embedding. L Meng, X Dai, Y Chen, P Zhang, D Chen, M Liu, . . Jiang, Y.-G , arXiv:2206.03484arXiv preprintMeng, L., Dai, X., Chen, Y., Zhang, P., Chen, D., Liu, M., . . . Jiang, Y.-G. (2022). Detection hub: Unifying object detection datasets via query adaptation on language embedding. arXiv preprint arXiv:2206.03484 .
Mixed precision training. P Micikevicius, S Narang, J Alben, G Diamos, E Elsen, D Garcia, . . Others, International conference on learning representationsMicikevicius, P., Narang, S., Alben, J., Diamos, G., Elsen, E., Garcia, D., . . . others (2018). Mixed precision training. International con- ference on learning representations.
The mapillary vistas dataset for semantic understanding of street scenes. G Neuhold, T Ollmann, S Rota Bulo, P Kontschieder, Proceedings of the ieee international conference on computer vision. the ieee international conference on computer visionNeuhold, G., Ollmann, T., Rota Bulo, S., Kontschieder, P. (2017). The mapillary vis- tas dataset for semantic understanding of street scenes. Proceedings of the ieee inter- national conference on computer vision (pp. 4990-4999).
Libra r-cnn: Towards balanced learning for object detection. Proceedings of the ieee/cvf conference on computer vision and pattern recognition. J Pang, K Chen, J Shi, H Feng, W Ouyang, D Lin, Pang, J., Chen, K., Shi, J., Feng, H., Ouyang, W., Lin, D. (2019). Libra r-cnn: Towards balanced learning for object detection. Pro- ceedings of the ieee/cvf conference on com- puter vision and pattern recognition (pp. 821-830).
Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., . . . others (2021). Learning transferable visual models from natural language supervision. International conference on machine learning (pp. 8748-8763).
Designing network design spaces. I Radosavovic, R P Kosaraju, R Girshick, K He, P Dollár, Proceedings of the ieee/cvf conference on computer vision and pattern recognition. the ieee/cvf conference on computer vision and pattern recognitionRadosavovic, I., Kosaraju, R.P., Girshick, R., He, K., Dollár, P. (2020). Designing network design spaces. Proceedings of the ieee/cvf conference on computer vision and pattern recognition (pp. 10428-10436).
Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer. R Ranftl, K Lasinger, D Hafner, K Schindler, V Koltun, IEEE transactions. Ranftl, R., Lasinger, K., Hafner, D., Schindler, K., Koltun, V. (2020). Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer. IEEE transactions on pattern analysis and machine intelligence.
Faster r-cnn: Towards real-time object detection with region proposal networks. S Ren, K He, R Girshick, J Sun, Advances in neural information processing systems. 28Ren, S., He, K., Girshick, R., Sun, J. (2015). Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 28 .
J Shao, S Chen, Y Li, K Wang, Z Yin, Y He, arXiv:2111.08687Intern: A new learning paradigm towards general vision. arXiv preprintShao, J., Chen, S., Li, Y., Wang, K., Yin, Z., He, Y., . . . others (2021). Intern: A new learn- ing paradigm towards general vision. arXiv preprint arXiv:2111.08687 .
Objects365: A large-scale, high-quality dataset for object detection. S Shao, Z Li, T Zhang, C Peng, G Yu, X Zhang, . . Sun, J , Proceedings of the ieee/cvf international conference on computer vision. the ieee/cvf international conference on computer visionShao, S., Li, Z., Zhang, T., Peng, C., Yu, G., Zhang, X., . . . Sun, J. (2019). Objects365: A large-scale, high-quality dataset for object detection. Proceedings of the ieee/cvf inter- national conference on computer vision (pp. 8430-8439).
An analysis of scale invariance in object detection snip. B Singh, L S Davis, Proceedings of the ieee conference on computer vision and pattern recognition. the ieee conference on computer vision and pattern recognitionSingh, B., & Davis, L.S. (2018). An analy- sis of scale invariance in object detection snip. Proceedings of the ieee conference on computer vision and pattern recognition (pp. 3578-3587).
Revisiting unreasonable effectiveness of data in deep learning era. C Sun, A Shrivastava, S Singh, A Gupta, Proceedings of the ieee international conference on computer vision. the ieee international conference on computer visionSun, C., Shrivastava, A., Singh, S., Gupta, A. (2017). Revisiting unreasonable effective- ness of data in deep learning era. Proceed- ings of the ieee international conference on computer vision (pp. 843-852).
Efficientdet: Scalable and efficient object detection. M Tan, R Pang, Q V Le, Proceedings of the ieee/cvf conference on computer vision and pattern recognition. the ieee/cvf conference on computer vision and pattern recognitionTan, M., Pang, R., Le, Q.V. (2020). Efficient- det: Scalable and efficient object detection. Proceedings of the ieee/cvf conference on computer vision and pattern recognition (pp. 10781-10790).
Proper reuse of image classification features improves object detection. C Vasconcelos, V Birodkar, V Dumoulin, Proceedings of the ieee/cvf conference on computer vision and pattern recognition. the ieee/cvf conference on computer vision and pattern recognitionVasconcelos, C., Birodkar, V., Dumoulin, V. (2022). Proper reuse of image classification features improves object detection. Proceed- ings of the ieee/cvf conference on computer vision and pattern recognition (pp. 13628- 13637).
Cascade rpn: Delving into high-quality region proposal network with adaptive convolution. T Vu, H Jang, T X Pham, C Yoo, Advances in neural information processing systems. 32Vu, T., Jang, H., Pham, T.X., Yoo, C. (2019). Cascade rpn: Delving into high-quality region proposal network with adaptive con- volution. Advances in neural information processing systems, 32 .
Towards universal object detection by domain attention. X Wang, Z Cai, D Gao, N Vasconcelos, Proceedings of the ieee/cvf conference on computer vision and pattern recognition. the ieee/cvf conference on computer vision and pattern recognitionWang, X., Cai, Z., Gao, D., Vasconcelos, N. (2019). Towards universal object detection by domain attention. Proceedings of the ieee/cvf conference on computer vision and pattern recognition (pp. 7289-7298).
Universal-rcnn: Universal object detector via transferable graph r-cnn. H Xu, L Fang, X Liang, W Kang, Z Li, Proceedings of the aaai conference on artificial intelligence. the aaai conference on artificial intelligence34Xu, H., Fang, L., Liang, X., Kang, W., Li, Z. (2020). Universal-rcnn: Universal object detector via transferable graph r-cnn. Pro- ceedings of the aaai conference on artificial intelligence (Vol. 34, pp. 12492-12499).
Seed the views: Hierarchical semantic alignment for contrastive representation learning. H Xu, X Zhang, H Li, L Xie, W Dai, H Xiong, Q Tian, IEEE Transactions on Pattern Analysis and Machine Intelligence. Xu, H., Zhang, X., Li, H., Xie, L., Dai, W., Xiong, H., Tian, Q. (2022). Seed the views: Hier- archical semantic alignment for contrastive representation learning. IEEE Transactions on Pattern Analysis and Machine Intelli- gence.
Unitbox: An advanced object detection network. J Yu, Y Jiang, Z Wang, Z Cao, T Huang, Proceedings of the 24th acm international conference on multimedia. the 24th acm international conference on multimediaYu, J., Jiang, Y., Wang, Z., Cao, Z., Huang, T. (2016). Unitbox: An advanced object detec- tion network. Proceedings of the 24th acm international conference on multimedia (pp. 516-520).
L Yuan, D Chen, Y.-L Chen, N Codella, X Dai, J Gao, arXiv:2111.11432Florence: A new foundation model for computer vision. arXiv preprintYuan, L., Chen, D., Chen, Y.-L., Codella, N., Dai, X., Gao, J., . . . others (2021). Florence: A new foundation model for computer vision. arXiv preprint arXiv:2111.11432 .
Citypersons: A diverse dataset for pedestrian detection. S Zhang, R Benenson, B Schiele, Proceedings of the ieee conference on computer vision and pattern recognition. the ieee conference on computer vision and pattern recognitionZhang, S., Benenson, R., Schiele, B. (2017). Citypersons: A diverse dataset for pedes- trian detection. Proceedings of the ieee conference on computer vision and pattern recognition (pp. 3213-3221).
Object detection with a unified label space from multiple datasets. X Zhao, S Schulter, G Sharma, Y.-H Tsai, M Chandraker, Y Wu, European conference on computer vision. Zhao, X., Schulter, S., Sharma, G., Tsai, Y.-H., Chandraker, M., Wu, Y. (2020). Object detection with a unified label space from multiple datasets. European conference on computer vision (pp. 178-193).
Scene parsing through ade20k dataset. B Zhou, H Zhao, X Puig, S Fidler, A Barriuso, A Torralba, Proceedings of the ieee conference on computer vision and pattern recognition. the ieee conference on computer vision and pattern recognitionZhou, B., Zhao, H., Puig, X., Fidler, S., Bar- riuso, A., Torralba, A. (2017). Scene parsing through ade20k dataset. Proceedings of the ieee conference on computer vision and pattern recognition (pp. 633-641).
Simple multi-dataset detection. X Zhou, V Koltun, P Krähenbühl, Proceedings of the ieee/cvf conference on computer vision and pattern recognition. the ieee/cvf conference on computer vision and pattern recognitionZhou, X., Koltun, V., Krähenbühl, P. (2022). Sim- ple multi-dataset detection. Proceedings of the ieee/cvf conference on computer vision and pattern recognition (pp. 7571-7580).
Code: https://github.com/linfeng93/Large-UniDet
Calculation of the optical response of C 60 and Na 8 using time-dependent density functional theory and local orbitals

Argyrios Tsolakidis, Daniel Sánchez-Portal, Richard M. Martin
Department of Physics and Materials Research Laboratory, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801

26 Sep 2001 (arXiv: cond-mat/0109488, doi: 10.1103/physrevb.66.235416)

We report on a general method for the calculation of the frequency-dependent optical response of clusters based upon time-dependent density functional theory (TDDFT). The implementation is done using explicit propagation in the time domain and a self-consistent program that uses a linear combination of atomic orbitals (LCAO). Our actual calculations employ the SIESTA program, which is designed to be fast and accurate for large clusters. We use the adiabatic local density approximation to account for exchange and correlation effects. Results are presented for the imaginary part of the linear polarizability, ℑα(ω), and the dipole strength function, S(ω), of C60 and Na8, compared to previous calculations and to experiment. We also show how to calculate the integrated frequency-dependent second order non-linear polarizability for the case of a step function electric field, γ_step(ω), and present results for C60.

I. INTRODUCTION
Although density functional theory (DFT) 1,2 is a very successful theory for the ground state properties, the excited states calculated within the Kohn-Sham scheme often are much less successful in describing the optical response and the excitation spectra. The solution to this problem, in principle, is the extension of DFT to the time-dependent systems. It is interesting to note that the first calculation 3 using TDDFT preceded any formal development and it relied heavily on the analogy with the time-dependent Hartree-Fock method. The first steps towards the formulation of TDDFT were done by Deb and Gosh 4,5 who focused on potentials periodic in time, and by Bartolotti 6,7 who focused on adiabatic processes. Runge and Gross 8 established the foundations of TDDFT for a generic form of the time-dependent potential. TDDFT was further developed 9,10 to acquire a structure that is very similar to that of the conventional DFT. A very interesting feature of TDDFT, that does not appear in DFT, is the dependence of the density functionals on the initial state. For more information about TDDFT the reader is advised to read the authoritative reviews of Gross, Ullrich, and Gossmann, 11 and Gross, Dobson, and Petersilka. 12 The polarizability describes the distortion of the charge cloud caused by the application of an external field. It is one of the most important response functions because it is directly related to electron-electron interactions, and correlations. In addition, it determines the response to charged particles, and optical properties. A quantity of particular interest is the dipole strength function, S(ω), which is directly related to the frequency-dependent linear polarizability, α(ω), by
$$\alpha(\omega) = \frac{e^{2}\hbar}{m}\int_0^{\infty}\frac{S(\omega')\,d\omega'}{\omega'^{2}-\omega^{2}}. \qquad (1)$$
By taking the imaginary part of Eq. (1) we obtain
$$S(\omega) = \frac{2m}{\pi e^{2}\hbar}\,\omega\,\Im\alpha(\omega). \qquad (2)$$
The dipole strength function, S(ω), is proportional to the photoabsorption cross section, σ(ω), measured by most experiments and, therefore, allows direct comparison with experiment. In addition, the integration of S over energy gives the number of electrons, N_e (f-sum rule), i.e.
$$\int_0^{\infty} dE\, S(E) = \sum_i f_i = N_e, \qquad (3)$$
where f i are the oscillator strengths. This sum rule is very important because it provides an internal consistency test for the calculations, indicating the completeness and adequacy of the basis set used for the computation of the optical response. Optical probes are some of the most successful experimental tools that allow access to the properties of clusters. Consequently, there are many calculations of the optical response of small atomic aggregates. In particular, there exist several theoretical studies of the examples chosen here, C 60 and Na 8 . This allows us to calibrate the accuracy of our method in comparison with other computational schemes. One of the first ab initio calculations of the dipole response of atomic clusters within TDDFT was performed by Yabana and Bertsch, 13 who studied large sodium and lithium clusters, and the C 60 molecule using a real time and space approach. For small Na clusters, Vasiliev et al. 14 calculated the absorption cross section using the time-dependent density functional response theory (TD-DFRT) developed by Casida. 15 The purpose of this work is to propose a method that will have significant advantages for the calculation of the polarizability of large clusters, reducing considerably the computation time while retaining the desired accuracy. This paper is organized as follows: In section II, we describe the method of calculation. In section II C, we give details about the way we solve the time-dependent Kohn-Sham equation and briefly summarize other methods available. In section III, we present an overview of relevant calculations and the results of our calculation for C 60 and Na 8 . We compare our results with other calculations and experiments. In section IV, we describe the calculation and present the results for the imaginary part of the integrated frequency-dependent second order non-linear polarizability for the case of a step function electric field, ℑγ step (ω), for C 60 . In section V, we give the conclusions.
II. METHOD OF CALCULATION
A. Electronic structure calculations
Our method involves the description of the electronic states using linear combination of atomic orbitals (LCAO). Because the size of the LCAO basis is small, compared with other usual choices like plane waves or real space grids, the TDDFT calculations can be done efficiently using the techniques described below. Our scheme is based on the SIESTA 16-18 code, which is used to compute the initial wavefunctions and the Hamiltonian matrix for each time step. SIESTA is a general-purpose DFT code which uses a local basis, and has been specially optimized to deal with large systems. As such, it represents an ideal tool for treating large clusters. Core electrons are replaced by norm-conserving pseudopotentials 19 in the fully nonlocal Kleinman-Bylander 20 form, and the basis set is a general and flexible linear combination of numerical atomic orbitals (NAOs), constructed from the eigenstates of the atomic pseudopotentials. 17,21 The NAOs are confined, being strictly zero beyond a certain radius. In addition, the electron wavefunctions and density are projected onto a real space grid in order to calculate the Hartree and exchange-correlation potentials and their matrix elements.
The use of confined NAOs is very important for the efficiency of the SIESTA code. With them, by exploiting the explicit sparseness of the Hamiltonian and density matrices, the computational cost for the construction and storage of the Hamiltonian and the electronic density can be made to scale linearly with the number of atoms, in the limit of large systems. Therefore, a considerable effort has been devoted to obtain orbital bases that would meet the standards of precision of conventional first-principles calculations, while keeping their range as small as possible. A simple scheme for the generation of transferable bases that satisfy both requirements was presented in Refs. 17 and 22. These bases, which we utilize in this work, have been successfully applied to study the ground state properties of very different systems, ranging from insulators to metals, and from bulk to surfaces and nanostructures. 18 It is not obvious however that these confined basis sets will be also adequate for the TDDFT calculation of the optical response. In this paper we show that, at least for the two systems considered, the optical absorption can be accurately calculated using basis of NAOs with reasonable confinement radii, and a moderate number of orbitals per atom. Our results are in good agreement with other TDDFT calculations using computationally more demanding basis sets.
Our approach is to carry out the calculations in the time domain, explicitly evolving the wavefunctions. We consider a bounded system in a finite electric field, i.e. the Hamiltonian includes a perturbation ∆H = −E · x. For the linear response calculations in this paper we have set the value of this field to 0.01 eV/Å. The system is solved for the ground state using standard time-independent density functional theory. 23 Then we switch off the electric field at time t = 0, and for every subsequent time step we propagate the occupied Kohn-Sham eigenstates by solving the time-dependent Kohn-Sham equation (ℏ = 1)
$$i\frac{\partial \Psi}{\partial t} = H\Psi, \qquad (4)$$
where H is the time-dependent Hamiltonian given by
$$H = -\frac{1}{2}\nabla^{2} + V_{ext}(\mathbf{r},t) + \int\frac{\rho(\mathbf{r}',t)}{|\mathbf{r}-\mathbf{r}'|}\,d\mathbf{r}' + V_{xc}[\rho](\mathbf{r},t). \qquad (5)$$
The calculation of the exchange-correlation potential is done using the adiabatic local density approximation (ALDA), where V_xc takes the form
$$V_{xc}[\rho](\mathbf{r},t) \cong \frac{\delta E^{LDA}_{xc}[\rho_t]}{\delta\rho_t(\mathbf{r})} = V^{LDA}_{xc}[\rho_t](\mathbf{r}). \qquad (6)$$
E^LDA_xc[ρ_t] is the exchange-correlation energy of the homogeneous electron gas. 24 It is important to notice that the V_xc in ALDA is local both in time and space. For every time step we solve Eq. (4), and from the new wavefunctions we construct the new density matrix
$$\rho_{\mu\nu}(t) = \sum_i^{occ} c^{\mu}_{i}(t)\, c^{\nu *}_{i}(t), \qquad (7)$$
where c^μ_i(t) are the coefficients of the occupied wavefunctions which correspond to the basis orbitals φ_μ(r). ρ_{μν}(t) has to be calculated and stored for overlapping orbitals only. The electron density is then obtained by
$$\rho(\mathbf{r},t) = \sum_{\mu,\nu} \rho_{\mu\nu}(t)\,\phi_{\mu}(\mathbf{r})\,\phi_{\nu}(\mathbf{r}), \qquad (8)$$
and used for the calculation of the Hamiltonian in the new cycle.
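A minimal NumPy sketch of Eqs. (7) and (8), assuming the occupied orbital coefficients are stored column-wise and the (real) numerical atomic orbitals have been tabulated on the real-space grid; the sparsity exploited in the actual implementation for overlapping orbitals is ignored here for brevity.

```python
import numpy as np

def density_matrix(C_occ):
    """Eq. (7): rho_{mu nu}(t) = sum_i c^mu_i(t) c^{nu *}_i(t).
    C_occ has shape (n_basis, n_occupied) and may be complex during propagation."""
    return C_occ @ C_occ.conj().T

def density_on_grid(rho, phi):
    """Eq. (8): rho(r, t) = sum_{mu,nu} rho_{mu nu}(t) phi_mu(r) phi_nu(r).
    phi has shape (n_basis, n_grid) with real numerical atomic orbitals."""
    return np.einsum('mn,mr,nr->r', rho, phi, phi).real

# Tiny synthetic example: 4 basis orbitals, 2 occupied states, 10 grid points.
rng = np.random.default_rng(0)
C = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
phi = rng.standard_normal((4, 10))
rho = density_matrix(C)
n_of_r = density_on_grid(rho, phi)
print(n_of_r.shape)       # (10,)
```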
B. Calculation of the polarizabilities
For every time step we calculate the dipole moment D(t) of the electrons in the cluster. This defines the response to all orders and the frequency dependent response is found by the Fourier transform
$$D(\omega) \equiv \int dt\, e^{i\omega t-\delta t}\, D(t). \qquad (9)$$
In our case we Fourier transform the dipole moment only for t>0. It is necessary to include a damping factor δ in order to perform the Fourier transform. This damping factor gives the minimum width of the peaks of the imaginary part of the response. Physically, it can be regarded as an approximate way to account for broadening. To linear order the polarizability is given by D(ω) = α(ω)E(ω), so that
$$\Im\alpha(\omega) = \omega\,\frac{\Re D(\omega)}{E}, \qquad (10)$$
where the field is given by E(t) = E θ(−t). After Fourier transforming the dipole moment we obtain the elements of the frequency-dependent polarizability tensor α_ij(ω). We repeat the calculation with the electric field along different axes unless the symmetry is high enough that this is not needed. The average linear polarizability is given by
$$\langle\alpha(\omega)\rangle = \frac{1}{3}\,\mathrm{Tr}\{\alpha_{ij}(\omega)\}. \qquad (11)$$
The choice of the coordinate system does not affect the average polarizability because of the rotational invariance of the trace.
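The post-processing of the dipole signal reduces to a damped Fourier transform followed by Eqs. (10) and (2). A small sketch, assuming the dipole along the field direction has been sampled at uniform time steps; the toy signal below is a single damped oscillation, not a real TDDFT dipole.

```python
import numpy as np

def imag_alpha(D_t, dt, omegas, E0, delta):
    """Discretized Eq. (9) with damping, then Eq. (10):
    Im alpha(omega) = omega * Re D(omega) / E0."""
    t = dt * np.arange(len(D_t))
    D_w = np.array([np.sum(D_t * np.exp((1j * w - delta) * t)) * dt for w in omegas])
    return omegas * D_w.real / E0

def dipole_strength(omegas, im_alpha, hbar=1.0, m=1.0, e=1.0):
    """Eq. (2): S(omega) = 2 m omega Im alpha(omega) / (pi e^2 hbar)."""
    return 2.0 * m * omegas * im_alpha / (np.pi * e**2 * hbar)

# Toy dipole signal: one damped oscillation at 3 eV gives one absorption peak there.
dt, E0, delta = 0.005, 0.01, 0.1            # time step (eV^-1), field, damping (eV)
t = dt * np.arange(6000)
D_t = 1e-3 * np.cos(3.0 * t) * np.exp(-0.05 * t)
w = np.linspace(0.5, 6.0, 500)
ia = imag_alpha(D_t, dt, w, E0, delta)
S = dipole_strength(w, ia)
print(w[np.argmax(ia)])                      # peak position, close to 3.0 eV
```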
C. Solution of the time-dependent Kohn-Sham equation
Efficient solution of the time-dependent Kohn-Sham equation (Eq. (4)) is of particular interest because together with the calculation of the Hamiltonian, they are the most time consuming parts of the calculation. In this section we describe our approach of solving Eq. (4), as well as other existing methods used for the same purpose.
In the LCAO formalism Eq. (4) takes the form
$$i\frac{\partial c}{\partial t} = S^{-1}Hc, \qquad (12)$$
where S is the overlap matrix between the orbitals and c is the column of the coefficients of the local orbitals. The overlap matrix is fixed for a given atomic configuration, hence we have to calculate and invert it only once. The formal solution of Eq. (12) is
$$c(t) = U(t,0)\,c(0) = T\exp\left(-i\int_0^{t} S^{-1}H(t')\,dt'\right)c(0), \qquad (13)$$
where T is the time ordering operator. The most elementary solution is obtained by breaking the total evolution operator into evolution operators of small time durations
$$U(t,0) \simeq \prod_{n=0}^{N-1} U\big((n+1)\Delta t,\, n\Delta t\big), \qquad (14)$$
where ∆t = T_tot/N and
$$U(t+\Delta t,\, t) = \exp\left(-i S^{-1}H(t)\,\Delta t\right). \qquad (15)$$
T_tot is the total time that we allow the system to evolve. The differences among propagation schemes arise from the way the exponential in Eq. (15) is approximated. In our approach, we approximate the exponential in Eq. (15) with the Crank-Nicholson operator. 25 The coefficients between the steps n + 1 and n are related by the equation
$$c^{n+1} = \frac{1 - i S^{-1}H(t_n)\,\frac{\Delta t}{2}}{1 + i S^{-1}H(t_n)\,\frac{\Delta t}{2}}\, c^{n}. \qquad (16)$$
This method is unitary, strictly preserving the orthonormality of the states for an arbitrary time evolution. For time-independent Hamiltonians it is also explicitly time reversal invariant, and exactly conserves energy. In practice, with a suitable choice of ∆t, the energy is satisfactorily conserved even when the Hamiltonian changes with time. For example, in the calculations described below, the drift of the total energy at the end of the simulation (∼130 fs in both cases) was only ∆E_tot/E_tot ∼ 3 × 10⁻⁷ for C 60 and ∼ 8 × 10⁻⁶ for Na 8 , after N_C60 ∼ 6100 and N_Na8 ∼ 2800 time steps, respectively. The larger energy drift in the case of Na 8 is attributed to the use of a larger time step. The method is stable when ∆t ∆E_max ≪ 1, where ∆E_max is the range of the eigenvalues of S⁻¹H. We can increase the stability of the solution if we include more terms of the expansion in the numerator and denominator of the Crank-Nicholson operator, i.e.
$$c^{n+1} = \frac{1 - i S^{-1}H\frac{\Delta t}{2} - \frac{1}{2}\left(S^{-1}H\frac{\Delta t}{2}\right)^{2} + \frac{i}{6}\left(S^{-1}H\frac{\Delta t}{2}\right)^{3}}{1 + i S^{-1}H\frac{\Delta t}{2} - \frac{1}{2}\left(S^{-1}H\frac{\Delta t}{2}\right)^{2} - \frac{i}{6}\left(S^{-1}H\frac{\Delta t}{2}\right)^{3}}\, c^{n}. \qquad (17)$$
By including more terms in the expansion it is possible either to increase the time step preserving the accuracy, or to increase the accuracy of the dynamics and the energy conservation for a given time step. The main advantage of using a bigger time step is the saving of time because we have to calculate the Hamiltonian fewer times. The energy resolution will not be affected since it depends on the total time that we allow the system to evolve. The method presented in this work has many similarities with that described by Yabana and Bertsch, 13 the main difference being the use of an LCAO basis set in the present case. However, this is a key difference because the size of the matrices used in the calculations is considerably smaller compared to other basis choices. In addition, our method has other advantages associated with the real time formulation of TDDFT. Only occupied states are used in the calculation, in contrast to the perturbative approach 15,26 where there is a sum over the excited states of the system. The implementation is relatively simple, since we use essentially the same operations as already used to find the ground state properties. It is also advantageous that nonlinear effects can be included in a straightforward way. One disadvantage of the real time approach is the calculation of the Hamiltonian for every time step. Although this is not an attractive feature there is no other way to calculate the time evolution of the system.
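For concreteness, below is a NumPy sketch of one first-order Crank-Nicholson step, Eq. (16), written for a static model Hamiltonian and an orthonormal basis (S equal to the identity) so that it runs standalone; in the real calculation H(t_n) is rebuilt self-consistently at every step and S comes from the overlap of the NAOs.

```python
import numpy as np

def crank_nicholson_step(C, H, S_inv, dt):
    """Propagate coefficient vectors C (n_basis x n_occ) by one time step:
    (1 + i S^-1 H dt/2) C_{n+1} = (1 - i S^-1 H dt/2) C_n   (Eq. 16)."""
    A = S_inv @ H * (dt / 2.0)
    I = np.eye(A.shape[0])
    return np.linalg.solve(I + 1j * A, (I - 1j * A) @ C)

# Toy model: orthonormal basis (S = identity) and a random Hermitian Hamiltonian.
rng = np.random.default_rng(1)
H = rng.standard_normal((6, 6))
H = 0.5 * (H + H.T)
S_inv = np.eye(6)
C = np.linalg.qr(rng.standard_normal((6, 3)))[0].astype(complex)   # 3 occupied states

for _ in range(1000):
    C = crank_nicholson_step(C, H, S_inv, dt=0.01)

print(np.allclose(C.conj().T @ C, np.eye(3)))   # orthonormality preserved -> True
```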
There are many other ways to approach the solution of the equations. For completeness, we discuss in the Appendix several methods that could be of potential relevance for our calculations. One of the main goals of these methods is to enable longer time steps. This would be a great advantage in our work; however, the fact that the self-consistent Hamiltonian changes as a function of time, limits their use.
III. DISCUSSION OF RESULTS
A. Small metal clusters: Na8
The first calculation we performed is the optical response of Na 8 . The main purpose of this calculation was to investigate the accuracy of our method in the case of a small cluster, where the effects related to the confinement of the orbitals should be more noticeable and where the size of our basis is much smaller than in previous calculations using real space grids. 14 Na 8 is the smallest closed-shell Na cluster whose optical response exhibits the presence of a plasmon, which is experimentally observed at 2.53 eV. 27,28 The width of the plasmon is due to Landau damping. 29 Previous work can be grouped into two types: earlier work on jellium spheres 27,29-31 that reproduces the qualitative features but not the quantitative energies of the peaks, and more recent work 14,32 that takes into account the detailed atomic structure and is in general in very good agreement with experiment. 27,28 In the first category are the calculations of Selby et al., 27,30 who calculated the photoabsorption cross section using the modified Mie theory, which is a classical theory. The plasmon was found to be at ∼ 2.76 eV. By using the self-consistent jellium model in the time dependent local density approximation (TDLDA), first introduced by Ekardt, 29 Yannouleas et al. 31 calculated the photoabsorption cross section of Na 8 and predicted the plasmon position at 2.82 eV.

Bonačić-Koutecký et al. 32 calculated the absorption spectrum of Na 8 using the configuration-interaction method (CI). Because the all-electron calculation is computationally very demanding, they obtained the excited states by a non-empirical effective core potential corrected for the core-valence correlation using a core polarization potential. The position of the plasmon was predicted at ∼ 2.55 eV. Vasiliev et al. 14 calculated the photoabsorption cross section using TD-DFRT. 15 Their calculations made use of norm-conserving pseudopotentials and a real-space mesh as a basis set. The position of the plasmon agreed with the photoabsorption experiments of Selby et al. 27 and Wang et al. 28 within 0.1-0.2 eV.
In our calculation we let the system evolve for the total time of T = 31.42 eV⁻¹. The energy resolution, determined by ∆ω = π/T, is, in consequence, equal to 0.1 eV. The time step is 11.025 × 10⁻³ eV⁻¹, and the damping factor used in the Fourier transform is 0.095 eV. Troullier-Martins pseudopotentials 19 including non-linear partial core corrections 33 for the exchange-correlation interaction between valence and core electrons, and an auxiliary real-space grid 16 equivalent to a plane-wave cutoff of 70 Ry are also used in this calculation.
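A quick consistency check of the parameters quoted above (ℏ = 1, so times are in eV⁻¹); this is only arithmetic, not part of the calculation itself.

```python
import math

T_total = 31.42          # eV^-1, total propagation time for Na8
dt = 11.025e-3           # eV^-1, time step

print(math.pi / T_total)     # energy resolution Delta omega = pi / T, about 0.1 eV
print(round(T_total / dt))   # about 2850 steps, consistent with N_Na8 ~ 2800 quoted earlier
```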
The basis set includes 13 NAOs per atom: two radial shapes to represent the 3s states plus a polarization 17 p shell with confinement radii r_s = r_p^Pol. = 12.2 a.u., and two additional 3p and 3d shells with radii r_p = r_d = 10.0 a.u. Fig. 1 and Fig. 2 present, respectively, our results for the dipole strength function and the imaginary part of the linear polarizability of Na 8 for energies up to 4 eV. The shape of these curves is in excellent agreement with both the calculations of Vasiliev et al. 14 and the experiments of Wang et al. 28 However, the results appear to be shifted to higher energies. In fact, the maximum of the plasmon peak is obtained at 2.86 eV, which is 0.33 eV higher than the experimentally observed value. This shift to higher energies seems to be related to the extension of the LCAO basis: using more confined orbitals we get a larger shift. The integrated dipole strength is equal to 6.97 out of 8, thus fulfilling 87.13 % of the sum rule. The partial fulfillment of the sum rule signifies the incompleteness of our basis set. The static linear polarizability α(0) can be obtained from standard (static) calculations of the induced dipole as a function of the applied field. Using this approach we obtain a value of 13.2 Å³/atom. An alternative way to calculate α(0) is provided by the formula
$$\alpha(0) = \frac{e^2\hbar}{m}\int_0^\infty \frac{S(\omega)}{\omega^2}\,d\omega = \frac{2}{\pi}\int_0^\infty \frac{\Im\,\alpha(\omega)}{\omega}\,d\omega, \qquad (18)$$
from which we obtain a value of 12.5 Å³/atom. (This result can also be derived from the fact that for the step perturbation D(t = 0) = α(0)E.) The discrepancy between both estimations is probably related with the lack of energy resolution of the calculated α(ω) to perform the integral in Eq. (18) with the required accuracy. Both results are in reasonable agreement with the values of 14.6 Å³/atom and 14.7 Å³/atom, computed by Vasiliev et al.14,34 using the TDLDA and finite field methods, respectively.

B. Large molecules: C60

The best known Buckyball, C60, is a very interesting system with strong electron-electron interactions due to the confinement. There are quite a few calculations concerning the optical properties of C60 and in particular its optical response. The main feature of the optical response of C60 is the presence of two collective excitations (plasmons). The low energy plasmon can be associated with the π electrons, while the high energy plasmon involves both the σ and π electrons, in analogy with the plasmons in graphite.35,36 The plasmons have been observed by a plethora of experiments.37-41 The earliest theoretical work42-45 on C60 involved simplifying approximations for the electron states (tight-binding or spherical averaging) and for the electron interaction (neglect or RPA-like treatments). We will compare our results with those of Westin et al.46 and Yabana and Bertsch,13 who used large basis sets and realistic carbon potentials. Westin et al. used single-particle wavefunctions, determined from a local density approximation (LDA) calculation, to evaluate the dipole matrix elements, which combined with a sum-over-states approach yielded the unscreened frequency-dependent linear polarizability. Screening was included in an RPA-like fashion by introducing an effective screening parameter. The polarizability calculated in the static limit was used to evaluate this parameter for the calculation of the dynamic response. The optical response and the sum rule for the low energy part were in reasonable agreement with the experiment of Leach et al.41 Yabana and Bertsch13 used TDLDA, evolving the system in real space and time, to calculate the dipole strength function of C60. Their calculation also gives reasonable agreement with the experimental data of Leach et al.41 for the sum rule of the low energy part, although it misses many details of the structure. This calculation is very similar in quality to ours.
The total simulation time in our calculation of the polarizability and dipole strength of the C60 molecule is again 31.416 eV⁻¹, with the corresponding maximum energy resolution of 0.1 eV. The time step, however, which is set equal to 5.145×10⁻³ eV⁻¹, is smaller than the one used for Na8. This is because of the higher frequency range of the response of C60. The damping factor used in the Fourier transform is equal to 0.34 eV in this case. Troullier-Martins pseudopotentials,19 a double-ζ polarized basis set, and a real-space grid cutoff16 of 70 Ry were used in this calculation. There are 13 NAOs per C atom: two different radial shapes for the description of the 2s states, another two for the 2p, plus an additional shell of d orbitals. The radii of confinement used are r_s = 5.12 a.u. and r_p = r_d^pol = 6.25 a.u. (corresponding to an energy shift17 of 50 meV). For C60, the calculated spectra show little dependence on these radii, at least as long as they are not chosen to be very stringent.

In Fig. 3, the dipole moment is shown as a function of the time step number. The dipole strength function obtained from the time evolution of the dipole moment is shown in Fig. 4 for energies up to 60 eV. Its main features are the low energy transitions that come from the π electrons and the σ and π electron transitions in the region of 14-27 eV. In the low energy part of the dipole strength function we have peaks at 3.46, 4.35, 5.36, and 5.84 eV, which agree very well with the ones obtained by the calculations of Westin et al.46 By integrating the dipole strength function over energy we get the sum rule strength. The total sum rule strength is 223.78 out of 240; therefore, we satisfy the sum rule up to 93.24%. This reflects the incompleteness of our basis set, which fails to reproduce some of the excitations in the high energy part of the spectrum. The σ plasmon is broadened, but this is a common feature of all the TDDFT calculations done for C60.13 In Fig. 5, the imaginary part of the polarizability is given as a function of energy. By using Eq. (18), the static linear polarizability α(0) is found to be 91.1 Å³, while our finite field calculations produce a value of 87.3 Å³. Results for α(0) from very accurate finite-field calculations using fifteen values of the field are given in section IV. Both values are higher than the lower limit estimation of 62.5 Å³ from quantum-mechanical calculations,47 and in good agreement with the value of 85 Å³ obtained by Yabana and Bertsch.13 They also agree well with the experimental values of 79.3 Å³ from UV absorption,48 and 85.2 Å³ from EELS spectra.49,50
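As an illustration of the sum-rule and Eq. (18) checks quoted above, a short numerical sketch follows. The arrays S_w and im_alpha are placeholders for the computed spectra; the trapezoidal integration on a uniform grid is an assumption of this sketch, not necessarily what was used in the actual calculation.

```python
import numpy as np

# Illustrative check of the f-sum rule and of Eq. (18), assuming S(w) and Im alpha(w)
# are available on a uniform energy grid (placeholder arrays below).
omega = np.linspace(0.01, 60.0, 6000)      # eV; avoid w = 0 in the 1/w integrand
S_w = np.zeros_like(omega)                  # dipole strength function (fill in)
im_alpha = np.zeros_like(omega)             # imaginary part of alpha(w) (fill in)

n_electrons = np.trapz(S_w, omega)          # should approach the number of valence electrons
alpha_static = (2.0 / np.pi) * np.trapz(im_alpha / omega, omega)   # Eq. (18), second form

print("integrated strength:", n_electrons)
print("alpha(0) from Eq. (18):", alpha_static)
```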
IV. NON-LINEAR POLARIZABILITIES
Because of the non-perturbative nature of our method we are able, for large values of the applied field, to obtain non-linear polarizabilities. In this section, we present the calculation of the imaginary part of the integrated frequency-dependent second order non-linear polarizability, ℑγ_step(ω), which is related to the response to a step function, for C60. Because C60 is centrosymmetric, the first order non-linear polarizability, β(ω), and all other polarizabilities involving an even number of fields, vanish by symmetry.
The advantage of the explicit time method is that exactly the same methods can be used to calculate the non-linear response of the system. The disadvantage is that (unlike the linear case, where each Fourier component is independent) the non-linear response depends upon the detailed spectrum of the applied field. Here we derive the non-linear response of the C60 molecule coupled to an electric field for the case where the field is the step function used before. A different calculation would have to be done to find the non-linear response to a field with a different time dependence.
First we give the relation of our calculation to the general definition of second order non-linear response, as a function of time, which is 51
$$D^{(3)}(t) = \int_{-\infty}^{t} dt_1 \int_{-\infty}^{t} dt_2 \int_{-\infty}^{t} dt_3\, \gamma(t; t_1, t_2, t_3)\, E(t_1)\, E(t_2)\, E(t_3). \qquad (19)$$
For the case of a step-function perturbation, i.e. E(t) = E θ(−t), it takes the form

$$D^{(3)}(t) = iE^3 \lim_{\delta_i\to 0^+} \int \frac{d\omega_1\, d\omega_2\, d\omega_3}{(2\pi)^3}\, e^{-i(\omega_1+\omega_2+\omega_3)t}\, \frac{\gamma(-\omega_1-\omega_2-\omega_3;\, \omega_1, \omega_2, \omega_3)}{(\omega_1 - i\delta_1)(\omega_2 - i\delta_2)(\omega_3 - i\delta_3)}. \qquad (20)$$
We Fourier transform Eq. (20) and obtain the second order non-linear response as a function of frequency
$$D^{(3)}(\omega) = iE^3 \lim_{\delta_i\to 0^+} \int \frac{d\omega_2\, d\omega_3}{(2\pi)^2}\, \frac{\gamma(-\omega;\, \omega-\omega_2-\omega_3, \omega_2, \omega_3)}{(\omega-\omega_2-\omega_3 - i\delta_1)(\omega_2 - i\delta_2)(\omega_3 - i\delta_3)}. \qquad (21)$$
The quantity we calculate is the real part of the second order non-linear response, ℜD (3) (ω), from which we can extract the imaginary part of the integrated second order non-linear polarizability, ℑγ step (ω). Explicit details are given below and in analogy to Eq. (10) ℑγ step (ω) is given by
$$\Im\,\gamma_{\rm step}(\omega) = -\omega \lim_{\delta_i\to 0^+} \int \frac{d\omega_2\, d\omega_3}{(2\pi)^2}\, \frac{\Im\,\gamma(-\omega;\, \omega-\omega_2-\omega_3, \omega_2, \omega_3)}{(\omega-\omega_2-\omega_3 - i\delta_1)(\omega_2 - i\delta_2)(\omega_3 - i\delta_3)} = \omega\, \frac{\Re D^{(3)}(\omega)}{E^3}. \qquad (22)$$
Just as in Eq. (18) for the linear term, ℑγ step (ω) can be related to the static second order non-linear polarizability by the expression
$$\frac{2}{\pi}\int_0^\infty \frac{d\omega}{\omega}\, \Im\,\gamma_{\rm step}(\omega) = \gamma(0; 0, 0, 0). \qquad (23)$$
Eq. (23) can be trivially derived by realizing that D^(3)(t = 0) = γ(0; 0, 0, 0)E³ when a step function perturbation is applied. Alternatively, we can derive Eq. (23) directly from Eq. (21) by applying the Kramers-Kronig relations for γ(−ω₁−ω₂−ω₃; ω₁, ω₂, ω₃). In fact, with the help of the Kramers-Kronig relations we can derive another interesting equality for the first moment of our integrated response,
$$\frac{1}{3}\int d\omega\, \Im\,\gamma_{\rm step}(\omega) = \int d\omega\, \Im\,\gamma(-\omega; \omega, 0, 0). \qquad (24)$$
For the calculation of γ_step(ω) we calculate the response of the system under two different step function perturbations. In the first calculation the field used is equal to E₁ = 0.10 V/Å, and we assume that the response D₁(ω) is linear with respect to the field. This assumption is justified since for ω = 0 the non-linear terms contribute only 4.82×10⁻³% to the response, and the contribution is of the same order of magnitude for ω ≠ 0. In the second calculation the field is equal to E₂ = 1.00 V/Å, and we assume that the response D₂(ω) consists of the linear response and the second order non-linear response D^(3)₂(ω). The values of the field are in the same range as those used by Westin et al.46 for the determination of the static second order non-linear polarizability. Using Eq. (10), we have that
$$D_1(\omega) = \alpha(\omega)\, E_1(\omega) = \alpha(\omega)\, \frac{E_1}{i\omega} \qquad (25)$$
and
$$D_2(\omega) - \alpha(\omega)\, E_2(\omega) = D^{(3)}_2(\omega). \qquad (26)$$
From Eq. (22) it follows that

$$D^{(3)}_2(\omega) = \frac{\gamma_{\rm step}(\omega)}{i\omega}\, E_2^3, \qquad (27)$$
and from Eqs. (25), (26), and (27)
$$\gamma_{\rm step}(\omega) = \frac{i\omega}{E_2^3}\left[ D_2(\omega) - \frac{E_2}{E_1}\, D_1(\omega) \right]. \qquad (28)$$
Our calculation of γ_step(ω) is quite straightforward, in contrast to the perturbative method, where it becomes computationally very demanding. In Fig. 6, we present the results, up to 60 eV, for ℑγ_step(ω), where ℑγ_step(ω) is given by Eq. (22). As expected, ℑγ_step(ω) has both positive and negative values. The reason why ℑγ_step(ω) does not vanish below some finite frequency (as the linear response does) is that the second-order non-linear term represents many processes of both absorption and emission of photons, and the C60 molecule can couple to a continuum of modes that extends to zero frequency. This can also be seen in the integral expression Eq. (22).
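A minimal sketch of how Eq. (28) and the consistency check of Eq. (23) can be evaluated numerically is given below, assuming the frequency-domain dipoles D₁(ω) and D₂(ω) from the two runs are available on a common grid; the arrays are placeholders, not the actual data of this work.

```python
import numpy as np

# Sketch of Eq. (28): extract the integrated second-order response from two runs with
# step fields E1 (linear regime) and E2 (non-linear regime). D1_w and D2_w are the
# frequency-domain induced dipoles of the two runs (placeholder arrays here).
E1, E2 = 0.10, 1.00                          # V/Angstrom, as in the text
omega = np.linspace(0.01, 60.0, 6000)        # eV
D1_w = np.zeros(omega.size, dtype=complex)   # fill with run-1 result
D2_w = np.zeros(omega.size, dtype=complex)   # fill with run-2 result

gamma_step = (1j * omega / E2**3) * (D2_w - (E2 / E1) * D1_w)   # Eq. (28)

# Consistency check, Eq. (23): (2/pi) * int dw Im[gamma_step]/w -> gamma(0;0,0,0)
gamma_0 = (2.0 / np.pi) * np.trapz(gamma_step.imag / omega, omega)
print("gamma(0;0,0,0) estimate:", gamma_0)
```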
Similarly to the case of the linear polarizability, we can obtain an estimation of the magnitude of γ(0; 0, 0, 0) from static self-consistent calculations performed with finite fields. This value can be contrasted with similar calculations in the literature, providing an estimation of the uncertainty of our calculations of the non-linear terms, and an internal consistency test for our calculation of the integrated frequency-dependent second order non-linear polarizability. We have followed here a procedure similar to that used in Ref. 52, performing LDA calculations of the total energy and electric dipole of the C60 molecule for fifteen different values of an external static electric field E ranging from 0.003 V/Å to 2.0 V/Å. The results for the total energy were then fitted using the expression W_tot = W_0 − (1/2)αE² − (1/4)γE⁴, where α is the linear polarizability and γ the second order non-linear polarizability. The values obtained for α and γ are, respectively, 85.3 Å³ and 3.54×10⁻³⁶ esu. The results for the dipole moment were fitted to the expression D = αE + γE³; the corresponding values obtained are 85.3 Å³ and 3.71×10⁻³⁶ esu. These values of γ(0; 0, 0, 0) are in excellent agreement with the value of 3.72×10⁻³⁶ esu obtained using Eq. (23), thus confirming the validity of our calculation of γ_step(ω). All of the values obtained by our calculations are smaller than the upper bound of 3.7×10⁻³⁵ esu proposed by the experiments of Geng and Wright.53 Our results can also be compared with other calculations in the literature. Quong et al.52 reported values of 82.7 Å³ and 7.0×10⁻³⁶ esu for α and γ, respectively, using an all-electron method with a Gaussian expansion as a basis set. Van Gisbergen et al.54 reported very similar values, 82.5 Å³ and 7.3×10⁻³⁶ esu, using a computational scheme based on a frozen-core approximation and a basis set of Slater functions. It is interesting to note that these results are much smaller than those obtained using simplified tight-binding models within an independent electron picture, where the effects of screening are neglected. In such calculations the values of γ(0; 0, 0, 0) obtained are of the order of 200×10⁻³⁶ esu.55-57
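The finite-field fits described above reduce to two small linear least-squares problems; a sketch follows. The field values mirror the range quoted in the text, but the energy and dipole arrays are placeholders, and the use of numpy's lstsq is an implementation choice of this sketch.

```python
import numpy as np

# Sketch of the finite-field fits described above. E_vals, W_tot and D_vals are the
# applied static fields, total energies and induced dipoles from the LDA runs
# (placeholder arrays here; the text uses fifteen fields between 0.003 and 2.0 V/A).
E_vals = np.linspace(0.003, 2.0, 15)
W_tot = np.zeros_like(E_vals)     # fill with computed total energies
D_vals = np.zeros_like(E_vals)    # fill with computed dipole moments

# W_tot = W0 - (1/2) alpha E^2 - (1/4) gamma E^4 -> linear least squares in (1, E^2, E^4)
A_w = np.column_stack([np.ones_like(E_vals), -0.5 * E_vals**2, -0.25 * E_vals**4])
W0, alpha_W, gamma_W = np.linalg.lstsq(A_w, W_tot, rcond=None)[0]

# D = alpha E + gamma E^3 -> linear least squares in (E, E^3)
A_d = np.column_stack([E_vals, E_vals**3])
alpha_D, gamma_D = np.linalg.lstsq(A_d, D_vals, rcond=None)[0]

print(alpha_W, gamma_W, alpha_D, gamma_D)
```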
V. CONCLUSION
We presented a method for the calculation of the optical response of atoms and clusters. The main features of the method are the description of the wavefunctions in terms of an efficient local orbital (LCAO) basis and the explicit evolution of the system in time. This approach is designed for large clusters and in fact it gives excellent results for C 60 . It is also shown to work remarkably well even for systems for which the LCAO basis is very small, such as Na 8 . Our approach has the desirable features that only occupied states are needed and that all the most computationally intensive operations are essentially the same as those used to calculate the ground state properties. In addition, non-linear effects can be included in a straightforward way, and we have shown how to calculate the second order non-linear response for C 60 .
APPENDIX: ALTERNATIVE APPROACHES FOR THE SOLUTION OF THE TIME-DEPENDENT KOHN-SHAM EQUATION

Other ways to approximate the exponential in Eq. (14) include the expansion of the exponential in a series of Chebyshev polynomials,58

$$e^{-iS^{-1}H\Delta t} \simeq e^{-i(\Delta E/2 + E_{\rm min})\Delta t}\, \sum_{n=0}^{N} a_n\!\left(\frac{\Delta E\, \Delta t}{2}\right)\, \phi_n(H_{\rm norm}), \qquad ({\rm A1})$$

where φ_n are the Chebyshev polynomials, H_norm = 2(S⁻¹H − E_av)/ΔE is a normalized Hamiltonian with E_av = (E_max + E_min)/2 and ΔE = E_max − E_min, E_max and E_min being, respectively, the maximum and minimum eigenvalues of S⁻¹H, and the expansion coefficients a_n(x) can be shown to be analogous to Bessel functions of the first kind of order n. The Chebyshev polynomials are chosen because their error decreases exponentially when N is large enough, due to the uniform character of the Chebyshev expansion.58 Time reversal is built into the expansion coefficients, but the method does not effectively conserve norm or energy. While it works remarkably well for time-independent Hamiltonians,59 for time-dependent Hamiltonians the method becomes inefficient, as the Chebyshev polynomials of the Hamiltonian have to be recalculated for each time step.

Another approximation for the exponential in Eq. (14) is performed by using the split-operator method introduced by Feit et al.60 According to this method, the exponential which contains the Hamiltonian operator can be split as
$$e^{-i(T+V)\Delta t} \simeq e^{-\frac{i}{2}T\Delta t}\, e^{-iV\Delta t}\, e^{-\frac{i}{2}T\Delta t}. \qquad ({\rm A2})$$
This method was later generalized by Suzuki61 for an arbitrary number of operators, providing higher order expansions (formula (A2) corresponds to second order in Δt) and a rigorous extension of the method to time-dependent Hamiltonians.62 The split-operator method takes advantage of the fact that it is very convenient to treat operators in their diagonal representations. For example, it is trivial to apply the kinetic energy operator to a wavefunction in Fourier space, while the effect of a local potential is more easily calculated in real space. This method is, in principle, unconditionally stable and norm conserving. On the other hand, it does not conserve energy, and it can only be applied to Hamiltonian operators which can be split into two non-commuting parts with a simple transformation between them. Therefore, the method is very well suited for plane-wave or real-space methods, where efficient fast Fourier transform algorithms provide an exact transformation between finite plane-wave expansions and real-space grids. In other words, they span exactly the same subspace of functions, and it is possible to switch between representations where the kinetic energy and the potential are diagonal within a given subspace. For an LCAO basis this is not possible: if we take an arbitrary wavefunction expanded in an orbital basis, ψ(r) = Σ_ν c_ν φ_ν(r), the result of applying the operator exp(−iV t) using a grid of points in real space, f(r_i) = exp(−iV(r_i)t) ψ(r_i), will in general not be representable using the same local basis, i.e. some of the resulting function has been spilled out of the subspace spanned by the local basis.

An alternative way to solve Eq. (12) is by using the second-order differencing (SOD) method introduced by Askar and Cakmak.63 In SOD the symmetric relation is used
$$c(t+\Delta t) - c(t-\Delta t) = \left( e^{-iS^{-1}H(t)\Delta t} - e^{\,iS^{-1}H(t)\Delta t} \right) c(t), \qquad ({\rm A3})$$
and by expanding the exponentials in a Taylor series, the second-order propagation scheme is obtained
$$c(t+\Delta t) \simeq c(t-\Delta t) - 2i\,\Delta t\, S^{-1} H(t)\, c(t). \qquad ({\rm A4})$$
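A minimal sketch of the SOD update of Eq. (A4) is given below for a toy two-level problem with a non-trivial overlap matrix; the matrices, the step size, and the crude first step are illustrative assumptions only.

```python
import numpy as np

def sod_step(c_prev, c_now, H, S, dt):
    """One second-order-differencing step, Eq. (A4):
    c(t+dt) = c(t-dt) - 2i dt S^{-1} H c(t)   (hbar = 1)."""
    return c_prev - 2j * dt * np.linalg.solve(S, H @ c_now)

# Toy two-level example with an overlap matrix (illustrative values only)
H = np.array([[0.0, 0.1], [0.1, 1.0]])
S = np.array([[1.0, 0.2], [0.2, 1.0]])
dt = 1.0e-3
c_prev = np.array([1.0, 0.0], dtype=complex)               # c(0)
c_now = c_prev - 1j * dt * np.linalg.solve(S, H @ c_prev)  # crude first (Euler) step
for _ in range(1000):
    c_prev, c_now = c_now, sod_step(c_prev, c_now, H, S, dt)
```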
By construction SOD obeys time-reversal symmetry. SOD is stable, and norm and energy are approximately conserved, only if the Hamiltonian is Hermitian. In spite of its simplicity, an important drawback of this method for its application to large systems is that it is necessary to store the wavefunctions at two different time steps, which may require a large amount of memory.

Finally, another method for solving Eq. (12) is the short iterative Lanczos.59,64 It is very convenient for calculations that involve very big Hamiltonians, especially when they are time independent. The Hamiltonian is projected onto a subspace of smaller dimensionality and takes a tridiagonal form that makes it easy to perform calculations. The Lanczos recurrence creates a set of orthogonal polynomials which constitute a finite polynomial approximation of the operator. An interesting feature of the method is its dependence on both the operator and the initial vector. The Lanczos recurrence relation is

$$(S^{-1}H)\, q_j = \beta_{j-1}\, q_{j-1} + \alpha_j\, q_j + \beta_j\, q_{j+1}. \qquad ({\rm A5})$$
The coefficients are α_j = (q_j, (S⁻¹H) q_j) and β_{j−1} = (q_{j−1}, (S⁻¹H) q_j), where (a, b) = a†Sb is the usual complex inner product, and (q_i, q_j) = δ_ij. The matrices H and S have dimension N_b × N_b and the vectors q_j have N_b components, N_b being the total number of basis functions. For very large N_b, where the method is particularly useful, the inverse S⁻¹ is not directly calculated, but an iterative method is used instead.64 The recurrence relation is initiated by setting
$$q_0 = c(0) \qquad ({\rm A6})$$
and

$$(S^{-1}H)\, q_0 = \alpha_0\, q_0 + \beta_0\, q_1. \qquad ({\rm A7})$$
After P iterations, the projected operator (S⁻¹H)_P = (q_j, (S⁻¹H) q_i) is a P × P matrix with a tridiagonal form that can be very easily diagonalized. The solution of Eq. (12) becomes

$$c(\Delta t) = Z^{\dagger}\, e^{-iD_P \Delta t}\, Z\, c(0), \qquad ({\rm A8})$$
where Z is the N_b × P transformation matrix that diagonalizes (S⁻¹H)_P and D_P is the diagonalized matrix. For an arbitrary initial condition, the accuracy of the time evolution achieved with Eq. (A8) is equivalent to a P-order expansion of the evolution operator
$$c(\Delta t) = \sum_{k=0}^{P-1} \frac{(-i\Delta t)^k}{k!}\, (S^{-1}H)^k\, c(0), \qquad ({\rm A9})$$
and, consequently, the energy is only approximately conserved. However, the propagator in Eq. (A8) is unitary, and the normalization condition is strictly preserved. For time-independent Hamiltonians, Eq. (A8) can be used to evolve the wavefunction for a time interval τ that depends on P. For times larger than τ, the time evolution predicted by a P-order expansion becomes inaccurate, and it is necessary to recalculate the propagator using c(τ) as the starting point for the recurrence procedure. The method is, therefore, very well suited to follow the evolution of wavefunctions described by a large basis set during short periods of time (< τ).
For the purpose of the calculation of response functions, we believe that the method adopted in this paper, based on the Crank-Nicholson operator, is superior to the short iterative Lanczos, at least for moderate basis sizes. There are several reasons for this: i) We need to evolve the wavefunctions for long times. ii) The Hamiltonian is time dependent. This implies that the recurrence cycle for the calculation of the approximated time evolution operator has to be repeated for each time step. iii) In our case, we have to evolve all the occupied states. The propagator in Eq. (A8), however, depends on the initial state. Therefore, it is necessary to develop some generalization of the scheme presented. For example, it might also be possible to construct an approximate time evolution operator starting from some weighted linear combination of the occupied states, and projecting it into a subspace of dimension P > N_occ, where N_occ is the number of occupied wavefunctions.
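The exact Crank-Nicholson form used in Eq. (14) is not reproduced in this excerpt; a standard Crank-Nicholson discretization of i S dc/dt = H c, consistent with the discussion above, is sketched below with toy matrices as placeholders.

```python
import numpy as np

def crank_nicholson_step(c, H, S, dt):
    """One Crank-Nicholson step for i S dc/dt = H c (hbar = 1):
    (S + i dt/2 H) c(t+dt) = (S - i dt/2 H) c(t)."""
    A = S + 0.5j * dt * H
    B = S - 0.5j * dt * H
    return np.linalg.solve(A, B @ c)

# Toy example; for a fixed H the matrices could be factorized once and reused each step.
H = np.array([[0.0, 0.1], [0.1, 1.0]])
S = np.array([[1.0, 0.2], [0.2, 1.0]])
c = np.array([1.0, 0.0], dtype=complex)
for _ in range(1000):
    c = crank_nicholson_step(c, H, S, 1.0e-3)
```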
FIG. 2. Imaginary part of the linear polarizability of Na8 vs. energy (in units of Å³).

FIG. 5. Imaginary part of the linear polarizability of C60 vs. energy (in units of Å³).

FIG. 6. Imaginary part of the integrated second order non-linear polarizability, ℑγ_step(ω), of C60 vs. energy.
ACKNOWLEDGMENTS
We would like to thank Prof. L. Cooper for the useful discussions, and Dr. I. Vasiliev for reading the manuscript. This material is based upon work supported by the U.S. Department of Energy, Division of Materials Sciences under Award No. DEFG02-ER9645439, through the Frederick Seitz Materials Research Laboratory.
* Present address: Departamento de Física de Materiales and DIPC, Facultad de Química, UPV/EHU, Apdo. 1072, E-20080 San Sebastián, Spain.
1. P. Hohenberg and W. Kohn, Phys. Rev. 136, B864 (1964).
2. W. Kohn and L. J. Sham, Phys. Rev. 140, A1133 (1965).
3. A. Zangwill and P. Soven, Phys. Rev. A 21, 1561 (1980).
4. B. M. Deb and S. K. Ghosh, J. Chem. Phys. 77, 342 (1982).
5. S. K. Ghosh and B. M. Deb, Chem. Phys. 71, 295 (1982).
6. L. J. Bartolotti, Phys. Rev. A 24, 1661 (1981).
7. L. J. Bartolotti, Phys. Rev. A 26, 2243 (1982).
8. E. Runge and E. K. U. Gross, Phys. Rev. Lett. 52, 997 (1984).
9. E. K. U. Gross and W. Kohn, Phys. Rev. Lett. 55, 2850 (1985).
10. D. Mearns and W. Kohn, Phys. Rev. A 35, 4796 (1987).
11. E. K. U. Gross, C. A. Ullrich, and U. J. Gossmann, in Density Functional Theory, edited by E. K. U. Gross and R. M. Dreizler (Plenum Press, New York, 1995).
12. E. K. U. Gross, J. F. Dobson, and M. Petersilka, in Topics in Current Chemistry, Vol. 181, edited by R. F. Nalewajski (Springer-Verlag, Berlin Heidelberg, 1996).
13. K. Yabana and G. F. Bertsch, Phys. Rev. B 54, 4484 (1996).
14. I. Vasiliev, S. Ögüt, and J. R. Chelikowsky, Phys. Rev. Lett. 82, 1919 (1999).
15. M. E. Casida, in Recent Developments and Applications of Modern Density Functional Theory, edited by J. M. Seminario (Elsevier, Amsterdam, 1996).
16. D. Sánchez-Portal, P. Ordejón, E. Artacho, and J. M. Soler, Int. J. Quant. Chem. 65, 453 (1997).
17. E. Artacho, D. Sánchez-Portal, P. Ordejón, A. García, and J. M. Soler, Phys. Stat. Sol. (b) 215, 809 (1999).
18. P. Ordejón, Phys. Stat. Sol. (b) 217, 335 (2000).
19. N. Troullier and J. L. Martins, Phys. Rev. B 43, 1993 (1991).
20. L. Kleinman and D. M. Bylander, Phys. Rev. Lett. 48, 1425 (1982).
21. O. F. Sankey and D. J. Niklewski, Phys. Rev. B 40, 3979 (1989); D. Sánchez-Portal, E. Artacho, and J. M. Soler, J. Phys.: Condens. Matter 8, 3859 (1996).
22. J. Junquera, O. Paz, D. Sánchez-Portal, and E. Artacho, submitted; cond-mat/0104170.
23. Our calculations are performed using a supercell geometry. Therefore, we are talking here about the ground state of a bounded system in the presence of a periodic triangular potential, with slope E. This provides a meaningful model for a cluster in the presence of a finite electric field only if |E| < |Ec|, where the critical slope depends on the system under consideration, and on the lateral size of the simulation cell L. In particular, Ec → 0 when L → ∞. In our calculations of the linear response we use small electric fields, such that |E| << |Ec|.
24. D. M. Ceperley and B. J. Alder, Phys. Rev. Lett. 45, 566 (1980).
25. R. S. Varga, Matrix Iterative Analysis (Prentice-Hall, Inc., Englewood Cliffs, N.J., 1962), p. 263.
26. H. Ehrenreich and M. H. Cohen, Phys. Rev. 115, 786 (1959).
27. K. Selby, M. Vollmer, J. Masui, V. Kresin, W. A. de Heer, and W. D. Knight, Phys. Rev. B 40, 5417 (1989).
28. C. R. C. Wang, S. Pollack, D. Cameron, and M. M. Kappes, J. Chem. Phys. 93, 3787 (1990).
29. W. Ekardt, Phys. Rev. Lett. 52, 1925 (1984).
30. W. A. de Heer, K. Selby, V. Kresin, J. Masui, M. Vollmer, A. Châtelain, and W. D. Knight, Phys. Rev. Lett. 59, 1805 (1987).
31. C. Yannouleas, R. A. Broglia, M. Brack, and P. F. Bortignon, Phys. Rev. Lett. 63, 255 (1989).
32. V. Bounačić-Koutecký, P. Fantucci, and J. Koutecký, J. Chem. Phys. 93, 3802 (1990).
33. S. G. Louie, S. Froyen, and M. L. Cohen, Phys. Rev. B 26, 1738 (1982).
34. I. Vasiliev, S. Ögüt, and J. R. Chelikowsky, Phys. Rev. Lett. 78, 4805 (1997).
35. I. L. Spain, Chem. Phys. Carbon 16, 119 (1980).
36. Y. Saito, H. Shinohara, and A. Ohsita, Jpn. J. Appl. Phys. 30, L1068 (1991).
37. H. Ajie, M. M. Alvarez, S. J. Anz, R. D. Beck, F. Diederich, K. Fostiropoulos, D. R. Huffman, W. Kratschmer, Y. Rubin, K. E. Schriver, D. Sensharma, and R. L. Whetten, J. Phys. Chem. 94, 8630 (1990).
38. J. R. Heath, R. F. Curl, and R. E. Smalley, J. Chem. Phys. 87, 4236 (1987).
39. J. W. Keller and M. A. Coplan, Chem. Phys. Lett. 193, 89 (1992).
40. I. V. Hertel, H. Steger, J. de Vries, B. Weisser, C. Menzel, B. Kamke, and W. Kamke, Phys. Rev. Lett. 68, 784 (1992).
41. S. Leach, M. Vervloet, A. Desprès, E. Bréheret, J. Hare, T. J. Dennis, H. W. Kroto, R. Taylor, and D. R. M. Walton, Chem. Phys. 160, 451 (1992).
42. G. F. Bertsch, A. Bulgac, D. Tománek, and Y. Wang, Phys. Rev. Lett. 67, 2690 (1991).
43. A. Rubio, J. A. Alonso, and J. M. López, Physica B 183, 247 (1993).
44. A. Bulgac and N. Ju, Phys. Rev. B 46, 4297 (1992).
45. F. Alasia, R. A. Broglia, H. E. Roman, L. I. Serra, G. Colò, and J. M. Pacheco, J. Phys. B 27, L643 (1994).
46. E. Westin, A. Rosén, G. Te Velde, and E. J. Baerends, J. Phys. B 29, 5087 (1996).
47. P. W. Fowler, P. Lazzeretti, and R. Zanasi, Chem. Phys. Lett. 165, 79 (1990).
48. S. L. Ren, Y. Wang, A. M. Rao, E. McRae, J. M. Holden, T. Hager, K. Wang, Wen-Tse Lee, W. F. Ni, J. Selegue, and P. C. Eklund, Appl. Phys. Lett. 59, 2678 (1991).
49. E. Sohmen, J. Fink, and W. Krätschmer, Z. Phys. B 86, 87 (1992).
50. H. Cohen, E. Kolodney, T. Maniv, and M. Foldman, Solid State Commun. 81, 183 (1992).
51. C. Flytzanis, "Theory of Nonlinear Optical Susceptibilities," in Quantum Electronics, edited by H. Rabin and C. L. Tang (Academic Press, New York, 1975).
52. A. A. Quong and M. R. Pederson, Phys. Rev. B 46, 12906 (1992).
53. Lei Geng and J. C. Wright, Chem. Phys. Lett. 249, 105 (1996).
54. S. J. A. van Gisbergen, J. G. Snijders, and E. J. Baerends, Phys. Rev. Lett. 78, 3097 (1997).
55. Z. Shuai and J. L. Brédas, Phys. Rev. B 46, 16135 (1992).
56. K. Harigaya and S. Abe, Jpn. J. Appl. Phys. 31, L887 (1992).
57. J. Dong, J. Jiang, J. Yu, Z. D. Wang, and D. Y. Xing, Phys. Rev. B 52, 9066 (1995).
58. H. Tal-Ezer and R. Kosloff, J. Chem. Phys. 81, 3967 (1984).
59. C. Leforestier, R. H. Bisseling, C. Cerjan, M. D. Feit, R. Friesner, A. Guldberg, A. Hammerich, G. Jolicard, W. Karrlein, H. D. Meyer, N. Lipkin, O. Roncero, and R. Kosloff, J. Comp. Phys. 94, 59 (1991).
60. M. D. Feit, J. A. Fleck, Jr., and A. Steiger, J. Comp. Phys. 47, 412 (1982).
61. M. Suzuki, J. Phys. Soc. Jpn. 61, L3015 (1992); M. Suzuki and T. Yamauchi, J. Math. Phys. 34, 4892 (1993).
62. M. Suzuki, Proc. Jpn. Acad., Ser. B: Phys. Biol. Sci. 69, 161 (1993).
63. A. Askar and A. S. Cakmak, J. Chem. Phys. 68, 2794 (1978).
64. T. J. Park and J. C. Light, J. Chem. Phys. 85, 5870 (1986).
Even flavor QED3 in an external magnetic field

K. Farakos and G. Koutsoumbas
Physics Department, National Technical University, Zografou Campus, 157 80 Athens, Greece

N. Mavromatos and A. Momen
Department of Physics, Theoretical Physics, 1 Keble Road, OX1 3NP Oxford, U.K.

arXiv:hep-lat/9909010v1, September 1999. DOI: 10.1016/S0920-5632(00)91742-0

The magnetically induced fermionic condensate is studied at zero and finite temperatures. The effect of a non-homogeneous external magnetic field is briefly considered.
INTRODUCTION
Considerations of the effects of external magnetic fields in the Early Universe and in problems of high T c superconductivity ( [1], [2]) are the main motivations to study the behaviour of the fermionic matter under the influence of an external magnetic field.
The three-dimensional continuum Lagrangian of the model is given by:
$$\mathcal{L} = -\frac{1}{4}(F_{\mu\nu})^2 + \bar{\Psi}\, D_\mu \gamma^\mu\, \Psi - m\,\bar{\Psi}\Psi, \qquad (1)$$
where D_µ = ∂_µ − ig a^S_µ − ie A_µ; a^S_µ is a fluctuating gauge field, while A_µ represents the external gauge field. The main object of interest here is the condensate ⟨Ψ̄Ψ⟩, which is the coincidence limit of the fermion propagator, S_F(x, y).
RESULTS IN THE CONTINUUM
A first estimate for the enhancement of the condensate arising from the external fields may be gained through the analysis of the relevant Schwinger-Dyson equation:
$$S_F^{-1}(p) = \gamma\cdot p - g \int \frac{d^3k}{(2\pi)^3}\, \gamma^\mu\, S_F(k)\, \Gamma^\nu(k, p-k)\, D_{\mu\nu}(p-k), \qquad (2)$$
where Γ ν is the fermion-photon vertex function and D µν is the exact photon propagator ( [5]). The results of a recent approximate solution of the above equation in the regime of small homogeneous external magnetic field, both for quenched and dynamical fermions are depicted in figure 1, where one may see the dynamical mass generated, versus the magnetic field strength. The upper curve (labeled "Q,Int-solve") is the solution for quenched fermions, while the lower curve is the weak field approximation to the dynamical fermionic condensate. We have also included the weak field approximation to the quenched result, as a measure of the reliability of the weak field expansion.
There have also been approximations in the regime of strong magnetic fields [3], but for a fully quantitative treatment one should rely on the lattice approach ( [4,5]).
LATTICE RESULTS
We will first present the results for the T = 0 case and a homogeneous magnetic field. Figure 2 contains the condensate versus the gauge field coupling constant, β_g, for several values of the magnetic field. This figure is the final outcome of a series of measurements performed at several values of the bare mass; the result shown here is the extrapolation to the zero mass limit. The result is independent of the magnetic field at strong gauge coupling, because the gauge interactions are the main contributor to the condensate in this regime; the magnetic field takes over at weak coupling, on the right hand part of the diagram. Figure 3 contains the condensate versus β_g for a typical fixed value of the magnetic field. The uppermost data correspond to a symmetric 16³ lattice. The curve is smooth and no sign of discontinuity can be seen anywhere. The next result comes from an asymmetric lattice, with a rather large time extent, though: 24²×6. A structure starts showing up at β_g ≈ 0.45. To see this structure better, we go to the 16²×4 lattice; the structure moves to β_g ≈ 0.40 and becomes somewhat steeper. The effect of the spatial size of the lattice is not big, as one may see by comparing the points for 16²×4 with the ones for 24²×4, which are also shown in the figure. Finally, the points for 16²×2 show a clear discontinuity in the slope. Although it deserves more detailed study, it seems safe to interpret this discontinuity as a symmetry restoring phase transition, imposed by the nonzero temperature.
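The zero-mass extrapolation mentioned above amounts to a simple fit of the condensate against the bare mass; a sketch with placeholder numbers (not the actual lattice data) is given below, assuming a linear extrapolation.

```python
import numpy as np

# Sketch of the zero-bare-mass extrapolation: fit the measured condensate at several
# bare masses and read off the m -> 0 intercept. Values below are placeholders.
m_bare = np.array([0.01, 0.02, 0.03, 0.04])
condensate = np.array([0.011, 0.015, 0.019, 0.023])

coeffs = np.polyfit(m_bare, condensate, 1)      # linear fit: <psi-bar psi> = a*m + b
print("chiral limit estimate:", coeffs[1])
```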
In figure 4 we use the results for various lattice sizes to construct a graph showing the dependence of the condensate on the temperature for two values of the external magnetic field. It is surprisingly similar to the corresponding result for the "free" case ([4]). The condensate tends to zero for large temperatures; this tendency is more pronounced for the smallest magnetic field. In the last figure we show the results of a study of a non-homogeneous magnetic field. We consider a lattice whose central 6×6 region (for all values of z) carries a constant magnetic flux, while in the remaining part the magnetic field vanishes ([5]); we measure the condensate along a straight line passing through the center of the lattice at a fixed value of the magnetic field. This profile is shown in figure 5, and one may see that the condensate is non-zero only in the region of non-vanishing magnetic flux. Away from this region the remaining condensate may be accounted for by the explicit mass term and has little to do with the external magnetic field.
Figure 1. Solution of the Schwinger-Dyson equation for quenched and dynamical fermions.

Figure 2. ⟨Ψ̄Ψ⟩ versus β_g for various values of the magnetic field.

Figure 3. ⟨Ψ̄Ψ⟩ versus β_g in the zero mass limit at various temperatures 1/N_t.

Figure 4. ⟨Ψ̄Ψ⟩ versus the temperature for two values of the magnetic field.

Figure 5. ⟨Ψ̄Ψ⟩ along a straight line passing through the center of the lattice when the magnetic field parameter b is set to 0.111. The central region of non-zero flux is 6×6.
Acknowledgements K.F. and G.K. would like to acknowledge financial support from the TMR project "Finite temperature phase transitions in particle Physics", EU contract number: FMRX-CT97-0122. The work of N.E.M. is partially supported by PPARC (UK) through an Advanced Fellowship. The work of A.M. is supported by PPARC.
[1] N. Dorey and N. E. Mavromatos, Nucl. Phys. B386 (1992) 614.
[2] K. Farakos and N. E. Mavromatos, Int. J. Mod. Phys. B12 (1998) 809.
[3] A. V. Shpagin, Dynamical mass generation in (2+1)-dimensional electrodynamics in an external magnetic field, hep-ph/9611412.
[4] K. Farakos, G. Koutsoumbas and N. E. Mavromatos, Phys. Lett. B431 (1998) 147.
[5] K. Farakos, G. Koutsoumbas, N. E. Mavromatos and A. Momen, On magnetic catalysis in even-flavour QED3, hep-ph/9905272.
Simulation Study of γγ → hh in a Photon Collider

Tohru Takahashi and Nozomi Maeda
Graduate School of Advanced Sciences of Matter, Hiroshima University, Higashi-Hiroshima, Hiroshima, Japan

Katsumasa Ikematsu, Keisuke Fujii, Daisuke Harada, Yoshimasa Kurihara and Yasuhiro Okada
KEK, High Energy Accelerator Research Organization, Tsukuba, Ibaraki, Japan

Eri Asakawa
Institute of Physics, Meiji Gakuin University, Yokohama, Kanagawa, Japan

Shinya Kanemura
Department of Physics, University of Toyama, Toyama, Toyama, Japan

arXiv:0902.3377v2

We studied the feasibility of measuring Higgs boson pair production in a Photon Linear Collider. The optimum energy of the γγ collision was estimated with a realistic luminosity distribution. We also discussed a simulation study for detecting the signal against W boson pair backgrounds.
Introduction
Discovery of the Higgs boson, the missing part of the standard model, is an urgent and important task for present particle physics, and it is expected to be found at the LHC experiment. Assuming discovery of the Higgs boson, the precision measurement of its properties, where the ILC electron-positron collider will play a main role, is a key to establishing the mechanism of mass generation of gauge bosons and fermions based on spontaneous symmetry breaking. In addition, new physics beyond the standard model may manifest itself via precision measurements of the Higgs sector. Higgs physics at the ILC has been studied and summarized in [2]. In addition to the electron-positron interaction at the ILC, an idea to construct a Photon Linear Collider (PLC) is considered as an option of the ILC. In the photon collider, intense laser pulses are flashed onto electrons accelerated by the linac and are converted to photon beams by backward Compton scattering between laser photons and electrons. The PLC has the potential to provide information on the two-photon decay width of the Higgs boson via H → γγ. The design of the PLC and its physics opportunities have been described in references [2-3]. Among the Higgs boson properties, the mass and the self-coupling constant determine the Higgs potential. While the measurement of the Higgs boson mass is the first thing to be done in the ILC experiment, the self-coupling constant requires detailed investigation and high luminosity to be measured. Thus, assessing the prospects for measuring the self-coupling constant is important for projecting the ILC's ability to reveal the nature of electroweak symmetry breaking as well as to explore physics beyond the standard model which could manifest itself via the symmetry breaking.
Pair Production of Higgs boson in the PLC

Higgs boson self-coupling in the PLC
In the PLC, Higgs bosons can be pair-produced via the self-coupling, as shown in figure 1. This was first investigated in Ref. [4] and recently revisited [5]. The Higgs potential is expressed in general as

$$V(\phi) = -\mu^2 |\phi|^2 + \lambda |\phi|^4,$$

so that the Higgs boson mass and the trilinear and quartic self-couplings read m_h² = 2λv², λ_hhh = λv, and λ_hhhh = λ, respectively, with v being the vacuum expectation value of the Higgs field. We define the self-coupling parameter as

$$\lambda = \lambda_{SM}(1 + \delta\kappa),$$

where λ_SM and δκ stand for the Higgs self-coupling in the standard model and its deviation, respectively.
As was discussed in [5], the Higgs self-coupling can be studied in electron-positron interactions via e⁺e⁻ → Zhh and e⁺e⁻ → hhνν̄. Unlike the electron-positron interaction, in the PLC the Higgs bosons are produced by loop-induced processes. Thus the δκ dependence is different from that in the electron-positron interaction, and the PLC has a different sensitivity to new physics.
Optimized PLC Parameters
In order to set the beam parameters of the PLC for the Higgs self-coupling study via pair production, we first searched for the optimum beam energy that maximizes the statistical sensitivity S_stat, defined in terms of the efficiencies and effective cross sections of the signal and of the background processes. The effective cross sections are estimated by convoluting the theoretically calculated luminosity distribution with the cross sections; the effective cross section of the γγ → W⁺W⁻ background is about 50 pb. The resulting sensitivity is shown in Figure 2, whose horizontal axis is the position of the high energy peak of the luminosity distribution in terms of the γγ collision energy [3]. The study indicated that the PLC parameters have to be set so that the luminosity peak is around 300 GeV. Figure 3 shows the sensitivity as a function of the efficiency and of the amount of background. The same analysis was performed with the realistic luminosity distribution described in the next section, and the results were found to be consistent with each other.
Beam Parameters
Based on the discussion in the previous section, sets of parameters for the PLC were prepared, as summarized in Table 1, for a peak γγ center-of-mass energy of 300 GeV. The maximum energy and the energy distribution of the Compton-scattered photons depend on a kinematical parameter x ≡ 4Eω/(m²c⁴), where ω, E, m, and c are the energy of the laser photon, the electron beam energy, the electron rest mass, and the velocity of light, respectively. Two parameter sets were prepared for different values of x, taking technical issues into account. The parameter set with x = 4.8 is typically chosen to maximize the energy of the PLC, while the one with x = 3.76 keeps the wavelength of the laser at 1054 nm, which is technically preferable for solid-state lasers. Figure 4 shows the luminosity distribution with x = 4.8 simulated with the CAIN program [6]. The integrated luminosity and the effective cross sections are also summarized in Table 1.
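The folding of a luminosity distribution with a cross section, referred to above and in the next section, can be sketched as follows; the toy spectrum and toy cross section below are placeholders, not the CAIN output or the actual σ(γγ → hh).

```python
import numpy as np

# Sketch: fold a gamma-gamma luminosity spectrum dL/dW with a cross section sigma(W)
# to get the expected event yield and a luminosity-weighted effective cross section.
W = np.linspace(150.0, 600.0, 451)                  # gamma-gamma invariant mass [GeV]
dL_dW = np.exp(-0.5 * ((W - 300.0) / 40.0) ** 2)    # toy luminosity spectrum [fb^-1 / GeV]
sigma_W = np.where(W > 250.0,
                   0.04 * (1.0 - np.exp(-(W - 250.0) / 60.0)),
                   0.0)                              # toy sigma(W) [fb]

L_total = np.trapz(dL_dW, W)                         # integrated luminosity [fb^-1]
n_events = np.trapz(dL_dW * sigma_W, W)              # expected number of events
sigma_eff = n_events / L_total                       # luminosity-weighted cross section [fb]
print(sigma_eff, n_events)
```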
Simulation Study of Higgs pair production
Simulation Scheme
In order to estimate efficiencies for signals and backgrounds, we performed a simulation study. The signal cross section was calculated using a program developed by the authors, and the cross sections of the backgrounds were calculated using GRACE, an automatic event generator program [7]. During the calculation, the luminosity distribution described in section 2.3 was convoluted. Finally, the detector response was simulated by the JSF quick simulator [8].
Event Analysis
Since a Higgs boson of 120 GeV mainly decays into b-quark pairs, with a branching ratio of about 0.67, we concentrated on the case in which both Higgs bosons decayed into b-quark pairs. For each event from the JSF simulator, we applied a so-called forced four-jet analysis, in which the clustering algorithm is applied to the event, changing the clustering parameter until the event is categorized as a four-jet event. After the forced four-jet analysis, invariant masses of jet pairs were calculated. As it is a four-jet event, it is necessary to choose the right jet pairs, originating from the parent Higgs (W for the background) bosons, out of three possible combinations. For this purpose we defined a χ² as
$$\chi^2_{h(W)} \equiv \sum_i \frac{\left(m^i_{jj} - m_{h(W)}\right)^2}{\sigma^2},$$

where the suffix h(W) denotes the χ² computed assuming the Higgs (W) boson mass for the jet pairs. Here σ is the mass resolution for a jet pair and is estimated to be 4 GeV on average, and the summation runs over the two jet pairs of one combination. For each jet combination, χ² was calculated for both the Higgs and the W mass assumptions, so that a total of six χ² values are calculated per event. The jet combination and hypothesis with the least χ² was chosen as the most probable assumption for the event. Figure 5 shows the correlation of χ²_h and χ²_W for the most probable assumption in each event. It appears to be possible to separate signal and background by applying a proper cut in this two-dimensional plot.
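The pairing procedure described above can be sketched as follows; the jet four-vectors, the W mass value, and the 4 GeV resolution used below are illustrative assumptions for this sketch, not the actual analysis code.

```python
import numpy as np

def best_pairing(jets, m_h=120.0, m_w=80.4, sigma=4.0):
    """Evaluate the pairing chi^2 of the text for the three ways of grouping four jets
    into two pairs, under both the Higgs and the W mass hypotheses, and return the
    smallest one. Each jet is a four-vector (E, px, py, pz) as a numpy array."""
    def pair_mass(j1, j2):
        e, px, py, pz = j1 + j2
        return np.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

    pairings = [((0, 1), (2, 3)), ((0, 2), (1, 3)), ((0, 3), (1, 2))]
    best = None
    for (a, b), (c, d) in pairings:
        m1 = pair_mass(jets[a], jets[b])
        m2 = pair_mass(jets[c], jets[d])
        for label, m_ref in (("h", m_h), ("W", m_w)):
            chi2 = ((m1 - m_ref) ** 2 + (m2 - m_ref) ** 2) / sigma**2
            if best is None or chi2 < best[0]:
                best = (chi2, label, ((a, b), (c, d)))
    return best

# Toy usage with made-up jet four-vectors (GeV)
jets = [np.array([70.0, 30.0, 20.0, 50.0]), np.array([65.0, -25.0, 30.0, -40.0]),
        np.array([80.0, 10.0, -60.0, 30.0]), np.array([75.0, -15.0, 10.0, -55.0])]
print(best_pairing(jets))
```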
Summary and Outlook
Aiming at a measurement of the Higgs boson self-coupling at a PLC, a feasibility study for detecting Higgs boson pair production is being carried out. The optimum γγ center-of-mass energy for a 120 GeV Higgs boson was searched for with both a theoretical and a realistic luminosity distribution, and we found that both estimates consistently showed an optimum energy of about 300 GeV. Using the parameter sets for the electron beam and laser pulses, the number of events was estimated by convoluting the luminosity distribution with the analytical calculation of the production cross section.
To investigate the experimental feasibility, an attempt to separate the signal from the W boson pair production background was performed. We found that the background can be suppressed once we apply cuts on kinematical variables, such as the properly defined χ² using jet pair masses. However, event selection with kinematical parameters alone did not appear to be enough to reach a tolerable background contamination. It is concluded that the development of sophisticated b-quark tagging for four-jet events is crucial for further improvement.
Figure 1: Diagrams for Higgs boson pair creation by γγ interaction via the self-coupling.

Figure 2: Sensitivity for the Higgs boson self-coupling as a function of the center of mass energy (CMS) of the γγ collision. See the text for the definition of the CMS energy.

Figure 3: Sensitivity for the anomalous coupling as a function of the ratio of background (γγ → W⁺W⁻) to signal events.

Table 1: Parameters for the electron beam and the laser pulse. Refer to the text for the definition of parameters not stated in the table.

Figure 4: Simulated luminosity distribution by CAIN with the x = 4.8 parameters.

Figure 5: χ² for the two mass assumptions. The combination giving the least χ² for the Higgs or W boson mass assumption was chosen for each event. The black points are for Higgs events and the red ones for W bosons.
Acknowledgements

The authors would like to thank the ILC physics working group for valuable discussions and suggestions.
B. Badelek et al., Int. J. Mod. Phys. A19, 75 (2004).
R. Belusevic and G. Jikia, Phys. Rev. D70, 073017 (2004); S. Kanemura, talk given at TILC08, Sendai (2008).
E. Asakawa et al., Phys. Lett. B, in press, arXiv:0809.0094 [hep-ph]; D. Harada, talk given in this workshop.
GRACE system, http://minami-home.kek.jp/
JSF, http://www-jlc.kek.jp/subg/offl/jsf/jsf.html
arXiv:gr-qc/9701042v1, 18 Jan 1997

Abstracts of Seminars given at the Workshop on Mathematical Problems of Quantum Gravity held at the Erwin Schrödinger Institute, Vienna

Organizers:
Peter Aichelburg, Institute of Theoretical Physics, University of Vienna, Boltzmanngasse 5, A-1090 Vienna
Abhay Ashtekar, Center for Gravitational Physics and Geometry, Physics Department, Penn State, University Park, PA 16802

This pre-print contains the abstracts of seminars (including key references) presented at the ESI workshop on mathematical problems in quantum gravity held during July and August of 1996. Contributors include A. Ashtekar, J. Baez, F. Barbero, A. Barvinsky, F. Embacher, R. Gambini, D. Giulini, J. Halliwell, T. Jacobson, R. Loll, D. Marolf, K. Meissner, R. Myers, J. Pullin, M. Reisenberger, C. Rovelli, T. Strobl, and T. Thiemann. While these contributions cover most of the talks given during the workshop, there were also a few additional speakers whose contributions were not received in time.
Quantum theory of geometry
Abhay Ashtekar
This was primarily a review talk, based largely on joint work with Jerzy Lewandowski. Over the last three years, a new functional calculus has been developed on the quantum configuration space of general relativity without any reference to a background geometrical structure in space-time (such as a metric). The purpose of this talk was to indicate how this machinery can be applied to systematically construct a quantum theory of geometry. The kinematical Hilbert space of quantum gravity was presented. States represent polymer-like, 1-dimensional excitations of geometry. Regulated operators corresponding to areas of 2-surfaces were introduced on the kinematical Hilbert space of quantum gravity and shown to be self-adjoint. Their full spectrum was presented. It is purely discrete and contains some physically interesting information. First, the "area gap", i.e., the value of the smallest non-zero excitation, contains information about the global topology of the surface. Second, in the large eigenvalue limit, the eigenvalues become closer and closer to each other such that |a_{n+1} − a_n| ≤ [l_P/(2√a_n) + O(l_P²/a_n)] l_P², where a_n and a_{n+1} are the consecutive eigenvalues and l_P the Planck length. This shows why the continuum limit is such an excellent approximation. It also has a more interesting implication for the Hawking effect. Because we do not have an equal level spacing, the types of potential problems pointed out by Bekenstein and Mukhanov do not arise and the semi-classical approximation used by Hawking in his calculation of the black-body spectrum turns out to be excellent.
These and numerous other results are providing more and more intuition for the nature of quantum geometry. This framework is to quantum gravity what familiar differential geometry is to classical general relativity. Like differential geometry, it has no dynamical content; specific field equations are not involved. Just as the formulae for lengths, areas and volumes of differential geometry are valid in all theories of gravity (involving a spacetime metric), the formulae, identities and theorems involving our quantum states and operators would hold in all dynamical theories of gravity (which include n-beins which are canonically conjugate to connections).
A. Ashtekar and J. Lewandowski, gr-qc/9602046; Class. & Quantum Grav. (in press).
Unforeseen non-commutativity between geometric operators
Abhay Ashtekar
This was a report on some joint work with Alejandro Corichi, Jerzy Lewandowski and Jose Antonio Zapata.
One of the implications of the work reported in the first talk is that area operators associated with different surfaces do not always commute. This is at first surprising because the classical formula for areas involves only triads, without any reference to connections and from the basic Poisson bracket relations one expects the triads to commute among themselves. It turns out, however, that the naive expectation is incorrect. The reason is that the formula for areas involves triads which are smeared only on 2-surfaces rather than in 3 dimensions and the Poisson brackets between such objects are, strictly speaking, singular. (Furthermore, in our framework based on holonomies, triad operators which are smeared in 3 dimensions are not likely to be well-defined!)
To analyze this issue in detail, we examine the Poisson algebra between the following phase space functions: cylindrical functions of (smooth) connections and triads smeared on 2-surfaces. (Incidentally, since the triads have density weight one, they are in fact 2-forms and it is thus geometrically natural to smear them on 2-surfaces.) If we assume naively that the smeared triads commute, we run into a problem with the Jacobi identity; the naive Poisson algebra is not a Lie algebra and is therefore incorrect. One can regulate the naive algebra carefully to obtain a Lie algebra. Then, one finds that the (2-dimensionally) smeared triads fail to Poisson commute. The commutators between the quantum triad operators just mirror this correct Poisson algebra and this is why the area operators fail to commute.
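To fix notation for the objects entering this algebra (the conventions below are an illustrative assumption on our part, not a quotation from the cited preprint): since the densitized triads carry density weight one, they are naturally dual to 2-forms, and the smeared variables are
$$E(S,f) \;:=\; \int_S f^i\,\tfrac{1}{2}\,\epsilon_{abc}\,E^a_i\,dx^b\wedge dx^c,$$
for a 2-surface $S$ and an su(2)-valued test field $f$, paired with cylindrical functions of the connection, i.e. functions of finitely many holonomies. It is the brackets $\{E(S,f),E(S',f')\}$ that acquire the non-trivial, regularization-induced corrections described above.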
A. Ashtekar, A. Corichi, J. Lewandowski and J.A. Zapata, CGPG pre-print.
Geometry of quantum mechanics
Abhay Ashtekar
This talk summarized joint work with Troy Schilling which constitutes his 1996 Ph.D. thesis at Penn State.
In the way we normally formulate these theories, classical mechanics has deep roots in (symplectic) geometry while quantum mechanics is essentially algebraic. However, one can recast quantum mechanics in a geometric language which brings out the similarities and differences between the two theories. The idea is to pass from the Hilbert space to the space of rays, i.e. to the "true" space of states of quantum mechanics. The space of rays -or the projective Hilbert space, is in particular, a symplectic manifold, which happens to be equipped with a further Kähler structure. Regarding it as a symplectic manifold, one can repeat the familiar constructions from classical mechanics. For example, given any function, one can construct its Hamiltonian vector field. If one uses the expectation value of the Hamiltonian operator as the function, it turns out that the resulting "classical" symplectic evolution is precisely the (projection of the) Schrödinger evolution on the Hilbert space. Roughly, properties of quantum mechanics which it "shares" with classical mechanics use only the symplectic structure on the projective Hilbert space. The "genuinely" quantum properties such as uncertainties and probabilities refer to the Kähler metric. Thus, purely in mathematical physics terms, one can regard quantum mechanics as a special case of classical mechanics, one in which the phase space happens to have a Kähler structure (which then enables one to do more.) This geometrical formulation of quantum mechanics sheds considerable light on the second quantization procedure and on semi-classical states and dynamics.
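A short check of the central claim, on the Hilbert space itself before projecting to rays (the conventions here, with the inner product antilinear in its first argument, are our own illustrative choice): set $\Omega(X,Y) := 2\hbar\,\mathrm{Im}\langle X,Y\rangle$ and $h(\psi) := \langle\psi,\hat H\psi\rangle$. Then for any tangent vector $Y$,
$$dh|_\psi(Y) \;=\; \langle Y,\hat H\psi\rangle + \langle\psi,\hat H Y\rangle \;=\; 2\,\mathrm{Re}\langle\hat H\psi,Y\rangle \;=\; \Omega\!\big(-\tfrac{i}{\hbar}\hat H\psi,\;Y\big),$$
so the Hamiltonian vector field of the expectation value of $\hat H$ is $X_h(\psi) = -\tfrac{i}{\hbar}\hat H\psi$, i.e. exactly the Schrödinger flow; this is the statement that descends to the projective Hilbert space as described above.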
After the work was completed, we found that many of our results were discovered independently by a number of authors, most notably L. Hughstone and by R. Cirelli, A. Maniá and L. Pizzochero.
A. Ashtekar and T. Schilling; in: The Proceedings of the First Canadian-Mexican-American Physical Societies' Conference, edited by A. Zapeda (American Institute of Physics, NY 1995.) T. Schilling; Geometry of Quantum Mechanics, Ph.D. Thesis, Penn State, 1996.
Probing quantum gravity through exactly soluble midi-superspaces Abhay Ashtekar
This talk summarized joint work with Monica Pierri which constituted part of her Ph.D. thesis.
The idea was to consider midi-superspaces which are simple enough to be exactly soluble both classically and quantum mechanically and use the solution to probe various nagging issues of quantum gravity such as the issue of time and the nature of the vacuum. The specific example presented was the midi-superspace of Einstein-Rosen (i.e., cylindrical) gravitational waves. This model was analyzed by Karel Kuchǎr already in the early seventies and by Michel Allen in the mid-eighties. However certain issues concerning boundary conditions, surface terms and functional analytic subtleties could not be discussed then. Using results on asymptotics and techniques of regularization that have been developed since then, their discussion can be completed to construct a complete quantum theory. We used this solution to construct a regulated, quantum space-time metric operator and address issues such as "light cone fluctuations". That this is possible within the canonical framework is noteworthy since concerns are often expressed that the canonical quantization procedure may not be able to handle such "space-time" issues. The model has a well-defined, non-trivial Hamiltonian operator and a stable vacuum state (the eigenstate of the Hamiltonian with zero eigenvalue.) On general states, one can write the Schrödinger equation. However, the mathematical parameter in this equation has the physical interpretation of time only on semi-classical states. Finally, this solution can also be used to probe a key issue in our non-perturbative quantum gravity program: existence of operators corresponding to traces of holonomies around closed loops. These operators do exist in spite of the fact that they involve smearing of the connection only in one dimension.
More recently, this model has been used to show the existence of certain unforeseen quantum gravity effects which can be large even when the space-time curvature is small.
A. Ashtekar and M. Pierri, gr-qc/9606085, J. Math. Phys., 37, 6250-70, (1996). A. Ashtekar, gr-qc/9610008, Phys. Rev. Lett. 77, 4864-67 (1996).
Topological Quantum Field Theory
John Baez
The simplest sort of topological quantum field theory is BF theory, where the Lagrangian is of the form tr(BF ), with F being the curvature of a connection and B being a Lie-algebra valued (n − 2)-form in n dimensions. When n is 3 or 4 one can also add a "cosmological constant term" of form tr(BBB) or tr(BB), respectively. In this talk, I summarized what is known about BF theory in dimensions 2, 3, and 4, as well as the equivalent state sum models. In particular, I described how state sum models of BFlike theories in 2 dimensions arise from certain monoids, while in 3 dimensions they arise from certain monoidal categories and in 4 dimensions from certain monoidal 2-categories (most notably the category of representations of a quantum group, which may be seen as a monoidal 2-category with one object). I also sketched how 4-dimensional BF theory underlies Chern-Simons theory in 3 dimensions.
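For orientation, the Lagrangians referred to here, written with the wedge products made explicit (a standard presentation; the normalization of the cosmological terms below is our assumption and varies between references):
$$S_{BF}=\int_M \mathrm{tr}(B\wedge F),\qquad S^{(n=3)}_{\Lambda}=\int_M \mathrm{tr}\Big(B\wedge F+\tfrac{\Lambda}{3}\,B\wedge B\wedge B\Big),\qquad S^{(n=4)}_{\Lambda}=\int_M \mathrm{tr}\Big(B\wedge F+\tfrac{\Lambda}{2}\,B\wedge B\Big),$$
with $F$ the curvature 2-form of a connection and $B$ a Lie-algebra valued $(n-2)$-form, so that each integrand is an $n$-form.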
References:
John Baez and James Dolan, Higher-dimensional algebra and topological quantum field theory, Jour. Math. Phys. 36 (1995), 6073-6105.
John Baez, Four-dimensional BF theory as a topological quantum field theory, to appear in Lett. Math. Phys., preprint available as q-alg/9507006.
The Entropy of 2-Part Systems
John Baez
I sketched the mathematical relationships between three constructions which might at first glance seem unrelated: Everett's relative state formalism, the Gelfand-Naimark-Segal construction, and the construction of nontrivial spaces of states on 'half' of a 3-manifold from the single state of BF theory with cosmological constant on the whole 3-manifold - the so-called 'Chern-Simons state'. In all these constructions a single state gives rise to a Hilbert space of states. The last one gives a way of understanding the mathematical content of Smolin's argument for the Bekenstein bound.
From Euclidean to Lorentzian Gravity: The Real Way
Fernando Barbero
The complex character of the Ashtekar variables has been one of the major issues to be understood in order to succeed in the quantization of general relativity by using the loop variables approach. The possibility of describing Lorentzian GR with a real Ashtekar connection can be realized by restricting oneself only to real canonical transformations in the transit from the SO(3)-ADM phase space to the SO(3)-Yang Mills phase space of the Ashtekar formalism [1]. The resulting Hamiltonian constraint is more complicated than the usual one but it is still written in terms of Ashtekar variables and, hence, loop variables can still be used to quantize it. From a Lagrangian point of view it is interesting to see if one can use local internal symmetries, instead of non-local ones, to write the action for Lorentzian general relativity. This can be achieved [2] by writing the Einstein-Hilbert action for a two parameter family of metrics whose signature can be adjusted at will by a suitable choice of these parameters.
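Schematically, the real connection in question can be written as follows (the explicit form below is a standard way of presenting it and is included here as an illustrative assumption; the precise canonical transformation is as in [1]):
$$A^i_a \;=\; \Gamma^i_a \;+\; \beta\,K^i_a,\qquad \beta\in\mathbb{R}\setminus\{0\},$$
where $\Gamma^i_a$ is the spin connection of the triad and $K^i_a$ the extrinsic curvature; $\beta=\pm i$ (depending on conventions) reproduces the original complex Ashtekar connection, while real $\beta$ keeps the SO(3) Yang-Mills phase space at the price of the more complicated Hamiltonian constraint mentioned above.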
Semiclassical methods in the theory of constrained dynamics A.O.Barvinsky
Operator realization of quantum constraints, the lowest-order structure functions and physical observables is found in the one-loop (linear in $\hbar$) approximation of the Dirac quantization for the general theory subject to first-class constraints. The general semiclassical solution of the quantum Dirac constraints is found. Semiclassical unitary equivalence of the Dirac and reduced phase-space quantization methods is established in terms of the conserved physical inner product in the space of physical states. The conservation of this inner product and its independence of the choice of gauge conditions is based on the Stokes theorem for a special closed form integrated over the physical subspace of the configuration space of the theory (superspace). Geometrical covariance properties of the quantum Dirac constraints with respect to contact canonical transformations and transformations of the basis of constraints are studied. Applications of these general methods of quantum constrained dynamics are considered in quantum cosmology of the early inflationary Universe.
References: A.O. Barvinsky, Geometry of the Dirac quantization of constrained systems, preprint ESI (1996).
Quantum Origin of the Early Universe and the Energy Scale of Inflation
A.O.Barvinsky
Quantum origin of the early inflationary Universe from the no-boundary and tunnelling quantum states is considered in the one-loop approximation of quantum cosmology. A universal effective action algorithm for the distribution function of chaotic inflationary cosmologies is derived for both of these states. The energy scale of inflation is calculated by finding a sharp probability peak in this distribution function for a tunnelling model driven by the inflaton field with large negative constant ξ of nonminimal interaction. The sub-Planckian parameters of this peak (the mean value of the corresponding Hubble constant $H \simeq 10^{-5}\,m_P$, its quantum width $\Delta H/H \simeq 10^{-5}$ and the number of inflationary e-foldings $N \simeq 60$) are found to be in good correspondence with the observational status of inflation theory, provided the coupling constants of the theory are constrained by a condition which is likely to be enforced by the (quasi) supersymmetric nature of the sub-Planckian particle physics model.
Mode decomposition and unitarity in quantum cosmology
Franz Embacher
It is common folklore that the space of wave functions of quantum cosmology may not be decomposed into positive and negative frequency modes when the background structure (DeWitt metric and potential) does not admit symmetries. However, there are still perspectives for defining a generalized notion of preferred mode decomposition, starting from the space of solutions of the wave equation and the indefinite Klein-Gordon scalar product
$$Q(\psi_1,\psi_2) \;=\; -\,\frac{i}{2}\int_\Sigma d\Sigma^\alpha\,\big(\psi_1^{*}\,\nabla_\alpha\psi_2 \;-\; \psi_2^{*}\,\nabla_\alpha\psi_1\big).$$
In the case of a positive potential U we outline a strategy in doing so.
The technical tool for analyzing the wave equation is the selection of a solution $S$ of the Hamilton-Jacobi equation, which generates a congruence of classical trajectories, and a weight function $D$. The pair $(S,D)$ is called a WKB-branch. Within any WKB-branch, an operator $H$ (satisfying a particular differential equation) may be chosen such that any solution of $i\partial_t\chi = H\chi$ (with $\partial_t$ the derivative along the trajectories) gives rise to a solution of the original wave equation $(-\nabla^2+U)\psi=0$. The wave functions constructed in this way define the space $\mathcal{H}_+$, generalizing the notion of positive frequency with respect to a WKB-branch.
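As a rough illustration of the mechanism (a heuristic sketch only; the precise definitions of $D$ and $H$ in the references cited below differ in detail), write $\psi = e^{iS}\chi$ within a WKB-branch. Using the Hamilton-Jacobi equation $\nabla_\alpha S\,\nabla^\alpha S + U = 0$, the wave equation $(-\nabla^2+U)\psi=0$ becomes exactly
$$i\,\nabla^\alpha S\,\nabla_\alpha\chi \;=\; -\tfrac{1}{2}\,\nabla^2\chi \;-\; \tfrac{i}{2}\,(\nabla^2 S)\,\chi,$$
i.e. a Schrödinger-type equation $i\partial_t\chi = H\chi$ with $\partial_t := \nabla^\alpha S\,\nabla_\alpha$ the derivative along the classical trajectories; the continuity equation $\nabla_\alpha(D\,\nabla^\alpha S)=0$ for the weight $D$ relates $\nabla^2 S$ to $-\partial_t\ln D$, which is how $D$ enters the definition of $H$ and of the inner product.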
The crucial step in the general strategy is to perform a natural choice of the operator H with respect to any WKB-branch, such that for any two infinitesimally close branches the respective decompositions coincide. It is achieved by iteratively solving the differential equation for H, thus ending up with a formal expression (whose existence at least in simple cases can be inferred explicitly). Since arbitrarily high derivatives of the ingredients of the model (the DeWitt metric and the potential) appear, the proper mathematical existence of the preferred decomposition is likely to be related to the global structure of the model (and possibly to analyticity issues). Unitarity (i.e. a Schrödinger type evolution equation for wave functions) shows up only in terms of WKB-branches, much like components of a tensor show up only in a coordinate system. Details of this approach are presented in the two references cited below, along with speculations on a possible relation to the refined algebraic quantization program. This work was supported by the Austrian Academy of Sciences in the framework of the "Austrian Programme for Advanced Research and Technology." F. Embacher, Decomposition and unitarity in quantum cosmology, preprint UWThPh-1996-64, gr-qc/9611006; F. Embacher, Mode decomposition and unitarity in quantum cosmology, Talk given at the Second Meeting on Constrained Dynamics and Quantum Gravity, Santa Margherita Ligure, September 17-21, 1996, to appear in the Proceedings.
Gauge Invariance in the Extended Loop Representation
Rodolfo Gambini
The Gauss constraint in the extended loop representation is studied. It is shown that there is a sector of the state space that is gauge invariant. We determine necessary and sufficient conditions for states belonging to this sector. These conditions are satisfied by the extended Vassiliev invariants.
Views on Super-selection Rules in QT and QFT Domenico Giulini
Many derivations of super-selection rules are purely formal in nature and hence do not make sufficiently clear the actual physical input that leads to them. In the context of quantum mechanics we critically review standard derivations of the super-selection rules for univalence and overall mass (so-called Bargmann super-selection rule.) The strong dependence on the required symmetry group is emphasized [1]. In particular, it is pointed out that in order for mass to define a super-selection rule it should be considered as a dynamical variable. We present a minimal extension of the dynamics of n point particles interacting via some Galilei invariant potential in which mass it also treated as dynamical variable. Here the classical symmetry group turns out to be given by an R-extension of the Galilei group. In contrast to the Galilei group its extension does have an action on the space of states including non-trivial superpositions of mass eigenstates. Hence no super-selection rules appear in this model [2]. Finally, we discuss the super-selection rule of electric charge (see chap. 6 of [1]). We emphasize the necessity of a consistent variational principle including charged configurations in its domain of differentiability. We present such a principle and show the unavoidable existence of additional degrees of freedom ("surface variables") which measure the overall multipole moments of the charge distribution. Here super-selection rules result only if the surface variables are not in the algebra of observables, as it would be the case if one restricts to (quasi-) local observables. However, it is tempting to think of such a restriction as being conditioned dynamically and hence in principle only of approximate validity [3]. We address the rôle of large diffeomorphisms in Witten's formulation of gravity as an ISO(2,1) gauge theory on a space-time T 2 × R. In a "space-like sector" the classical phase space is P = T * (Q) with Q = R 2 − {0} as configuration space. Using the vertical polarization the quantum state space becomes H = L 2 (Q, µ L ), µ L being the standard Lebesgue measure. By large diffeomorphisms we mean the orientation preserving mapping class group of T 2 , given by SL(2, Z). It acts by its defining representation on Q and by its canonical lift on P. The action on the former is 'wild' in the sense that the isomorphicity class of stabilizer subgroups is nowhere locally constant. The configuration space quotient is hence nowhere locally a manifold. In contrast, the lifted action on P is free on the open and dense subset where the two vectors of coordinates and momenta are not perpendicular. The action of SL(2, Z) on H is given by composition in the argument of the function representing the element in H. We show that the action is fully reducible in terms of a direct integral over R of vector spaces isomorphic to L 2 (S 2 , dϕ), which carry SL(2, Z)irreducible sub-representations of certain irreducible representations of SL(2, R) from the continuous series [1]. Hence for each measurable set ∆ ⊂ R one finds a closed subspace H ∆ ⊂ H which is invariant under large diffeomorphisms. Now, if SL(2, Z) is considered as gauge group, the observables should be contained in its commutant, implying in particular that no vector in H defines a pure state for this algebra. H is not the physical Hilbert space nor does it contain it. 
We conclude that the reduction of large diffeomorphisms should a priori not be regarded as a problem simpler than the reduction of diffeomorphisms generated by the constraints (identity component).
References:
D. Giulini, J. Louko: "Diffeomorphism Invariant Subspaces in Witten's 2+1 Quantum Gravity on R × T²", Class. Quant. Grav. 12, 2735-2745 (1995).
Mapping-Class Groups of General 3-Manifolds Domenico Giulini
Let Σ be a closed orientable 3-manifold, ∞ ∈ Σ a distinguished point, and D F (Σ) the space of diffeomorphisms that fix the frames at ∞. We are interested in the mapping class groups S(Σ) := D F (Σ)/D 0 F (Σ) for general Σ, where D 0 F (Σ) denotes the identity component of D F (Σ) [1]. As is well known, Σ is diffeomorphic to a connected sum of finitely many (say n) and uniquely determined prime 3-manifolds P i . [NB: P is prime ⇔ π 2 (P ) = 0 or P = S 2 × S 1 (the 'handle').] Using this, we think of Σ as a configuration of n elementary 'objects' attached to a common base along mutually disjoint connecting spheres [2]. An obvious subgroup of S(Σ) is generated by the semi-direct product of permutations of diffeomorphic objects with their internal symmetry groups S(P i ). This subgroup is in fact also a factor iff P i = S 2 × S 1 [3]. It is explained how a full presentation of S(Σ) may be constructed using the Fuks-Rabinovich presentation for the automorphism group of free products [3]. For this one considers the natural map h : S(Σ) → Aut(π 1 (Σ, ∞)). If all P i satisfy that homotopic diffeomorphisms are also isotopic (no prime violating this is known), the kernel, ker(h), is known to be of the form Z m 2 , m ≤ n, generated by certain rotations parallel to imbedded 2-spheres. The image, Im(h), can be explicitly presented. In many cases it exhausts all of Aut(π 1 (Σ)). Finally, one specifies the correct action of Im(h) on ker(h), which results in a semi-direct product of these two groups. All this can be nicely exemplified explicitly for the connected sum of n handles or n P R 3 's [2] [3].
On the Probability of Entering a Spacetime Region in Non-Relativistic Quantum Mechanics J.J.Halliwell and E.Zafiris
What is the probability of a particle entering a given region of space at any time between t 1 and t 2 ? Standard quantum theory assigns probabilities to alternatives at a fixed moment of time and is not immediately suited to questions of this type. We use the decoherent histories approach to quantum theory to compute the probability of a non-relativistic particle entering a spacetime region. Aside from being of general formal interest, this question is relevant to the problems of arrival times and tunneling times. It may also be relevant to relativistic systems and in particular, to quantum gravity, where a variable playing the role of time may not exist. For a system consisting of a single nonrelativistic particle, histories coarse-grained according to whether or not they pass through spacetime regions are generally not decoherent, except for very special initial states, and thus probabilities cannot be assigned. Decoherence may, however, be achieved by coupling the particle to an environment consisting of a set of harmonic oscillators in a thermal bath. Probabilities for spacetime coarse grainings are thus calculated by considering restricted density operator propagators of the quantum Brownian motion model, and we find approximate methods for calculating these. Another method of achieving decoherence, which we explore, is to consider a system consisting of a large number N of identical, non-interacting, free particles, and consider histories in which an imprecisely defined proportion of the particles cross the spacetime region. We find that there is decoherence, essentially due to statistics for large N . We thus obtain general expressions for the probabilities for a variety of spacetime problems for a particle starting in an arbitrary initial state.
Issues in Black Hole Thermodynamics T. Jacobson
This talk provided an overview of black hole thermodynamics and discussed recent progress and open questions. The issues discussed included: the generalized second law, entanglement entropy, the "holographic hypothesis", and the statistical meaning of black hole entropy. In particular, the relation between matter field contributions to the entropy and the renormalization of Newton's constant, the nature of the "bare" entropy, and Carlip's counting of black hole states in 2+1-dimensional quantum gravity were discussed.
Origin of the Outgoing Black Hole Modes
T. Jacobson
The origin of the outgoing black hole modes is puzzling if no transplanckian reservoir at the horizon is available. In this talk I explained this puzzle and discussed models with high frequency dispersion, motivated by condensed matter analogies, which can resolve the puzzle. These models arose originally from Unruh's sonic analog of a black hole, which is an inhomogeneous fluid flow with a "sonic horizon" where the flow speed exceeds the speed of sound. I explained how high frequency dispersion in the wave equation satisfied by the sound field (or its analog) leads to a process of "mode conversion", whereby ingoing short wavelength modes are converted into long wavelength outgoing ones. Results of a calculation of the Hawking spectrum in one such model were described.
1+1 Sector of 3+1 Gravity
T. Jacobson
Ashtekar's formulation of general relativity admits an extension to degenerate metrics, and it appears that such metrics may play an important role in the quantization of the theory. Recently Matschull showed how this degenerate extension of GR can be described in a fully spacetime covariant manner, and he showed that the degenerate "geometries" allowed by the Ashtekar extension always possess a local "causal cone", though the cone is collapsed in one or more dimensions. In this talk, the rank-1 sector of the classical theory was discussed, motivated by the degeneracy of the triad along the Wilson lines in quantum loop states. I showed that the classical lines behave like (1+1)-dimensional spacetimes with a pair of massless Dirac fields propagating along them ("connection waves"). Matschull's causal structure is precisely the light cone for these waves. Further, if the lines form a congruence of closed loops, the holonomy must be the same on all the loops. Results for inclusion of matter and supergravity were also obtained in this work.
References:
Matschull, H.J., "Causal structure and diffeomorphisms in Ashtekar's gravity", Class. Quantum Grav. 13 (1996) 765-782. Jacobson, T., "1+1 sector of 3+1 gravity", Class. Quantum Grav. 13 (1996) L111-L116.
Some News of Lattice Gravity
R. Loll
At the Vienna workshop, I reported about some new results which I obtained in the quantization of Hamiltonian lattice gravity. Because of the close structural resemblance of the calculations, these are also of relevance to the continuum loop quantization. I am working with (a discretized version of) real connection variables and their canonically conjugate momenta $(A, E)$ on a cubic $N^3$-lattice. The natural scalar product at the kinematical level is therefore identical with that of the usual lattice gauge theory with a compact gauge group (SO(3) or SU(2)), and the basic link operators (the link holonomy $\hat U(l)$ and link momenta $\hat p_i(l)$) are self-adjoint. The Hamiltonian constraint is non-polynomial, but this can be handled along the lines described in [1].
Like in the continuum, one may define self-adjoint operators measuring volumes and areas, and the gauge-invariant states diagonalizing these operators are simple linear combinations of so-called spin network states. The one-dimensional Wilson loop states underlying this construction are obtained by simply multiplying together the one-dimensional basic link holonomies (that form part of the single, fixed lattice), whereas in the continuum they are somewhat less natural composite objects depending on the connections A and on arbitrary embedded loops in the three-dimensional spatial manifold Σ. Related to this is the fact that in the continuum theory -unlike on the lattice -there still exist natural Diff Σ-actions.
The first interesting property I found is the following: suppose one wanted to lattice-quantize the classical phase space function corresponding to the "area of a surface perpendicular to the 3-direction", $\int d^2x\,\sqrt{E^{3i}E^3_i}$. To reproduce the spectrum found by Ashtekar and Lewandowski in the continuum, one substitutes the continuum momenta $E^b(x)$ by symmetrized lattice momenta $\frac{1}{2}(\hat p_+(n,b)+\hat p_-(n,b))$, where n is now a lattice vertex and the symmetrization is taken over the link momenta in positive and negative b-direction, both based at (i.e. transforming non-trivially at) n. Doing this, I realized that this operator is not diagonal in terms of certain volume eigenstates I had constructed previously on the lattice, that is, volume and area operators in general do not commute! (The commutator may still vanish on certain "simple" loop states.) I was at first greatly worried by this result, since the corresponding classical phase space functions do of course commute. Moreover, the calculations I did can be directly translated to the continuum, which means that also there areas and volumes do not commute. Details of this can be found in my forthcoming paper [2]. On the lattice there is an easy way around this problem: choose a different discretization for the term under the square root, namely, $\frac{1}{2}(\hat p_+(n,b)^2+\hat p_-(n,b)^2)$ instead of $\frac{1}{4}(\hat p_+(n,b)+\hat p_-(n,b))^2$. This differs from the latter by terms of higher order in the lattice spacing a, as a → 0 in the continuum limit, and is therefore an equally good operator from the point of view of the lattice theory. Being a sum of two Laplacians, it commutes strongly with any other lattice operator.
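A quick check (elementary algebra, added here for orientation rather than quoted from [2]) of why the two discretizations are equally good: suppressing the arguments $(n,b)$ and using that $\hat p_+$ and $\hat p_-$ act on different links (and hence commute),
$$\tfrac{1}{2}\big(\hat p_+^2+\hat p_-^2\big) \;-\; \tfrac{1}{4}\big(\hat p_++\hat p_-\big)^2 \;=\; \tfrac{1}{4}\big(\hat p_+-\hat p_-\big)^2,$$
and since both link momenta approximate the same continuum momentum, this difference is of higher order in the lattice spacing $a$, consistent with the statement above.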
I came across another curious feature while looking for simultaneous eigenvectors of the volume and the area operators on the lattice. Since their operator expressions reduce to sums over vertex contributions, one can diagonalize them separately at each intersection. The dual unit cubes around individual vertices may therefore be regarded as smallest building blocks of geometry. I looked at a particular family of states at some fixed vertex n, namely those where the six links meeting at n (the lattice is cubic) have the same occupation number (or spin) j = 1, 2, 3, . . ., and only differ by how the flux lines are contracted gauge-invariantly at n. One may then extract local length scales by computing $\sqrt{a_0}$ and $\sqrt[3]{v_0}$, with $a_0$ and $v_0$ the eigenvalues of the area (which for these states are the same in all three directions) and the volume. One finds that at least for the first few j the length scale one obtains from the area is larger than that calculated from the volume, even if one always picks the state of highest volume from the entire set. For example, for j = 1, one finds $\sqrt{a}\sim 0.866$, $\sqrt[3]{v_{\max}}\sim 0.821$, and for j = 2, $\sqrt{a}\sim 1.189$, $\sqrt[3]{v_{\max}}\sim 1.077$, in suitable units. This behavior seems to persist also for higher j [3]. It remains to be understood to what extent this is a general feature of local lattice states. If it were, it would be difficult to understand how one could construct states representing flat space, say, from those smallest building blocks.
A review was presented of recent calculations of black hole entropy in string theory, giving the general picture of how calculations of D-brane bound states are related to black hole entropy. The original calculation by Strominger and Vafa (Phys. Lett. B379, 99 (1996); hep-th/9601029) was outlined and some extensions were mentioned. A discussion along these lines and additional references can be found in the review by Horowitz, gr-qc/9604051.
Duality Symmetries in String and Field Theory
Krzysztof A. Meissner
The talk given at the ESI Workshop on Quantum Gravity in Vienna described duality symmetries both in string and in field theories. The subject is now being intensively studied for at least three reasons. The first one is that one aspect of duality (strong vs. weak coupling) gives us hope to probe normally inaccessible region of strongly coupled quantum field theories by showing their equivalence with (usually different) weakly coupled theories where the perturbation expansion can be trusted. This kind of equivalence was shown up to now in a limited number of theories (like N=2 supersymmetric Yang-Mills theories) but even these few examples are extremely helpful in understanding strongly interacting quantum field theories. The second reason is that they relate seemingly different string theories (like heterotic and IIB) which is seen upon compactifying them on two different manifolds and comparing the resulting effective actions. The number of dualities of that type is rapidly increasing and it led to speculations that all string theories are interconnected and in fact they all descend from one theory (called "M-theory" if 11dimensional or "F-theory" if 12 dimensional) but compactified with different boundary conditions. The third reason that dualities are intensively studied is that they are "solution generating" symmetries i.e. starting with a solution to the equations of motion of a given theory and acting on it with elements of the duality group, we get a whole class of new solutions. The resulting solutions can be very complicated and rather impossible to get by solving the equations of motion. Such global symmetries have also a conserved current which is very helpful in classifying the solutions. One particular example is the so called string cosmology with gravity coupled to a scalar (dilaton) and antisymmetric tensor where there is an O(d, d) global symmetry where d is the number of space dimensions. This symmetry allows for many generalizations and is always present in the gravitational sector of "string-inspired" effective actions.
Black Hole Entropy from Strings and D-Branes
Robert Myers
Finding a statistical mechanical interpretation of black hole entropy is an outstanding problem which has eluded physicists for over 20 years. Recently, progress into this question has been made using new insights from string theory. This progress is a spin-off from the work on string dualities, and the realization of the important role of extended objects beyond just strings. In particular, a class of extended objects known as D-branes [1] have proven very valuable from a calculational standpoint. It was found that different kinds of D-branes can be combined to produce black holes in a certain strong coupling limit. On the other hand in weak coupling, these systems are amenable to statistical mechanical analysis within string theory. These calculations were first carried out for a class of extremal black holes in five-dimensions [2], but then rapidly extended to a variety of other configurations [3][4][5][6][7]. Even though these calculations still apply to a relatively restricted class of black holes, they represent a breakthrough in our understanding of black hole entropy, since for the first time, we have some insight into the underlying microscopic degrees of freedom for a black hole.
Quantum Gravi-dynamics as skein relations in knot space
Jorge Pullin
In the loop representation, states that are solutions of the diffeomorphism constraint are knot invariants. Typically, one tries to find a realization of the Hamiltonian constraint that acts on such states in order to find quantum states of the gravitational field. Such an action can never yield an operator in the space of knots, since the Hamiltonian is a non diffeomorphism invariant function of a point, and therefore cannot be realized in a space of diffeomorphism invariant states. We argue, however, that many proposals for regularized actions of the Hamiltonian can be decomposed into a non diffeomorphism invariant pre-factor times a topological operator. The latter can be realized in a space of diffeomorphism invariant wave functions. We analyze in particular a recently proposed lattice regularization [1]. In terms of it, the topological operator can be interpreted as a skein relation between intersecting knots [2]. This relation determines partially the knot polynomial that is the general solution of the Einstein equations. The indeterminacy is related to the fact that the theory does not have a single solution since it has local degrees of freedom. We show that certain knot invariants, which in the continuum [3] and extended [4] loop representations were found to solve formally the Hamiltonian constraint satisfy the skein relations found to represent the dynamics of general relativity, providing additional confirmation that they could be states of gravity. The calculations are carried out for a particular type of triple intersections; further studies will be needed to elucidate in a more general way if the states are actually compatible with the skein relations for general intersections. The idea of viewing the constraint as partial skein relations is not confined to this particular approach and holds promise in the context of the recently proposed Hamiltonian for real Lorentzian gravity in terms of spin network states.
A path integral formulation of loop quantized simplicial euclidean general relativity
Michael P. Reisenberger
A four dimensional path integral formulation of simplicial euclidean general relativity (GR) corresponding to canonical GR in Ashtekar's connection representation is presented. By integrating out the spacetime connection the path integral is turned into a sum over spins, analogous to the Ponzano-Regge model for 2 +1 GR, corresponding to canonical GR in the spin network representation. In this latter case the path integral may be interpreted as a) a sum over world sheets of spin networks, and b) a sum over discrete spacetime geometries analogous to the discrete spatial geometries found in loop quantized canonical GR in the continuum. The discreteness of the 4-geometry should persist in a continuum limit of the simplicial model because it results from the discreteness of the spins of SU (2) representations, not from the discreteness of the simplicial complex modeling spacetime. However, the existence of a continuum limit has not been established.
The path integral model is derived from a new classical simplicial action which in the classical continuum limit converges to the Plebanski action for GR.
References:
M. Reisenberger, A left-handed simplicial action for euclidean general relativity, gr-qc/9609002, 1996. (Presents the classical simplicial theory.)
Black Hole Entropy from Loop Quantum Gravity
Carlo Rovelli
I have discussed recent ideas on the possibility of deriving the Bekenstein-Hawking formula, which states that the Entropy of a (non rotating) black hole is proportional to its Area, from Loop Quantum Gravity.
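For reference, the Bekenstein-Hawking formula in question reads, in units with $k_B=c=1$,
$$S_{BH} \;=\; \frac{A}{4\,\hbar G},$$
with $A$ the horizon area (this standard expression is added here for orientation; the abstract itself only states the proportionality).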
In his treatment of the 2+1 black hole Carlip assigns the black hole entropy to states of a WZW boundary action located at the (stretched) horizon. In the 1+1 dimensional context of (1) such a boundary action will describe purely mechanical degrees of freedom. The following nice picture evolves [3]: The phase space of these edge degrees of freedom on a black hole spacetime of mass M coincides precisely with the respective symplectic leaf in the above mentioned R 3 ; thus the fictitious point particles obtained above become 'alive' and physical. Despite this appealing picture, the entropy obtained in this way does not seem to agree with the one obtained by other, quite reliable semiclassical approaches.
John Baez, Quantum gravity and the algebra of tangles, Class. Quantum Grav. 10 (1993), 673-694. Lee Smolin, Linking topological quantum field theory and non-perturbative quantum gravity, Jour. Math. Phys. 36 (1995), 6417-6455.
A.O. Barvinsky, The general semiclassical solution of the Wheeler-DeWitt equations and the issue of unitarity in quantum cosmology, Phys. Lett. B241 (1990) 201. A.O. Barvinsky and V. Krykhtin, Dirac and BFV quantization methods in the 1-loop approximation: closure of the quantum constraint algebra and the conserved inner product, Class. Quantum Grav. 10 (1993) 1957. A.O. Barvinsky, Operator ordering in theories subject to constraints of the gravitational type, Class. Quantum Grav. 10 (1993) 1985. A.O. Barvinsky, Unitarity approach to quantum cosmology, Phys. Reports 230 (1993) 237.
A.O. Barvinsky and A.Yu. Kamenshchik, One-loop quantum cosmology: the normalizability of the Hartle-Hawking wave function and the probability of inflation, Class. Quantum Grav. 7 (1990) L181. A.O. Barvinsky, Unitarity approach to quantum cosmology, Phys. Reports 230 (1993) 237. A.O. Barvinsky and A.Yu. Kamenshchik, Tunnelling geometries: analyticity, unitarity and instantons in quantum cosmology, Phys. Rev. D50 (1994) 5093. A.O. Barvinsky, Reduction methods for functional determinants in quantum gravity and cosmology, Phys. Rev. D50 (1994) 5115. A.O. Barvinsky and A.Yu. Kamenshchik, Quantum scale of inflation and particle physics of the early Universe, Phys. Lett. B332 (1994) 270.
C. Di Bartolo, R. Gambini, J. Griego, J. Pullin, J. Math. Phys. 36, 6510 (1995). C. Di Bartolo, "The Gauss Constraint in the Extended Loop Representation", preprint gr-qc/9607014, 1996.
D. Giulini: "Galilei invariance in Quantum Mechanics and the Bargmann Superselection Rule", Ann. Phys. (NY) 249 (1996). D. Giulini, C. Kiefer, H.D. Zeh: "Decoherence, Symmetries and Super-selection Rules", Phys. Lett. A 119, 291-298 (1995).
Diffeomorphism Invariant Subspaces in Witten's 2+1-Dimensional Quantum Gravity on T² × R Domenico Giulini
D. Giulini: "On the Configuration Space Topology in General Relativity", Helv. Phys. Acta 68, 86-111 (1995). D. Giulini: "3-Manifolds for Relativists", Int. Jour. Theor. Phys. 33, 913-930 (1994). D. Giulini: "The Group of Large Diffeomorphisms in General Relativity", Banach Center Publications, in press.
J.C. Breckenridge, D.A. Lowe, R.C. Myers, A.W. Peet, A. Strominger and C. Vafa, "Macroscopic and Microscopic Entropy of Near Extremal Spinning Black Holes", Phys. Lett. B381 (1996) 423 [hep-th/9603078]. C.V. Johnson, R.R. Khuri and R.C. Myers, "Entropy of 4-D Extremal Black Holes", Phys. Lett. B378 (1996) 78 [hep-th/9603061]; J.M. Maldacena and A. Strominger, "Statistical Entropy of Four-Dimensional Extremal Black Holes", Phys. Rev. Lett. 77 (1996) 428 [hep-th/9603060]. G.T. Horowitz, J.M. Maldacena and A. Strominger, "Nonextremal Black Hole Microstates and U-Duality", hep-th/9603109; G.T. Horowitz, D.A. Lowe and J.M. Maldacena, "Statistical Entropy of Nonextremal Four-Dimensional Black Holes and U-Duality", Phys. Rev. Lett. 77 (1996) 430 [hepth/9603195]. G.T. Horowitz and D. Marolf, "Counting States of Black Strings with Traveling Waves", hep-th/9605224; "Counting States of Black Strings with Traveling Waves 2", hep-th/9606113.
We construct an operator that measures the length of a curve in four-dimensional Lorentzian vacuum quantum gravity. We work in a representation in which an SU(2) connection is diagonal and it is therefore surprising that the operator obtained after regularization is densely defined, does not suffer from factor ordering singularities and does not require any renormalization. We show that the length operator admits self-adjoint extensions and compute part of its spectrum which, like its companions, the volume and area operators already constructed in the literature, is purely discrete and roughly is quantized in units of the Planck length. The length operator contains full and direct information about all the components of the metric tensor which facilitates the construction of so-called weave states which approximate a given classical 3-geometry.
J. Fernando Barbero G., Phys. Rev. D51, 5507-5510 (1995).
J. Fernando Barbero G., Phys. Rev. D54, 1492-1499 (1996). References: Susskind, L., "The world as a hologram", Jour. Math. Phys. 36 (1995) 6377.
Kabat, D., Shenker, S.H. and Strassler, M.J., "Black hole entropy in the O(N) model", Phys. Rev. D 52 (1995) 7027.
Carlip, S., "The statistical mechanics of the three-dimensional euclidean black hole", gr-qc/9606043. References: Unruh, W.G., "Sonic analog of black holes and the effects of high frequencies on black hole evaporation", Phys. Rev. D 51 (1995) 2827.
Corley, S. and Jacobson, T., "Hawking spectrum and high frequency dispersion", Phys. Rev. D 54 (1996) 1568-1586.
Jacobson, T., "On the origin of the outgoing black hole modes", Phys. Rev. D 53 (1996) 7082-7088. References: R. Loll, "A Real Alternative to Quantum Gravity in Loop Space", to appear in Phys. Rev. D. R. Loll, in preparation.
R. Loll, in preparation.
D-branes and Black Hole Entropy Donald Marolf References:
J. Polchinski, S. Chaudhuri and C.V. Johnson, "Notes on D-branes", hep-th/9602052.
A. Strominger and C. Vafa, "Microscopic Origin of the Bekenstein-Hawking Entropy", Physics Letters B379 (1996) 99 [hep-th/9601029].
C.G. Callan and J.M. Maldacena, "D-Brane Approach to Black Hole Quantum Mechanics", Nuclear Physics B472 (1996) 591 [hep-th/9602043];
G.T. Horowitz and A. Strominger, "Counting States of Near Extremal Black Holes", Physical Review Letters 77 (1996) 2368 [hep-th/9602051].
J.C. Breckenridge, R.C. Myers, A.W. Peet and C. Vafa, "D-Branes and Spinning Black Holes", hep-th/9602065;
R. Gambini, J. Pullin, "The general solution of the quantum Einstein equations?", preprint gr-qc/9603019.
H. Fort, R. Gambini, J. Pullin, "Lattice knot theory and quantum gravity in the loop representation", preprint gr-qc/9608033.
B. Bruegmann, R. Gambini, J. Pullin, Phys. Rev. Lett. 68, 431 (1992).
C. Di Bartolo, R. Gambini, J. Griego, J. Pullin, Phys. Rev. Lett. 72, 3297 (1994).
A left-handed simplicial action for euclidean general relativity Michael P. Reisenberger
An action for simplicial euclidean general relativity involving only left-handed fields is presented. The simplicial theory is shown to converge to continuum general relativity in the Plebanski formulation as the simplicial complex is refined. An entirely analogous hyper-cubic lattice theory, which approximates Plebanski's form of general relativity, is also presented. References: C. Rovelli, "Loop Quantum Gravity and Black Hole Physics", gr-qc/9608032. Contains an introduction to Loop Quantum Gravity, and a detailed discussion.
C. Rovelli, "Black Hole Entropy from Loop Quantum Gravity", gr-qc/9603063. Short: only the main idea and computation.
K. Krasnov, "On statistical mechanics of gravitational systems", gr-qc/9605047. A slightly different approach.
K. Krasnov, "The Bekenstein bound and non-perturbative quantum gravity", gr-
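The following abstract (evidently T. Strobl's talk on two-dimensional dilaton gravity, judging from its references) refers throughout to an action (1) whose displayed equation did not survive in this transcription. For orientation only, generalized 2d dilaton gravity actions of the type studied in the cited Klösch-Strobl and Schaller-Strobl papers are commonly written in a form such as the one below; the sign and normalization conventions here are an illustrative assumption, not a quotation:
$$S[g,\Phi] \;=\; \int d^2x\,\sqrt{|g|}\;\Big(U(\Phi)\,R \;+\; V(\Phi)\,(\nabla\Phi)^2 \;+\; W(\Phi)\Big),$$
with $g$ the two-dimensional metric, $R$ its curvature scalar, and $\Phi$ the dilaton.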
where U, V, W are functions of the dilaton Φ. I briefly reviewed what is known about these (midi-superspace) models on the classical level [1]. In particular, for Lorentzian signature of the metric g all classical, diffeomorphism inequivalent solutions have been found. For not too specific choices of U, V, and W these include perfectly smooth solutions on any ('reasonable') non-compact two-surface as well as various multi black hole configurations. I then came to discuss the Dirac quantization of (1). Here the reformulation of this action in terms of so-called Poisson σ-models [2], comparable to the formulation of 2+1 gravity as Chern-Simons theory, is essential. In the context of (1) the target space of the σ-model is an $\mathbb{R}^3$, equipped with a Poisson bracket induced by the choice of U, V, and W. Correspondingly this auxiliary $\mathbb{R}^3$ foliates (stratifies) into (generically) two-dimensional symplectic sub-manifolds, characterized by a target space coordinate M. On-shell the latter may be identified with the (generalized) mass of the spacetime, a Dirac observable of the theory. The general framework of Poisson σ-models allows to determine the spectrum of M by means of a simple analysis of the above foliation; e.g., Spec(M) is discrete iff the respective symplectic leaves have non-trivial second homotopy. For some choices (but not for all) of U, V, W in (1) Spec(M) becomes discrete for Euclidean signature of the theory (i.e. of the metric g), but continuous for Lorentzian signature. If a similar relation holds for some (any) Dirac observable O of 4d Einstein gravity, the recently proposed generalized Wick transformation $O_{Lor} = T\, O_{Eucl}\, T^{-1}$ cannot hold, at least in a strict sense. References:
. T Klösch, T Strobl, hep-th/9607226Class. Quantum Grav. 13T. Klösch and T. Strobl, Class. Quantum Grav. 13 (1996) 965-983, 2395-2421, and hep-th/9607226.
P. Schaller and T. Strobl, Mod. Phys. Letts. A9 (1994), 3129, as well as proceedings contributions hep-th/9411163 and hep-th/9507020.
J. Gegenberg, G. Kunstatter, and T. Strobl, gr-qc/9607055 and in preparation.
Quantum Spin Dynamics Thomas Thiemann
An anomaly-free operator corresponding to the Wheeler-DeWitt constraint of Lorentzian, four-dimensional, canonical, non-perturbative vacuum gravity is constructed in the continuum. This operator is entirely free of factor ordering singularities and can be defined in symmetric and non-symmetric form. We work in the real connection representation and obtain a well-defined quantum theory. We compute the complete solution to the Quantum Einstein Equations for the non-symmetric version of the operator and a physical inner product thereon. The action of the Wheeler-DeWitt constraint on spin-network states is by annihilating, creating and rerouting the quanta of angular momentum associated with the edges of the underlying graph while the ADM-energy is essentially diagonalized by the spin-network states. We argue that the spin-network representation is the "non-linear Fock representation" of quantum gravity, thus justifying the term "Quantum Spin Dynamics (QSD)".
References:
T. Thiemann, "Anomaly-free formulation of non-perturbative, four-dimensional Lorentzian quantum gravity", HUTMP-96/B-350, gr-qc/9606088, Phys. Lett. B380 (1996) 257-264.
T. Thiemann, "Quantum Spin Dynamics (QSD)", Harvard Preprint HUTMP-96/B-351, gr-qc/9606089.
T. Thiemann, "Quantum Spin Dynamics (QSD) II", Harvard Preprint HUTMP-96/B-352, gr-qc/9606090.
T. Thiemann, "A length operator in canonical quantum gravity", Harvard Preprint HUTMP-96/B-354, gr-qc/960692.
| []
|
[
"Published as a conference paper at ICLR 2023 STOCHASTIC DIFFERENTIALLY PRIVATE AND FAIR LEARNING",
"Published as a conference paper at ICLR 2023 STOCHASTIC DIFFERENTIALLY PRIVATE AND FAIR LEARNING"
]
| [
"Andrew Lowy [email protected] ",
"Devansh Guptai [email protected] ",
"Meisam Razaviyayn [email protected] ",
"\nndraprastha Institute of Information Technology\nUniversity of Southern California\nDelhi\n",
"\nUniversity of Southern\nCalifornia\n"
]
| [
"ndraprastha Institute of Information Technology\nUniversity of Southern California\nDelhi",
"University of Southern\nCalifornia"
]
| []
| Machine learning models are increasingly used in high-stakes decision-making systems. In such applications, a major concern is that these models sometimes discriminate against certain demographic groups such as individuals with certain race, gender, or age. Another major concern in these applications is the violation of the privacy of users. While fair learning algorithms have been developed to mitigate discrimination issues, these algorithms can still leak sensitive information, such as individuals' health or financial records. Utilizing the notion of differential privacy (DP), prior works aimed at developing learning algorithms that are both private and fair. However, existing algorithms for DP fair learning are either not guaranteed to converge or require full batch of data in each iteration of the algorithm to converge. In this paper, we provide the first stochastic differentially private algorithm for fair learning that is guaranteed to converge. Here, the term "stochastic" refers to the fact that our proposed algorithm converges even when minibatches of data are used at each iteration (i.e. stochastic optimization). Our framework is flexible enough to permit different fairness notions, including demographic parity and equalized odds. In addition, our algorithm can be applied to non-binary classification tasks with multiple (non-binary) sensitive attributes. As a byproduct of our convergence analysis, we provide the first utility guarantee for a DP algorithm for solving nonconvex-strongly concave min-max problems. Our numerical experiments show that the proposed algorithm consistently offers significant performance gains over the state-of-the-art baselines, and can be applied to larger scale problems with non-binary target/sensitive attributes. | 10.48550/arxiv.2210.08781 | [
"https://export.arxiv.org/pdf/2210.08781v2.pdf"
]
| 252,917,944 | 2210.08781 | 04a1d57a43c7f92aa4fdd660d2b6c312c8c1cbe4 |
Published as a conference paper at ICLR 2023 STOCHASTIC DIFFERENTIALLY PRIVATE AND FAIR LEARNING
Andrew Lowy [email protected]
Devansh Guptai [email protected]
Meisam Razaviyayn [email protected]
Indraprastha Institute of Information Technology
University of Southern California
Delhi
University of Southern
California
Published as a conference paper at ICLR 2023 STOCHASTIC DIFFERENTIALLY PRIVATE AND FAIR LEARNING
Machine learning models are increasingly used in high-stakes decision-making systems. In such applications, a major concern is that these models sometimes discriminate against certain demographic groups such as individuals with certain race, gender, or age. Another major concern in these applications is the violation of the privacy of users. While fair learning algorithms have been developed to mitigate discrimination issues, these algorithms can still leak sensitive information, such as individuals' health or financial records. Utilizing the notion of differential privacy (DP), prior works aimed at developing learning algorithms that are both private and fair. However, existing algorithms for DP fair learning are either not guaranteed to converge or require full batch of data in each iteration of the algorithm to converge. In this paper, we provide the first stochastic differentially private algorithm for fair learning that is guaranteed to converge. Here, the term "stochastic" refers to the fact that our proposed algorithm converges even when minibatches of data are used at each iteration (i.e. stochastic optimization). Our framework is flexible enough to permit different fairness notions, including demographic parity and equalized odds. In addition, our algorithm can be applied to non-binary classification tasks with multiple (non-binary) sensitive attributes. As a byproduct of our convergence analysis, we provide the first utility guarantee for a DP algorithm for solving nonconvex-strongly concave min-max problems. Our numerical experiments show that the proposed algorithm consistently offers significant performance gains over the state-of-the-art baselines, and can be applied to larger scale problems with non-binary target/sensitive attributes.
INTRODUCTION
In recent years, machine learning algorithms have been increasingly used to inform decisions with far-reaching consequences (e.g. whether to release someone from prison or grant them a loan), raising concerns about their compliance with laws, regulations, societal norms, and ethical values. Specifically, machine learning algorithms have been found to discriminate against certain "sensitive" demographic groups (e.g. racial minorities), prompting a profusion of algorithmic fairness research (Dwork et al., 2012;Sweeney, 2013;Datta et al., 2015;Feldman et al., 2015;Bolukbasi et al., 2016;Angwin et al., 2016;Calmon et al., 2017;Hardt et al., 2016a;Fish et al., 2016;Woodworth et al., 2017;Zafar et al., 2017;Bechavod & Ligett, 2017;Kearns et al., 2018;Prost et al., 2019;Baharlouei et al., 2020;Lowy et al., 2022a). Algorithmic fairness literature aims to develop fair machine learning algorithms that output non-discriminatory predictions.
Fair learning algorithms typically need access to the sensitive data in order to ensure that the trained model is non-discriminatory. However, consumer privacy laws (such as the E.U. General Data Protection Regulation) restrict the use of sensitive demographic data in algorithmic decision-making.
Work done as a visiting scholar at the University of Southern California, Viterbi School of Engineering.
These two requirements, fair algorithms trained with private data, present a quandary: how can we train a model to be fair to a certain demographic if we don't even know which of our training examples belong to that group?
The works of Veale & Binns (2017); Kilbertus et al. (2018) proposed a solution to this quandary using secure multi-party computation (MPC), which allows the learner to train a fair model without directly accessing the sensitive attributes. Unfortunately, as Jagielski et al. (2019) observed, MPC does not prevent the trained model from leaking sensitive data. For example, with MPC, the output of the trained model could be used to infer the race of an individual in the training data set (Fredrikson et al., 2015;He et al., 2019;Song et al., 2020;Carlini et al., 2021). To prevent such leaks, Jagielski et al. (2019) argued for the use of differential privacy (Dwork et al., 2006) in fair learning. Differential privacy (DP) provides a strong guarantee that no company (or adversary) can learn much more about any individual than they could have learned had that individual's data never been used.
Since Jagielski et al. (2019), several follow-up works have proposed alternate approaches to DP fair learning (Xu et al., 2019;Ding et al., 2020;Mozannar et al., 2020;Tran et al., 2021b;a;2022). As shown in Fig. 1, each of these approaches suffers from at least two critical shortcomings. In particular, none of these methods have convergence guarantees when mini-batches of data are used in training. In training large-scale models, memory and efficiency constraints require the use of small minibatches in each iteration of training (i.e. stochastic optimization). Thus, existing DP fair learning methods cannot be used in such settings since they require computations on the full training data set in every iteration. See Appendix A for a more comprehensive discussion of related work.
Our Contributions: In this work, we propose a novel algorithmic framework for DP fair learning. Our approach builds on the non-private fair learning method of Lowy et al. (2022a). We consider a regularized empirical risk minimization (ERM) problem where the regularizer penalizes fairness violations, as measured by the Exponential Rényi Mutual Information. Using a result from Lowy et al. (2022a), we reformulate this fair ERM problem as a min-max optimization problem. Then, we use an efficient differentially private variation of stochastic gradient descent-ascent (DP-SGDA) to solve this fair ERM min-max objective. The main features of our algorithm are:
1. Guaranteed convergence for any privacy and fairness level, even when mini-batches of data are used in each iteration of training (i.e. stochastic optimization setting). As discussed, stochastic optimization is essential in large-scale machine learning scenarios. Our algorithm is the first stochastic DP fair learning method with provable convergence.
2. Flexibility to handle non-binary classification with multiple (non-binary) sensitive attributes (e.g. race and gender) under different fairness notions such as demographic parity or equalized odds. In each of these cases, our algorithm is guaranteed to converge.
Empirically, we show that our method outperforms the previous state-of-the-art methods in terms of fairness vs. accuracy trade-off across all privacy levels. Moreover, our algorithm is capable of training with mini-batch updates and can handle non-binary target and non-binary sensitive attributes. By contrast, existing DP fairness algorithms could not converge in our stochastic/non-binary experiment.
A byproduct of our algorithmic developments and analyses is the first DP convergent algorithm for nonconvex min-max optimization: namely, we provide an upper bound on the stationarity gap of DP-SGDA for solving problems of the form $\min_\theta \max_W F(\theta, W)$, where $F(\cdot, W)$ is non-convex. We expect this result to be of independent interest to the DP optimization community. Prior works that provide convergence results for DP min-max problems have assumed that $F(\cdot, W)$ is either (strongly) convex (Boob & Guzmán, 2021; Zhang et al., 2022) or satisfies a generalization of strong convexity known as the Polyak-Łojasiewicz (PL) condition (Yang et al., 2022).
PROBLEM SETTING AND PRELIMINARIES
Let Z " tz i " px i , s i , y i qu n i"1 be a data set with non-sensitive features x i P X , discrete sensitive attributes (e.g. race, gender) s i P rks fi t1, . . . , ku, and labels y i P rls. Let p y θ pxq denote the model predictions parameterized by θ, and pθ, x, yq " pp y θ pxq, yq be a loss function (e.g. cross-entropy loss). Our goal is to (approximately) solve the empirical risk minimization (ERM) problem
in a fair manner, while maintaining the differential privacy of the sensitive data ts i u n i"1 . We consider two different notions of fairness in this work: 1 Definition 2.1 (Fairness Notions). Let A : Z Ñ Y be a classifier.
• $\mathcal{A}$ satisfies demographic parity (Dwork et al., 2012) if the predictions $\mathcal{A}(Z)$ are statistically independent of the sensitive attributes.
• $\mathcal{A}$ satisfies equalized odds (Hardt et al., 2016a) if the predictions $\mathcal{A}(Z)$ are conditionally independent of the sensitive attributes given $Y = y$ for all $y$.
Depending on the specific problem at hand, one fairness notion may be more desirable than the other (Dwork et al., 2012; Hardt et al., 2016a).
In practical applications, achieving exact fairness, i.e. (conditional) independence of $\hat{Y}$ and $S$, is unrealistic. In fact, achieving exact fairness can be impossible for a differentially private algorithm that achieves non-trivial accuracy (Cummings et al., 2019). Thus, we instead aim to design an algorithm that achieves small fairness violation on the given data set $Z$. Fairness violation can be measured in different ways: see e.g. Lowy et al. (2022a) for a thorough survey. For example, if demographic parity is the desired fairness notion, then one can measure (empirical) demographic parity violation by
$$\max_{\hat{y} \in \mathcal{Y}} \max_{s \in \mathcal{S}} \big| \hat{p}_{\hat{Y}|S}(\hat{y}\,|\,s) - \hat{p}_{\hat{Y}}(\hat{y}) \big|, \qquad (2)$$
where $\hat{p}$ denotes an empirical probability calculated directly from $(Z, \{\hat{y}_i\}_{i=1}^n)$. Next, we define differential privacy (DP). Following the DP fair learning literature (Jagielski et al., 2019; Tran et al., 2021b; 2022), we consider a relaxation of DP in which only the sensitive attributes require privacy. Say $Z$ and $Z'$ are adjacent with respect to sensitive data if $Z = \{(x_i, y_i, s_i)\}_{i=1}^n$, $Z' = \{(x_i, y_i, s_i')\}_{i=1}^n$, and there is a unique $i \in [n]$ such that $s_i \neq s_i'$.
Definition 2.2 (Differential Privacy w.r.t. Sensitive Attributes). Let $\epsilon \ge 0$, $\delta \in [0, 1)$. A randomized algorithm $\mathcal{A}$ is $(\epsilon, \delta)$-differentially private w.r.t. sensitive attributes $S$ (DP) if for all pairs of data sets $Z, Z'$ that are adjacent w.r.t. sensitive attributes, we have
$$\mathbb{P}(\mathcal{A}(Z) \in O) \le e^{\epsilon}\, \mathbb{P}(\mathcal{A}(Z') \in O) + \delta \qquad (3)$$
for all measurable $O \subseteq \mathcal{Y}$.
As discussed in Section 1, Theorem 2.2 is useful if a company wants to train a fair model, but is unable to use the sensitive attributes (which are needed to train a fair model) due to privacy concerns and laws (e.g., the E.U. GDPR). Theorem 2.2 enables the company to privately use the sensitive attributes to train a fair model, while satisfying legal and ethical constraints. That being said, Theorem 2.2 still may not prevent leakage of non-sensitive data. Thus, if the company is concerned with privacy of user data beyond the sensitive demographic attributes, then it should impose DP for all the features. Our algorithm and analysis readily extends to DP for all features: see Section 3.
Throughout the paper, we shall restrict attention to data sets that contain at least a $\rho$-fraction of every sensitive attribute for some $\rho \in (0, 1)$: i.e. $\frac{1}{|Z|}\sum_{i=1}^{|Z|} \mathbf{1}_{\{s_i = r\}} \ge \rho$ for all $r \in [k]$. This is a reasonable assumption in practice: for example, if sex is the sensitive attribute and a data set contains all men, then training a model that is fair with respect to sex and has a non-trivial performance (better than random) seems almost impossible. Understanding what performance is (im-)possible for DP fair learning in the absence of sample diversity is an important direction for future work.
PRIVATE FAIR ERM VIA EXPONENTIAL RÉNYI MUTUAL INFORMATION
A standard in-processing strategy in the literature for enforcing fairness is to add a regularization term to the empirical objective that penalizes fairness violations (Zhang et al., 2018;Donini et al., 2018;Mary et al., 2019;Baharlouei et al., 2020;Cho et al., 2020b;Lowy et al., 2022a). We can then jointly optimize for fairness and accuracy by solving
$$\min_\theta \Big\{ \hat{L}(\theta) + \lambda\, D(\hat{Y}, S, Y) \Big\},$$
where $D$ is some measure of statistical (conditional) dependence between the sensitive attributes and the predictions (given $Y$), and $\lambda \ge 0$ is a scalar balancing fairness and accuracy considerations. Following Lowy et al. (2022a), we take $D$ to be the empirical Exponential Rényi Mutual Information (ERMI) between the predictions and the sensitive attributes:
$$\hat{D}_R(\hat{Y}, S) := \mathbb{E}\left\{ \frac{\hat{p}_{\hat{Y},S}(\hat{Y}, S)}{\hat{p}_{\hat{Y}}(\hat{Y})\,\hat{p}_S(S)} \right\} - 1 = \sum_{j \in [l]} \sum_{r \in [k]} \frac{\hat{p}_{\hat{Y},S}(j, r)^2}{\hat{p}_{\hat{Y}}(j)\,\hat{p}_S(r)} - 1. \qquad \text{(ERMI)}$$
Theorem 3.1 is what we would use if demographic parity were the desired fairness notion. If instead one wanted to encourage equalized odds, then Theorem 3.1 can be readily adapted to these fairness notions by substituting appropriate conditional probabilities for $\hat{p}_{\hat{Y},S}$, $\hat{p}_{\hat{Y}}$, and $\hat{p}_S$ in (ERMI): see Appendix B for details. It can be shown that ERMI $\ge 0$, and is zero if and only if demographic parity (or equalized odds, for the conditional version of ERMI) is satisfied (Lowy et al., 2022a). Further, ERMI provides an upper bound on other commonly used measures of fairness violation, e.g. (2), Shannon mutual information (Cho et al., 2020a), Rényi correlation (Baharlouei et al., 2020), and $L_q$ fairness violation (Kearns et al., 2018; Hardt et al., 2016a) (Lowy et al., 2022a). This implies that any algorithm that makes ERMI small will also have small fairness violation with respect to these other notions. Lastly, (Lowy et al., 2022a, Proposition 2) shows that empirical ERMI (Theorem 3.1) is an asymptotically unbiased estimator of "population ERMI", which can be defined as in Theorem 3.1, except that empirical distributions are replaced by their population counterparts.
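For concreteness, the short sketch below (our own illustration, not code from the paper) computes the plug-in estimate of (ERMI) from arrays of predicted labels and sensitive attributes; all names are ours.

```python
import numpy as np

def empirical_ermi(y_pred, s, n_classes, n_groups):
    """Plug-in estimate of ERMI between predicted labels y_pred and sensitive attributes s."""
    n = len(y_pred)
    p_joint = np.zeros((n_classes, n_groups))      # \hat{p}_{\hat{Y},S}(j, r)
    for j, r in zip(y_pred, s):
        p_joint[j, r] += 1.0 / n
    p_yhat = p_joint.sum(axis=1)                   # \hat{p}_{\hat{Y}}(j)
    p_s = p_joint.sum(axis=0)                      # \hat{p}_{S}(r)
    marg = np.outer(p_yhat, p_s)
    mask = marg > 0                                # skip empty cells to avoid division by zero
    return (p_joint[mask] ** 2 / marg[mask]).sum() - 1.0

# Example: predictions drawn independently of s give ERMI close to 0.
rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=10_000)
y_pred = rng.integers(0, 3, size=10_000)
print(empirical_ermi(y_pred, s, n_classes=3, n_groups=2))
```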
Our approach to enforcing fairness is to augment (1) with an ERMI regularizer and privately solve:
$$\min_\theta \Big\{ \text{FERMI}(\theta) := \hat{L}(\theta) + \lambda\, \hat{D}_R(\hat{Y}_\theta(X), S) \Big\}. \qquad \text{(FERMI obj.)}$$
Since empirical ERMI is an asymptotically unbiased estimator of population ERMI, a solution to (FERMI obj.) is likely to generalize to the corresponding fair population risk minimization problem (Lowy et al., 2022a). There are numerous ways to privately solve (FERMI obj.). For example, one could use the exponential mechanism (McSherry & Talwar, 2007), or run noisy gradient descent (GD) (Bassily et al., 2014). The problem with these approaches is that they are inefficient or require computing $n$ gradients at every iteration, which is prohibitive for large-scale problems, as discussed earlier. Notice that we could not run noisy stochastic GD (SGD) on (FERMI obj.) because we do not (yet) have a statistically unbiased estimate of $\nabla_\theta \hat{D}_R(\hat{Y}_\theta(X), S)$.
Our next goal is to derive a stochastic, differentially private fair learning algorithm. For feature input $x$, let the predicted class labels be given by $\hat{y}(x, \theta) = j \in [l]$ with probability $\mathcal{F}_j(x, \theta)$, where $\mathcal{F}(x, \theta)$ is differentiable in $\theta$, has range $[0, 1]^l$, and $\sum_{j=1}^l \mathcal{F}_j(x, \theta) = 1$. For instance, $\mathcal{F}(x, \theta) = (\mathcal{F}_1(x, \theta), \ldots, \mathcal{F}_l(x, \theta))$ could represent the output of a neural net after the softmax layer or the probability label assigned by a logistic regression model. Then we have the following min-max re-formulation of (FERMI obj.):
Theorem 3.2 (Lowy et al. (2022a)). There are differentiable functions $\hat{\psi}_i$ such that (FERMI obj.) is equivalent to
$$\min_\theta \max_{W \in \mathbb{R}^{k \times l}} \Big\{ \hat{F}(\theta, W) := \hat{L}(\theta) + \lambda\, \frac{1}{n} \sum_{i=1}^n \hat{\psi}_i(\theta, W) \Big\}. \qquad (4)$$
Further, $\hat{\psi}_i(\theta, \cdot)$ is strongly concave for all $\theta$.
The functions $\hat{\psi}_i$ are given explicitly in Appendix C. Theorem 3.2 is useful because it permits us to use stochastic optimization to solve (FERMI obj.): for any batch size $m \in [n]$, the gradients (with respect to $\theta$ and $W$) of $\frac{1}{m}\sum_{i \in B} \ell(x_i, y_i; \theta) + \lambda \hat{\psi}_i(\theta, W)$ are statistically unbiased estimators of the gradients of $\hat{F}(\theta, W)$, if $B$ is drawn uniformly from $Z$. However, when differential privacy of the sensitive attributes is also desired, the formulation (4) presents some challenges, due to the non-convexity of $\hat{F}(\cdot, W)$. Indeed, there is no known DP algorithm for solving non-convex min-max problems that is proven to converge. Next, we provide the first such convergence guarantee.
NOISY DP-FERMI FOR STOCHASTIC PRIVATE FAIR ERM
Our proposed stochastic DP algorithm for solving (FERMI obj.) is given in Algorithm 1. It is a noisy DP variation of two-timescale stochastic gradient descent ascent (SGDA) (Lin et al., 2020).
Algorithm 1 DP-FERMI Algorithm for Private Fair ERM
1: Input: $\theta_0 \in \mathbb{R}^{d_\theta}$, $W_0 = 0 \in \mathbb{R}^{k \times l}$, step-sizes $(\eta_\theta, \eta_w)$, fairness parameter $\lambda \ge 0$, iteration number $T$, minibatch size $|B_t| = m \in [n]$, set $\mathcal{W} \subset \mathbb{R}^{k \times l}$, noise parameters $\sigma_w^2, \sigma_\theta^2$.
2: Compute $\hat{P}_s^{-1/2}$.
3: for $t = 0, 1, \ldots, T$ do
4: Draw a mini-batch $B_t$ of data points $\{(x_i, s_i, y_i)\}_{i \in B_t}$.
5: Set $\theta_{t+1} \leftarrow \theta_t - \frac{\eta_\theta}{|B_t|} \sum_{i \in B_t} \big[ \nabla_\theta \ell(x_i, y_i; \theta_t) + \lambda\big(\nabla_\theta \hat{\psi}_i(\theta_t, W_t) + u_t\big) \big]$, where $u_t \sim N(0, \sigma_\theta^2 I_{d_\theta})$.
6: Set $W_{t+1} \leftarrow \Pi_{\mathcal{W}}\Big( W_t + \eta_w \Big[ \frac{\lambda}{|B_t|} \sum_{i \in B_t} \nabla_w \hat{\psi}_i(\theta_t, W_t) + V_t \Big] \Big)$, where $V_t$ is a $k \times l$ matrix with independent random Gaussian entries $(V_t)_{r,j} \sim N(0, \sigma_w^2)$.
7: end for
8: Pick $\hat{t}$ uniformly at random from $\{1, \ldots, T\}$.
9: Return: $\hat{\theta}_T := \theta_{\hat{t}}$.
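The sketch below is our own illustration of lines 5-6 of Algorithm 1, not the authors' implementation. The callables grad_theta_loss, grad_theta_psi, and grad_w_psi are hypothetical stand-ins for the per-example gradients (explicit formulas appear in Lemma D.1), and $\Pi_{\mathcal{W}}$ is taken, as one concrete choice, to be projection onto a Frobenius-norm ball.

```python
import numpy as np

def project_frobenius_ball(W, radius):
    """One concrete choice of Pi_W: projection onto {W : ||W||_F <= radius}."""
    norm = np.linalg.norm(W)
    return W if norm <= radius else W * (radius / norm)

def dp_fermi_step(theta, W, batch, lam, eta_theta, eta_w, sigma_theta, sigma_w,
                  radius, grad_theta_loss, grad_theta_psi, grad_w_psi, rng):
    """One iteration of Algorithm 1 (lines 5-6): noisy descent in theta, noisy projected ascent in W.

    `batch` is a list of (x, s, y) tuples; the three grad_* arguments are caller-supplied
    per-example gradient oracles (hypothetical names, see Lemma D.1 for the formulas)."""
    m = len(batch)
    u = rng.normal(0.0, sigma_theta, size=theta.shape)   # u_t ~ N(0, sigma_theta^2 I)
    V = rng.normal(0.0, sigma_w, size=W.shape)           # (V_t)_{r,j} ~ N(0, sigma_w^2)
    g_theta = np.zeros_like(theta)
    g_w = np.zeros_like(W)
    for (x, s, y) in batch:
        g_theta += grad_theta_loss(theta, x, y) + lam * (grad_theta_psi(theta, W, x, s) + u)
        g_w += grad_w_psi(theta, W, x, s)
    theta_next = theta - (eta_theta / m) * g_theta
    W_next = project_frobenius_ball(W + eta_w * ((lam / m) * g_w + V), radius)
    return theta_next, W_next
```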
Explicit formulae for $\nabla_\theta \hat{\psi}_i(\theta_t, W_t)$ and $\nabla_w \hat{\psi}_i(\theta_t, W_t)$ are given in Theorem D.1 (Appendix D). We provide the privacy guarantee of Algorithm 1 in Theorem 3.3:

Theorem 3.3. Let $\epsilon \le 2\ln(1/\delta)$, $\delta \in (0, 1)$, and $T \ge \big(\tfrac{n\epsilon}{\sqrt{2}\,m}\big)^2$. Assume $\mathcal{F}(x, \cdot)$ is $L_\theta$-Lipschitz for all $x$, and $|(W_t)_{r,j}| \le D$ for all $t \in [T]$, $r \in [k]$, $j \in [l]$. Then, for $\sigma_w^2 \ge \frac{16\, T \ln(1/\delta)}{\epsilon^2 n^2 \rho}$ and $\sigma_\theta^2 \ge \frac{16\, L_\theta^2 D^2 \ln(1/\delta)\, T}{\epsilon^2 n^2 \rho}$, Algorithm 1 is $(\epsilon, \delta)$-DP with respect to the sensitive attributes for all data sets containing at least a $\rho$-fraction of minority attributes. Further, if $\sigma_w^2 \ge \frac{32\, T \ln(1/\delta)}{\epsilon^2 n^2}\big(\tfrac{1}{\rho} + D^2\big)$ and $\sigma_\theta^2 \ge \frac{64\, L_\theta^2 D^2 \ln(1/\delta)\, T}{\epsilon^2 n^2 \rho} + \frac{32\, D^4 L_\theta^2 l^2\, T \ln(1/\delta)}{\epsilon^2 n^2}$, then Algorithm 1 is $(\epsilon, \delta)$-DP (with respect to all features) for all data sets containing at least a $\rho$-fraction of minority attributes.
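As a concrete illustration (ours, not the authors' code), the small helper below calibrates the noise variances according to the sensitive-attributes-only case of Theorem 3.3 as reconstructed above; the constants should be treated as illustrative rather than authoritative.

```python
import math

def dp_fermi_noise_levels(epsilon, delta, T, n, rho, L_theta, D):
    """Noise variances for DP w.r.t. sensitive attributes only, per Theorem 3.3.

    Assumes epsilon <= 2 ln(1/delta) and a sufficiently large iteration count T,
    as required by the theorem."""
    sigma_w_sq = 16.0 * T * math.log(1.0 / delta) / (epsilon ** 2 * n ** 2 * rho)
    sigma_theta_sq = (16.0 * (L_theta ** 2) * (D ** 2) * math.log(1.0 / delta) * T
                      / (epsilon ** 2 * n ** 2 * rho))
    return sigma_theta_sq, sigma_w_sq

# e.g. epsilon = 1, delta = 1e-5, T = 10_000, n = 45_000 samples, rho = 0.3
print(dp_fermi_noise_levels(1.0, 1e-5, 10_000, 45_000, 0.3, L_theta=1.0, D=1.0))
```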
See Appendix D for the proof. Next, we give a convergence guarantee for Algorithm 1:

Theorem 3.4. Assume the loss function $\ell(\cdot, x, y)$ and $\mathcal{F}(x, \cdot)$ are Lipschitz continuous with Lipschitz gradient for all $(x, y)$, and $\hat{p}_S(r) \ge \rho > 0$ for all $r \in [k]$. In Algorithm 1, choose $\mathcal{W}$ to be a sufficiently large ball that contains $W^*(\theta) := \arg\max_W \hat{F}(\theta, W)$ for every $\theta$ in some neighborhood of $\theta^* \in \arg\min_\theta \max_W \hat{F}(\theta, W)$. Then there exist algorithmic parameters such that the $(\epsilon, \delta)$-DP Algorithm 1 returns $\hat{\theta}_T$ with
$$\mathbb{E}\|\nabla \text{FERMI}(\hat{\theta}_T)\|^2 = O\left( \frac{\sqrt{\max(d_\theta, kl) \ln(1/\delta)}}{\epsilon n} \right),$$
treating $D = \text{diameter}(\mathcal{W})$, $\lambda$, $\rho$, $l$, and the Lipschitz and smoothness parameters of $\ell$ and $\mathcal{F}$ as constants.
Theorem 3.4 shows that Algorithm 1 finds an approximate stationary point of (FERMI obj.). Finding approximate stationary points is generally the best one can hope to do in polynomial time for nonconvex optimization (Murty & Kabadi, 1985). The stationarity gap in Theorem 3.4 depends on the number of samples $n$ and model parameters $d_\theta$, the desired level of privacy $(\epsilon, \delta)$, and the number of labels $l$ and sensitive attributes $k$. For large-scale models (e.g. deep neural nets), we typically have $d_\theta \gg 1$ and $k, l = O(1)$, so that the convergence rate of Algorithm 1 is essentially immune to the number of labels and sensitive attributes. In contrast, no existing works with convergence guarantees are able to handle non-binary classification ($l > 2$), even with full batches and a single binary sensitive attribute.
A few more remarks are in order. First, the utility bound in Theorem 3.4 corresponds to DP for all of the features. If DP is only required for the sensitive attributes, then using the smaller $\sigma_\theta^2, \sigma_w^2$ in Theorem 3.3 would improve the dependence on the constants $D, l, L_\theta$ in the utility bound. Second, the choice of $\mathcal{W}$ in Theorem 3.4 implies that (4) is equivalent to $\min_\theta \max_{W \in \mathcal{W}} \hat{F}(\theta, W)$, which is what our algorithm directly solves (c.f. (7)). Lastly, note that while we return a uniformly random iterate in Algorithm 1 for our theoretical convergence analysis, we recommend returning the last iterate $\theta_T$ in practice: our numerical experiments show strong performance of the last iterate.
In Theorem E.1 of Appendix E, we prove a result which is more general than Theorem 3.4. Theorem E.1 shows that noisy DP-SGDA converges to an approximate stationary point of any smooth nonconvex-strongly concave min-max optimization problem (not just (4)). We expect Theorem E.1 to be of general interest to the DP optimization community beyond its applications to DP fair learning, since it is the first DP convergence guarantee for nonconvex min-max optimization. We also give a bound on the iteration complexity T in Appendix E.
The proof of Theorem E.1 involves a careful analysis of how the Gaussian noises propagate through the optimization trajectories of $\theta_t$ and $w_t$. Compared with DP non-convex minimization analyses (e.g. Wang et al. (2019); Hu et al. (2021); Ding et al. (2021b); Lowy et al. (2022b)), the two noises required to privatize the solution of the min-max problem we consider complicate the analysis and require careful tuning of $\eta_\theta$ and $\eta_W$. Compared to existing analyses of DP min-max games in Boob & Guzmán (2021); Yang et al. (2022); Zhang et al. (2022), which assume that $f(\cdot, w)$ is convex or PL, dealing with non-convexity is a challenge that requires different optimization techniques.
NUMERICAL EXPERIMENTS
In this section, we evaluate the performance of our proposed approach (DP-FERMI) in terms of the fairness violation vs. test error for different privacy levels. We present our results in two parts: In Section 4.1, we assess the performance of our method in training logistic regression models on several benchmark tabular datasets. Since this is a standard setup that existing DP fairness algorithms can handle, we are able to compare our method against the state-of-the-art baselines. We carefully tuned the hyperparameters of all baselines for fair comparison. We find that DP-FERMI consistently outperforms all state-of-the-art baselines across all data sets and all privacy levels. These observations hold for both demographic parity and equalized odds fairness notions. To quantify the improvement of our results over the state-of-the-art baselines, we calculated the performance gain with respect to fairness violation (for fixed accuracy level) that our model yields over all the datasets. We obtained a performance gain of demographic parity that was 79.648 % better than Tran et al. (2021b) on average, and 65.89% better on median. The average performance gain of equalized odds was 96.65% while median percentage gain was 90.02%. In Section 4.2, we showcase the scalability of DP-FERMI by using it to train a deep convolutional neural network for classification on a large image dataset. In Appendix F, we give detailed descriptions of the data sets, experimental setups and training procedure, along with additional results.
STANDARD BENCHMARK EXPERIMENTS: LOGISTIC REGRESSION ON TABULAR
DATASETS

In the first set of experiments we train a logistic regression model using DP-FERMI (Algorithm 1) for demographic parity and a modified version of DP-FERMI (described in Appendix F) for equalized odds. We compare DP-FERMI against all applicable publicly available baselines in each experiment.
DEMOGRAPHIC PARITY
We use four benchmark tabular datasets: the Adult Income, Retired Adult, Parkinsons, and Credit-Card datasets from the UCI machine learning repository (Dua & Graff (2017)). The predicted variables and sensitive attributes are both binary in these datasets. We analyze fairness-accuracy trade-offs with four different values of ε ∈ {0.5, 1, 3, 9} for each dataset. We compare against state-of-the-art algorithms proposed in Tran et al. (2021a) and (the demographic parity objective of) Tran et al. (2021b). The results displayed are averages over 15 trials (random seeds) for each value of ε.

For the Adult dataset, the task is to predict whether the income is greater than $50K or not, keeping gender as the sensitive attribute. The Retired Adult dataset is the same as the Adult dataset, but with updated data. We use the same output and sensitive attributes for both experiments. The results for Adult and Retired Adult are shown in Figs. 2 and 6 (in Appendix F.2). Compared to Tran et al. (2021a;b), DP-FERMI offers superior fairness-accuracy tradeoffs at every privacy (ε) level.
(a) " 0.5 (b) " 1 (c) " 3 (d) " 9
Figure 2: Private, Fair (Demographic Parity) logistic regression on Adult Dataset. In the Parkinsons dataset, the task is to predict whether the total UPDRS score of the patient is greater than the median or not keeping gender as the sensitive attribute. Results for P t1, 3u are shown in Fig. 3. See Fig. 8 in Appendix F for P t0.5, 9u. Our algorithm again outperforms the baselines Tran et al. (2021a;b) for all tested privacy levels.
In the Credit Card dataset, the task is to predict whether the user will default payment the next month, keeping gender as the sensitive attribute. Results are shown in Fig. 7 in Appendix F.2. Once again, DP-FERMI provides the most favorable privacy-fairness-accuracy profile.

4.1.2 EQUALIZED ODDS

Next, we consider the slightly modified version of Algorithm 1, which is designed to minimize the Equalized Odds violation by replacing the absolute probabilities in the objective with class conditional probabilities: see Appendix F.2.4 for details.
We considered the Credit Card and Adult datasets for these experiments, using the same sensitive attributes as mentioned above. Results for Credit Card are shown in Fig. 4. Adult results are given in Fig. 9 in Appendix F.2.4. Compared to Jagielski et al. (2019) and the equalized odds objective in Tran et al. (2021b), our equalized odds variant of DP-FERMI outperforms these state-of-the-art baselines at every privacy level.

4.2 LARGE-SCALE EXPERIMENT: DEEP CONVOLUTIONAL NEURAL NETWORK ON IMAGE DATASET

In our second set of experiments, we train a deep 9-layer VGG-like classifier (Simonyan & Zisserman, 2015) with d ≈ 1.6 million parameters on the UTK-Face dataset (Zhang et al., 2017b) using Algorithm 1. We classify the facial images into 9 age groups similar to the setup in Tran et al. (2022), while keeping race (containing 5 classes) as the sensitive attribute. See Appendix F.3 for more details. We consider four different privacy levels ε ∈ {10, 25, 50, 100}. Compared to the tabular datasets, larger ε is needed to obtain stable results in the large-scale setting since the number of parameters d is much larger and the cost of privacy increases with d (see Theorem 3.4). Larger values of ε > 100 were used in the baseline Jagielski et al. (2019) for smaller scale experiments.

The results in Fig. 5 empirically verify our main theoretical result: DP-FERMI converges even for non-binary classification with small batch size and non-binary sensitive attributes. We took Tran et al. (2021a;b) as our baselines and attempted to adapt them to this non-binary large-scale task. We observed that the baselines were very unstable while training and mostly gave degenerate results (predicting a single output irrespective of the input). By contrast, our method was able to obtain stable and meaningful tradeoff curves. Also, while Tran et al. (2022) reported results on UTK-Face, their code is not publicly available and we were unable to reproduce their results.
CONCLUDING REMARKS
Motivated by pressing legal, ethical, and social considerations, we studied the challenging problem of learning fair models with differentially private demographic data. We observed that existing works suffer from a few crucial limitations that render their approaches impractical for large-scale problems. Specifically, existing approaches require full batches of data in each iteration (and/or exponential runtime) in order to provide convergence/accuracy guarantees. We addressed these limitations by deriving a DP stochastic optimization algorithm for fair learning, and rigorously proved the convergence of the proposed method. Our convergence guarantee holds even for non-binary classification (with any hypothesis class, even infinite VC dimension, c.f. Jagielski et al. (2019)) with multiple sensitive attributes and access to random minibatches of data in each iteration. Finally, we evaluated our method in extensive numerical experiments and found that it significantly outperforms the previous state-of-the-art models, in terms of fairness-accuracy tradeoff. The potential societal impacts of our work are discussed in Appendix G.
ACKNOWLEDGMENTS

This work was supported in part with funding from the NSF CAREER award 2144985, from the YIP AFOSR award, from a gift from the USC-Meta Center for Research and Education in AI & Learning, and from a gift from the USC-Amazon Center on Secure & Trusted Machine Learning.

A RELATED WORK

Fair Learning: The study of differentially private fair learning algorithms was initiated by Jagielski et al. (2019). Jagielski et al. (2019) considered equalized odds and proposed two DP algorithms: 1) an ε-DP post-processing approach derived from Hardt et al. (2016a); and 2) an (ε, δ)-DP in-processing approach based on Agarwal et al. (2018). The major drawback of their post-processing approach is the unrealistic requirement that the algorithm have access to the sensitive attributes at test time, which Jagielski et al. (2019) admits "isn't feasible (or legal) in certain applications." Additionally, post-processing approaches are known to suffer from inferior fairness-accuracy tradeoffs compared with in-processing methods. While the in-processing method of Jagielski et al. (2019) does not require access to sensitive attributes at test time, it comes with a different set of disadvantages: 1) it is limited to binary classification; 2) its theoretical performance guarantees require the use of the computationally inefficient (i.e. exponential-time) exponential mechanism (McSherry & Talwar, 2007); 3) its theoretical performance guarantees require computations on the full training set and do not permit mini-batch implementations; 4) it requires the hypothesis class H to have finite VC dimension. In this work, we propose the first algorithm that overcomes all of these pitfalls: our algorithm is amenable to multi-way classification with multiple sensitive attributes, computationally efficient, and comes with convergence guarantees that hold even when mini-batches of m < n samples are used in each iteration of training, and even when VC(H) = ∞. Furthermore, our framework is flexible enough to accommodate many notions of group fairness besides equalized odds (e.g. demographic parity, accuracy parity).

Following Jagielski et al. (2019), several works have proposed other DP fair learning algorithms. None of these works have managed to simultaneously address all the shortcomings of the method of Jagielski et al. (2019). The work of Xu et al. (2019) proposed DP and fair binary logistic regression, but did not provide any theoretical convergence/performance guarantees. The work of Mozannar et al. (2020) combined aspects of both Hardt et al. (2016a) and Agarwal et al. (2018) in a two-step locally differentially private fairness algorithm. Their approach is limited to binary classification. Moreover, their algorithm requires n/2 samples in each iteration (of their in-processing step), making it impractical for large-scale problems. More recently, Tran et al. (2021b) devised another DP in-processing method based on lagrange duality, which covers non-binary classification problems. In a subsequent work, Tran et al. (2021a) studied the effect of DP on accuracy parity in ERM, and proposed using a regularizer to promote fairness. Finally, Tran et al. (2022) provided a semisupervised fair "Private Aggregation of Teacher Ensembles" framework. A shortcoming of each of these three most recent works is their lack of theoretical convergence or accuracy guarantees. In another vein, some works have observed the disparate impact of privacy constraints on demographic subgroups (Bagdasaryan et al., 2019; Tran et al., 2021c).

(Private) Min-Max Optimization: Non-privately, smooth nonconvex-concave min-max optimization has been studied in, e.g. Nouiehed et al. (2019); Kong & Monteiro (2019); Lin et al. (2020); Ostrovskii et al. (2021), with state-of-the-art convergence rates (under the strongest notion of stationarity) due to Ostrovskii et al. (2021). With DP constraints, min-max optimization had only been studied in the convex-concave and PL-concave settings prior to the current work (Boob & Guzmán, 2021; Zhang et al., 2022; Yang et al., 2022).

Private ERM and Stochastic Optimization: In the absence of fairness considerations, DP optimization and ERM has been studied extensively. Most works have focused on the convex or PL settings; see, e.g. Bassily et al. (2014; 2019); Asi et al. (2021); Lowy & Razaviyayn (2021; 2023) and the references therein. The works of Zhang et al. (2017a); Wang et al. (2017; 2019); Arora et al. (2022) have considered non-convex (non-PL) loss functions.

B EQUALIZED ODDS VERSION OF ERMI

If equalized odds (Hardt et al., 2016b) is the desired fairness notion, then one should use the following variation of ERMI as a regularizer (Lowy et al., 2022a):
$$\hat{D}_R(\hat{Y}; S \,|\, Y) := \mathbb{E}\left\{ \frac{\hat{p}_{\hat{Y},S|Y}(\hat{Y}, S \,|\, Y)}{\hat{p}_{\hat{Y}|Y}(\hat{Y} \,|\, Y)\, \hat{p}_{S|Y}(S \,|\, Y)} \right\} - 1 = \sum_{y=1}^{l} \sum_{j=1}^{l} \sum_{r=1}^{k} \frac{\hat{p}_{\hat{Y},S|Y}(j, r \,|\, y)^2}{\hat{p}_{\hat{Y}|Y}(j \,|\, y)\, \hat{p}_{S|Y}(r \,|\, y)}\, \hat{p}_Y(y) - 1. \qquad (5)$$
Here $\hat{p}_{\hat{Y},S|Y}$ denotes the empirical joint distribution of the predictions and sensitive attributes $(\hat{Y}, S)$ conditional on the true labels $Y$. In particular, if $\hat{D}_R(\hat{Y}; S \,|\, Y) = 0$, then $\hat{Y}$ and $S$ are conditionally independent given $Y$ (i.e. equalized odds is satisfied).
C COMPLETE VERSION OF THEOREM 3.2
Let $\hat{y}(x_i; \theta) \in \{0,1\}^l$ and $s_i \in \{0,1\}^k$ be the one-hot encodings of $\hat{y}(x_i, \theta)$ and $s_i$, respectively: i.e., $\hat{y}_j(x_i; \theta) = \mathbf{1}_{\{\hat{y}(x_i,\theta) = j\}}$ and $s_{i,r} = \mathbf{1}_{\{s_i = r\}}$ for $j \in [l]$, $r \in [k]$. Also, denote $\hat{P}_s = \mathrm{diag}(\hat{p}_S(1), \ldots, \hat{p}_S(k))$, where $\hat{p}_S(r) := \frac{1}{n}\sum_{i=1}^n \mathbf{1}_{\{s_i = r\}} \ge \rho > 0$ is the empirical probability of attribute $r$ ($r \in [k]$). Then we have the following re-formulation of (FERMI obj.) as a min-max problem:

Theorem C.1 (Lowy et al. (2022a)). (FERMI obj.) is equivalent to
$$\min_\theta \max_{W \in \mathbb{R}^{k \times l}} \Big\{ \hat{F}(\theta, W) := \hat{L}(\theta) + \lambda\, \frac{1}{n}\sum_{i=1}^n \hat{\psi}_i(\theta, W) \Big\}, \qquad (6)$$
where
$$\hat{\psi}_i(\theta, W) := -\mathrm{Tr}\big(W\, \mathbb{E}[\hat{y}(x_i,\theta)\hat{y}(x_i,\theta)^T \,|\, x_i]\, W^T\big) + 2\,\mathrm{Tr}\big(W\, \mathbb{E}[\hat{y}(x_i;\theta)\, s_i^T \,|\, x_i, s_i]\, \hat{P}_s^{-1/2}\big) - 1,$$
$\mathbb{E}[\hat{y}(x_i;\theta)\hat{y}(x_i;\theta)^T \,|\, x_i] = \mathrm{diag}(\mathcal{F}_1(x_i,\theta), \ldots, \mathcal{F}_l(x_i,\theta))$, and $\mathbb{E}[\hat{y}(x_i;\theta)\, s_i^T \,|\, x_i, s_i]$ is a $k \times l$ matrix with $\mathbb{E}[\hat{y}(x_i;\theta)\, s_i^T \,|\, x_i, s_i]_{r,j} = s_{i,r}\,\mathcal{F}_j(x_i,\theta)$.

Strong concavity of $\hat{\psi}_i$ is shown in Lowy et al. (2022a).
D DP-FERMI ALGORITHM: PRIVACY
We begin with a routine calculation of the derivatives of $\hat{\psi}_i$, which follows by elementary matrix calculus:

Lemma D.1. Let $\hat{\psi}_i(\theta, W) = -\mathrm{Tr}\big(W\, \mathbb{E}[\hat{y}(x_i,\theta)\hat{y}(x_i,\theta)^T \,|\, x_i]\, W^T\big) + 2\,\mathrm{Tr}\big(W\, \mathbb{E}[\hat{y}(x_i;\theta)\, s_i^T \,|\, x_i, s_i]\, \hat{P}_s^{-1/2}\big) - 1$, where $\mathbb{E}[\hat{y}(x_i;\theta)\hat{y}(x_i;\theta)^T \,|\, x_i] = \mathrm{diag}(\mathcal{F}_1(x_i,\theta), \ldots, \mathcal{F}_l(x_i,\theta))$ and $\mathbb{E}[\hat{y}(x_i;\theta)\, s_i^T \,|\, x_i, s_i]$ is a $k \times l$ matrix with $\mathbb{E}[\hat{y}(x_i;\theta)\, s_i^T \,|\, x_i, s_i]_{r,j} = s_{i,r}\,\mathcal{F}_j(x_i,\theta)$. Then,
$$\nabla_\theta \hat{\psi}_i(\theta, W) = -\nabla_\theta \mathrm{vec}\big(\mathbb{E}[\hat{y}(x_i,\theta)\hat{y}(x_i,\theta)^T \,|\, x_i]\big)^T \mathrm{vec}(W^T W) + 2\,\nabla_\theta \mathrm{vec}\big(\mathbb{E}[s_i\, \hat{y}(x_i,\theta)^T \,|\, x_i, s_i]\big)\, \mathrm{vec}\big(W^T \hat{P}_S^{-1/2}\big)$$
and
$$\nabla_w \hat{\psi}_i(\theta, W) = -2\, W\, \mathbb{E}[\hat{y}(x_i,\theta)\hat{y}(x_i,\theta)^T \,|\, x_i] + 2\, \hat{P}_S^{-1/2}\, \mathbb{E}[s_i\, \hat{y}(x_i,\theta)^T \,|\, x_i, s_i].$$
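A small numpy sketch (ours, not the authors' code) of the $W$-gradient in Lemma D.1 for a single example, taking the model probabilities $\mathcal{F}(x_i, \theta)$ and the one-hot sensitive attribute as inputs; the function and variable names are ours.

```python
import numpy as np

def grad_w_psi_i(W, probs, s_onehot, P_s_inv_sqrt):
    """W-gradient of psi_i per Lemma D.1.

    W            : (k, l) dual variable
    probs        : (l,)  model probabilities F_1(x_i, theta), ..., F_l(x_i, theta)
    s_onehot     : (k,)  one-hot encoding of the sensitive attribute s_i
    P_s_inv_sqrt : (k, k) diagonal matrix \\hat{P}_S^{-1/2}
    """
    E_yyT = np.diag(probs)               # E[y_hat y_hat^T | x_i], an (l x l) diagonal matrix
    E_syT = np.outer(s_onehot, probs)    # E[s_i y_hat^T | x_i, s_i], a (k x l) matrix
    return -2.0 * W @ E_yyT + 2.0 * P_s_inv_sqrt @ E_syT

# Tiny example with k = 2 sensitive groups and l = 3 classes.
W = np.zeros((2, 3))
probs = np.array([0.2, 0.5, 0.3])
s_onehot = np.array([1.0, 0.0])
P_s_inv_sqrt = np.diag(1.0 / np.sqrt(np.array([0.6, 0.4])))
print(grad_w_psi_i(W, probs, s_onehot, P_s_inv_sqrt))
```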
Using Theorem D.1, we can prove that Algorithm 1 is DP:
Theorem D.2 (Re-statement of Theorem 3.3). Let $\epsilon \le 2\ln(1/\delta)$, $\delta \in (0, 1)$, and $T \ge \big(\tfrac{n\epsilon}{\sqrt{2}\,m}\big)^2$. Assume $\mathcal{F}(x, \cdot)$ is $L_\theta$-Lipschitz for all $x$, and $|(W_t)_{r,j}| \le D$ for all $t \in [T]$, $r \in [k]$, $j \in [l]$. Then, for $\sigma_w^2 \ge \frac{16\, T \ln(1/\delta)}{\epsilon^2 n^2 \rho}$ and $\sigma_\theta^2 \ge \frac{16\, L_\theta^2 D^2 \ln(1/\delta)\, T}{\epsilon^2 n^2 \rho}$, Algorithm 1 is $(\epsilon, \delta)$-DP with respect to the sensitive attributes for all data sets containing at least a $\rho$-fraction of minority attributes. Further, if $\sigma_w^2 \ge \frac{32\, T \ln(1/\delta)}{\epsilon^2 n^2}\big(\tfrac{1}{\rho} + D^2\big)$ and $\sigma_\theta^2 \ge \frac{64\, L_\theta^2 D^2 \ln(1/\delta)\, T}{\epsilon^2 n^2 \rho} + \frac{32\, D^4 L_\theta^2 l^2\, T \ln(1/\delta)}{\epsilon^2 n^2}$, then Algorithm 1 is $(\epsilon, \delta)$-DP (with respect to all features) for all data sets containing at least a $\rho$-fraction of minority attributes.
Proof. First consider the case in which only the sensitive attributes are private. By the moments accountant Theorem 1 in Abadi et al. (2016), it suffices to bound the sensitivity of the gradient updates by ∆ 2 θ ď 8D 2 L 2 θ m 2 ρ and ∆ 2 w ď 8 m 2 ρ . Here
∆ 2 θ " sup Z"Z 1 ,θ,W › › › › › 1 m ÿ iPBt " ∇ θ p ψpθ, W ; z i q´∇ θ p ψpθ, W ; z 1 i q ı › › › › › 2
and Z " Z 1 means that Z and Z 1 are two data sets (both with ρ-fraction of minority attributes) that differ in exactly one person's sensitive attributes: i.e. s i ‰ s 1 i for some unique i P rns, but z j " z 1 j for all j ‰ i and px i , y i q " px 1 i , y 1 i q. Likewise,
∆ 2 w " sup Z"Z 1 ,θ,W › › › › › 1 m ÿ iPBt " ∇ w p ψpθ, W ; z i q´∇ w p ψpθ, W ; z 1 i q ı › › › › › 2 .
Now, by Theorem D.1,
∇ θ p ψ i pθ, W q "´∇ θ vecpErp ypx i , θqp ypx i , θq T |x i sq T vecpW T W q 2∇ θ vecpErs i p ypx i , θq T |x i , s i sq vecˆW T´p P S¯´1 {2˙,
and notice that only the second term depends on S. Therefore, we can bound the 2 -sensitivity of the θ-gradient updates by:
∆ 2 θ " sup Z"Z 1 ,W,θ › › › › › 1 m m ÿ i"1 2∇ θ vecpErs i p ypx i , θq T |x i , s i sq vecˆW T´p P S¯´1 {22 ∇ θ vecpErs 1 i p ypx i , θq T |x i , s 1 i sq vecˆW T´p P S 1¯´1 {2˙› › › › › 2 ď 4 m 2 sup x,si,s 1 i ,W,θ » - - k ÿ r"1 l ÿ j"1 }∇ θ F j pθ, xq} 2 W 2 r,j¨s i,r b p P S prq´s 1 i,r b p P S 1 prq‚ 2 fi ffi fl ď 8 ρm 2 sup x,W,θ˜l ÿ j"1 }∇ θ F j pθ, xq} 2 W 2 r,jḑ 8D 2 L 2 θ ρm 2 ,
using Lipschitz continuity of Fp¨, xq, the assumption that W has diameter bounded by D, the assumption that the data sets have at least ρ-fraction of sensitive attribute r for all r P rks. Similarly, for the W -gradients, we have
∇ w p ψ i pθ, W q "´2W Erp ypx i , θqp ypx i , θq T |x i s`2 p P´1 {2 S Ers i p ypx i , θq T |x i , s i s by Theorem D.1. Hence ∆ 2 W " sup θ,W,si,s 1 i 4 m 2 › › › › ›´W diagpF 1 pθ, x i q, . . . , F l pθ, x i qq`p P´1 {2 S Ers i p y i px i ; θ t q T |x i , s i s W diagpF 1 pθ, x i q, . . . , F l pθ, x i qq´p P´1 {2 S 1 Ers 1 i p y i px i ; θ t q T |x i , s 1 i s › › › › › 2 ď 4 m 2 sup θ,W,si,s 1 i l ÿ j"1 F j pθ, x i q 2 k ÿ r"1¨s i,r b p P S prq´s 1 i,r b p P S 1 prq‚ 2 ď 8 m 2 ρ , since ř l j"1 F j pθ, x i q 2 ď ř l j"1 F j pθ, x i q " 1.
This establishes the desired privacy guarantee with respect to sensitive attributes for Algorithm 1. Now consider the case in which all features are private. We aim to bound the sensitivities of the gradient updates to changes in a single sample z i " ps i , x i , y i q. Denote these new sensitivities bỹ
∆ θ " sup Z"Z 1 ,θ,W › › › › › 1 m ÿ iPBt " ∇ θ p ψpθ, W ; z i q´∇ θ p ψpθ, W ; z 1 i q ı › › › › › ,
where we now write Z " Z 1 to mean that Z and Z 1 are two data sets (both with ρ-fraction of minority attributes) that differ in exactly one person's (sensitive and non-sensitive) data: i.e. z i ‰ z 1 i for some unique i P rns. Likewise,
∆ W " sup Z"Z 1 ,θ,W › › › › › 1 m ÿ iPBt " ∇ w p ψpθ, W ; z i q´∇ w p ψpθ, W ; z 1 i q ı › › › › › . Theñ ∆ θ " 1 m sup zi,z 1 i ,θ,W,S"S 1 › › › › ›´∇ θ vecpErp ypx i , θqp ypx i , θq T |x i sq T vecpW T W q`2∇ θ vecpErs i p ypx i , θq T |x i , s i sq vecˆW T´p P S¯´1 {2˙`∇ θ vecpErp ypx 1 i , θqp ypx 1 i , θq T |x 1 i sq T vecpW T W q 2∇ θ vecpErs 1 i p ypx 1 i , θq T |x 1 i , s 1 i sq vecˆW T´p P S 1¯´1 {2˙› › › › › ď 2L θ lD m`∆ θ .
Thus,∆ 2 θ ď 4L 2 θ l 2 D 2 m 2`2 ∆ 2 θ . Therefore, by the moments accountant, the collection of all θ t updates in Algorithm 1 is p , δq-DP if σ 2 θ ě
32D 2 L 2 θ T lnp1{δq ρ 2 n 2`8 D 2 L 2 θ l 2 T lnp1{δq 2 n 2 " 8L 2 θ D 2 T lnp1{δq 2 n 2´4 ρ`l 2¯.
Next, we bound the sensitivity∆ W of the W -gradient updates. We havẽ
∆ 2 W " sup θ,W,zi,z 1 i 4 m 2 › › › › ›´W diagpF 1 pθ, x i q, . . . , F l pθ, x i qq`p P´1 {2 S Ers i p y i px i ; θ t q T |x i , s i s W diagpF 1 pθ, x 1 i q, . . . , F l pθ, x 1 i qq´p P´1 {2 S 1 Ers 1 i p y T i px 1 i ; θ t q|x 1 i , s 1 i s › › › › › 2 ď 2∆ 2 W`8 m 2 sup θ,W,xi,x 1 i › › › › › W diagpF 1 pθ, x i q´F 1 pθ, x 1 i q, . . . , F l pθ, x i q´F l pθ, x 1 i qq › › › › › 2 ď 2∆ 2 W`1 6D 2 m 2 sup θ,xi l ÿ j"1 F j pθ, x i q 2 ď 2∆ 2 W`1 6D 2 m 2 .
Therefore, by the moments accountant, the collection of all W t updates in Algorithm 1 is p , δq-DP if σ 2 w ě 32T lnp1{δq
2 n 2´1 ρ`D
2¯. This completes the proof.
E DP-FERMI ALGORITHM: UTILITY
To prove Theorem 3.4, we will first derive a more general result. Namely, in Appendix E.1, we will provide a precise upper bound on the stationarity gap of noisy DP stochastic gradient descent ascent (DP-SGDA).
E.1 NOISY DP-SGDA FOR NONCONVEX-STRONGLY CONCAVE MIN-MAX PROBLEMS
Consider a generic (smooth) nonconvex-strongly concave min-max ERM problem:
$$\min_{\theta \in \mathbb{R}^{d_\theta}} \max_{w \in \mathcal{W}} \Big\{ F(\theta, w) := \frac{1}{n}\sum_{i=1}^n f(\theta, w; z_i) \Big\}, \qquad (7)$$
where $f(\theta, \cdot; z)$ is $\mu$-strongly concave for all $\theta, z$ but $f(\cdot, w; z)$ is potentially non-convex. We propose Noisy DP-SGDA (Algorithm 2) for privately solving (7), which is a noisy DP variation of two-timescale SGDA (Lin et al., 2020).

Algorithm 2 Noisy Differentially Private Stochastic Gradient Descent-Ascent (DP-SGDA)
1: Input: data $Z$, $\theta_0 \in \mathbb{R}^{d_\theta}$, $w_0 \in \mathcal{W}$, step-sizes $(\eta_\theta, \eta_w)$, privacy noise parameters $\sigma_\theta, \sigma_w$, batch size $m$, iteration number $T \ge 1$.
2: for $t = 0, 1, \ldots, T-1$ do
3: Draw a batch of data points $\{z_i\}_{i=1}^m$ uniformly at random from $Z$.
4: Update $\theta_{t+1} \leftarrow \theta_t - \eta_\theta \big( \frac{1}{m}\sum_{i=1}^m \nabla_\theta f(\theta_t, w_t; z_i) + u_t \big)$, where $u_t \sim N(0, \sigma_\theta^2 I_{d_\theta})$, and $w_{t+1} \leftarrow \Pi_{\mathcal{W}}\big[ w_t + \eta_w \big( \frac{1}{m}\sum_{i=1}^m \nabla_w f(\theta_t, w_t; z_i) + v_t \big) \big]$, where $v_t \sim N(0, \sigma_w^2 I_{d_w})$.
5: end for
6: Draw $\hat{\theta}_T$ uniformly at random from $\{\theta_t\}_{t=1}^T$.
7: Return: $\hat{\theta}_T$

Now, we provide the first theoretical convergence guarantee for DP non-convex min-max optimization:

Theorem E.1 (Privacy and Utility of Algorithm 2, Informal Version). Let $\epsilon \le 2\ln(1/\delta)$, $\delta \in (0, 1)$. Assume: $f(\cdot, w; z)$ is $L_\theta$-Lipschitz and $f(\theta, \cdot; z)$ is $L_w$-Lipschitz for all $\theta, w, z$; and $\mathcal{W} \subset \mathbb{R}^{d_w}$ is a convex, compact set. Denote $\Phi(\theta) = \max_{w \in \mathcal{W}} F(\theta, w)$. Choose $\sigma_w^2 = \frac{8 T L_w^2 \ln(1/\delta)}{\epsilon^2 n^2}$, $\sigma_\theta^2 = \frac{8 T L_\theta^2 \ln(1/\delta)}{\epsilon^2 n^2}$, and $T \ge \big(\tfrac{n\epsilon}{\sqrt{2}\,m}\big)^2$. Then, Algorithm 2 is $(\epsilon, \delta)$-DP. Further, if $f(\cdot, \cdot; z)$ has Lipschitz gradients and $f(\theta, \cdot; z)$ is strongly concave, then there exist $T, \eta_\theta, \eta_w$ such that
$$\mathbb{E}\|\nabla\Phi(\hat{\theta}_T)\|^2 = O\left( \frac{\sqrt{d \ln(1/\delta)}}{\epsilon n} \right),$$
where $d = \max(d_\theta, d_w)$. (The expectation is solely over the algorithm.)
In our DP fair learning application, $f(\theta, W; z_i) = \ell(\theta, x_i, y_i) + \lambda \hat{\psi}_i(\theta, W)$ and the strong concavity assumption on $f$ in Theorem E.1 is automatically satisfied, by Lowy et al. (2022a). The Lipschitz and smoothness assumptions on $f$ are standard in the optimization literature and are satisfied for loss functions that are typically used in practice. In our application to DP-FERMI, these assumptions hold as long as the loss function $\ell$ and $\mathcal{F}$ are Lipschitz continuous with Lipschitz gradients. Our next goal is to prove (the precise, scale-invariant version of) Theorem E.1. To that end, we require the following notation.
Notation and Assumptions: Let $f: \mathbb{R}^{d_\theta} \times \mathbb{R}^{d_w} \times \mathcal{Z} \to \mathbb{R}$, and $F(\theta, w) = \frac{1}{n}\sum_{i=1}^n f(\theta, w; z_i)$ for fixed training data $Z = (z_1, \cdots, z_n) \in \mathcal{Z}^n$. Let $\mathcal{W} \subset \mathbb{R}^{d_w}$ be a convex, compact set. For any $\theta \in \mathbb{R}^{d_\theta}$, denote $w^*(\theta) \in \arg\max_{w \in \mathcal{W}} F(\theta, w)$ and $\hat{\Phi}(\theta) = \max_{w \in \mathcal{W}} F(\theta, w)$. Let $\Delta_\Phi = \hat{\Phi}(\theta_0) - \inf_\theta \hat{\Phi}_Z(\theta)$. Recall that a function $h$ is $\beta$-smooth if its derivative $\nabla h$ is $\beta$-Lipschitz. We write $a \lesssim b$ if there is an absolute constant $C > 0$ such that $a \le Cb$.

Assumption E.2.
1. $f(\cdot, w; z)$ is $L_\theta$-Lipschitz and $\beta_\theta$-smooth for all $w \in \mathcal{W}$, $z \in \mathcal{Z}$.
2. $f(\theta, \cdot; z)$ is $L_w$-Lipschitz, $\beta_w$-smooth, and $\mu$-strongly concave on $\mathcal{W}$ for all $\theta \in \mathbb{R}^{d_\theta}$, $z \in \mathcal{Z}$.
3. $\|\nabla_w f(\theta, w; z) - \nabla_w f(\theta', w; z)\| \le \beta_{\theta w}\|\theta - \theta'\|$ and $\|\nabla_\theta f(\theta, w; z) - \nabla_\theta f(\theta, w'; z)\| \le \beta_{\theta w}\|w - w'\|$ for all $\theta, \theta', w, w', z$.
4. $\mathcal{W}$ has $\ell_2$ diameter bounded by $D \ge 0$.
5. $\nabla_w F(\theta, w^*(\theta)) = 0$ for all $\theta$, where $w^*(\theta)$ denotes the unconstrained global maximizer of $F(\theta, \cdot)$.
The first four assumptions are standard in (DP and min-max) optimization. The fifth assumption means that $\mathcal{W}$ contains the unconstrained global maximizer $w^*(\theta)$ of $F(\theta, \cdot)$ for all $\theta$. Hence (7) is equivalent to
$$\min_{\theta \in \mathbb{R}^{d_\theta}} \max_{w \in \mathbb{R}^{d_w}} F(\theta, w).$$
This assumption is not actually necessary for our convergence result to hold, but we will need it when we apply our results to the DP fairness problem. Moreover, it simplifies the proof of our convergence result. We refer to problems of the form (7) that satisfy Theorem E.2 as "(smooth) nonconvex-strongly concave min-max." We denote κ w :" βw µ and κ θw :" β θw µ . We can now provide the complete, precise version of Theorem E.1: Theorem E.3 (Privacy and Utility of Algorithm 2, Formal Version). Let ď 2 lnp1{δq, δ P p0, 1q.
Grant Theorem E.2. Choose σ 2 w " 8T L 2 w lnp1{δq 2 n 2 , σ 2 θ " 8T L 2 θ lnp1{δq 2 n 2
, and T ě´n ? 2m¯2 . Then Algorithm 2 is p , δq-DP. Further, if we choose η θ " 1 16κwpβ θ`βθw κ θw q , η w " 1 βw , and
T « a κ w r∆ Φ pβ θ`βθw κ θw q`β 2 θw D 2 s n min´1 L θ ? d θ , βw β θw Lw ?
κwdw¯, then
E}∇Φpθ T q} 2 À b ∆ Φ pβ θ`βθw κ θw qκ w`κw β 2 θw D 2 q « L θ a d θ lnp1{δq n`ˆβ θw ? κ w β w˙L w a d w lnp1{δq n f 1 tmănu mˆL 2 θ`κ w β 2 θw L 2 w β 2 w˙.
In particular, if m ě minˆ
nL θ ? d θ κwr∆Φpβ θ`βθw κ θw q`β 2 θw D 2 s , nLw ? κw β θw βw ? dwκwr∆Φpβ θ`βθw κ θw q`β 2 θw D 2 s˙, then E}∇Φpθ T q} 2 À b κ w r∆ Φ pβ θ`βθw κ θw q`β 2 θw D 2 s˜a lnp1{δq n¸ˆL θ a d θ`ˆβ θw ? κ w β w˙L w a d w˙.
The proof of Theorem E.3 will require several technical lemmas. These technical lemmas, in turn, require some preliminary lemmas, which we present below.
We begin with a refinement of Lemma 4.3 from Lin et al. (2020):
Lemma E.4. Grant Theorem E.2. Then $\Phi$ is $2(\beta_\theta + \beta_{\theta w}\kappa_{\theta w})$-smooth with $\nabla\Phi(\theta) = \nabla_\theta F(\theta, w^*(\theta))$, and $w^*(\cdot)$ is $\kappa_w$-Lipschitz.
Proof. The proof follows almost exactly as in the proof of Lemma 4.3 of Lin et al. (2020), using Danskin's theorem, but we carefully track the different smoothness parameters with respect to w and θ (and their units) to obtain the more precise result.
Lemma E.5 (Lei et al. (2017)). Let $\{a_l\}_{l \in [n]}$ be an arbitrary collection of vectors such that $\sum_{l=1}^n a_l = 0$. Further, let $S$ be a uniformly random subset of $[n]$ of size $m$. Then,
$$\mathbb{E}\Big\| \frac{1}{m}\sum_{l \in S} a_l \Big\|^2 = \frac{n - m}{(n-1)m} \cdot \frac{1}{n}\sum_{l=1}^n \|a_l\|^2 \le \frac{\mathbf{1}_{\{m < n\}}}{m} \cdot \frac{1}{n}\sum_{l=1}^n \|a_l\|^2.$$
Lemma E.6 (Co-coercivity of the gradient). For any $\beta$-smooth and convex function $g$, we have $\|\nabla g(a) - \nabla g(b)\|^2 \le 2\beta\big( g(a) - g(b) - \langle \nabla g(b), a - b \rangle \big)$ for all $a, b \in \mathrm{domain}(g)$.
Having recalled the necessary preliminaries, we now provide the novel technical ingredients that we'll need for the proof of Theorem E.3. The next lemma quantifies the progress made in minimizing Φ from a single step of noisy stochastic gradient descent in θ (i.e. line 4 of Algorithm 2):
Lemma E.7. For all t P rT s, the iterates of Algorithm 2 satisfy
EΦpθ t q ď Φpθ t´1 q´´η θ 2´2 pβ θ`βθw κ θw qη 2 θ¯E }∇Φpθ t´1 q} 2´η θ 2`2 η 2 θ pβ θ`βθw κ θw qE}∇Φpθ t´1 q´∇ θ F pθ t´1 , w t´1 q} 2¯`p β θ β θw κ θw qη 2 θˆdθ σ 2 θ`4 L 2 θ m 1 tmănu˙, conditional on θ t´1 , w t´1 .
Proof. Let us denote r g :" 1 m ř m i"1 ∇ θ f pθ t´1 , w t´1 ; z i q`u t´1 :" g`u t´1 , so θ t " θ t´1´ηθ r g. First condition on the randomness due to sampling and Gaussian noise addition. By smoothness of Φ (see Theorem E.4), we have Φpθ t q ď Φpθ t´1 q´η θ xr g, ∇Φpθ t´1 qy`pβ θ`βθw κ θw qη 2 θ }r g} 2 " Φpθ t´1 q´η θ }∇Φpθ t´1 q} 2´η θ xr g´∇Φpθ t´1 q, ∇Φpθ t´1 qy`pβ θ`βθw κ θw qη 2 θ }r g} 2 .
Taking expectation (conditional on θ t´1 , w t´1 ),
ErΦpθ t qs ď Φpθ t´1 q´η θ }∇Φpθ t´1 q} 2´η θ x∇ θ F pθ t´1 , w t´1 q´∇Φpθ t´1 q, ∇Φpθ t´1 qỳ pβ θ`βθw κ θw qη 2
In the first inequality, we used the fact that the Gaussian noise has mean zero and is independent of pθ t´1 , w t´1 , Zq, plus the fact that Eg " ∇ θ F pθ t´1 , w t´1 q. In the second inequality, we used Theorem E.5 and Lipschitz continuity of f . In the third and fourth inequalities, we used Young's inequality and Cauchy-Schwartz.
For the particular η θ prescribed in Theorem E.3, we obtain: Lemma E.8. Grant Theorem E.2. If η θ " 1 16κwpβ θ`βθw κ θw q , then the iterates of Algorithm 2 satisfy (@t ě 0)
EΦpθ t`1 q ď E " Φpθ t q´3 8 η θ }Φpθ t q} 2`5 8 η θˆβ 2 θw }w˚pθ t q´w t } 2`d θ σ 2 θ`4 L 2 θ m 1 tmănu˙ .
Proof. By Theorem E.7, we have
EΦpθ t`1 q ď EΦpθ t q´´η θ 2´2 pβ θ`βθw κ θw qη 2 θ¯E }∇Φpθ t q} 2´η θ 2`2 η 2 θ pβ θ`βθw κ θw qE}∇Φpθ t q´∇ θ F pθ t , w t q} 2¯`p β θ β θw κ θw qη 2 θˆdθ σ 2 θ`4 L 2 θ m 1 tmănuď EΦpθ t q´3 8 η θ E}∇Φpθ t q} 2`5 8 η θ " E}∇Φpθ t q´∇ θ F pθ t , w t q} 2`d θ σ 2 θ`4 L 2 θ m 1 tmănu ď EΦpθ t q´3 8 η θ E}∇Φpθ t q} 2`5 8 η θ " β 2 θw E}w˚pθ t q´w t } 2`d θ σ 2 θ`4 L 2 θ m 1 tmănu .
In the second inequality, we simply used the definition of η θ . In the third inequality, we used the fact that ∇Φpθ t q " ∇ θ F pθ t , w˚pθ t qq (see Theorem E.4) together with Theorem E.2 (part 3).
Next, we describe the progress made in the w t updates: Lemma E.9. Grant Theorem E.2. If η w " 1 βw , then the iterates of Algorithm 2 satisfy (@t ě 0)
E}w˚pθ t`1 q´w t`1 } 2 ďˆ1´1 2κ w`4 κ w κ 2 θw η 2 θ β 2 θw˙E }w˚pθ t q´w t } 2`2 β 2 wˆ4 L 2 w m 1 tmănu`dw σ 2 w4 κ w κ 2 θw η 2 θ`E }∇Φpθ t q} 2`d θ σ 2 θ˘.
Proof of Theorem E.3. Privacy: This is an easy consequence of Theorem 1 in Abadi et al. (2016) (with precise constants obtained from the proof therein, as in Bassily et al. (2019)) applied to both the min (descent in θ) and max (ascent in w) updates. Unlike Abadi et al. (2016), we don't clip the gradients here before adding noise, but the Lipschitz continuity assumptions (Theorem E.2 parts 1 and 2) imply that the 2 -sensitivity of the gradient updates in lines 4 and 5 of Algorithm 2 are nevertheless bounded by 2L θ {m and 2L w {m, respectively. Thus, Theorem 1 in Abadi et al. (2016) still applies. Convergence: Denote ζ :" 1´1 2κw`4 κ w κ 2 θw η 2 θ β 2 θw , δ t " E}w˚pθ t q´w t } 2 , and
C t :" 2 β 2 wˆ4 L 2 w m 1 tmănu`dw σ 2 w˙`4 κ w κ 2 θw η 2 θ`E }∇Φpθ t q} 2`d θ σ 2 1 tmănuˆL 2 θ`κ w β 2 θw L 2 w β 2 w˙`κ w β 2 θw L 2 w d w T lnp1{δq β 2 w 2 n 2 β 2 θw D 2 κ w T .
Our choice of T then implies
E}∇Φpθ T q} 2 À b ∆ Φ pβ θ`βθw κ θw qκ w`κw β 2 θw D 2 q « L θ a d θ lnp1{δq n`ˆβ θw ? κ w β w˙L w a d w lnp1{δq n f 1 tmănu mˆL 2 θ`κ w β 2 θw L 2 w β 2 w˙.
Finally, our choice of sufficiently large m yields the last claim in Theorem E.3.
E.2 PROOF OF THEOREM 3.4
Theorem 3.4 is an easy consequence of Theorem E.1, which we proved in the above subsection: Theorem E.10 (Re-statement of Theorem 3.4). Assume the loss function p¨, x, yq and Fpx,¨q are Lipschitz continuous with Lipschitz gradient for all px, yq, and p P S prq ě ρ ą 0 @ r P rks. In Algorithm 1, choose W to be a sufficiently large ball that contains W˚pθq :" argmax W p F pθ, W q for every θ in some neighborhood of θ˚P argmin θ max W p F pθ, W q. Then there exist algorithmic parameters such that the p , δq-DP Algorithm 1 returnsθ T with E}∇FERMIpθ T q} 2 " O˜a maxpd θ , klq lnp1{δq n¸,
treating D " diameterpWq, λ, ρ, l, and the Lipschitz and smoothness parameters of and F as constants.
Proof. By Theorem E.1, it suffices to show that f pθ, W ; z i q :" pθ, x i , y i q`λ p ψ i pθ, W q is Lipschitz continuous with Lipschitz gradient in both the θ and W variables for any z i " px i , y i , s i q, and that f pθ,¨; z i q is strongly concave. We assumed p¨, x i , y i q is Lipschitz continuous with Lipschitz gradient. Further, the work of Lowy et al. (2022a) showed that f pθ,¨; z i q is strongly concave. Thus, it suffices to show that p ψ i pθ, W q is Lipschitz continuous with Lipschitz gradient. This clearly holds by Theorem D.1, since Fpx,¨q is Lipschitz continuous with Lipschitz gradient and W P W is bounded.
F NUMERICAL EXPERIMENTS: ADDITIONAL DETAILS AND RESULTS
F.1 MEASURING DEMOGRAPHIC PARITY AND EQUALIZED ODDS VIOLATION
We used the expressions given in (10) and (11) to measure the demographic parity violation and the equalized odds violation, respectively. We denote $\mathcal{Y}$ to be the set of all possible output classes and $\mathcal{S}$ to be the classes of the sensitive attribute. $P[E]$ denotes the empirical probability of the occurrence of an event $E$.
$$\max_{y' \in \mathcal{Y},\; s_1, s_2 \in \mathcal{S}} \big| P[\hat{y} = y' \mid s = s_1] - P[\hat{y} = y' \mid s = s_2] \big| \qquad (10)$$
$$\max_{y' \in \mathcal{Y},\; s_1, s_2 \in \mathcal{S}} \max\Big( \big| P[\hat{y} = y' \mid s = s_1, y = y'] - P[\hat{y} = y' \mid s = s_2, y = y'] \big|,\; \big| P[\hat{y} = y' \mid s = s_1, y \neq y'] - P[\hat{y} = y' \mid s = s_2, y \neq y'] \big| \Big) \qquad (11)$$
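A small sketch (ours, not the authors' evaluation code) of how (10) and (11) can be computed from arrays of predictions, true labels, and sensitive attributes; the function names are ours.

```python
import numpy as np

def demographic_parity_violation(y_pred, s):
    """Max over classes and group pairs of |P[y_hat = y' | s = s1] - P[y_hat = y' | s = s2]|, as in (10)."""
    gaps = []
    for y_prime in np.unique(y_pred):
        rates = [np.mean(y_pred[s == g] == y_prime) for g in np.unique(s)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

def equalized_odds_violation(y_pred, y_true, s):
    """Max over classes and group pairs of the conditional gaps, as in (11)."""
    gaps = []
    for y_prime in np.unique(y_true):
        for cond in (y_true == y_prime, y_true != y_prime):   # condition on y = y' and on y != y'
            rates = [np.mean(y_pred[(s == g) & cond] == y_prime) for g in np.unique(s)]
            gaps.append(max(rates) - min(rates))
    return max(gaps)

# Example usage on synthetic data.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
s = rng.integers(0, 2, 1000)
y_pred = (y_true + (rng.random(1000) < 0.1)) % 2
print(demographic_parity_violation(y_pred, s), equalized_odds_violation(y_pred, y_true, s))
```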
F.2 TABULAR DATASETS

F.2.1 MODEL DESCRIPTION AND EXPERIMENTAL DETAILS
Demographic Parity: We split each dataset in a 3:1 train:test ratio. We preprocess the data similar to Hardt et al. (2016a) and use a simple logistic regression model with a sigmoid output O = σ(Wx + b), which we treat as conditional probabilities p(ŷ = i | x). The predicted variables and sensitive attributes are both binary in this case across all the datasets. We analyze fairness-accuracy trade-offs with four different values of ε ∈ {0.5, 1, 3, 9} for each dataset. We compare against state-of-the-art algorithms proposed in Tran et al. (2021a) and (the demographic parity objective of) Tran et al. (2021b). The tradeoff curves for DP-FERMI were generated by sweeping across different values for λ ∈ [0, 2.5]. The learning rates for the descent and ascent, η_θ and η_w, remained constant during the optimization process and were chosen from [0.005, 0.01]. Batch size was 1024. We tuned the ℓ2 diameter of the projection set W and the θ-gradient clipping threshold in [1, 5] in order to generate stable results with high privacy (i.e. low ε). Each model was trained for 200 epochs. The results displayed are averages over 15 trials (random seeds) for each value of ε.
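For these binary tabular tasks, the sigmoid output described above plays the role of F(x, θ) from Section 3; a minimal sketch (ours) of that two-class probability vector:

```python
import numpy as np

def logistic_probs(theta, b, x):
    """Two-class probability vector F(x, theta) = (1 - sigma(theta^T x + b), sigma(theta^T x + b))."""
    p1 = 1.0 / (1.0 + np.exp(-(x @ theta + b)))
    return np.array([1.0 - p1, p1])

# Example: a single 102-dimensional Adult-style feature vector.
rng = np.random.default_rng(0)
theta, b, x = rng.normal(size=102) * 0.01, 0.0, rng.normal(size=102)
print(logistic_probs(theta, b, x))
```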
Equalized Odds: We replicated the experimental setup described above, but we took the ℓ2 diameter of W and the value of gradient clipping for θ to be in [1, 2]. Also, we only tested three values of ε ∈ {0.5, 1, 3}.
F.2.2 DESCRIPTION OF DATASETS
Adult Income Dataset: This dataset contains the census information about the individuals. The classification task is to predict whether the person earns more than 50k every year or not. We followed a preprocessing approach similar to Lowy et al. (2022a). After preprocessing, there were a total of 102 input features from this dataset. The sensitive attribute for this work in this dataset was taken to be gender. This dataset consists of around 48,000 entries spanning across two CSV files, which we combine and then we take the train-test split of 3:1.
Retired Adult Income Dataset: The Retired Adult Income Dataset proposed by Ding et al. (2021a) is essentially a superset of the Adult Income Dataset which attempts to counter some caveats of the Adult dataset. The input and the output attributes for this dataset is the same as that of the Adult Dataset and the sensitive attribute considered in this work is the same as that of the Adult. This dataset contains around 45,000 entries.
Parkinsons Dataset: In the Parkinsons dataset, we use the part of the dataset which had the UPDRS scores along with some of the features of the recordings obtained from individuals affected and not affected by Parkinsons disease. The classification task was to predict from the features whether the UPDRS score was greater than the median score or not. After preprocessing, there were a total of 19 input features from this dataset and the sensitive attribute for this dataset was taken to be gender. This dataset contains around 5800 entries in total. We took a train-test split of 3:1.
Credit Card Dataset: This dataset contains the financial data of users in a bank in Taiwan consisting of their gender, education level, age, marital status, previous bills, and payments. The assigned classification task is to predict whether the person defaults on their credit card bills or not, essentially determining whether the client is creditworthy. We followed a preprocessing approach similar to Lowy et al. (2022a). After preprocessing, there were a total of 85 input features from this dataset. The sensitive attribute for this dataset was taken to be gender. This dataset consists of around 30,000 entries from which we take the train-test split of 3:1.
UTK-Face Dataset:
This is a large scale image dataset with an age span from 0 to 116. The dataset consists of over 20,000 face images with details of age, gender, and ethnicity, and covers large variation in pose, facial expression, illumination, occlusion, and resolution. We consider the age classification task with 9 age groups similar to the experimental setup in Tran et al. (2022). We consider the sensitive attribute to be the ethnicity, which consists of 5 different classes.

Additional Results for Parkinsons Dataset: More results for Parkinsons are shown in Fig. 8. DP-FERMI offers the best performance.
F.2.4 EQUALIZED ODDS
Equalized Odds Variation of DP-FERMI Algorithm: The (FERMI obj.) minimizes the Exponential Rényi Mutual Information (ERMI) between the output and the sensitive attributes, which essentially leads to a reduced demographic parity violation. The equalized-odds condition is more constrained and enforces the demographic parity condition for data grouped according to labels. For equalized odds, the ERMI between the predicted and the sensitive attributes is minimized conditional on each of the labels present in the output variable of the dataset. So, the FERMI regularizer is split into as many parts as the number of labels in the output. This enforces each part of the FERMI regularizer to minimize the ERMI while an output label is given/constant. Each part has its own unique W that is maximized in order to create a stochastic estimator for the ERMI with respect to each of the output labels.
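Schematically, the conditional regularizer can be estimated by computing ERMI separately within each true-label group and weighting by the empirical label frequencies; the sketch below (ours, not the authors' implementation) does this with a plug-in estimate and is only meant to illustrate the label-wise splitting described above.

```python
import numpy as np

def ermi(y_pred, s, n_classes, n_groups):
    """Plug-in ERMI between predictions and sensitive attributes."""
    n = len(y_pred)
    joint = np.zeros((n_classes, n_groups))
    for j, r in zip(y_pred, s):
        joint[j, r] += 1.0 / n
    marg = np.outer(joint.sum(1), joint.sum(0))
    mask = marg > 0
    return (joint[mask] ** 2 / marg[mask]).sum() - 1.0

def conditional_ermi(y_pred, y_true, s, n_classes, n_groups):
    """Equalized-odds style regularizer: label-frequency-weighted ERMI within each true-label group."""
    total = 0.0
    for y in np.unique(y_true):
        idx = (y_true == y)
        total += idx.mean() * ermi(y_pred[idx], s[idx], n_classes, n_groups)
    return total
```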
Adult Results: Results for the equalized odds version of DP-FERMI on the Adult dataset are shown in Fig. 9. Our approach outperforms the previous state-of-the-art methods.

F.3 IMAGE DATASET (UTK-FACE)

We split the dataset in a 3:1 train:test ratio. Batch size was 64. 128 x 128 normalized images were used as input for our model. We tuned the ℓ2 diameter of W and the value of gradient clipping for θ to be in [1, 2], and the learning rates for the descent and ascent, η_θ and η_w, remained constant during the optimization process and were chosen as 0.001 and 0.005, respectively. We analyze the fairness-accuracy trade-offs with five different values of ε ∈ {10, 25, 50, 100, 200}. The results displayed were averaged over observations obtained from 5 different randomly chosen seeds on each configuration of ε and a dataset. Each model was trained for 150 epochs. The tradeoff curves for this set of experiments were obtained by sweeping across different values for λ ∈ [0, 500].
G SOCIETAL IMPACTS
In this paper, we considered the socially consequential problem of privately learning fair models from sensitive data. Motivated by the lack of scalable private fair learning methods in the literature, we developed the first differentially private (DP) fair learning algorithm that is guaranteed to converge with small batches (stochastic optimization). We hope that our method will be used to help companies, governments, and other organizations to responsibly use sensitive, private data. Specifically, we hope that our DP-FERMI algorithm will be useful in reducing discrimination in algorithmic decision-making while simultaneously preventing leakage of sensitive user data. The stochastic nature of our algorithm might be especially appealing to companies that are using very large models and datasets. On the other hand, there are also some important limitations of our method that need to be considered before deployment.
One caveat of our work is that we have assumed that the given data set contains fair and accurate labels. For example, if gender is the sensitive attribute and "likelihood of repaying a loan" is the target, then we assume that the training data accurately describes everyone's financial history without discrimination. If training data is biased against a certain demographic group, then it is possible that our algorithm could amplify (rather than mitigate) unfairness. See e.g. Kilbertus et al. (2020); Bechavod et al. (2019) for further discussion.
Another important practical consideration is how to weigh/value the different desiderata (privacy, fairness, and accuracy) when deploying our method. As shown in prior works (e.g., Cummings et al. (2019)) and re-enforced in the present work, there are fundamental tradeoffs between fairness, accuracy, and privacy: improvements in one generally come at a cost to the other two. Determining the relative importance of each of these three desiderata is a critical question that lacks a clear or general answer. Depending on the application, one might be seriously concerned with either discrimination or privacy attacks, and should calibrate ε and λ accordingly. Or, perhaps very high accuracy is necessary for a particular task, with privacy and/or fairness as an afterthought. In such a case, one might want to start with very large ε and small λ to ensure high accuracy, and then gradually shrink ε and/or increase λ to improve privacy/fairness until training accuracy dips below a critical threshold. A thorough and rigorous exploration of these issues could be an interesting direction for future work.
Figure 1: Comparison with existing works. "Guarantee" refers to provable guarantee. N/A: the post-processing method of Jagielski et al. (2019) is not an iterative algorithm. *Method requires access to the sensitive data at test time. The in-processing method of Jagielski et al. (2019) is inefficient. The work of Mozannar et al. (2020) specializes to equalized odds, but most of their analysis seems to be extendable to other fairness notions.
for t " 0, 1, . . . , T do 4:
Figure 3: Private, Fair (Demographic Parity) logistic regression on Parkinsons Dataset
Figure 4: Private, Fair (Equalized Odds) logistic regression on Credit Card Dataset
Figure 5: DP-FERMI on a Deep CNN for Image Classification on UTK-Face
F. 2 Figure 6 :
26Results: See Fig. 6 for our results on Retired Adult Dataset. The results are qualitatively similar to the reusults reported in the main body: our algorithm (DP-FERMI) achieves the most favorable fairness-accuracy tradeoffs across all privacy levels. Private, fair logistic regression on the Retired Adult Dataset Credit Card Results: See Fig. 7 for our results on Credit Card Dataset. DP-FERMI offers superior fairness-accuracy-privacy profile compared to all applicable baselines.
Figure 7 :" 9 Figure 8 :
798Private, fair (demographic parity) logistic regression on the Credit Card Dataset (a) " 0.5 (b) Private, Fair (Demogrpahic Parity) Logistic regression on Parkinsons Dataset
RetiredFigure 9 :Figure 10 :
910Adult Results: (Initial) Results for the equalized odds version of DP-FERMI on the retiredadult dataset are shown in Fig. 10. Our approach outperforms Tran et al. (2021b) and we are currently tuning our non-private and/or non-fair versions of our models along with Jagielski et al. (2019). Results obtained for applying DP-FERMI with equalized odds violation on logistic regression on the Adult Dataset Results obtained for applying DP-FERMI with equalized odds violation on logistic regression on the Retired Adult Dataset F.3 IMAGE DATASET (UTK-FACE)
This work was supported in part with funding from the NSF CAREER award 2144985, from the YIP AFOSR award, from a gift from the USC-Meta Center for Research and Education in AI & Learning, and from a gift from the USC-Amazon Center on Secure & Trusted Machine Learning.Rachel Cummings, Varun Gupta, Dhamma Kimpara, and Jamie Morgenstern. On the compatibility of privacy and fairness. In Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization, pp. 309-315, 2019. Amit Datta, Michael Carl Tschantz, and Anupam Datta. Automated experiments on ad privacy settings. Proceedings on privacy enhancing technologies, 2015(1):92-112, 2015. Frances Ding, Moritz Hardt, John Miller, and Ludwig Schmidt. Retiring adult: New datasets for fair machine learning. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), Advances in Neural Information Processing Systems, volume 34, pp. 6478-6490. Curran Associates, Inc., 2021a. URL https://proceedings.neurips.cc/paper/2021/file/ 32e54441e6382a7fbacbbbaf3c450059-Paper.pdf. Jiahao Ding, Xinyue Zhang, Xiaohuan Li, Junyi Wang, Rong Yu, and Miao Pan. Differentially private and fair classification via calibrated functional mechanism. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 622-629, 2020. Jiahao Ding, Guannan Liang, Jinbo Bi, and Miao Pan. Differentially private and communication efficient collaborative learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 7219-7227, 2021b. Michele Donini, Luca Oneto, Shai Ben-David, John S Shawe-Taylor, and Massimiliano Pontil. Empirical risk minimization under fairness constraints. In Advances in Neural Information Processing Systems, pp. 2791-2801, 2018. Dheeru Dua and Casey Graff. UCI machine learning repository, 2017. URL http://archive. ics.uci.edu/ml. Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Theory of cryptography conference, pp. 265-284. Springer, 2006. Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In Proceedings of the 3rd innovations in theoretical computer science conference, pp. 214-226, 2012. Michael Feldman, Sorelle A Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. Certifying and removing disparate impact. In proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining, pp. 259-268, 2015. Benjamin Fish, Jeremy Kun, and Ádám D Lelkes. A confidence-based approach for balancing fairness and accuracy. In Proceedings of the 2016 SIAM International Conference on Data Mining, pp. 144-152. SIAM, 2016. Matt Fredrikson, Somesh Jha, and Thomas Ristenpart. Model inversion attacks that exploit confidence information and basic countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pp. 1322-1333, 2015. Tianyi Lin, Chi Jin, and Michael Jordan. On gradient descent ascent for nonconvex-concave minimax problems. In International Conference on Machine Learning, pp. 6083-6093. PMLR, 2020. Andrew Lowy and Meisam Razaviyayn. Output perturbation for differentially private convex optimization with improved population loss bounds, runtimes and applications to private adversarial training. arXiv preprint:2102.04704, 2021. Andrew Lowy, Ali Ghafelebashi, and Meisam Razaviyayn. Private non-convex federated learning without a trusted server. 
arXiv preprint arXiv:2203.06735, 2022b. Jérémie Mary, Clément Calauzenes, and Noureddine El Karoui. Fairness-aware learning for continuous attributes and treatments. In International Conference on Machine Learning, pp. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2015. D Wang, M Ye, and J Xu. Differentially private empirical risk minimization revisited: Faster and more general. In Proc. 31st Annual Conference on Advances in Neural Information Processing Systems (NIPS 2017), 2017. Lingxiao Wang, Bargav Jayaraman, David Evans, and Quanquan Gu. Efficient privacy-preserving stochastic nonconvex optimization. arXiv preprint arXiv:1910.13659, 2019. Zhenhuan Yang, Shu Hu, Yunwen Lei, Kush R Varshney, Siwei Lyu, and Yiming Ying. Differentially private sgda for minimax problems. arXiv preprint arXiv:2201.09046, 2022.Moritz Hardt, Eric Price, and Nati Srebro. Equality of opportunity in supervised learning. In Advances
in neural information processing systems, pp. 3315-3323, 2016a.
Moritz Hardt, Ben Recht, and Yoram Singer. Train faster, generalize better: Stability of stochastic
gradient descent. In Maria Florina Balcan and Kilian Q. Weinberger (eds.), Proceedings of The
33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine
Learning Research, pp. 1225-1234, New York, New York, USA, 20-22 Jun 2016b. PMLR. URL
http://proceedings.mlr.press/v48/hardt16.html.
Zecheng He, Tianwei Zhang, and Ruby B Lee. Model inversion attacks against collaborative inference.
In Proceedings of the 35th Annual Computer Security Applications Conference, pp. 148-162,
2019.
Rui Hu, Yuanxiong Guo, and Yanmin Gong. Concentrated differentially private federated learning
with performance analysis. IEEE Open Journal of the Computer Society, 2:276-289, 2021. doi:
10.1109/OJCS.2021.3099108.
Matthew Jagielski, Michael Kearns, Jieming Mao, Alina Oprea, Aaron Roth, Saeed Sharifi-Malvajerdi,
and Jonathan Ullman. Differentially private fair learning. In International Conference on Machine
Learning, pp. 3000-3008. PMLR, 2019.
Michael Kearns, Seth Neel, Aaron Roth, and Zhiwei Steven Wu. Preventing fairness gerrymandering:
Auditing and learning for subgroup fairness. In International Conference on Machine Learning,
pp. 2564-2572, 2018.
Niki Kilbertus, Adrià Gascón, Matt Kusner, Michael Veale, Krishna Gummadi, and Adrian Weller.
Blind justice: Fairness with encrypted sensitive attributes. In International Conference on Machine
Learning, pp. 2630-2639. PMLR, 2018.
Niki Kilbertus, Manuel Gomez Rodriguez, Bernhard Schölkopf, Krikamol Muandet, and Isabel
Valera. Fair decisions despite imperfect predictions. In Silvia Chiappa and Roberto Calandra
(eds.), Proceedings of the Twenty Third International Conference on Artificial Intelligence and
Statistics, volume 108 of Proceedings of Machine Learning Research, pp. 277-287. PMLR, 26-28
Aug 2020.
W. Kong and R. D. C. Monteiro. An accelerated inexact proximal point method for solving nonconvex-
concave min-max problems. arXiv:1905.13433, 2019.
Lihua Lei, Cheng Ju, Jianbo Chen, and Michael I Jordan. Non-convex finite-sum optimization
via scsg methods. In Proceedings of the 31st International Conference on Neural Information
Processing Systems, pp. 2345-2355, 2017.
Andrew Lowy and Meisam Razaviyayn. Private stochastic optimization with large worst-case
lipschitz parameter: Optimal rates for (non-smooth) convex losses and extension to non-convex
losses. In International Conference on Algorithmic Learning Theory, pp. 986-1054. PMLR, 2023.
Andrew Lowy, Sina Baharlouei, Rakesh Pavan, Meisam Razaviyayn, and Ahmad Beirami. A
stochastic optimization framework for fair risk minimization. Transactions on Machine Learning
Research, 2022a.
4382-4391.
PMLR, 2019.
Frank McSherry and Kunal Talwar. Mechanism design via differential privacy. In 48th Annual IEEE
Symposium on Foundations of Computer Science (FOCS'07), pp. 94-103. IEEE, 2007.
Hussein Mozannar, Mesrob Ohannessian, and Nathan Srebro. Fair learning with private demographic
data. In International Conference on Machine Learning, pp. 7066-7075. PMLR, 2020.
Katta G Murty and Santosh N Kabadi. Some np-complete problems in quadratic and nonlinear
programming. 1985.
M. Nouiehed, M. Sanjabi, J. D. Lee, and M. Razaviyayn. Solving a class of non-convex min-max
games using iterative first order methods. arXiv preprint arXiv:1902.08297, 2019.
Dmitrii M Ostrovskii, Andrew Lowy, and Meisam Razaviyayn. Efficient search of first-order nash
equilibria in nonconvex-concave smooth min-max problems. SIAM Journal of Optimization, 2021.
Flavien Prost, Hai Qian, Qiuwen Chen, Ed H Chi, Jilin Chen, and Alex Beutel. Toward a better
trade-off between performance and fairness with kernel-based distribution matching. arXiv preprint
arXiv:1910.11779, 2019.
Mengkai Song, Zhibo Wang, Zhifei Zhang, Yang Song, Qian Wang, Ju Ren, and Hairong Qi.
Analyzing user-level privacy attack against federated learning. IEEE Journal on Selected Areas in
Communications, 38(10):2430-2444, 2020. doi: 10.1109/JSAC.2020.3000372.
Latanya Sweeney. Discrimination in online ad delivery. arXiv preprint arXiv:1301.6822, 2013.
Cuong Tran, My Dinh, and Ferdinando Fioretto. Differentially private empirical risk minimization
under the fairness lens. Advances in Neural Information Processing Systems, 34:27555-27565,
2021a.
Cuong Tran, Ferdinando Fioretto, and Pascal Van Hentenryck. Differentially private and fair deep
learning: A lagrangian dual approach. In Proceedings of the AAAI Conference on Artificial
Intelligence, volume 35, pp. 9932-9939, 2021b.
Cuong Tran, Ferdinando Fioretto, Pascal Van Hentenryck, and Zhiyan Yao. Decision making with
differential privacy under a fairness lens. In IJCAI, pp. 560-566, 2021c.
Cuong Tran, Keyu Zhu, Ferdinando Fioretto, and Pascal Van Hentenryck. Sf-pate: Scalable, fair, and
private aggregation of teacher ensembles. arXiv preprint arXiv:2204.05157, 2022.
Michael Veale and Reuben Binns. Fairer machine learning in the real world: Mitigating discrimination
without collecting sensitive data. Big Data & Society, 4(2):2053951717743530, 2017.
Blake Woodworth, Suriya Gunasekar, Mesrob I Ohannessian, and Nathan Srebro. Learning non-
discriminatory predictors. arXiv preprint arXiv:1702.06081, 2017.
Depeng Xu, Shuhan Yuan, and Xintao Wu. Achieving differential privacy and fairness in logistic
regression. In Companion proceedings of The 2019 world wide web conference, pp. 594-599,
2019.
Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rogriguez, and Krishna P Gummadi. Fairness
constraints: Mechanisms for fair classification. In Artificial Intelligence and Statistics, pp. 962-970.
PMLR, 2017.
Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. Mitigating unwanted biases with adversarial
learning. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pp. 335-
340, 2018.
Jiaqi Zhang, Kai Zheng, Wenlong Mou, and Liwei Wang. Efficient private erm for smooth objectives,
2017a.
Liang Zhang, Kiran Koshy Thekumparampil, Sewoong Oh, and Niao He. Bring your own al-
gorithm for optimal differentially private stochastic minimax optimization. arXiv preprint
arXiv:2206.00363, 2022.
Zhifei Zhang, Yang Song, and Hairong Qi. Age progression/regression by conditional adversarial
autoencoder. In Proceedings of the IEEE conference on computer vision and pattern recognition,
pp. 5810-5818, 2017b.
Our method can also handle any other fairness notion that can be defined in terms of statistical (conditional) independence, such as equal opportunity. However, our method cannot handle all fairness notions: for example, false discovery rate and calibration error are not covered by our framework.
To simplify the presentation, we will assume that demographic parity is the fairness notion of interest in the remainder of this section. However, we consider both fairness notions in our numerical experiments.
We say a differentiable function g is µ-strongly concave if gpαq`x∇gpαq, α 1´α y´µ 2 }α´α 1 } 2 ě gpα 1 q for all α, α 1 . 4 DP-SGDA was also used inYang et al. (2022) for convex and PL min-max problems.5 We say function g is L-Lipschitz if }gpαq´gpα 1 q} ď L}α´α 1 } for all α, α 1 .
Proof. Fix any t and denote δ t :" E}w˚pθ t q´w t } 2 :" E}w˚´w t } 2 . We may assume without loss of generality that f pθ,¨; zq is µ-strongly convex and that w t`1 " Π W rw t1 βw`1 m ř m i"1 ∇ w f pθ t , w t ; z i q`v t˘s :" Π W rw t´1 βw p∇hpw t q`v t qs :" Π W rw t´1 βw ∇hpw t qs. Now,Further,using independence and Theorem E.5 plus Lipschitz continuity of f in the first inequality and Theorem E.6 (plus Theorem E.2 part 5) in the second inequality. This impliesTherefore,by Young's inequality, (8), and Theorem E.4. Since´1`1 2κw´1¯´1´1 κw¯ď 1´1 2κw , we obtain δ t`1 ďˆ1´1 2κ w`4 κ w κ 2 θw η 2 θ β 2 θw˙δt`2 β 2 w " d w σ 2 w`4 L 2 w m 1 tmănu `4 κ w κ 2 θw η 2 θ " }∇Φpθ t q} 2`d θ σ 2 t ‰ , as desired.We are now prepared to prove Theorem E.3.
Deep learning with differential privacy. Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan Mcmahan, Ilya Mironov, Kunal Talwar, Li Zhang, 10.1145/2976749.2978318Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. the 2016 ACM SIGSAC Conference on Computer and Communications SecurityMartin Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, Oct 2016. doi: 10.1145/2976749.2978318. URL http://dx.doi.org/10.1145/2976749.2978318.
A reductions approach to fair classification. Alekh Agarwal, Alina Beygelzimer, Miroslav Dudík, John Langford, Hanna Wallach, International Conference on Machine Learning. PMLRAlekh Agarwal, Alina Beygelzimer, Miroslav Dudík, John Langford, and Hanna Wallach. A reductions approach to fair classification. In International Conference on Machine Learning, pp. 60-69. PMLR, 2018.
Julia Angwin, Jeff Larson, Surya Mattu, Lauren Kirchner, Machine bias. ProPublica. Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. Machine bias. ProPublica, 2016.
Faster rates of convergence to stationary points in differentially private optimization. Raman Arora, Raef Bassily, Tomás González, Cristóbal Guzmán, Michael Menart, Enayat Ullah, arXiv:2206.00846arXiv preprintRaman Arora, Raef Bassily, Tomás González, Cristóbal Guzmán, Michael Menart, and Enayat Ullah. Faster rates of convergence to stationary points in differentially private optimization. arXiv preprint arXiv:2206.00846, 2022.
Private stochastic convex optimization: Optimal rates in l1 geometry. Hilal Asi, Vitaly Feldman, Tomer Koren, Kunal Talwar, International Conference on Machine Learning. PMLRHilal Asi, Vitaly Feldman, Tomer Koren, and Kunal Talwar. Private stochastic convex optimization: Optimal rates in l1 geometry. In International Conference on Machine Learning, pp. 393-403. PMLR, 2021.
Differential privacy has disparate impact on model accuracy. Eugene Bagdasaryan, Omid Poursaeed, Vitaly Shmatikov, Advances in neural information processing systems. 32Eugene Bagdasaryan, Omid Poursaeed, and Vitaly Shmatikov. Differential privacy has disparate impact on model accuracy. Advances in neural information processing systems, 32, 2019.
Rényi fair inference. Sina Baharlouei, Maher Nouiehed, Ahmad Beirami, Meisam Razaviyayn, ICLR. Sina Baharlouei, Maher Nouiehed, Ahmad Beirami, and Meisam Razaviyayn. Rényi fair inference. In ICLR, 2020.
Private empirical risk minimization: Efficient algorithms and tight error bounds. Raef Bassily, Adam Smith, Abhradeep Thakurta, IEEE 55th Annual Symposium on Foundations of Computer Science. IEEERaef Bassily, Adam Smith, and Abhradeep Thakurta. Private empirical risk minimization: Efficient algorithms and tight error bounds. In 2014 IEEE 55th Annual Symposium on Foundations of Computer Science, pp. 464-473. IEEE, 2014.
Private stochastic convex optimization with optimal rates. Raef Bassily, Vitaly Feldman, Kunal Talwar, Abhradeep Thakurta, Advances in Neural Information Processing Systems. Raef Bassily, Vitaly Feldman, Kunal Talwar, and Abhradeep Thakurta. Private stochastic convex optimization with optimal rates. In Advances in Neural Information Processing Systems, 2019.
Penalizing unfairness in binary classification. Yahav Bechavod, Katrina Ligett, arXiv:1707.00044arXiv preprintYahav Bechavod and Katrina Ligett. Penalizing unfairness in binary classification. arXiv preprint arXiv:1707.00044, 2017.
Yahav Bechavod, Katrina Ligett, Aaron Roth, Bo Waggoner, Zhiwei Steven Wu, arXiv:1902.02242Equal opportunity in online classification with partial feedback. arXiv preprintYahav Bechavod, Katrina Ligett, Aaron Roth, Bo Waggoner, and Zhiwei Steven Wu. Equal opportu- nity in online classification with partial feedback. arXiv preprint arXiv:1902.02242, 2019.
Man is to computer programmer as woman is to homemaker? debiasing word embeddings. Tolga Bolukbasi, Kai-Wei Chang, Y James, Venkatesh Zou, Adam T Saligrama, Kalai, Advances in neural information processing systems. Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Advances in neural information processing systems, pp. 4349-4357, 2016.
Optimal algorithms for differentially private stochastic monotone variational inequalities and saddle-point problems. Digvijay Boob, Cristóbal Guzmán, arXiv:2104.02988arXiv preprintDigvijay Boob and Cristóbal Guzmán. Optimal algorithms for differentially private stochastic monotone variational inequalities and saddle-point problems. arXiv preprint arXiv:2104.02988, 2021.
Optimized pre-processing for discrimination prevention. Flavio Calmon, Dennis Wei, Bhanukiran Vinzamuri, Kush R Karthikeyan Natesan Ramamurthy, Varshney, Advances in Neural Information Processing Systems. Flavio Calmon, Dennis Wei, Bhanukiran Vinzamuri, Karthikeyan Natesan Ramamurthy, and Kush R Varshney. Optimized pre-processing for discrimination prevention. In Advances in Neural Information Processing Systems, pp. 3992-4001, 2017.
Extracting training data from large language models. Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, 30th USENIX Security Symposium (USENIX Security 21). Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, et al. Extracting training data from large language models. In 30th USENIX Security Symposium (USENIX Security 21), pp. 2633-2650, 2021.
A fair classifier using mutual information. Jaewoong Cho, Gyeongjo Hwang, Changho Suh, 2020 IEEE International Symposium on Information Theory (ISIT). IEEEJaewoong Cho, Gyeongjo Hwang, and Changho Suh. A fair classifier using mutual information. In 2020 IEEE International Symposium on Information Theory (ISIT), pp. 2521-2526. IEEE, 2020a.
A fair classifier using kernel density estimation. Jaewoong Cho, Gyeongjo Hwang, Changho Suh, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020. Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin2020Jaewoong Cho, Gyeongjo Hwang, and Changho Suh. A fair classifier using kernel density estimation. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020b.
| []
|
[
"Learning in RKHM: a C * -Algebraic Twist for Kernel Machines",
"Learning in RKHM: a C * -Algebraic Twist for Kernel Machines"
]
| [
"Yuka Hashimoto \nNTT Network Service Systems Laboratories\nNTT Corporation\nTokyoJapan\n\nCenter for Advanced Intelligence Project\nRIKEN\nTokyoJapan\n",
"Masahiro Ikeda \nCenter for Advanced Intelligence Project\nRIKEN\nTokyoJapan\n\nKeio University\nYokohamaJapan\n",
"Hachem Kadri \nAix-Marseille University\nCNRS\nMarseilleLISFrance\n"
]
| [
"NTT Network Service Systems Laboratories\nNTT Corporation\nTokyoJapan",
"Center for Advanced Intelligence Project\nRIKEN\nTokyoJapan",
"Center for Advanced Intelligence Project\nRIKEN\nTokyoJapan",
"Keio University\nYokohamaJapan",
"Aix-Marseille University\nCNRS\nMarseilleLISFrance"
]
| []
| Supervised learning in reproducing kernel Hilbert space (RKHS) and vector-valued RKHS (vvRKHS) has been investigated for more than 30 years. In this paper, we provide a new twist to this rich literature by generalizing supervised learning in RKHS and vvRKHS to reproducing kernel Hilbert C * -module (RKHM), and show how to construct effective positive-definite kernels by considering the perspective of C * -algebra. Unlike the cases of RKHS and vvRKHS, we can use C * -algebras to enlarge representation spaces. This enables us to construct RKHMs whose representation power goes beyond RKHSs, vvRKHSs, and existing methods such as convolutional neural networks. Our framework is suitable, for example, for effectively analyzing image data by allowing the interaction of Fourier components. arXiv:2210.11855v2 [stat.ML] 13 Nov 2022• We define positive definite kernels from the perspective of C * -algebra, which are suitable for learning in RKHM and adapted to analyze image data.• We derive a generalization bound of the supervised learning problem in RKHM, which generalizes existing results of RKHS and vvRKHS. We also show that the computational complexity of our method can be reduced if parameters in the C * -algebra-valued positive definite kernels have specific structures.• We show that our framework generalizes existing methods based on convolution operations.Important applications of the supervised learning in RKHM are tasks whose inputs and outputs are images. If the proposed kernels have specific parameters, then the product structure is the convolution, which corresponds to the pointwise product of Fourier components. By extending the C * -algebra to a larger one, we can enjoy more general operations than the convolutions. This enables us to analyze image data effectively by making interactions between Fourier components. Regarding the generalization bound, we derive the same type of bound as those obtained for RKHS and vvRKHS via Rademacher complexity theory. This is to our knowledge, the first generalization bound for RKHM hypothesis classes. Concerning the connection with existing methods, we show that using our framework, we can reconstruct existing methods such as the convolutional neural network(LeCun et al., 1998)and the convolutional kernel(Mairal et al., 2014)and further generalize them. This fact implies that the representation power of our framework goes beyond the existing methods.The remainder of this paper is organized as follows: In Section 2, we review mathematical notions related to this paper. We propose C * -algebra-valued positive definite kernels in Section 3 and investigate supervised learning in RKHM in Section 4. Then, we show connections with existing convolution-based methods in Section 5. We confirm the advantage of our method numerically in Section 6 and conclude the paper in Section 7. All technical proofs are in Section B.PRELIMINARIES2. | 10.48550/arxiv.2210.11855 | [
"https://export.arxiv.org/pdf/2210.11855v2.pdf"
]
| 253,080,792 | 2210.11855 | c73066320d865fc530eac172372186eef4b1c926 |
Learning in RKHM: a C * -Algebraic Twist for Kernel Machines
Yuka Hashimoto
NTT Network Service Systems Laboratories
NTT Corporation
TokyoJapan
Center for Advanced Intelligence Project
RIKEN
TokyoJapan
Masahiro Ikeda
Center for Advanced Intelligence Project
RIKEN
TokyoJapan
Keio University
YokohamaJapan
Hachem Kadri
Aix-Marseille University
CNRS
MarseilleLISFrance
Learning in RKHM: a C * -Algebraic Twist for Kernel Machines
Supervised learning in reproducing kernel Hilbert space (RKHS) and vector-valued RKHS (vvRKHS) has been investigated for more than 30 years. In this paper, we provide a new twist to this rich literature by generalizing supervised learning in RKHS and vvRKHS to reproducing kernel Hilbert C * -module (RKHM), and show how to construct effective positive-definite kernels by considering the perspective of C * -algebra. Unlike the cases of RKHS and vvRKHS, we can use C * -algebras to enlarge representation spaces. This enables us to construct RKHMs whose representation power goes beyond RKHSs, vvRKHSs, and existing methods such as convolutional neural networks. Our framework is suitable, for example, for effectively analyzing image data by allowing the interaction of Fourier components. arXiv:2210.11855v2 [stat.ML] 13 Nov 2022• We define positive definite kernels from the perspective of C * -algebra, which are suitable for learning in RKHM and adapted to analyze image data.• We derive a generalization bound of the supervised learning problem in RKHM, which generalizes existing results of RKHS and vvRKHS. We also show that the computational complexity of our method can be reduced if parameters in the C * -algebra-valued positive definite kernels have specific structures.• We show that our framework generalizes existing methods based on convolution operations.Important applications of the supervised learning in RKHM are tasks whose inputs and outputs are images. If the proposed kernels have specific parameters, then the product structure is the convolution, which corresponds to the pointwise product of Fourier components. By extending the C * -algebra to a larger one, we can enjoy more general operations than the convolutions. This enables us to analyze image data effectively by making interactions between Fourier components. Regarding the generalization bound, we derive the same type of bound as those obtained for RKHS and vvRKHS via Rademacher complexity theory. This is to our knowledge, the first generalization bound for RKHM hypothesis classes. Concerning the connection with existing methods, we show that using our framework, we can reconstruct existing methods such as the convolutional neural network(LeCun et al., 1998)and the convolutional kernel(Mairal et al., 2014)and further generalize them. This fact implies that the representation power of our framework goes beyond the existing methods.The remainder of this paper is organized as follows: In Section 2, we review mathematical notions related to this paper. We propose C * -algebra-valued positive definite kernels in Section 3 and investigate supervised learning in RKHM in Section 4. Then, we show connections with existing convolution-based methods in Section 5. We confirm the advantage of our method numerically in Section 6 and conclude the paper in Section 7. All technical proofs are in Section B.PRELIMINARIES2.
INTRODUCTION
Supervised learning in reproducing kernel Hilbert space (RKHS) has been actively investigated since the early 1990s (Murphy, 2012;Christmann & Steinwart, 2008;Shawe-Taylor & Cristianini, 2004;Schölkopf & Smola, 2002;Boser et al., 1992). The notion of reproducing kernels as dot products in Hilbert spaces was first brought to the field of machine learning by Aizerman et al. (1964), while the theoretical foundation of reproducing kernels and their Hilbert spaces dates back to at least Aronszajn (1950). By virtue of the representer theorem (Schölkopf et al., 2001), we can compute the solution of an infinite-dimensional minimization problem in RKHS with given finite samples. In addition to the standard RKHSs, applying vector-valued RKHSs (vvRKHSs) to supervised learning has also been proposed and used in analyzing vector-valued data (Micchelli & Pontil, 2005;Álvarez et al., 2012;Kadri et al., 2016;Minh et al., 2016;Brouard et al., 2016;Laforgue et al., 2020;Huusari & Kadri, 2021). Generalization bounds of the supervised problems in RKHS and vvRKHS are also derived (Mohri et al., 2018;Caponnetto & De Vito, 2007;Audiffren & Kadri, 2013;Huusari & Kadri, 2021).
Reproducing kernel Hilbert C * -module (RKHM) is a generalization of RKHS and vvRKHS by means of C * -algebra. C * -algebra is a generalization of the space of complex values. It has a product and an involution structures. Important examples are the C * -algebra of bounded linear operators on a Hilbert space and the C * -algebra of continuous functions on a compact space. RKHMs have been originally studied for pure operator algebraic and mathematical physics problems (Manuilov & Troitsky, 2000;Heo, 2008;Moslehian, 2022). Recently, applying RKHMs to data analysis has been proposed by Hashimoto et al. (2021). They generalized the representer theorem in RKHS to RKHM, which allows us to analyze structured data such as functional data with C * -algebras.
In this paper, we investigate supervised learning in RKHM. This provides a new twist to the state-of-the-art kernel-based learning algorithms and the development of a novel kind of reproducing kernels. An advantage of RKHM over RKHS and vvRKHS is that we can enlarge the C * -algebra characterizing the RKHM to construct a representation space. This allows us to represent more functions than the case of RKHS and make use of the product structure in the C * -algebra. Our main contributions are: Lemma 2.3 The group C * -algebra C * (Z/pZ) is C * -isomorphic to Circ(p).
We now review important notions about C * -algebra. We denote a C * -algebra by A.
Definition 2.4 (Positive) An element a of A is called positive if there exists b ∈ A such that a = b * b holds. For a, b ∈ A, we write a ≤ A b if b − a is positive, and a A b if b − a is positive and not zero. We denote by A + the subset of A composed of all positive elements in A.
Definition 2.5 (Minimum) For a subset S of A, a ∈ A is said to be a lower bound with respect to the order ≤ A , if a ≤ A b for any b ∈ S. Then, a lower bound c ∈ A is said to be an infimum of S, if a ≤ A c for any lower bound a of S. If c ∈ S, then c is said to be a minimum of S.
Hilbert C * -module is a generalization of Hilbert space. We can define an A-valued inner product and a (real nonnegative-valued) norm as a natural generalization of the complex-valued inner product. See Section A for further details. Then, we define Hilbert C * -module as follows.
Definition 2.6 (Hilbert C * -module) Let M be a C * -module over A equipped with an A-valued inner product. If M is complete with respect to the norm induced by the A-valued inner product, it is called a Hilbert C * -module over A or Hilbert A-module.
Reproducing Kernel Hilbert C * -Module
RKHM is a generalization of RKHS by means of C * -algebra. Let X be a non-empty set for data.
Definition 2.7 (A-valued positive definite kernel) An A-valued map k : X × X → A is called a positive definite kernel if it satisfies the following conditions:
• k(x, y) = k(y, x) * for x, y ∈ X ,
• n i,j=1 c * i k(x i , x j )c j ≥ A 0 for n ∈ N, c i ∈ A, x i ∈ X .
Let φ : X → A X be the feature map associated with k, which is defined as φ(x) = k(·, x) for x ∈ X . We construct the following C * -module composed of A-valued functions:
M k,0 := n i=1 φ(x i )c i n ∈ N, c i ∈ A, x i ∈ X . Define an A-valued map ·, · M k : M k,0 × M k,0 → A as n i=1 φ(x i )c i , l j=1 φ(y j )b j M k := n i=1 l j=1 c * i k(x i , y j )b j .
By the properties of k in Definition 2.7, ·, · M k is well-defined and has the reproducing property
φ(x), v M k = v(x),
for v ∈ M k,0 and x ∈ X . Also, it is an A-valued inner product. The reproducing kernel Hilbert Amodule (RKHM) associated with k is defined as the completion of M k,0 . We denote by M k the RKHM associated with k. In the following, we denote the inner product, absolute value, and norm in M k by ·, · k , | · | k , and · k , respectively. Hashimoto et al. (2021) showed the representer theorem in RKHM.
Proposition 2.8 (Representer theorem) Let A be a unital C * -algebra. Let x 1 , . . . , x n ∈ X and y 1 , . . . , y n ∈ A. Let h : X × A × A → A + be an error function and let g :
A + → A + satisfy g(a) A g(b) for a A b.
Assume the module (algebraically) generated by {φ(x i )} n i=1 is closed. Then, any u ∈ M k minimizing n i=1 h(x i , y i , u(x i )) + g(|u| M k ) admits a representation of the form n i=1 φ(x i )c i for some c 1 , . . . , c n ∈ A.
C * -ALGEBRA-VALUED POSITIVE DEFINITE KERNELS
To investigate the supervised learning problem in RKHM, we begin by constructing suitable C * -algebra-valued positive definite kernels. The product structure used in these kernels will be shown to be effective in analyzing image data. However, the proposed kernels are general, and their application is not limited to image data. Let A 1 be a C * -algebra. By the Gelfand-Naimark theorem (see, for example, Murphy 1990), there exists a Hilbert space H such that A 1 is a subalgebra of the C * -algebra A 2 of bounded linear operators on H. For image data we can set A 1 and A 2 as follows.
Example 3.1 Let p ∈ N, A 1 = C * (Z/pZ), and A 2 = C p×p . Then, A 1 is a subalgebra of A 2 . Indeed, by Lemma 2.3, A 1 Circ(p). For example, in image processing, we represent filters by circulant matrices (Chanda & Majumder, 2011). If we regard Z/pZ as the space of p pixels, then elements in C * (Z/pZ) can be regarded as functions from pixels to intensities. Thus, we can also regard grayscale and color images with p pixels as elements in C * (Z/pZ) and C * (Z/pZ) 3 , respectively. Note that A 2 is noncommutative, although A 1 is commutative.
We consider the case where the inputs are in A d 1 for d ∈ N and define linear, polynomial, and Gaussian C *algebra-valued positive definite kernels as follows. For example, we can consider the case where inputs are d images.
Definition 3.2 Let X ⊆ A d 1 and x = [x 1 , . . . , x d ] ∈ X . 1. For a i,1 , a i,2 ∈ A 2 , the linear kernel k : X × X → A 2 is defined as k(x, y) = d i=1 a * i,1 x * i a * i,2 a i,2 y i a i,1 .
2. For q ∈ N and a i,j ∈ A 2 (i = 1, . . . d, j = 1, . . . q + 1), the polynomial kernel k : X × X → A 2 is defined as
k(x, y) = d i=1 q j=1 a * i,j x * i a * i,q+1 a i,q+1 q j=1 y i a i,q+1−j .
3. Let Ω be a measurable space and µ is an A 2 -valued positive measure on Ω. 1 For a i,1 , a i,2 : Ω → A 2 , the Gaussian kernel k : X × X → A 2 is defined as
k(x, y) = ω∈Ω e − √ −1 d i=1 ai,1(ω) * x * i ai,2(ω) * dµ(ω)e √ −1 d i=1 ai,2(ω)yiai,1(ω) .
Here, we assume the integral does not diverge.
Remark 3.3
We can construct new kernels by the composition of functions to the kernels defined in Definition 3.2. For example, let ψ i,j : A 1 → A 2 for i = 1, . . . d and j = 1, . . . , q + 1. Then, the map defined by replacing x i and y i in the polynomial kernel by ψ i,j (x i ) and ψ i,j (y i ) is also an C * -algebra-valued positive definite kernel.
If A 1 = A 2 = C, then the above kernels are reduced to the standard complex-valued positive definite kernels and the RKHMs associated with them are reduced to RKHSs. In this case, if X = A d , the input space and the RKHS are both Hilbert spaces (Hilbert C-modules). On the other hand, for RKHMs, if we choose A 1 A 2 , then the input space X is a Hilbert A 1 -module, but the RKHM is a Hilbert A 2 -module, not A 1module. Applying RKHMs, we can construct higher dimensional spaces than input spaces but also enlarge the C * -algebras characterizing the RKHMs, which allows us to represent more functions than RKHSs and make use of the product structure in A 2 . Figure 1 schematically shows the representation of samples in RKHM. We show an example related to image data below.
Example 3.4 If A 1 = C * (Z/pZ), A 2 = C p×p (A 1 A 2 )
, and a i,j ∈ A 1 , then a i,j in Definition 3.2 behaves as convolutional filters. In fact, by Definition 2.1, the multiplication of a i,j and x i is represented by the convolution. The convolution of two functions corresponds to the multiplication of each Fourier component of them. Thus, each Fourier component of x i does not interact with other Fourier components. Choosing a i,j ∈ A 2 outside A 1 corresponds to the multiplication of different Fourier components of two functions. Indeed, let x ∈ A 1 . Then, by Lemma 2.3, x is represented as a circulant matrix and by Lemma 2.2, it is decomposed as x = F Λ x F * . In this case, Λ x is the diagonal matrix whose ith diagonal is the ith Fourier component (FC) of x. Thus, if a i,j ∈ A 1 , then we have xa i,j = F Λ x Λ ai,j F * and each Fourier component of x is multiplied by the same Fourier component of a i,j . On the other hand, if a i,j ∈ A 2 \ A 1 , then Λ ai,j is not a diagonal matrix, and the elements of Λ x Λ ai,j are composed of the weighted sum of different Fourier components of x. Figure 2 summarizes this example.
Comparison with vvRKHS From the perspective of vvRKHS, defining kernels as in Definition 3.2 is difficult since for vvRKHS, the output space is a Hilbert space, and we do not have product structures in it. Indeed, the inner product in a vvRKHS is described by an action of an operator on a vector. We can regard the vector as a rank-one operator whose range is the one-dimensional space spanned by the vector. Thus, the action is regarded as the product of only two operators. On the other hand, from the perspective of C * -algebra, we can multiply more than two elements in C * -algebra, which allows us to define C * -algebra-valued kernels naturally in the same manner as complex-valued kernels. See Figure 3 for a schematic explanation.
SUPERVISED LEARNING IN RKHM
We investigate supervised learning in RKHM. We first formulate the problem and derive a learning algorithm. Then, we characterize its generalization error and investigate its computational complexity. We do not assume X ⊆ A d 1 in Subsections 4.1 and 4.2. The input space X can be an arbitrary nonempty set in these sections. Thus, although we focus on the case of X ⊆ A d 1 in this paper, the supervised learning in RKHM is applied to general problems whose output space is a C * -algebra A.
Problem Setting
Let x 1 , . . . , x n ∈ X be input training samples and y 1 , . . . , y n ∈ A be output training samples. Let k : X ×X → A be an A-valued positive definite kernel, and let φ and M k be the feature map and RKHM associated with k, respectively. We find a function f : X → A in M k that maps input data to output data. For this purpose, we consider the following minimization problem:
min f ∈M k n i=1 |f (x i ) − y i | 2 A + λ|f | 2 k ,(1)
where λ ≥ 0 is the regularization parameter. By the representer theorem (Proposition 2.8), we find a solution f in the submodule generated by {φ(x 1 ), . . . , φ(x n )}. As the case of RKHS (Schölkopf et al., 2001), representing f as
n j=1 φ(x j )c j (c j ∈ A), the problem is reduced to min cj ∈A n i=1 n j=1 k(x i , x j )c j − y i 2 A + λ n j=1 φ(x j )c j 2 k = min cj ∈A (c * G 2 c − c * Gy − y * Gc + λc * Gc),(2)
where G is the A n×n -valued Gram matrix whose (i, j)-entry is defined as k(
x i , x j )∈ A, c = [c 1 , . . . , c n ] T , y = [y 1 , . . . , y n ] T , and |a| A = (a * a) 1/2 for a ∈ A. If G + λI is invertible, the solution of Problem (2) is c = (G + λI) −1 y.
Generalization Bound
We derive a generalization bound of the supervised problem in RKHM. We first define an A-valued Rademacher complexity. Let (Ω, P ) be a probability space. For a random variable (measurable map) g : Ω → A, we denote by E[g] the Bochner integral of g, i.e., ω∈Ω g(ω)dP (ω).
Definition 4.1 Let σ 1 , . . . , σ n be i.i.d and mean zero A-valued random variables and let x 1 , . . . , x n ∈ X be given samples.
Let σ = {σ i } n i=1 and x = {x i } n i=1 . Let F be a class of functions from X to A. The A-valued empirical Rademacher complexityR(F, σ, x) is defined aŝ R(F, σ, x) = E sup f ∈F 1 n n i=1 f (x i ) * σ i A .
We derive an upper bound of the complexity of a function space related to the RKHM M k . We assume A is the C * -algebra of bounded linear operators on a Hilbert space.
Proposition 4.2 Let B > 0 and let F = {f ∈ M k | f k ≤ B} and let C = Ω σ i (ω) 2 A dP (ω). Then, we haveR (F, σ, x) ≤ A B √ C n n i=1 k(x i , x i ) A 1/2 I.
To prove Proposition 4.2, we first show the following A-valued version of Jensen's inequality.
Lemma 4.3 For a positive
A-valued random variable c : Ω → A + , we have E[c 1/2 ] ≤ A E[c] 1/2 .
In Example 3.4, we focused on the case of A = C p×p , which is effective, for example, in analyzing image data. In the following, we focus on that case and consider the trace of matrices. The trace is an appropriate operation for evaluating matrices. It is linear and forms the Hilbert-Schmidt inner product. Let B > 0 and E > 0. We put
F = {f ∈ M k | f k ≤ B, f (x) ∈ R p×p for any x ∈ X }, G(F) = {X × Y (x, y) → |f (x) − y| 2
A ∈ A | f ∈ F}, and Y = {y ∈ R p×p | y A ≤ E}. Let x 1 , . . . , x n ∈ X and y 1 , . . . , y n ∈ Y. We assume there exists D > 0 such that for any x ∈ X , k(x, x) A ≤ D and let L = 2 √ 2(B √ D + E). Using the upper bound of the Rademacher complexity, we derive the following generalization bound.
Proposition 4.4 Let tr(a) be the trace of a ∈ C p×p . For any g ∈ G(F), any random variable z : Ω → X × Y, and any δ ∈ (0, 1), with probability ≥ 1 − δ, we obtain
tr E[g(z)] − 1 n n i=1 g(x i , y i ) ≤ 2 LB √ Dp √ n + 3 √ 2Dp log(2/δ) n .
Note that the same type of bounds is derived for RKHS (Mohri et al., 2018
(x, y) → |f (x) − y| 2
A . We use Theorem 3 of Maurer (2016) to obtain the following bound. Lemma 4.5 Let s 1 , . . . , s n be {−1, 1}-valued Rademacher variables (i.e. independent uniform random variables taking values in {−1, 1}) and let σ 1 , . . . , σ n be i.i.d. A-valued random variables each of whose element is the
Rademacher variable. Let s = {s i } n i=1 , and z = {(x i , y i )} n i=1 . Then, we havê R(tr G(F), s, z) ≤ L trR(F, σ, x).
Next, we use Theorem 3.3 of Mohri et al. (2018) to derive an upper bound of the generalization error.
Lemma 4.6 Let z : Ω → X × Y be a random variable and let g ∈ G(F). Under the same notations and assumptions as Proposition 4.5, for any δ ∈ (0, 1), with probability ≥ 1 − δ, we have
tr E[g(z)] − 1 n n i=1 g(x i , y i ) ≤ 2R(tr G(F), s, z) + 3 √ 2Dp log(2/δ) n .
Computational Complexity
As mentioned at the beginning of this section, we need to compute (G + λI) −1 y for a Gram matrix G ∈ A n×n and a vector y ∈ A n for solving the minimization problem (2). When A = C p×p , we have A n×n = C np×np , and G is huge if n, the number of samples, or p, the dimension of A, is large. If we construct the np by np matrix explicitly and compute (G + λI) −1 y with a direct method such as Gaussian elimination and back substitution (for example, see Trefethen & Bau 1997), the computational complexity is O(n 3 p 3 ). However, if X = A d 1 , A 1 A 2 , and parameters in the positive definite kernel have a specific structure, then we can reduce the computational complexity. For example, applying the fast Fourier transform, we can compute a multiplication of the DFT matrix F and a vector with O(p log p) (Van Loan, 1992). Let A 1 = C * (Z/pZ) and let A 2 = C p×p . Let k be an A 1 or A 2 -valued positive definite kernel defined in Definition 3.2.
Proposition 4.7 For a i,j ∈ A 1 , the computational complexity for computing (G + λI) −1 y by direct methods for solving linear systems of equations is O(np 2 log p + n 3 p).
We can use an iteration method for linear systems, such as the conjugate gradient (CG) method (Hestenes & Stiefel, 1952) to reduce the complexity with respect to n. Note that we need O(np 2 log p) operations after all the iterations.
Proposition 4.8 For a i,j ∈ A 1 , the computational complexity for 1 iteration step of CG method is O(n 2 p).
Proposition 4.9 Let a i,j ∈ A 2 whose number of nonzero elements is O(p log p). Then, the computational complexity for 1 iteration step of CG method is O(n 2 p 2 log p). In the case of RKHSs, techniques such as the random Fourier feature have been proposed to alleviate the computational cost of kernel methods (Rahimi & Recht, 2007). It could be interesting to inspect how to further reduce the computational complexity of learning in RKHM using random feature approximations for C * -algebra-valued kernels; this is left for future work.
CONNECTION WITH EXISTING METHODS
Connection with Convolutional Neural Network
Convolutional neural network (CNN) has been one of the most successful methods for analyzing image data (Le-Cun et al., 1998;Li et al., 2021). We investigate the connection of the supervised learning problem in RKHM with CNN. In this subsection, we set X ⊆ A 1 = C * (Z/pZ) and A 2 = C p×p . Since the product in C * (Z/pZ) is characterized by the convolution, our framework with a specific A 1 -valued positive definite kernel enables us to reconstruct a similar model as the CNN.
We first provide an A 1 -valued positive definite kernel related to the CNN.
Proposition 5.1 For a 1 , . . . , a L , b 1 , . . . , b L ∈ A 1 and σ 1 , . . . , σ L : A 1 → A 1 each of which has an expansion σ j (x) = ∞ l=1 α j,l x l with α j,l ≥ 0, letk : X × X → A 1 be defined aŝ
k(x, y) =σ L (b * L b L + σ L−1 (b * L−1 b L−1 + · · · + σ 2 (b * 2 b 2 + σ 1 (b * 1 b 1 + x * a * 1 a 1 y)a * 2 a 2 ) · · · a * L−1 a L−1 )a * L a L ). (3)
Then,k is an A 1 -valued positive definite kernel.
Using the positive definite kernel (3), the solution f of the problem (2) is written as
f (x) = n i=1 σ L (b * L b L + σ L−1 (b * L−1 b L−1 + · · · + σ 2 (b * 2 b 2 + σ 1 (b * 1 b 1 + x * a * 1 a 1 x i )a * 2 a 2 ) · · · a * L−1 a L−1 )a * L a L )c i ,(4)
for some c i ∈ A 1 . We regard a * 1 a 1 x i and a * j a j for j = 2, . . . , L as convolutional filters, b * j b j for j = 1, . . . , L as biases, and σ j for j = 1, . . . , L as activation functions. Then, optimizing a 1 , . . . , a L , b 1 , . . . , b L simultaneously with c i corresponds to learning the CNN of the form (4).
The following proposition shows that the C * -algebra-valued polynomial kernel defined in Definition 3.2 is general enough to represent the A 1 -valued positive definite kernelk, related to the CNN. Therefore, by applying A 2 -valued polynomial kernel, not A 1 -valued polynomial kernel, we can go beyond the method with the convolution.
Proposition 5.2 The A 1 -valued positive definite kernelk defined as Eq. (3) is composed of the sum of A 1valued polynomial kernels.
Connection with Convolutional Kernel
For image data, a (C-valued) positive definite kernel called convolutional kernel is proposed to bridge a gap between kernel methods and neural networks (Mairal et al., 2014;Mairal, 2016). In this subsection, we construct two C * -algebra-valued positive definite kernels that generalize the convolutional kernel. Similar to the case of the CNN, we will first show that we can reconstruct the convolutional kernel using a C * -algebra-valued positive definite kernel. Moreover, we will show that our framework gives another generalization of the convolutional kernel. A generalization of neural networks to C * -algebra-valued networks is proposed (Hashimoto et al., 2022). This generalization allows us to generalize the analysis of the CNNs with kernel methods to that of C * -algebravalued CNNs.
Let Ω be a finite subset of Z m . For example, Ω is the space of m-dimensional grids. Letà 1 be the space of C-valued maps on Ω and X ⊆à 1 . The convolutional kernel is defined as follows (Mairal et al., 2014, Definition 2).
Definition 5.3 Let β, σ > 0. The convolutional kernelk : X × X → C is defined as
k(x, y) = z,z ∈Ω |x(z)| |y(z )|e − 1 2β 2 z−z 2 e − 1 2σ 2 |x(z)−ỹ(z )| 2 .(5)
Here, · is the standard norm in C m . In addition, for x ∈ X ,x(z) = x(z)/|x(z)|.
Let Ω = {z 1 , . . . , z p }, A 1 = C * (Z/pZ), and A 2 = C p×p . We first construct an A 1 -valued positive definite kernel, which reconstructs the convolutional kernel (5).
Proposition 5.4
Definek : X × X → A 1 aŝ
k(x, y) = R R m c x (ω, η) * c y (ω, η) dλ β (ω)dλ σ (η),(6)
where dλ β (ω) = βe − β 2 ω 2 2 dω for β > 0 and
c x (ω, η) = circ |x(z 1 )|e √ −1ω·z1 e √ −1η·x(z1) , · · · , |x(z p )|e √ −1ω·zp e √ −1η·x(zp) ,
for x ∈ X , ω ∈ R m , and η ∈ R. Then,k is an A 1 -valued positive definite kernel, and for any l = 1, . . . , p,k is written ask
(x, y) = 1 p p i,j=1k (x, y) i,j = p j=1k (x, y) l,j ,
wherek(x, y) i,j is the (i, j)-entry ofk(x, y).
Remark 5.5 Similar to Subsection 5.1, we can generalizek by replacing c x (·, ·) * c y (·, ·) by an A 2 -valued polynomial kernel with respect to c x (·, ·) and c y (·, ·) in Eq. (6).
Instead of A 1 -valued, we can also construct anà 1 -valued kernel, which reconstructs the convolutional kernel (5).
Definition 5.6 Let β, σ > 0. Defineǩ : X × X →Ã 1 aš k(x, y)(w) = z,z ∈Ω |x(ψ(z, w))| |y(ψ(z , w))|e −1 2β 2 ψ(z,w)−ψ(z ,w) 2 e −1 2σ 2 |x(ψ(z,w))−ỹ(ψ(z ,w))| 2 (7)
for w ∈ Ω. Here, ψ : Ω × Ω → Ω is a map satisfying ψ(z, 0) = z for any z ∈ Ω.
Theà 1 -valued mapǩ is a generalization of the (C-valued) convolutional kernelk in the following sense, which is directly derived from the definitions ofǩ andk.
Proposition 5.7 Fork andǩ defined as Eqs. (5) and (7), respectively, we haveǩ(x, y)(0) =k(x, y).
We further generalize theà 1 -valued kernelǩ to an A 2 -valued positive definite kernel.
Definition 5.8 Let β, σ > 0 and a i ∈ A 2 for i = 1, 2, 3, 4. Let ψ be the same map as that in Eq. (7). Let k : X × X → A 2 be defined as
k(x, y) = R R m z,z ∈Ω a * 1 x(z)a * 2 b(z, ω) * a * 3x (z, η) * a * 4 a 4ỹ (z , η)a 3 b(z , ω)a 2 y(z )a 1 dλ β (ω)dλ σ (η)(8)
for x, y ∈ X . Here, for x ∈ X ,
x(z) = diag(|x(ψ(z, z 1 )|, . . . , |x(ψ(z, z p ))|) ∈ A 2 , The following proposition shows k is a generalization ofǩ, which means we finally generalize the (C-valued) convolution kernelk to an A 2 -valued positive definite kernel. This allows us to generalize the relationship between the CNNs and the convolutional kernel to that of a C * -algebra-valued version of the CNNs and the C * -algebra-valued convolutional kernel k.
x(z, ω) = diag(e − √ −1ω·x(ψ(z,z1(k(x, y) = 3 i=1 (1 − cx · y) i ) k =kI 0.800 ± 0.032 k =kT 0.539 ± 0.012 Nonsep 0.539 ± 0.012 RKHM (k(x, y) = 3 i=1 R * x (I − cQ * x ) i (I − cQ y ) i R y ) 0.343 ± 0.022 T = 1 1 1 1 , Nonsep: k(x 1 , x 2 ) i,j =k(x 1,i , x 2,j )
Proposition 5.10 If a i = I, then the A 2 -valued positive definite kernel k defined as Eq. (8) is reduced to the A 1 -valued convolutional kernelǩ defined as Eq. (7).
NUMERICAL RESULTS
Experiments with Synthetic Data
We compared the performances of supervised learning in RKHMs and vvRKHSs. We generated n samples x 1 , . . . , x n in [0, 1] 2 each of whose elements is independently drawn from the uniform distribution on [0, 1]. For a generated sample x i = [x i,1 , x i,2 ], we added noise ξ i ∈ R 2 , each of whose elements is independently drawn from the Gaussian distribution with mean 0 and standard deviation 0.1. We generated the corresponding output sample y i as y
i = [sin(x i,1 +x i,2 ), sin(x i,1 +x i,2 ) + sin(0.5(x i,1 +x i,2 ))] ∈ R 2 , wherex i = x i + ξ i .
We learned a function f that maps x i to y i in different RKHMs and vvRKHSs and different values of the regularization parameter λ. To compare the performances, we generated 100 test input samplesx 1 , . . . ,x 100 in [0, 1] 2 each of whose elements is independently drawn from the uniform distribution on [0, 1]. We also generated y 1 , . . . ,ŷ 100 given byŷ i = [sin(x i,1 +x i,2 ), sin(x i,1 +x i,2 ) + sin(0.5(x i,1 +x i,2 ))]. We computed the mean error 1/100
100 i=1 f (x i ) −ŷ i .
The results for n = 30 are illustrated in Table 1 and Figure 4. Regarding Table 1, we executed a cross-validation grid search to find the best parameters c and λ, where c is a parameter in the positive definite kernels and λ is the regularization parameter. Regarding Figure 4 (a), we set c as the parameter found by the cross-validation and computed the error for different values of λ. We remark that the mean error for the RKHM becomes large as λ becomes large, but because of the scale of the vertical axis, we cannot see the change clearly in the figure. We can see that RKHM outperforms vvRKHSs. We also show the relationship between the mean error and the number of samples in Figure 4 (b). We can see that the mean error becomes small as the number of samples becomes large.
Regarding the learning in RKHMs, for i = 1, . . . , n, we transformed x i ∈ [0, 1] 2 into circ(x i ) ∈ Circ(2). Then, we set A 1 = Circ(2) and A 2 = C 2×2 . We computed the solution of the minimization problem (2) and obtained a functionf ∈ M k that maps circ(x i ) to circ(y i ). Since the output of the learned functionf takes its value on A 2 , we computed the mean value of (1, 1) and (2, 2) entries off (x i ) for obtaining the first element of the output vector in R 2 and that of (1, 2) and (2, 1) entries for the second element. Regarding the C * -algebravalued kernel for RKHM, we set k(x, y)
= 3 i=1 R * x (I − cQ * x ) i (I − cQ y ) i R y for x ∈ A 1 , where x = Q x R x is the QR decomposition of x.
Experiments with MNIST
We compared our method with CNNs using MNIST (LeCun et al., 1998). For i = 1, . . . , 20, we generated training samples as follows: We added noise to each pixel of an original image y i and generated a noisy image x i . The noise is drawn from the normal distribution with mean 0 and standard deviation 0.01. Moreover, each digit (0-9) is contained in the training sample set equally (i.e., the number of samples for each digit is 2). The image size is 28 × 28. We tried to find a function that maps a noisy image to its original image using an RKHM and a CNN. We represent input and output images x i and y i as the circulant matrices circ(x i ) and circ(y i ) whose first rows are x i and y i . Then, we learned the function in the RKHM associated with a polynomial kernel k(x, y) = (a * 3 σ(xa 1 + a 2 ) * + a * 4 )(σ(ya 1 + a 2 )a 3 + a 4 ), where σ(x) = (I − cQ x )R x + (I − cQ x ) 3 R x . Since k has 4 A 2 -valued parameters, it corresponds to a generalization of 2-layer CNN with 28×28 filters (see Subsection 5.1). Regarding the parameters a i , we used a gradient descent method and optimized them. We generated 100 noisy images for test samples in the same manner as the training samples and computed the mean error with respect to them. For comparison, we also trained a 2-layer CNN with 28 × 28 filters with the same training samples. The results are illustrated in Figure 6 (a). We can see that the RKHM outperforms the CNN. Moreover, we combined the RKHM with a 1-layer CNN with a 3×3 filter, whose inputs are the outputs of the function learned in the RKHM. We also trained a 3-layer CNN with 3 × 3 filters and a 2-layer CNN with 28 × 28 filters combined with a 1-layer CNN with a 3 × 3 filter. The results are illustrated in Figures 5 and 6 (b). We can see that by replacing convolutional layers with an RKHM, we can achieve better performance. RKHMs and convolutional layers with 28 × 28 filters capture global information of images. According to the results of the CNN with 28 × 28 filters and the RKHM in Figure 6 (b), we can see that the RKHM can capture global information of the images more effectively. On the other hand, convolutional layers with 3 × 3 filters capture local information. Since the 2-layer RKHM combined with a 1-layer CNN with a 3 × 3 filter outperforms a 3-layer CNN with 3 × 3 filters, we conclude that the combination of the RKHM and CNN captures the global and local information more effectively.
CONCLUSION
We investigated supervised learning in RKHM and provided a new twist and insights for kernel methods. We constructed C * -algebra-valued kernels from the perspective of C * -algebra, which is suitable, for example, for analyzing image data. We investigated the generalization bound and computational complexity for RKHM learning and showed the connection with existing methods. RKHMs enable us to construct larger representation spaces than the case of RKHSs and vvRKHSs, and generalize operations such as convolution. This fact implies the representation power of RKHMs goes beyond that of existing frameworks.
APPENDIX

Notation

The typical notations in this paper are listed in Table A.

A C*-algebra and Hilbert C*-module

We provide definitions and a lemma related to C*-algebra and Hilbert C*-module.
Definition A.1 (C*-algebra) A set A is called a C*-algebra if it satisfies the following conditions:
1. A is an algebra over C and equipped with a bijection (·)* : A → A that satisfies the following conditions for α, β ∈ C and a, b ∈ A:
• (αa + βb)* = ᾱa* + β̄b*,
• (ab)* = b*a*,
• (a*)* = a.
2. A is a normed space endowed with ‖·‖_A, and for a, b ∈ A, ‖ab‖_A ≤ ‖a‖_A ‖b‖_A holds. In addition, A is complete with respect to ‖·‖_A.
3. For a ∈ A, ‖a*a‖_A = ‖a‖_A² holds.
A C*-algebra A is called unital if there exists a ∈ A such that ab = b = ba for any b ∈ A. We denote a by 1_A.
Definition A.2 (C * -module) Let M be an abelian group with an operation +. If M is equipped with a (right) A-multiplication, then M is called a (right) C * -module over A.
Definition A.3 (A-valued inner product) Let M be a C*-module over A. A C-linear map with respect to the second variable ⟨·, ·⟩_M : M × M → A is called an A-valued inner product if it satisfies the following properties for u, v, w ∈ M and a, b ∈ A:
1. ⟨u, va + wb⟩_M = ⟨u, v⟩_M a + ⟨u, w⟩_M b,
2. ⟨v, u⟩_M = ⟨u, v⟩_M*,
3. ⟨u, u⟩_M ≥_A 0,
4. If ⟨u, u⟩_M = 0, then u = 0.

Definition A.4 (A-valued absolute value and norm) Let M be a C*-module over A. For u ∈ M, the A-valued absolute value |u|_M on M is defined by the positive element |u|_M of A such that |u|_M² = ⟨u, u⟩_M. The nonnegative real-valued norm ‖·‖_M on M is defined by ‖u‖_M = ‖|u|_M‖_A.

Similar to the case of Hilbert spaces, the following Cauchy-Schwarz inequality for A-valued inner products is available (Lance, 1995, Proposition 1.1).
Lemma A.5 (Cauchy-Schwarz inequality) Let M be a Hilbert A-module. For u, v ∈ M, the following inequality holds:
|⟨u, v⟩_M|_A² ≤_A ‖u‖_M² ⟨v, v⟩_M.

Table A: Notation table
A: a C*-algebra
A_+: the subset of A composed of all positive elements in A
≤_A: for a, b ∈ A, a ≤_A b means b − a is positive
<_A: for a, b ∈ A, a <_A b means b − a is positive and nonzero
|·|_A: the A-valued absolute value in A, defined as |a|_A = (a*a)^{1/2} for a ∈ A
X: an input space
Y: an output space
k: an A-valued positive definite kernel
φ: the feature map endowed with k
M_k: the RKHM associated with k
G: the A-valued Gram matrix defined as G_{i,j} = k(x_i, x_j) for given samples x_1, . . . , x_n ∈ X
F: the discrete Fourier transform (DFT) matrix, whose (i, j)-entry is ω^{(i−1)(j−1)}/√p
B Proofs
We provide the proofs of the propositions and lemmas in the main thesis.
Lemma 2.3 The group C * -algebra C * (Z/pZ) is C * -isomorphic to Circ(p).
Proof Let f : C * (Z/pZ) → Circ(p) be a map defined as f (x) = circ(x(0), . . . , x(p − 1)). Then, f is linear and invertible. In addition, we have
f(x)f(y) = circ( Σ_{z∈Z/pZ} x(0 − z)y(z), . . . , Σ_{z∈Z/pZ} x(p − 1 − z)y(z) ) = circ((x · y)(0), . . . , (x · y)(p − 1)) = f(x · y),
f(x)* = circ(x̄(0), x̄(p − 1), . . . , x̄(1)) = f(x*),
‖f(x)‖ = ‖F diag( Σ_{z∈Z/pZ} x(z)e^{2π√−1 z·0/p}, . . . , Σ_{z∈Z/pZ} x(z)e^{2π√−1 z(p−1)/p} ) F*‖ = ‖x‖,
where the last formula is derived by Lemma 2.2. Thus, f is a C * -isomorphism.
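The diagonalisation used in the last step can be checked numerically. The snippet below uses SciPy's circulant(), which places the defining vector in the first column (the paper's circ() uses the first row, i.e. the transpose), so the same spectrum appears.

import numpy as np
from scipy.linalg import circulant, dft

p = 8
x = np.random.default_rng(0).standard_normal(p)
C = circulant(x)

F = dft(p)                   # F[j, k] = exp(-2*pi*i*j*k/p)
Finv = np.conj(F) / p        # F is symmetric, so F^{-1} = conj(F)/p
D = np.diag(np.fft.fft(x))   # eigenvalues of C are the DFT of x

assert np.allclose(C, Finv @ D @ F)   # C = F^{-1} diag(fft(x)) F
assert np.allclose(np.sort_complex(np.linalg.eigvals(C)),
                   np.sort_complex(np.fft.fft(x)))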
In the following, for a probability space Ω and a random variable (measurable map) g : Ω → C, the integral of g is denoted by E[g].

Lemma 4.3 For a positive A-valued random variable c : Ω → A_+, we have E[c^{1/2}] ≤_A E[c]^{1/2}.

Proof For any ε > 0, let x_0 = E[c + ε1_A], a = (1/2)x_0^{−1/2}, and b = (1/2)x_0^{1/2}. Then, for any x ∈ A_+, we have
(ax + b)*(ax + b) − x = (1/4)x x_0^{−1} x + (1/4)x + (1/4)x + (1/4)x_0 − x = ((1/2)x_0^{−1/2}x − (1/2)x_0^{1/2})*((1/2)x_0^{−1/2}x − (1/2)x_0^{1/2}) = (ax − b)*(ax − b) ≥_A 0.
Thus, we have ax + b = |ax + b|_A ≥_A |x^{1/2}|_A = x^{1/2}. Therefore, we have
E[(c + ε1_A)^{1/2}] ≤_A E[a(c + ε1_A) + b] = ax_0 + b = x_0^{1/2} = E[(c + ε1_A)]^{1/2}.
Since ε > 0 is arbitrary, we have E[c^{1/2}] ≤_A E[c]^{1/2}.

Proposition 4.2 Let B > 0, let F = {f ∈ M_k | ‖f‖_k ≤ B}, and let C = ∫_Ω ‖σ_i(ω)‖_A² dP(ω). Then, we have
R̂(F, σ, x) ≤_A (B√C/n) ‖Σ_{i=1}^n k(x_i, x_i)‖_A^{1/2} I.
Proof By Lemma 4.3, we have
R̂(F, σ, x) = E[ sup_{f∈F} |(1/n) Σ_{i=1}^n f(x_i)*σ_i|_A ]
= (1/n) E[ sup_{f∈F} |⟨f, Σ_{i=1}^n φ(x_i)σ_i⟩_k|_A ]
≤ (1/n) E[ sup_{f∈F} ‖Σ_{i=1}^n φ(x_i)σ_i‖_k ‖f‖_k ]
= (B/n) E[ ‖Σ_{i=1}^n φ(x_i)σ_i‖_k ]
= (B/n) E[ ‖Σ_{i,j=1}^n σ_i* k(x_i, x_j)σ_j‖^{1/2} ]
≤ (B/n) ‖E[Σ_{i,j=1}^n σ_i* k(x_i, x_j)σ_j]‖^{1/2}
= (B/n) ‖Σ_{i=1}^n E[σ_i* k(x_i, x_i)σ_i]‖^{1/2}
≤ (B/n) ( Σ_{i=1}^n E[‖σ_i* k(x_i, x_i)σ_i‖_A] )^{1/2} I
≤ (B/n) E[‖σ_i‖_A²]^{1/2} ‖Σ_{i=1}^n k(x_i, x_i)‖_A^{1/2} I
= (B√C/n) ‖Σ_{i=1}^n k(x_i, x_i)‖_A^{1/2} I,
where the third inequality is derived by the Cauchy-Schwartz inequality (Lemma A.5).
In the following, we put A = C^{p×p}. The following lemmas are applied for the proofs of Lemmas 4.5 and 4.6.

Lemma B.1 Let a, b ∈ R^{p×p} or a, b ∈ A_+. If a ≤_A b, then tr(a) ≤ tr(b).

Proof Since b − a ∈ A_+, we have 0 ≤ tr(b − a) = tr(b) − tr(a).

Lemma B.2 Let S be a subset of A_+. Then, tr(sup_{s∈S} s) ≥ sup_{s∈S} tr(s).

Proof Let ε > 0. Then, there exists t ∈ S such that (1 − ε) sup_{s∈S} tr(s) ≤ tr(t) ≤ tr(sup_{s∈S} s). Since ε > 0 is arbitrary, we have tr(sup_{s∈S} s) ≥ sup_{s∈S} tr(s).

Lemma B.3 Let a ∈ R^{p×p}. Then, tr(a) ≤ tr(|a|_A).
Proof Let λ_1, . . . , λ_p be the eigenvalues of a, and let κ_1, . . . , κ_p be the singular values of a. Then, by Weyl's inequality, we have
tr(a) = Σ_{i=1}^p λ_i ≤ Σ_{i=1}^p |λ_i| ≤ Σ_{i=1}^p κ_i = tr(|a|_A).
We now show Lemmas 4.5 and 4.6.
Lemma 4.5 Let s 1 , . . . , s n be {−1, 1}-valued Rademacher variables (i.e. independent uniform random variables taking values in {−1, 1}) and let σ 1 , . . . , σ n be i.i.d. A-valued random variables each of whose element is the Rademacher variable. Let s = {s i } n i=1 , and z = {(x i , y i )} n i=1 . Then, we havê
R(tr G(F), s, z) ≤ L trR(F, σ, x). Proof For f 1 , f 2 ∈ F, we have tr(|f 1 (x i ) − y i | 2 A ) − tr(|f 2 (x i ) − y i | 2 A ) = tr((f 1 (x i ) − y i + f 2 (x i ) − y i ) * (f 1 (x i ) − y i − f 2 (x i ) + y i )) = f 1 (x i ) − y i + f 2 (x i ) − y i A f 1 (x i ) − y i − f 2 (x i ) + y i HS ,
where the first equality holds since for a 1 , a 2 ∈ R p×p , tr(a * 1 a 2 ) = tr(a * 2 a 1 ) and · HS is the Hilbert-Schmidt norm in C p×p . In addition, we have
f 1 (x i ) − y i + f 2 (x i ) − y i A ≤ φ(x i ), f 1 + f 2 k − 2y i A = k(x i , x i ) 1/2 f 1 + f 2 k + 2 y i A = 2(B √ D + E) = L √ 2 .
Thus, by setting
ψ i (f ) = tr(|f (x i ) − y i | 2 ), φ i (f ) = L/ √ 2f (x i ), and · = · HS in Theorem 3 of Maurer (2016), we obtain E sup f ∈F 1 n n i=1 s i tr(|f (x i ) − y i | 2 A ) ≤ √ 2 L √ 2 E sup f ∈F 1 n n i=1 σ i , f (x i ) HS ≤ LE sup f ∈F tr 1 n n i=1 f (x i ) * σ i A = L tr E sup f ∈F 1 n n i=1 f (x i ) * σ i A .
Lemma 4.6 Let z : Ω → X × Y be a random variable and let g ∈ G(F). Under the same notations and assumptions as Proposition 4.5, for any δ ∈ (0, 1), with probability ≥ 1 − δ, we have
tr( E[g(z)] − (1/n) Σ_{i=1}^n g(x_i, y_i) ) ≤ 2R̂(tr G(F), s, z) + 3p√(2D log(2/δ)/n).
Proof For a random variable S = (z 1 , . . . , z n ) : Ω → (X ×Y) n , let Φ(S) = sup g∈G(F ) tr(E[g(z)]−1/n n i=1 g(z i )). For i = 1, . . . , n, let S i = (z 1 , . . . , z n ), where z j = z j for j = i and z i = z i . Then, we have
Φ(S) − Φ(S i ) ≤ sup g∈G(F ) tr E[g(z)] − 1 n n j=1 g(z j ) − sup g∈G(F ) tr E[g(z)] − 1 n n j=1 g(z j ) ≤ 1 n sup g∈G(F ) tr n j=1 g(z j ) − n j=1 g(z j ) = 1 n sup g∈G(F ) tr(g(z i ) − g(z i )) ≤ p n sup g∈G(F ) g(z i ) − g(z i ) A ≤ 2 √ Dp n .
By McDiarmid's inequality, for any δ ∈ (0, 1), with probability ≥ 1 − δ/2, we have
Φ(S) − E[Φ(S)] ≤ √( (1/2) Σ_{i=1}^n (2√D p/n)² log(2/δ) ) = p√(2D log(2/δ)/n).
Thus, for any g ∈ G(F), we have
tr( E[g(z)] − (1/n) Σ_{i=1}^n g(z_i) ) ≤ Φ(S) ≤ E[Φ(S)] + p√(2D log(2/δ)/n).
For the remaining part, the proof is the same as that of Theorem 3.3 of Mohri et al. (2018). Since
Φ(S) = sup_{g∈G(F)} ( E[tr(g(z))] − (1/n) Σ_{i=1}^n tr(g(z_i)) ),
we replace g in the proof of Theorem 3.3 in Mohri et al. (2018) by z → tr(g(z)) in our case and derive
tr( E[g(z)] − (1/n) Σ_{i=1}^n g(x_i, y_i) ) ≤ 2E[ sup_{f∈F} (1/n) Σ_{i=1}^n s_i tr(|f(x_i) − y_i|_A²) ] + 3p√(2D log(2/δ)/n) ≤ 2R̂(tr G(F), s, z) + 3p√(2D log(2/δ)/n).
Proposition 4.7 For a_{i,j} ∈ A_1, the computational complexity for computing (G + λI)^{−1}y by direct methods for solving linear systems of equations is O(np² log p + n³p).
Proof Since all the elements of G and y are in A 1 , we have
(G + λI)^{−1}y = (F Λ_{G+λI}^{−1} F*) F Λ_y F* = F Λ_{G+λI}^{−1} Λ_y F*,
where F is the C^{p×p}-valued n × n diagonal matrix whose diagonal elements are all F. In addition, Λ_{G+λI} is the A_1-valued n × n matrix whose (i, j)-entry is Λ_{k(x_i,x_j)}, and Λ_y is the vector in A_1^n whose ith element is Λ_{y_i}. If we use the fast Fourier transform, then the computational complexity of computing Fy for y ∈ C^{p×p} is O(p² log p). Moreover, since the computational complexity of the multiplication Λ_x Λ_y for x, y ∈ A_1 is O(p), using Gaussian elimination and back substitution, the computational complexity of computing Λ_{G+λI}^{−1} Λ_y is O(n³p). As a result, the total computational complexity is O(np² log p + n³p).
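When every block of G and of y is circulant, the computation above can be sketched as follows: the DFT diagonalises all blocks at once, so applying (G + λI)^{−1} reduces to p independent n × n linear systems in the frequency domain. In the sketch the blocks are represented by their first columns (SciPy's convention; the paper's first-row circ() is the transpose and behaves the same way), and the random data is only for the consistency check.

import numpy as np
from scipy.linalg import circulant

def solve_block_circulant(G_cols, y_cols, lam):
    n, _, p = G_cols.shape
    LG = np.fft.fft(G_cols, axis=2)        # eigenvalues of every block
    Ly = np.fft.fft(y_cols, axis=1)
    Lb = np.empty_like(Ly)
    for l in range(p):                     # one n x n system per frequency
        Lb[:, l] = np.linalg.solve(LG[:, :, l] + lam * np.eye(n), Ly[:, l])
    return np.fft.ifft(Lb, axis=1)         # first columns of the solution blocks

rng = np.random.default_rng(0)
n, p, lam = 4, 8, 0.7
G_cols = rng.standard_normal((n, n, p))
y_cols = rng.standard_normal((n, p))
b_cols = solve_block_circulant(G_cols, y_cols, lam)

# consistency check against a dense block solve
G_dense = np.block([[circulant(G_cols[i, j]) for j in range(n)] for i in range(n)])
y_dense = np.vstack([circulant(y) for y in y_cols])
b_dense = np.linalg.solve(G_dense + lam * np.eye(n * p), y_dense)
assert np.allclose(np.vstack([circulant(np.real(b)) for b in b_cols]), b_dense, atol=1e-6)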
Proposition 4.9 Let a i,j ∈ A 2 whose number of nonzero elements is O(p log p). Then, the computational complexity for 1 iteration step of CG method is O(n 2 p 2 log p).
Proof The computational complexity for computing 1 iteration step of CG method is equal to that of computing (G + λI)b for b ∈ A n 2 . For b ∈ A 2 , the computational complexity of computing k(x i , x j )b is O(p 2 log p) since those of computing a i,j b and x i b are both O(p 2 log p). (For x i b, we use fast Fourier transformation.) Therefore, the computational complexity of computing (G + λI)b is O(n 2 p 2 log p).
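The statement can be made concrete with a short sketch: one CG step costs one block matrix-vector product with G + λI. The blocks below are dense placeholders, so the per-block savings that the proposition obtains from the sparsity of a_{i,j} and the FFT are not reflected here; only the structure of the iteration is.

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n, p, lam = 6, 5, 0.5
M = rng.standard_normal((n * p, n * p))
G_dense = M @ M.T                                            # SPD, so plain CG applies
blocks = G_dense.reshape(n, p, n, p).transpose(0, 2, 1, 3)   # blocks[i, j] is p x p

def matvec(v):
    V = v.reshape(n, p)
    out = np.einsum('ijab,jb->ia', blocks, V) + lam * V      # (G + lam*I) v, blockwise
    return out.ravel()

A = LinearOperator((n * p, n * p), matvec=matvec)
y = rng.standard_normal(n * p)
b, info = cg(A, y)
assert info == 0 and np.allclose(G_dense @ b + lam * b, y, atol=1e-3)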
Proposition 5.1 For a_1, . . . , a_L, b_1, . . . , b_L ∈ A_1 and σ_1, . . . , σ_L : A_1 → A_1, each of which has an expansion σ_j(x) = Σ_{l=1}^∞ α_{j,l} x^l with α_{j,l} ≥ 0, let k̂ : X × X → A_1 be defined as
k̂(x, y) = σ_L(b_L* b_L + σ_{L−1}(b_{L−1}* b_{L−1} + · · · + σ_2(b_2* b_2 + σ_1(b_1* b_1 + x* a_1* a_1 y) a_2* a_2) · · · a_{L−1}* a_{L−1}) a_L* a_L). (3)
Then, k̂ is an A_1-valued positive definite kernel.
Proof Let l : X × X → A_1 be an A_1-valued positive definite kernel and σ : A_1 → A_1 be a map that has an expansion σ(x) = Σ_{j=1}^∞ α_j x^j with α_j ≥ 0. Then, σ ∘ l is also an A_1-valued positive definite kernel. Indeed, for d_1, . . . , d_n ∈ A_1 and x_1, . . . , x_n ∈ X, we have
Σ_{i,j=1}^n d_i* σ(l(x_i, x_j)) d_j = Σ_{i,j=1}^n Σ_{s=1}^∞ α_s d_i* l(x_i, x_j)^s d_j ≥_{A_1} 0.
Since (x, y) → b * 1 b 1 + x * a * 1 a 1 y is an A 1 -valued positive definite kernel, (x, y) → σ 1 (b * 1 b 1 + x * a * 1 a 1 y) is also an A 1 -valued positive definite kernel. Moreover, since σ 1 (b * 1 b 1 +x * a * 1 a 1 y) and b 2 are in A 1 , (x, y) → b * 2 b 2 +σ 1 (b * 1 b 1 + x * a * 1 a 1 y) is also an A 1 -valued positive definite kernel. We iteratively apply the above result and obtain the positive definiteness ofk.
Proposition 5.2 The A_1-valued positive definite kernel k̂ defined as Eq. (3) is composed of the sum of A_1-valued polynomial kernels.
Proof Since (x, y) → b_1* b_1 + x* a_1* a_1 y is an A_1-valued polynomial kernel and σ : A_1 → A_1 is a map that has an expansion σ(x) = Σ_{j=1}^∞ α_j x^j, k̂ is composed of the sum of A_1-valued polynomial kernels.
Proposition 5.4
Define k̂ : X × X → A_1 as
k̂(x, y) = ∫_R ∫_{R^m} c_x(ω, η)* c_y(ω, η) dλ_β(ω) dλ_σ(η),   (6)
where dλ_β(ω) = β e^{−β²‖ω‖²/2} dω for β > 0 and
c_x(ω, η) = circ( |x(z_1)| e^{√−1 ω·z_1} e^{√−1 η·x(z_1)}, · · · , |x(z_p)| e^{√−1 ω·z_p} e^{√−1 η·x(z_p)} ),
for x ∈ X, ω ∈ R^m, and η ∈ R. Then, k̂ is an A_1-valued positive definite kernel, and for any l = 1, . . . , p, k̃ is written as
k̃(x, y) = (1/p) Σ_{i,j=1}^p k̂(x, y)_{i,j} = Σ_{j=1}^p k̂(x, y)_{l,j},
where k̂(x, y)_{i,j} is the (i, j)-entry of k̂(x, y).
Proof The positive definiteness of k̂ is trivial. As for the relationship between k̂ and k̃, we have
k̂(x, y)_{i,j} = ∫_R ∫_{R^m} Σ_{l=1}^p |x(z_{p−i+2+l})| e^{−√−1 ω·z_{p−i+2+l}} e^{−√−1 η·x(z_{p−i+2+l})} × |y(z_{p−j+2+l})| e^{√−1 ω·z_{p−j+2+l}} e^{√−1 η·y(z_{p−j+2+l})} dλ_β(ω) dλ_σ(η).

Proposition 5.9 The A_2-valued map k defined as Eq. (8) is an A_2-valued positive definite kernel.
Proof For n ∈ N, c_1, . . . , c_n ∈ A_2, and x_1, . . . , x_n ∈ A_1^d, we have
Σ_{i,j=1}^n c_i* k(x_i, x_j) c_j = ∫_R ∫_{R^m} ( Σ_{i=1}^n Σ_{z∈Ω} c_i* a_1* x_i(z) a_2* b(z, ω)* a_3* x̃_i(z, η)* a_4* ) ( Σ_{j=1}^n Σ_{z'∈Ω} a_4 x̃_j(z', η) a_3 b(z', ω) a_2 x_j(z') a_1 c_j ) dλ_β(ω) dλ_σ(η),
which is positive semi-definite.
Figure 1: Representing samples in RKHM. Figure 2: Product in A_1 and A_2 in Example 3. Figure 3: Comparison of RKHM with vvRKHS.

Remark 4.10 If we do not use the structure of A_1, then the computational complexities in Propositions 4.7, 4.8, and 4.9 are O(n³p³), O(n²p³), and O(n²p³), respectively.

(. . . , e^{−√−1 ω·x(ψ(z,z_p))}) ∈ A_2, b(z, ω) = diag(e^{−√−1 ω·ψ(z,z_1)}, . . . , e^{−√−1 ω·ψ(z,z_p)}) ∈ A_2.

Figure 4: Mean test error versus hyperparameters (mean value ± standard deviation of 5 runs). Figure 5: Comparison between RKHM and CNN. Figure 6: Mean test error versus the number of epochs (mean value ± standard deviation of 5 runs).
|x(z_{l+j−i})| |ỹ(z_l)| e^{−(1/(2β²)) |x(z_{l+j−i}) − ỹ(z_l)|²} e^{−(1/(2σ²)) ‖z_{l+j−i} − z_l‖²} = k̃(x, y).
Table 1: Comparison between an RKHM and vvRKHSs (mean value ± standard deviation of 5 runs).
See Hashimoto et al. (2021, Appendix B) for a rigorous definition.
Acknowledgements
Hachem Kadri is partially supported by grant ANR-19-CE23-0011 from the French National Research Agency. Masahiro Ikeda is partially supported by grant JPMJCR1913 from JST CREST.
Aizerman, M. A., Braverman, E. M., and Rozonoer, L. Theoretical foundations of the potential function method in pattern recognition learning. Automation and Remote Control, 25:821-837, 1964.
Alvarez, M. A., Rosasco, L., Lawrence, N. D., et al. Kernels for vector-valued functions: A review. Foundations and Trends in Machine Learning, 4(3):195-266, 2012.
Aronszajn, N. Theory of reproducing kernels. Transactions of the American Mathematical Society, 68(3):337-404, 1950.
Audiffren, J. and Kadri, H. Stability of multi-task kernel regression algorithms. In Proceedings of the 5th Asian Conference on Machine Learning (ACML), pp. 1-16, 2013.
Boser, B. E., Guyon, I. M., and Vapnik, V. N. A training algorithm for optimal margin classifiers. In Proceedings of the 5th Annual Workshop on Computational Learning Theory (COLT), pp. 144-152, 1992.
Brouard, C., Szafranski, M., and d'Alché Buc, F. Input output kernel regression: Supervised and semi-supervised structured output prediction with operator-valued kernels. Journal of Machine Learning Research, 17:1-48, 2016.
Caponnetto, A. and De Vito, E. Optimal rates for the regularized least-squares algorithm. Foundations of Computational Mathematics, 7(3):331-368, 2007.
Chanda, B. and Majumder, D. D. Digital Image Processing and Analysis. PHI Learning, 2nd edition, 2011.
Christmann, A. and Steinwart, I. Support Vector Machines. Springer, 2008.
Gray, R. M. Toeplitz and circulant matrices: A review. Foundations and Trends in Communications and Information Theory, 2(3):155-239, 2006.
Hashimoto, Y., Ishikawa, I., Ikeda, M., Komura, F., Katsura, T., and Kawahara, Y. Reproducing kernel Hilbert C*-module and kernel mean embeddings. Journal of Machine Learning Research, 22(267):1-56, 2021.
Hashimoto, Y., Wang, Z., and Matsui, T. C*-algebra net: a new approach generalizing neural network parameters to C*-algebra. In Proceedings of the 39th International Conference on Machine Learning (ICML), 2022.
Heo, J. Reproducing kernel Hilbert C*-modules and kernels associated with cocycles. Journal of Mathematical Physics, 49(10):103507, 2008.
Hestenes, M. R. and Stiefel, E. Methods of conjugate gradients for solving linear systems. Journal of Research of the National Bureau of Standards, 49:409-436, 1952.
Huusari, R. and Kadri, H. Entangled kernels - beyond separability. Journal of Machine Learning Research, 22(24):1-40, 2021.
Kadri, H., Duflos, E., Preux, P., Canu, S., Rakotomamonjy, A., and Audiffren, J. Operator-valued kernels for learning from functional response data. Journal of Machine Learning Research, 17(20):1-54, 2016.
Kirillov, A. A. Elements of the Theory of Representations. Springer, 1976.
Laforgue, P., Lambert, A., Brogat-Motte, L., and d'Alché-Buc, F. Duality in RKHSs with infinite dimensional outputs: Application to robust losses. In Proceedings of the 37th International Conference on Machine Learning (ICML), 2020.
Lance, E. C. Hilbert C*-modules - a Toolkit for Operator Algebraists. London Mathematical Society Lecture Note Series, vol. 210. Cambridge University Press, 1995.
LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
Li, Z., Liu, F., Yang, W., Peng, S., and Zhou, J. A survey of convolutional neural networks: Analysis, applications, and prospects. IEEE Transactions on Neural Networks and Learning Systems, 2021.
Mairal, J. End-to-end kernel learning with supervised convolutional kernel networks. In Proceedings of the Advances in Neural Information Processing Systems 29 (NIPS), 2016.
Mairal, J., Koniusz, P., Harchaoui, Z., and Schmid, C. Convolutional kernel networks. In Proceedings of the Advances in Neural Information Processing Systems 27 (NIPS), 2014.
Manuilov, V. M. and Troitsky, E. V. Hilbert C*- and W*-modules and their morphisms. Journal of Mathematical Sciences, 98(2):137-201, 2000.
Maurer, A. A vector-contraction inequality for Rademacher complexities. In Proceedings of the 27th International Conference on Algorithmic Learning Theory (ALT), 2016.
Micchelli, C. A. and Pontil, M. On learning vector-valued functions. Neural Computation, 17(1):177-204, 2005.
Minh, H. Q., Bazzani, L., and Murino, V. A unifying framework in vector-valued reproducing kernel Hilbert spaces for manifold regularization and co-regularized multi-view learning. Journal of Machine Learning Research, 17(25):1-72, 2016.
Mohri, M., Rostamizadeh, A., and Talwalkar, A. Foundations of Machine Learning. MIT Press, 2018.
Moslehian, M. S. Vector-valued reproducing kernel Hilbert C*-modules. Complex Analysis and Operator Theory, 16(1):Paper No. 2, 2022.
Murphy, G. J. C*-Algebras and Operator Theory. Academic Press, 1990.
Murphy, K. P. Machine Learning: A Probabilistic Perspective. The MIT Press, 2012.
Rahimi, A. and Recht, B. Random features for large-scale kernel machines. In Proceedings of the Advances in Neural Information Processing Systems 20 (NIPS), 2007.
Sangnier, M., Fercoq, O., and d'Alché-Buc, F. Joint quantile regression in vector-valued RKHSs. In Proceedings of the Advances in Neural Information Processing Systems 29 (NIPS), 2016.
Schölkopf, B. and Smola, A. J. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2002.
Schölkopf, B., Herbrich, R., and Smola, A. J. A generalized representer theorem. In Proceedings of the 14th Annual Conference on Computational Learning Theory (COLT), 2001.
Shawe-Taylor, J. and Cristianini, N. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
Sindhwani, V., Minh, H. Q., and Lozano, A. C. Scalable matrix-valued kernel learning for high-dimensional nonlinear multivariate regression and Granger causality. In Proceedings of the 29th Conference on Uncertainty in Artificial Intelligence (UAI), 2013.
Trefethen, L. N. and Bau, D. Numerical Linear Algebra. SIAM, 1997.
Van Loan, C. Computational Frameworks for the Fast Fourier Transform. SIAM, 1992.
| []
|
[
"Ballot stuffing and participation privacy in pollsite voting",
"Ballot stuffing and participation privacy in pollsite voting"
]
| [
"Prashant Agrawal [email protected] \nIIT Delhi\n\n",
"Abhinav Nakarmi [email protected] \nAshoka University\n\n",
"Mahabir Prasad Jhanwar [email protected] \nAshoka University\n\n",
"Subodh Sharma \nIIT Delhi\n\n",
"Subhashis Banerjee \nAshoka University\n\n"
]
| [
"IIT Delhi\n",
"Ashoka University\n",
"Ashoka University\n",
"IIT Delhi\n",
"Ashoka University\n"
]
| []
| We study the problem of simultaneously addressing both ballot stuffing and participation privacy for pollsite voting systems. Ballot stuffing is the attack where fake ballots (not cast by any eligible voter) are inserted into the system. Participation privacy is about hiding which eligible voters have actually cast their vote. So far, the combination of ballot stuffing and participation privacy has been mostly studied for internet voting, where voters are assumed to own trusted computing devices. Such approaches are inapplicable to pollsite voting where voters typically vote bare handed. We present an eligibility audit protocol to detect ballot stuffing in pollsite voting protocols. This is done while protecting participation privacy from a remote observer -one who does not physically observe voters during voting. Our protocol can be instantiated as an additional layer on top of most existing pollsite E2E-V voting protocols. To achieve our guarantees, we develop an efficient zero-knowledge proof (ZKP), that, given a value v and a set Φ of commitments, proves v is committed by some commitment in Φ, without revealing which one. We call this a ZKP of reverse set membership because of its relationship to the popular ZKPs of set membership. This ZKP may be of independent interest. | 10.48550/arxiv.2210.14833 | [
"https://export.arxiv.org/pdf/2210.14833v1.pdf"
]
| 253,116,665 | 2210.14833 | 8933f9bcbf306fabbcdf25a9c781acfd650f9361 |
Ballot stuffing and participation privacy in pollsite voting
Prashant Agrawal [email protected]
IIT Delhi
Abhinav Nakarmi [email protected]
Ashoka University
Mahabir Prasad Jhanwar [email protected]
Ashoka University
Subodh Sharma
IIT Delhi
Subhashis Banerjee
Ashoka University
Ballot stuffing and participation privacy in pollsite voting
We study the problem of simultaneously addressing both ballot stuffing and participation privacy for pollsite voting systems. Ballot stuffing is the attack where fake ballots (not cast by any eligible voter) are inserted into the system. Participation privacy is about hiding which eligible voters have actually cast their vote. So far, the combination of ballot stuffing and participation privacy has been mostly studied for internet voting, where voters are assumed to own trusted computing devices. Such approaches are inapplicable to pollsite voting where voters typically vote bare handed. We present an eligibility audit protocol to detect ballot stuffing in pollsite voting protocols. This is done while protecting participation privacy from a remote observer -one who does not physically observe voters during voting. Our protocol can be instantiated as an additional layer on top of most existing pollsite E2E-V voting protocols. To achieve our guarantees, we develop an efficient zero-knowledge proof (ZKP), that, given a value v and a set Φ of commitments, proves v is committed by some commitment in Φ, without revealing which one. We call this a ZKP of reverse set membership because of its relationship to the popular ZKPs of set membership. This ZKP may be of independent interest.
Introduction
Conducting large, binding elections in a fair and free manner is hard. Many existing end-to-end verifiable (E2E-V) voting schemes address this problem by providing cryptographic guarantees to any voter that their vote was correctly counted as they intended [2,18,19,35,14,10]. However, these systems generally rely on polling officers and traditional processes to ensure that only eligible voters had voted. This poses a serious concern as an adversary controlling the polling booth could launch a ballot stuffing attack, by letting ineligible voters vote, letting eligible voters vote more than once, or injecting fake votes in lieu of voters who did not show up for voting. Ballot stuffing is considered one of the most serious attacks against pollsite voting systems [29,17,31]. As such attacks are often seen in situations of a booth capture or when one party dominates the polling booth, simple solutions such as deploying multiple polling agents to oversee the eligibility verification process may not always suffice.
While detection of ballot stuffing is essentially about verifying that only eligible voters had voted, a conflicting requirement is that of protecting participation privacy [30], i.e., which eligible voters had actually voted. Although local observers physically observing voters on voting day necessarily know who voted and who did not, a cryptographic voting protocol should not introduce new ways of leaking this information to a remote observer. Widely publishing the list of participating voters, as many existing E2E-V protocols do, makes large-scale systematic targeting for coercion and profiling much easier. Concretely, it runs the risk of large-scale forced abstention attacks [28], whereby voters are forced to abstain from voting, either under duress or in exchange for money, similar to how they may be forced to vote for a given candidate. Further, since voter identifiers could be linked to a host of other sensitive information, participation information can be used for profiling and selective targeting (e.g., by ignoring communities with poor voter turnout during policymaking). Thus, often the very act of voting or abstaining is considered an aspect of voter privacy and publishing voter participation information is often disallowed [24,1].
In this paper, we propose an eligibility audit protocol for a pollsite voting system that allows a public auditor to detect whether ballot stuffing has happened or not, while protecting participation privacy from a remote observer. Although this problem has been studied in the context of internet voting [28,5,36,23,20,6,30], these solutions are not applicable to pollsite voting because they assume that voters have access to trusted computing devices -for computing signatures, NIZK proofs, etc. In pollsite voting, voters typically vote bare-handed [35,18,19,2] and no devices are trusted either for correctness or secrecy of cast votes. Our goal is specific to such bare-handed pollsite voting protocols. In fact, we want to add the eligibility audit capability as a modular layer on top of any existing pollsite E2E-V voting protocol such as [35,18,19,2].
Our contributions
Towards this end, we make the following main contributions.
First, we propose a modular protocol structure for an eligibility audit protocol on top of any pollsite E2E-V voting protocol. We require the E2E-V protocol to be such that the vote casting process produces an encrypted vote (a voter receipt), encrypted votes are published on a public bulletin board and are verifiably decrypted to obtain the final tally. To add the eligibility audit capabilities, we introduce the notion of a token, which acts as an identifier for a real-world voter but hides the voter's true identity. Tokens are issued to voters by a registrar during a registration step and are embedded in a voting access card, which voters need to present to a polling officer to cast their vote. Importantly, the polling officer cannot inject a vote without access to this information. After verifying the voter's identity card, the polling officer allows them to cast a vote in a private booth and scans the voter receipt obtained from the underlying E2E-V protocol. The polling officer uploads the receipt (its encrypted vote) to a backend teller along with the voter's token extracted from the voter's access card. Post polling, the teller publishes the list of encrypted votes along with the corresponding tokens and engages with a public auditor in an audit protocol to prove in zero-knowledge that no ballot stuffing took place. The recorded encrypted votes are further processed as per the underlying E2E-V scheme's tallying protocol.
Although the model requires voters to safekeep their voting access card until they cast their vote, the card design makes it robust against lapses in this safekeeping. First, losing the physical card does not disenfranchise the voter as the card's information can readily be copied on auxiliary media and presented to the polling officer. Second, losing the information on the card to another voter does not transfer the right to vote to the other voter because the card's information is cryptographically tied to the card owner's identity. To bypass this, the other voter would also need to corrupt the identity verification process at the polling booth. In this way, we provide protection against ballot stuffing if at least one of the voter's card and the polling booth is uncorrupted. Third, losing the information on the card does not reveal to a remote observer whether the card owner voted or not.
Second, we develop formal security definitions for ballot stuffing and participation privacy for pollsite voting. To the best of our knowledge, both these notions have not been formalised before in the context of pollsite voting. Our definition of ballot stuffing captures that the number of published encrypted votes should be at most the number of registered voters who had cast their vote plus the number of registered voters who leaked their voting access card. Ballot stuffing post decryption of the published encrypted votes already comes under the purview of verifiability definitions for E2E-V voting protocols [21]. For participation privacy, we adapt a standard definition by Bernhard et al. [13] to pollsite voting.
Third, we develop a novel cryptographic primitive called a ZKP of reverse set membership to instantiate a concrete eligibility audit protocol. Given a group G of prime order q with generators g, h, a set Φ of Pedersen commitments in group G and a value v ∈ Z q , a ZKP of reverse set membership is a zero-knowledge proof of knowledge of an r ∈ Z q such that g v h r ∈ Φ. The interesting property is that when the proof is requested for k values against the same set Φ, it provides an amortised O(|Φ| + k) time complexity instead of O(|Φ|k), making it suitable for large elections. The ZKP of reverse set membership is named as such because of its relationship to the popular ZKPs of (forward) set membership, which, given a commitment C ∈ G and a set φ of values in Z q , proves in zero-knowledge the knowledge of an opening (v, r) such that C = g v h r and v ∈ φ. This ZKP may be of independent interest.
Fourth, we provide benchmarks for our ZKP of reverse set membership for n = 10^6, modelling an election of 10^6 voters in the worst case when each voter participates.
Related work
Most pollsite E2E-V voting schemes [2,18,19,35,14,10] consider the question of ballot stuffing outside their scope and trust traditional polling processes for eligibility verification.
There exist many internet voting protocols that provide both eligibility verifiability and participation privacy [28,5,36,23,20,6,30]. In all these protocols, each voter has access to a trusted computing device that generates a ciphertext encrypting the voter's vote and a non-interactive zero-knowledge proof establishing the eligibility of the sender. Adapting these protocols to the pollsite setting is non-trivial, because a) voters are bare-handed and b) delegating the computation of these ciphertexts/proofs to other devices/agents at the polling booth is problematic because it would leak the voters' vote to them. Akinyokun and Teague [4] proposed a pollsite protocol with similar goals as ours, of simultaneously preventing both ballot stuffing and forced abstention attacks. In their scheme, voters (who voted as well as who abstained) can verify for themselves whether their attendance was recorded correctly or not. However, as the authors point out, the scheme does not offer dispute resolution: the voter cannot prove to anyone else that a fake ballot was inserted against their identity. Dispute resolution is important to ensure that voters do not falsely claim ballot stuffing to discredit the election. In contrast, in our approach, the act of ballot stuffing can be publicly detected by anyone. See Appendix A for a more detailed description.
Prior work related to the ZKP of reverse set membership
Cramer et al. [22] gave a generic method to construct Σ-protocols for proving knowledge of a witness to one of many statements without revealing which one. Our ZKP can be cast in this framework (as PK{(r) : C_1 = g^v h^r ∨ · · · ∨ C_n = g^v h^r}, where Φ := {C_1, . . . , C_n}). However, it leads to an O(|Φ|k) complexity when the proof is requested for k values of v, even when the set of commitments Φ remains the same.
Groth and Kohlweiss [27] gave a scheme where given a list C 1 , . . . , C n of commitments, the prover proves knowledge of an r such that one of the commitments opens to v = 0. The scheme improves the communication complexity over Cramer et al.'s generic construction to O(log(n)). Our ZKP can be considered as a generalisation of this scheme for any v ∈ Z q with amortised linear complexity for multiple proofs against the same list of commitments.
In the blockchain literature, Zerocash [11] uses a primitive similar to the reverse set membership proof, but it is based on a general circuit-based NIZK construction, whereas ours is a discrete logarithm based interactive proof construction.
Formal security definitions
In this section, we describe our formal security definitions for ballot stuffing and participation privacy. We begin by formally describing the protocol structure of a pollsite eligibility audit protocol.
Protocol structure
Our eligibility audit protocol has the following participants: a registrar R, a set of voters V, a polling officer P , a teller T and an auditor A. We assume that each voter V id ∈ V is identified by a unique real-world identifier id. The protocol structure is specified in terms of an underlying pollsite E2E-V voting protocol Π E2E-V . We assume that Π E2E-V provides the following sub-protocols: a) a vote casting protocol ev ← Π E2E-V .Cast(v) that takes as input a voter's intended vote v and outputs an encrypted voter receipt ev; and b) a tallying protocol Π E2E-V .Tally that takes as input a list of published encrypted votes and outputs the final tally.
Formally, a pollsite eligibility audit protocol is defined by sub-protocols (Setup, Register, Cast, Publish, Audit), where (see Figure 1):
pk X , sk X ← Setup(X 1 λ ) is a protocol run by each X ∈ {R, P, T } to generate its public key pk X and secret key sk X . All subsequent algorithms implicitly obtain pk X even though we will sometimes omit it for brevity. t, va, pub ← Register(R sk R , id ) is an algorithm run by registrar R after manually verifying the identity and eligibility of voter V id . The algorithm's input is R's secret key sk R and the voter's identifier id. Its output is a secret token t, some voting access information va (to be embedded as a QR-code in a voting access card and given to V id ); and pub, containing a public identifier and a proof of registration of V id , to be published to a public bulletin board BB 0 at index id. We call the voters whose identifiers are published on BB 0 the officially registered (or eligible) voters. -cvr, ev ← Cast Π E2E-V .Cast (P sk P , V va, v ) is a protocol between the polling officer P and a voter V ∈ V for vote casting. P 's private input is its secret key sk P ; V 's private input is its voting access information va (embedded in the access card) and its intended vote v. P verifies the voter's identity and the presented card information and allows V to cast her vote in a private booth as per the underlying E2E-V voting scheme's Π E2E-V .Cast protocol. P scans the encrypted vote ev from the voter receipt generated during the Π E2E-V .Cast protocol. With the information gathered so far, P obtains a cast vote record cvr that it sends to the teller T via an internal secure channel. V obtains the physical receipt ev.
-[(t j , ev j )] |CVR| j=1 , SEC ← Publish(T sk T , CVR ) is an algorithm run by T . The algorithm's input is T 's secret key sk T and a set CVR of cast vote records received from P during the Cast protocol. Its output is an array whose j th element contains a token t j and an encrypted vote ev j , to be published to another bulletin board BB 1 by T , and a derived secret information SEC, to be used by T in the Audit protocol later. ev j 's can be matched against voter receipts and are further processed as per the underlying E2E-V scheme's Π E2E-V .Tally protocol.
accept/reject ← Audit(BB 0 , BB 1 , T SEC , A ) is a protocol between the teller T and the auditor A. The common input is the information published to BB 0 and BB 1 ; T 's private input is the secret SEC obtained from the Publish algorithm. At the end of the protocol, A outputs accept or reject.
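The shapes of these inputs and outputs can be summarised by the following interface sketch; the class and field names are ours and all method bodies are omitted, so this is only a restatement of the signatures above in code form.

from dataclasses import dataclass
from typing import Protocol

@dataclass
class Registration:           # output of Register
    token: bytes              # secret token t (held by R; embedded in the voter's card)
    voting_access: bytes      # va, embedded as a QR code in the access card
    public_record: bytes      # pub, posted to BB_0 at index id

@dataclass
class CastResult:             # output of Cast
    cast_vote_record: bytes   # cvr, sent from P to T over the internal secure channel
    encrypted_vote: bytes     # ev, the voter's physical receipt

class EligibilityAuditProtocol(Protocol):
    def setup(self, party: str) -> tuple[bytes, bytes]: ...                  # (pk_X, sk_X)
    def register(self, sk_R: bytes, voter_id: str) -> Registration: ...
    def cast(self, sk_P: bytes, voting_access: bytes, vote: str) -> CastResult: ...
    def publish(self, sk_T: bytes, cvrs: list[bytes]) -> tuple[list[tuple[bytes, bytes]], bytes]: ...  # (BB_1, SEC)
    def audit(self, bb0: list[bytes], bb1: list[tuple[bytes, bytes]], sec: bytes) -> bool: ...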
Ballot stuffing
We now explain our formal definition of ballot stuffing. Intuitively, we say that ballot stuffing has happened in an election if the number of published votes is greater than the number of registered voters participating in vote casting. 6 However, since the token embedded in a voter's card acts as their voting access credential, if the voter's card information is leaked, technically a vote can be published even for a voter who did not vote (provided the traditional identity and eligibility verification processes at the polling booth are also compromised). Further, the registrar may try to secretly generate tokens for some voters without officially registering them on BB 0 . Nevertheless, the adversary should be able to produce at most as many votes as the number of officially registered voters who participated in vote casting plus the number of officially registered voters who leaked their voter card. This is the notion our definition captures (see Definition 1). As shown in experiment Exp PBS in Figure 2a, the experimenter generates public/private keys for registrar R, modelling that the registrar is (at least partially) trusted. A minimum level of trust on the registrar is inevitable because the mapping of a human voter to a digital artifact such as a token requires manual intervention. The adversary A controls P and T and supplies their public keys. After this, A is allowed to run the entire election and populate bulletin boards BB 0 and BB 1 using oracles OReg, OLeak and OCast, respectively for registering a voter, obtaining a voter's access card information and casting a vote. We keep track of the tokens for all officially registered voters in set RT and the registered tokens leaked to A, either during vote casting or otherwise, in set RLT. A wins if it makes the Audit protocol pass (playing the role of T ) while having produced more number of entries on BB 1 than the total number of leaked tokens belonging to registered voters, i.e., the size of RLT.
In more detail, the OReg oracle allows A to register a voter with identifier id. It also expects A to supply a bit b pub indicating whether to officially publish id on BB 0 or not. If b pub = 1, the proof of registration pub id is published to BB 0 and the generated token is added to set RT. Note that A only obtains the publicly posted information and not the voter's card information va id , modelling that honest registered voters must keep their voting cards secure.
The OCast oracle allows A to model the vote casting process. Since A controls both P and T , the voter's card information necessarily gets leaked to the adversary during vote casting. In addition, A is also allowed to call the OLeak oracle, which models the leakage of voter card information outside of vote casting. The set of all leaked tokens are recorded in RLT. The size of set RLT thus represents the maximum number of encrypted votes A could report, without getting caught by the auditor.
Note that the size of RLT is updated only when the registered voters' tokens are leaked, to discount leakages caused for unofficially registered voters. Further, it does not get updated if the OCast or OLeak oracles are called multiple times for the same id (the former modelling that each voter should be allowed to cast only one vote and the latter modelling that each leaked token is counted only once). Definition 1 (Prevention against ballot stuffing). We say that an eligibility audit protocol Π elg := (Setup, Register, Cast, Publish, Audit) prevents ballot stuffing if for all PPT adversaries A and for all security parameters λ ∈ N, there exists a negligible function negl such that:
Pr[Exp_PBS^A(1^λ) = 1] ≤ negl(1^λ),
where Exp_PBS^A(1^λ) is as defined in Figure 2a.

Figure 2: The security experiments. (a) Ballot stuffing:

Exp_PBS^A(1^λ):
  pk_R, sk_R ← Setup(R⟨1^λ⟩)
  pk_P, pk_T ← A(1^λ)
  initialise BB_0
  RT, RLT := ∅
  BB_1 ← A^{OReg, OCast, OLeak}()
  result ← Audit(BB_0, BB_1, A(), A)
  return 1 if result = accept and |BB_1| > |RLT|

OReg(id, b_pub):
  t_id, va_id, pub_id ← Register(R⟨sk_R, id⟩)
  if b_pub = 1:
    RT ← RT ∪ {t_id}
    append (id, pub_id) to BB_0
  return pub_id

OCast(id, v):
  if t_id ∈ RT:
    RLT ← RLT ∪ {t_id}
  cvr_id, ev_id ← Cast(A(), V⟨va_id, v⟩)
  return (cvr_id, ev_id)

OLeak(id):
  if t_id ∈ RT:
    RLT ← RLT ∪ {t_id}
  return va_id

(b) Participation privacy:

Exp_PP^A(1^λ, id_0, id_1, b):
  for each X ∈ {R, P, T}:
    pk_X, sk_X ← Setup(X⟨1^λ⟩)
  CVR^0, CVR^1 := ∅
  initialise BB_0
  A^{OReg, OCast, OCastChal}(1^λ, pk_R, pk_P, pk_T), such that OReg(id_0) and OReg(id_1) must be called
  BB_1^0, SEC^0 := Publish(T⟨sk_T, CVR^0⟩)
  BB_1^1, SEC^1 := Publish(T⟨sk_T, CVR^1⟩)
  Audit(BB_0, BB_1^b, T⟨SEC^b⟩, A())
  b' ← A()
  return b'

OReg(id):
  t_id, va_id, pub_id ← Register(R⟨sk_R, id⟩)
  append (id, pub_id) to BB_0
  return (id, va_id, pub_id)

OCast(id, va, v):
  if id ∉ {id_0, id_1}:
    cvr_id, ev_id ← Cast(P⟨sk_P⟩, V⟨va, v⟩)
    CVR^0 := CVR^0 ∪ {cvr_id}
    CVR^1 := CVR^1 ∪ {cvr_id}
  return ev_id

OCastChal(v*):
  cvr*_0, ev*_0 ← Cast(P⟨sk_P⟩, V⟨va_{id_0}, v*⟩)
  cvr*_1, ev*_1 ← Cast(P⟨sk_P⟩, V⟨va_{id_1}, v*⟩)
  CVR^0 := CVR^0 ∪ {cvr*_0}
  CVR^1 := CVR^1 ∪ {cvr*_1}
  return ev*_b
Participation privacy
For the formal modelling of participation privacy, we adapt the definition given by Bernhard et al. [13] to pollsite voting (the definition was given for an internet voting protocol called KTV-Helios [30]). The basic idea of this definition is that participation privacy is maintained if an adversary that corrupts all voters except only two, say V id 0 and V id 1 , cannot distinguish between the world where V id 0 votes and V id 1 abstains and the world where V id 1 votes and V id 0 abstains. We capture this idea in Definition 2. As shown in experiment Exp A PP in Figure 2b, we consider a remote and external adversary A for participation privacy that does not corrupt any of R, P or T . A uses the OReg oracle to register a voter, but obtains their voting access card. It can do so even for voters V id 0 and V id 1 , modelling that even forcing a voter to surrender its card should not affect A's belief about whether the voter voted or not (A must register both V id 0 and V id 1 for the experiment to proceed).
After this, A can use the OCast oracle to cast votes for corrupted voters by supplying their identifier, access card information and the vote value. At some point, A calls the OCastChal oracle once to ask the experimenter to cast a vote for either V id 0 or V id 1 (A's challenge would be to find out which one). For this call, A also supplies the vote that the challenge voter casts, to discount the case when the final tally itself reveals who voted. The experimenter keeps track of the cast vote records corresponding to both possible choices of the challenge voter and sends BB b 1 to A, where bit b corresponds to the world in which V id b votes and V id 1−b abstains. Finally, the experimenter engages with A in the Audit protocol, where A plays the role of the auditor. A wins if it can output a bit b such that b is the correct guess of b.
Note that we do not need to reveal the decryption of BB b 1 to A (as revealed by the Π E2E-V .Tally protocol) because the tally in both the worlds would be exactly the same. Also note that in all of OCast/OCastChal oracle calls, A only obtains the voter receipt ev id and not the internal cast vote record cvr id . This is consistent with our assumption of a remote and external adversary.
We believe that a remote and external adversary (that does not corrupt any of the polling officer, the teller or the registrar) is sufficient to model the threat of large-scale systematic targeting using participation information. Note that our definition does not assume voters to be honest and protects against malicious voters trying to prove to a remote coercer (e.g., by revealing their voter card information) that they did not vote. This is because the only way an abstaining voter interacts with the election system is when it obtains its access card, which the definition allows the coercer to obtain. Thus, the view of the coercer interacting with a voter who really abstained is identical to its view interacting with a voter that tries to evade coercion by casting its vote normally, hiding its receipts and claiming that it abstained.
Definition 2 (Participation privacy). We say that an eligibility audit protocol Π elg := (Setup, Register, Cast, Publish, Audit) protects participation privacy if for all PPT adversaries A, for all security parameters λ ∈ N, and any two identifiers id 0 , id 1 , there exists a negligible function negl such that:
|Pr[Exp_PP^A(1^λ, id_0, id_1, 0) = 1] − Pr[Exp_PP^A(1^λ, id_0, id_1, 1) = 1]| ≤ negl(1^λ),
where Exp_PP^A(1^λ, id_0, id_1, b) for b ∈ {0, 1} is as defined in Figure 2b.
Our eligibility audit protocol
Before we describe our proposed eligibility audit protocol, we define some notation and recall key cryptographic primitives we need.
Preliminaries
Notation. Let λ be a security parameter; n be a constant denoting the number of eligible voters and q be a prime of length exponential in λ. Let G_1, G_2, G_T denote cyclic groups of prime order q such that they admit an efficiently computable bilinear map e : G_1 × G_2 → G_T, i.e., for all a, b ∈ Z_q and generators g_1, g_2 of G_1 and G_2 respectively, e(g_1^a, g_2^b) = e(g_1, g_2)^{ab} and e(g_1, g_2) ≠ 1_{G_T}, where 1_{G_T} denotes the identity element of G_T. We assume that the n-Strong Diffie-Hellman assumption [15] holds in groups (G_1, G_2) and that the discrete logarithm problem is hard in group G_1. We let f_1, g_1, h_1 ∈ G_1, f_2, g_2 ∈ G_2 and f_T ∈ G_T denote randomly chosen generators of groups G_1, G_2 and G_T respectively. It is assumed that these generators are generated securely before the protocol starts so that nobody knows the mutual discrete logarithms of f_1, g_1, and h_1. This can be done by obtaining the generators from the output of a hash function, modelled as a random oracle, on some unpredictable input. We let Perm(n) denote the set of permutation functions with domain and range {1, . . . , n}.
Pedersen commitments. The quantity g_1^v h_1^r is a Pedersen commitment [34] to a value v ∈ Z_q in group G_1 under randomness r ∈ Z_q. Pedersen commitments are computationally binding: it is computationally hard to produce two pairs (v, r) and (v', r') with v ≠ v' such that g_1^v h_1^r = g_1^{v'} h_1^{r'}. Pedersen commitments are also perfectly hiding: g_1^v h_1^r reveals no information about v if r is chosen uniformly at random from Z_q.
Public-key cryptosystem. We assume a public-key cryptosystem (PKC) for secure communication between the registrar, the polling officer and the teller (note that voters do not have to be involved in this PKC). We let Π_PKC := (Keygen, Enc, Dec, Sign, Ver) denote such a cryptosystem, where Π_PKE := (Keygen, Enc, Dec) forms an IND-CPA secure public-key encryption scheme and Π_PKS := (Keygen, Sign, Ver) forms an EUF-CMA secure public-key digital signature scheme. We assume that the message space of Π_PKE is Z_q and that of Π_PKS is {0, 1}*.
Public bulletin boards. A public bulletin board represents an authenticated broadcast channel such that only the specified senders can successfully publish data to it and data once published cannot be changed.
BBS+ signatures. We also depend on BBS+ signatures [7] for our ZKP of reverse set membership (see Section 5). A BBS+ signature scheme is given by algorithms (BBS+.Keygen, BBS+.Sign, BBS+.Ver) defined below (here, x denotes the signer's secret key, y denotes its public key, and m denotes a message to be signed):
- BBS+.Keygen: x ←$ Z_q*; y ← f_2^x.
- BBS+.Sign(m, x): c, r ←$ Z_q; S ← (f_1 g_1^m h_1^r)^{1/(c+x)}. Output σ ← (S, c, r).
- BBS+.Ver(m, σ = (S, c, r), y): accept iff e(S, y f_2^c) = e(f_1 g_1^m h_1^r, f_2).
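A toy sketch of the Pedersen commitment above over a small schoolbook group follows; the prime pair and the way the two bases are derived are illustrative only (an actual instantiation would use the elliptic-curve group G_1 fixed in the notation, with generators whose mutual discrete logarithms are unknown).

import secrets

# Toy parameters: P = 2q + 1 with q prime, so squares mod P form a subgroup of
# prime order q.  These numbers are far too small to be secure.
P, q = 2039, 1019
g1 = pow(2, 2, P)   # stand-in for g_1
h1 = pow(3, 2, P)   # stand-in for h_1 (in practice log_{g1}(h1) must be unknown)

def commit(v, r=None):
    r = secrets.randbelow(q) if r is None else r
    return (pow(g1, v % q, P) * pow(h1, r, P)) % P, r

def open_check(C, v, r):
    return C == (pow(g1, v % q, P) * pow(h1, r, P)) % P

token = secrets.randbelow(q)      # e.g. a voter's secret token t_id
C, r = commit(token)
assert open_check(C, token, r)
# hiding: with fresh randomness, a second commitment to the same token is unrelated to C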
The proposed protocol
At a high level, our protocol works as follows (see Figure 3). During the registration of a voter V id , registrar R generates a secret token t id uniformly sampled from Z q and publishes a commitment C id to this token to a public bulletin board BB 0 , along with a NIZK proof of knowledge of an opening to C id . The token is also embedded inside V id 's voting access card under an encryption against polling officer P 's public key. In addition, the card also contains an encryption of the commitment randomness r id against teller T 's public key (this will be used later by T in the audit protocol). Assuming a voter's access card was kept secure, P obtains the voter's token only when the voter participates in the Cast protocol.
If so, P securely uploads the token to T along with the voter-produced encrypted vote, passing along the encryption of the commitment randomness to T . Post polling, T publishes tokens and encrypted votes of all participating voters to another public bulletin board BB 1 , in a random order obtained by a secret permutation π. T also decrypts the r id 's to be used in the audit protocol. Specifically, T acts as the prover in our ZKP of reverse set membership (see Section 5) that convinces the auditor that each token on BB 1 was committed by some commitment on BB 0 , without revealing which one.
The ZKP implies that only those tokens can be published that were officially committed by R in the registration phase. Further, since tokens are randomly selected from a large space and P does not obtain a voter's token unless the voter arrives to cast her vote, ballot stuffing cannot be done for absentee voters who keep their access cards secure.
We note that the registrar's signature in the voting access card is only required to bind the identity of the voter to the issued token which facilitates the traditional identity verification process at the polling booth. Our cryptographic guarantee against ballot stuffing does not depend on the security of this signature scheme (see Theorem 2).
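Ahead of the full description in Figure 3 below, the registrar's side of Register can be sketched as follows. The group is a toy one, and Enc and Sign are stand-ins (a labelled record and an HMAC) for the IND-CPA encryption and EUF-CMA signature schemes assumed in Section 4.1; the NIZK proof pr_id is omitted from the sketch.

import secrets, hmac, hashlib

P, q = 2039, 1019                       # toy group: P = 2q + 1, both prime
g1, h1 = pow(2, 2, P), pow(3, 2, P)

def enc_stub(recipient, value):
    return {"for": recipient, "ct": value}     # placeholder, NOT real encryption

def sign_stub(sk, message: bytes):
    return hmac.new(sk, message, hashlib.sha256).hexdigest()   # placeholder signature

def register(sk_R: bytes, voter_id: str):
    t = secrets.randbelow(q)                              # secret token t_id
    r = secrets.randbelow(q)
    C = (pow(g1, t, P) * pow(h1, r, P)) % P               # commitment published on BB_0
    et = enc_stub("P", t)                                 # Enc(pk_P, t_id)
    er = enc_stub("T", r)                                 # Enc(pk_T, r_id)
    s = sign_stub(sk_R, f"{voter_id}|{et['ct']}|{er['ct']}".encode())
    card = {"id": voter_id, "et": et, "er": er, "sig": s}   # va_id, the QR-code payload
    bb0_entry = {"id": voter_id, "C": C}                    # pub_id (pr_id omitted here)
    return t, card, bb0_entry

t, card, bb0_entry = register(b"registrar-secret", "voter-42")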
Figure 3 (our eligibility audit protocol) consists of the following algorithms.

Setup(X⟨1^λ⟩), for each X ∈ {R, P, T}:
  X: (pk_X, sk_X) ← Keygen(1^λ); output (pk_X, sk_X).

Register(R⟨sk_R⟩, id):
  R: t_id ←$ Z_q; r_id ←$ Z_q
  R: C_id ← g_1^{t_id} h_1^{r_id}
  R: et_id ← Enc(pk_P, t_id); er_id ← Enc(pk_T, r_id)
  R: s_id ← Sign(sk_R, id ‖ et_id ‖ er_id)
  R: pr_id ← NIZKPK{(t_id, r_id) : C_id = g_1^{t_id} h_1^{r_id}}
  R: output t_id (R stores t_id); va_id := (id, et_id, er_id, s_id) (give va_id to V_id); pub_id := (C_id, pr_id) (append (id, pub_id) to BB_0).

Cast(P⟨sk_P⟩, V⟨va = (id, et_id, er_id, s_id), v⟩):
  V: give va to P
  P: VerifyID(V, id)  (manual ID verification)
  P: check Ver(pk_R, id ‖ et_id ‖ er_id, s_id) = 1
  V: ev_id ← Π_{E2E-V}.Cast(v)
  P: t_id ← Dec(sk_P, et_id)
  P: output (cvr_id := (id, t_id, ev_id, er_id), ev_id); give cvr_id to T and ev_id to V.

Publish(T⟨sk_T⟩, CVR = {cvr_id | cvr_id given by P}):
  T: π ←$ Perm(n)
  T: for each j ∈ {1, …, n}: (t_j, ev_j) := (⊥, ⊥); ṙ_j := ⊥
  T: for each cvr_id = (id, t_id, ev_id, er_id) ∈ CVR:
       look up i such that BB_0[i] = (id, C_id); j ← π(i)
       ṙ_j := Dec(sk_T, er_id); (t_j, ev_j) := (t_id, ev_id)
  T: output (BB_1 = [(t_j, ev_j)]_{j=1}^{n}, SEC = (ṙ_j)_{j=1}^{n}); publish BB_1; store SEC.

Audit(BB_0 = [(id_i, C_{id_i}, pr_{id_i})]_{i=1}^{n}, BB_1 = [(t_j, ev_j)]_{j=1}^{n}, T⟨SEC = (ṙ_j)_{j=1}^{n}⟩, A):
  Let J_cast := {j ∈ {1, …, n} | BB_1[j] ≠ (⊥, ⊥)}.
  A: check that for all j_1, j_2 ∈ J_cast with j_1 ≠ j_2: t_{j_1} ≠ t_{j_2}
  T, A: run the ZKP of reverse set membership (see Fig. 4):
        Π_{ZKP-RSM}((C_{id_1}, …, C_{id_n}), (pr_{id_1}, …, pr_{id_n}), (t_j)_{j ∈ J_cast}, T⟨(ṙ_j)_{j ∈ J_cast}⟩, A)
  A: output accept if all checks pass; else reject.
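As a rough illustration of Register and Cast, the pseudocode-style sketch below wires the commitment code from Section 4.1 to placeholder helpers; pke_enc/pke_dec/sig_sign/sig_ver, nizk_prove_opening and e2ev_cast are assumed stand-ins for the PKE, signature, NIZK and E2E-V building blocks of Section 4.1 rather than concrete instantiations.

    def register(sk_R, pk_P, pk_T, voter_id, BB0):
        t_id, r_id = group.random(ZR), group.random(ZR)        # secret token and commitment randomness
        C_id = (g1 ** t_id) * (h1 ** r_id)                      # Pedersen commitment to the token
        et_id = pke_enc(pk_P, t_id)                             # token, readable only by P
        er_id = pke_enc(pk_T, r_id)                             # commitment randomness, readable only by T
        s_id = sig_sign(sk_R, (voter_id, et_id, er_id))         # binds the card to the voter identity
        pr_id = nizk_prove_opening(C_id, t_id, r_id)            # NIZKPoK of an opening of C_id
        BB0.append((voter_id, C_id, pr_id))                     # public registration record
        return (voter_id, et_id, er_id, s_id)                   # voting access card va_id

    def cast(sk_P, pk_R, card, plaintext_vote):
        voter_id, et_id, er_id, s_id = card
        assert sig_ver(pk_R, (voter_id, et_id, er_id), s_id)    # card authenticity check
        ev_id = e2ev_cast(plaintext_vote)                       # handled by the underlying E2E-V scheme
        t_id = pke_dec(sk_P, et_id)                             # P learns the token only at cast time
        return (voter_id, t_id, ev_id, er_id), ev_id            # cvr_id for T, receipt for the voter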
ZKP of reverse set membership
Before we present our ZKP of reverse set membership, we first present a ZKP of set membership protocol due to Camenisch et al. [16] to which our construction is closely related.

ZKP of set membership (Camenisch et al. [16]). Recall that a ZKP of set membership, denoted as $\mathrm{PK}\{(v, r) : C = g_1^v h_1^r \wedge v \in \phi\}$, proves that a given commitment $C$ commits a value $v$ in some public set $\phi$. The main idea behind the Camenisch scheme is that the verifier sends to the prover Boneh-Boyen signatures [15] on the elements of set $\phi$ under a fresh signing key. The prover can then prove that $C$ commits a member of the set by proving that it knows a signature on the value committed by $C$. This can be done in zero knowledge by revealing only a blinded signature to the verifier and proving knowledge of appropriate blinding factors from which a valid signature can be obtained. If $C$ does not commit a member of $\phi$ then the proof fails, because the prover does not obtain the verifier's signatures on non-members of the set.

The scheme is an honest-verifier ZKP of set membership if the $|\phi|$-Strong Diffie-Hellman assumption holds in $(G_1, G_2)$ [16]. A nice property of this scheme is that if proofs for $k$ commitments are requested against the same set $\phi$, the verifier's signatures can be reused, resulting in only an $O(1)$ online overhead per commitment and thus an overall $O(|\phi| + k)$ amortised complexity. In contrast, a scheme based on the generic OR construction ($\mathrm{PK}\{(v, r) : C = g_1^v h_1^r \wedge (v = v_1 \vee \dots \vee v = v_n)\}$, where $\phi = \{v_1, \dots, v_n\}$) costs $O(|\phi| k)$ for proving set membership for $k$ commitments.
ZKP of reverse set membership. A ZKP of reverse set membership proves that a given value $v$ is committed by one of a set $\Phi$ of commitments. In the context of Pedersen commitments, this can be formalised as a proof of knowledge of an $r$ such that $g_1^v h_1^r \in \Phi$, i.e., $\mathrm{PK}\{(r) : g_1^v h_1^r \in \Phi\}$. Our protocol for ZKP of reverse set membership follows the Camenisch scheme's theme of obtaining the verifier's signatures on elements of a set and proving knowledge of valid signatures. However, the Camenisch scheme cannot be adapted trivially to our setting because Boneh-Boyen signatures require messages to be in group $\mathbb{Z}_q$ and cannot be directly used for signing Pedersen commitments, which are members of group $G_1$. Recall, however, from Section 4.1 that the BBS+ signature scheme [8] allows a committer to present a commitment to the signer and obtain a signature on the committed value. Thus, we let the reverse set membership verifier act as the signer that sends quasi-BBS+ signatures for each $C_i \in \Phi$ after verifying that $C_i$ is actually committed, via the NIZK proof $pr_i$ (see Figure 4, stage 1). By the above property of BBS+ signatures, the prover can then obtain valid BBS+ signatures on the values committed by each $C_i \in \Phi$. To prove that a given value $v$ is committed by some commitment in $\Phi$, the prover gives a ZKP of knowledge of a BBS+ signature on $v$. Since the prover only obtains signatures on values committed by commitments in $\Phi$, it cannot succeed if no $C \in \Phi$ committed $v$. For the proof of knowledge of a BBS+ signature in zero-knowledge, we use the technique proposed in [8] (stage 2). This protocol also enjoys an overall $O(|\Phi| + k)$ complexity for verifying reverse set membership for $k$ values, because signatures can be reused.

Figure 4: Π_{ZKP-RSM}, the ZKP of reverse set membership for values {v_{i_1}, …, v_{i_k}} against the set Φ := {C_1, …, C_n}. The prover is denoted by P and the verifier by V.

Common input of P and V: (C_1, …, C_n), (pr_1, …, pr_n), (v_{i_1}, …, v_{i_k}) ∈ Z_q^k, k ≤ n, such that for all i ∈ {1, …, n}: C_i ∈ G_1 and pr_i = NIZKPK{(v, r) : C_i = g_1^v h_1^r}.
Private input of P: witness (r_{i_1}, …, r_{i_k}) such that C_{i_1} = g_1^{v_{i_1}} h_1^{r_{i_1}} ∧ … ∧ C_{i_k} = g_1^{v_{i_k}} h_1^{r_{i_k}}.

Stage 1 (P obtains BBS+ signatures on v_{i_1}, …, v_{i_k} from V):
  V: x ←$ Z_q; y ← f_2^x
  V: for each i ∈ {1, …, n}:
       verify pr_i (abort if verification fails)
       c_i, r'_i ←$ Z_q; S_i ← (f_1 C_i h_1^{r'_i})^{1/(x + c_i)}; σ'_i ← (S_i, c_i, r'_i)
  V → P: y, (σ'_1, …, σ'_n)
  P: for each j ∈ {i_1, …, i_k}:
       (S_j, c_j, r'_j) := σ'_j; r'_j ← r'_j + r_j; σ_j := (S_j, c_j, r'_j)
       if BBS+.Ver(σ_j, v_j, y) ≠ 1: abort
     store (σ_{i_1}, …, σ_{i_k})

Stage 2 (P proves knowledge of BBS+ signatures on v_{i_1}, …, v_{i_k}, as per [8]):
  P: for each j ∈ {i_1, …, i_k}:
       ρ_{1_j}, ρ_{2_j} ←$ Z_q
       B_{1_j} ← g_1^{ρ_{1_j}} h_1^{ρ_{2_j}};  B_{2_j} ← S_j g_1^{ρ_{2_j}}
       δ_{1_j} ← c_j ρ_{1_j};  δ_{2_j} ← c_j ρ_{2_j}
  P → V: (B_{1_{i_1}}, B_{2_{i_1}}), …, (B_{1_{i_k}}, B_{2_{i_k}})
  P, V: for each j ∈ {i_1, …, i_k}, run
        Π_PoK := PK{(ρ_{1_j}, ρ_{2_j}, c_j, r'_j, δ_{1_j}, δ_{2_j}) :
            B_{1_j} = g_1^{ρ_{1_j}} h_1^{ρ_{2_j}}
            ∧ B_{1_j}^{c_j} = g_1^{δ_{1_j}} h_1^{δ_{2_j}}
            ∧ e(B_{2_j}, y)/e(f_1, f_2) = e(B_{2_j}, f_2)^{-c_j} e(g_1, y)^{ρ_{2_j}} e(g_1, f_2)^{δ_{2_j}} e(h_1, f_2)^{r'_j} e(g_1, f_2)^{v_j} }
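To illustrate stage 1, the following sketch (again using charm-crypto, with illustrative variable names) shows the verifier issuing a quasi-BBS+ signature on a commitment and the prover unblinding it into a full BBS+ signature on the committed value.

    from charm.toolbox.pairinggroup import PairingGroup, ZR, G1, G2, pair

    group = PairingGroup('BN254')
    g1, h1, f1 = group.random(G1), group.random(G1), group.random(G1)
    f2 = group.random(G2)

    # Verifier key material (fresh per proof session)
    x = group.random(ZR)
    y = f2 ** x

    # Prover side: a committed value v with randomness r
    v, r = group.random(ZR), group.random(ZR)
    C = (g1 ** v) * (h1 ** r)

    # Verifier: quasi-BBS+ signature on the commitment C (after checking pr_i, omitted here)
    c, r_prime = group.random(ZR), group.random(ZR)
    S = (f1 * C * (h1 ** r_prime)) ** ~(x + c)      # ~ denotes the inverse in Z_q

    # Prover: fold its own commitment randomness in, yielding a signature on v itself
    r_sig = r_prime + r
    lhs = pair(S, y * (f2 ** c))
    rhs = pair(f1 * (g1 ** v) * (h1 ** r_sig), f2)
    assert lhs == rhs                                # BBS+.Ver((S, c, r_sig), v, y) = 1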
Security analysis
Theorem 1 (ZKP of reverse set membership for k values). If the computational binding assumption for Pedersen commitments holds in group $G_1$, the n-Strong Diffie-Hellman (n-SDH) assumption holds in $(G_1, G_2)$, and for each $i \in \{1, \dots, n\}$, $pr_i$ is a valid NIZK proof of knowledge of an opening of $C_i$, then the protocol in Figure 4 is a ZKP of reverse set membership of values $(v_{i_1}, \dots, v_{i_k})$ against set $\Phi := \{C_1, \dots, C_n\}$, i.e., $\mathrm{PK}\{(r_{i_1}, \dots, r_{i_k}) : g_1^{v_{i_1}} h_1^{r_{i_1}} \in \Phi \wedge \dots \wedge g_1^{v_{i_k}} h_1^{r_{i_k}} \in \Phi\}$.
Proof. Completeness: For each $i \in \{1, \dots, n\}$, the verification of $pr_i$ passes by the given condition. Further, P obtains quasi-BBS+ signatures $\sigma'_i = (S_i, c_i, r'_i) = ((f_1 C_i h_1^{r'_i})^{\frac{1}{x+c_i}}, c_i, r'_i) = ((f_1 g_1^{v_i} h_1^{r'_i + r_i})^{\frac{1}{x+c_i}}, c_i, r'_i)$ from V. From these, for each $j \in \{i_1, \dots, i_k\}$, P obtains valid BBS+ signatures $\sigma_j = (S_j, c_j, r'_j + r_j) = ((f_1 g_1^{v_j} h_1^{r'_j + r_j})^{\frac{1}{x+c_j}}, c_j, r'_j + r_j)$. Thus all BBS+.Ver checks pass.

In stage 2, for each $j \in \{i_1, \dots, i_k\}$, P sends $B_{1_j} = g_1^{\rho_{1_j}} h_1^{\rho_{2_j}}$ and $B_{2_j} = S_j g_1^{\rho_{2_j}} = (f_1 g_1^{v_j} h_1^{r'_j})^{\frac{1}{x+c_j}} g_1^{\rho_{2_j}}$ to V for randomly chosen $\rho_{1_j}, \rho_{2_j} \in \mathbb{Z}_q$ (writing $r'_j$ for the updated randomness $r'_j + r_j$ stored in $\sigma_j$). We now show that these values pass the proof of knowledge $\Pi_{\mathrm{PoK}}$. First note that the first two conditions of $\Pi_{\mathrm{PoK}}$ are trivially satisfied because $B_{1_j} = g_1^{\rho_{1_j}} h_1^{\rho_{2_j}}$, $\delta_{1_j} = c_j \rho_{1_j}$ and $\delta_{2_j} = c_j \rho_{2_j}$. The RHS of the third condition of $\Pi_{\mathrm{PoK}}$ simplifies to its LHS as follows:

$$
\begin{aligned}
&e(B_{2_j}, f_2)^{-c_j}\, e(g_1, y)^{\rho_{2_j}}\, e(g_1, f_2)^{\delta_{2_j}}\, e(h_1, f_2)^{r'_j}\, e(g_1, f_2)^{v_j} \\
&\quad= e\!\left((f_1 g_1^{v_j} h_1^{r'_j})^{\frac{1}{x+c_j}} g_1^{\rho_{2_j}},\, f_2\right)^{-c_j} e(g_1, f_2^{x})^{\rho_{2_j}}\, e(g_1, f_2)^{c_j \rho_{2_j}}\, e(h_1^{r'_j}, f_2)\, e(g_1^{v_j}, f_2) \\
&\quad= e\!\left((f_1 g_1^{v_j} h_1^{r'_j})^{\frac{-c_j}{x+c_j}},\, f_2\right) e(g_1, f_2)^{x \rho_{2_j}}\, \frac{e(f_1 g_1^{v_j} h_1^{r'_j}, f_2)}{e(f_1, f_2)} \\
&\quad= \frac{e\!\left((f_1 g_1^{v_j} h_1^{r'_j})^{\frac{x}{x+c_j}} g_1^{x \rho_{2_j}},\, f_2\right)}{e(f_1, f_2)}
= \frac{e\!\left((f_1 g_1^{v_j} h_1^{r'_j})^{\frac{1}{x+c_j}} g_1^{\rho_{2_j}},\, f_2^{x}\right)}{e(f_1, f_2)}
= \frac{e(B_{2_j}, y)}{e(f_1, f_2)} \qquad (1)
\end{aligned}
$$
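Identity (1) can also be checked numerically; the following sketch instantiates random values in charm-crypto and confirms that both sides of the third Π_PoK condition agree. The variable names mirror the proof and are otherwise arbitrary; this is a sanity check, not part of the protocol.

    from charm.toolbox.pairinggroup import PairingGroup, ZR, G1, G2, pair

    group = PairingGroup('BN254')
    g1, h1, f1 = group.random(G1), group.random(G1), group.random(G1)
    f2 = group.random(G2)

    x, c, v, r_sig = (group.random(ZR) for _ in range(4))   # signer key, (c, randomness), message
    y = f2 ** x
    S = (f1 * (g1 ** v) * (h1 ** r_sig)) ** ~(x + c)          # BBS+ signature component

    rho2 = group.random(ZR)
    delta2 = c * rho2
    B2 = S * (g1 ** rho2)

    lhs = pair(B2, y) * ~pair(f1, f2)                         # ~ denotes the group inverse
    rhs = (~(pair(B2, f2) ** c)) * (pair(g1, y) ** rho2) * (pair(g1, f2) ** delta2) \
          * (pair(h1, f2) ** r_sig) * (pair(g1, f2) ** v)
    assert lhs == rhs                                         # both sides of (1) agree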
Special soundness: We show that if verifier V accepts, then a PPT extractor E can extract $(r_{i_1}, \dots, r_{i_k})$ such that $g_1^{v_{i_1}} h_1^{r_{i_1}} \in \Phi \wedge \dots \wedge g_1^{v_{i_k}} h_1^{r_{i_k}} \in \Phi$. E simply runs the extractor $E_{pr}$ for the NIZKs $(pr_1, \dots, pr_n)$ to obtain $((\dot v_1, \dot r_1), \dots, (\dot v_n, \dot r_n))$ and outputs $(\dot r_{i_1}, \dots, \dot r_{i_k})$ (by the special soundness of the NIZK proof of knowledge, $E_{pr}$ exists and is PPT). Note that if $\{v_{i_1}, \dots, v_{i_k}\} \subseteq \{\dot v_1, \dots, \dot v_n\}$ then E has been successful, since $g_1^{v_{i_1}} h_1^{\dot r_{i_1}} \in \Phi \wedge \dots \wedge g_1^{v_{i_k}} h_1^{\dot r_{i_k}} \in \Phi$.

Claim: $\{v_{i_1}, \dots, v_{i_k}\} \subseteq \{\dot v_1, \dots, \dot v_n\}$. Suppose for contradiction that there exists an $i' \in \{i_1, \dots, i_k\}$ such that $v_{i'} \notin \{\dot v_1, \dots, \dot v_n\}$. We show that if this happens then a forger F can be constructed against the BBS+ signature scheme. The construction of F is shown in Figure 5: F extracts $(\dot v_i, \dot r_i)$ from each $(C_i, pr_i)$ using $E_{pr}$, queries the BBS+ signing oracle C on messages $(\dot v_1, \dots, \dot v_n)$ to obtain signatures $\sigma_i = (S_i, c_i, r_i)$, and forwards $y$ together with quasi-signatures $\sigma'_i := (S_i, c_i, r_i - \dot r_i)$ to the reverse set membership prover P; after receiving $((B_{1_{i_1}}, B_{2_{i_1}}), \dots, (B_{1_{i_k}}, B_{2_{i_k}}))$, F runs the extractor $E_{\mathrm{PoK}}$ of $\Pi_{\mathrm{PoK}}$ for index $i'$ to obtain $(\rho_1, \rho_2, c, r', \delta_1, \delta_2)$ and outputs the message-signature pair $(v_{i'}, \bar\sigma)$ with $\bar\sigma := (B_{2_{i'}} g_1^{-\rho_2}, c, r')$.

Note that F produces a view for P that is indistinguishable from the view produced by the real verifier in Figure 4. In particular, F sends quasi-BBS+ signatures $\sigma'_i = ((f_1 g_1^{\dot v_i} h_1^{r_i})^{\frac{1}{x+c_i}}, c_i, r_i - \dot r_i)$ for $c_i$ and $r_i$ chosen at random by C, which are identically distributed to the quasi-BBS+ signatures sent by the real verifier in stage 1 of Figure 4. Thus, F obtains $((B_{1_{i_1}}, B_{2_{i_1}}), \dots, (B_{1_{i_k}}, B_{2_{i_k}}))$ identically distributed to what the real verifier obtains in stage 2. By the soundness of $\Pi_{\mathrm{PoK}}$ in stage 2, the values $\rho_1, \rho_2, c, r', \delta_1, \delta_2$ extracted by $E_{\mathrm{PoK}}$ satisfy the conditions of $\Pi_{\mathrm{PoK}}$. Thus, by Lemma 1, $\bar\sigma = (B_{2_{i'}} g_1^{-\rho_2}, c, r')$ is a valid BBS+ signature on message $v_{i'}$ under public key $y$. Since $v_{i'} \notin \{\dot v_1, \dots, \dot v_n\}$, F had not queried a signature on message $v_{i'}$ from C. Thus, $(v_{i'}, \bar\sigma)$ is a BBS+ signature forgery. Since F queried $n$ signatures in total, this is not possible under the n-Strong Diffie-Hellman assumption, as shown in [7].

Lemma 1. If a PPT extractor can extract $(\rho_1, \rho_2, c, r', \delta_1, \delta_2)$ that satisfy:
$$B_1 = g_1^{\rho_1} h_1^{\rho_2} \qquad (2)$$
$$B_1^{c} = g_1^{\delta_1} h_1^{\delta_2} \qquad (3)$$
$$\frac{e(B_2, y)}{e(f_1, f_2)} = e(B_2, f_2)^{-c}\, e(g_1, y)^{\rho_2}\, e(g_1, f_2)^{\delta_2}\, e(h_1, f_2)^{r'}\, e(g_1, f_2)^{v} \qquad (4)$$
then $(\hat S, \hat c, \hat r) = (B_2 g_1^{-\rho_2}, c, r')$ is a valid BBS+ signature on message $v$ under public key $y$, i.e., it satisfies the BBS+ signature verification equation $e(\hat S, y f_2^{\hat c}) = e(f_1 g_1^{v} h_1^{\hat r}, f_2)$.

Proof. From Equations 2 and 3, we get $g_1^{\rho_1 c} h_1^{\rho_2 c} = g_1^{\delta_1} h_1^{\delta_2}$. Thus, it must be that $\delta_1 = \rho_1 c$ and $\delta_2 = \rho_2 c$, otherwise the extractor can be used to produce two different openings $(\rho_1 c, \rho_2 c)$ and $(\delta_1, \delta_2)$ of the Pedersen commitment $B_1^{c}$. Substituting this in Equation 4, we get:
$$
\begin{aligned}
\frac{e(B_2, y)}{e(f_1, f_2)} &= e(B_2, f_2)^{-c}\, e(g_1, y)^{\rho_2}\, e(g_1, f_2)^{\rho_2 c}\, e(h_1, f_2)^{r'}\, e(g_1, f_2)^{v} \\
\implies e(B_2, y) &= e(B_2, f_2)^{-c}\, e(g_1, y)^{\rho_2}\, e(g_1, f_2)^{\rho_2 c}\, e(f_1 g_1^{v} h_1^{r'}, f_2) \\
\implies e(B_2, y)\, e(B_2, f_2^{c}) &= e(g_1^{\rho_2}, y)\, e(g_1^{\rho_2}, f_2^{c})\, e(f_1 g_1^{v} h_1^{r'}, f_2) \\
\implies e(B_2, y f_2^{c}) &= e(g_1^{\rho_2}, y f_2^{c})\, e(f_1 g_1^{v} h_1^{r'}, f_2) \\
\implies e(B_2 g_1^{-\rho_2}, y f_2^{c}) &= e(f_1 g_1^{v} h_1^{r'}, f_2)
\end{aligned}
$$
Honest-verifier zero-knowledge: Consider the following sequence of experiments:
-E0: Real protocol between P and V.
-E1: Same as E0 except that for each i ∈ {1, . . . , n}, instead of supplying a real proof pr i , V is supplied a simulated proof. This is indistinguishable from E0 by the zero-knowledge property of NIZKs (pr 1 , . . . , pr n ). -E2: Same as E1 except that instead of running as per the real prover for Π PoK in stage 2, simulator S PoK is run instead. This is indistinguishable from E1 in the random oracle model by the zero-knowledge property of Π PoK .
- E3: Same as E2 except that a) in stage 1, the prover's steps are skipped; and b) in stage 2, for each $j \in \{i_1, \dots, i_k\}$, $B_{2_j}$ is obtained as $B_{2_j} \leftarrow_{\$} G_1$ instead of as $B_{2_j} \leftarrow S_j g_1^{\rho_{2_j}}$, and the $\delta_{1_j}, \delta_{2_j}$ assignment steps are skipped. This is indistinguishable from E2 because $S_j g_1^{\rho_{2_j}}$ is indistinguishable from a uniformly random element of $G_1$ for uniformly chosen $\rho_{2_j} \in \mathbb{Z}_q$, which implies that the input statements for $S_{\mathrm{PoK}}$ in both the experiments are indistinguishable.
Since in E3, none of the prover's private inputs are being used, the protocol is zero-knowledge.
General zero-knowledge: Note that Π PoK in stage 2 can be directly converted from honest-verifier to general zero-knowledge in the random oracle model, using the standard Fiat-Shamir heuristic [26]. Further, the prover sends any message only if the verifier sends valid BBS+ quasi-signatures (because of the BBS+.Ver signature verification step). Thus, the view of the verifier is identically distributed to its view in the honest-verifier case. Hence the claim holds.
Theorem 2 (Prevention against ballot stuffing). If the computational binding assumption for Pedersen commitments holds in group $G_1$ and the n-Strong Diffie-Hellman (n-SDH) assumption holds in $(G_1, G_2)$, then the protocol presented in Figure 3 prevents ballot stuffing as per Definition 1.
Proof. Suppose for contradiction that there exists a PPT adversary A such that the probability that $\mathsf{Exp}^{\mathsf{A}}_{\mathsf{PBS}}(\lambda)$ outputs 1 is non-negligible. This means that with non-negligible probability, the Audit protocol passed but there are more entries in $BB_1$ than the size of the RLT set (see Figure 2a). Let PT denote the set of tokens published on $BB_1$. Because the Audit protocol verifies that all tokens published on $BB_1$ are distinct (see Figure 3), $|\mathrm{PT}| > |\mathrm{RLT}|$. Thus, there must exist at least one token $t \in \mathrm{PT} \setminus \mathrm{RLT}$. By the special soundness of the ZKP of reverse set membership (Theorem 1), there exists a PPT extractor that can extract an $\dot r$ such that $g_1^{t} h_1^{\dot r} \in \Phi$, where $\Phi$ denotes the set of commitments published on $BB_0$. Let $C_{id_i}$ be the commitment published on $BB_0$ such that $C_{id_i} = g_1^{t} h_1^{\dot r}$. Let $t_{id_i}$ denote the token committed by $C_{id_i}$ during the Register protocol and $r_{id_i}$ denote the corresponding randomness (see Figure 3).

Case 1: $t \neq t_{id_i}$: If this case arises, then one can successfully produce two different openings $(t, \dot r)$ and $(t_{id_i}, r_{id_i})$ for the commitment $C_{id_i}$, contradicting the computational binding property of Pedersen commitments.

Case 2: $t = t_{id_i}$: Since $t \notin \mathrm{RLT}$, this case leads to two sub-cases:

1. $t \notin \mathrm{RT}$: This case means that $t$ was not an officially registered token. Note that $\mathrm{RT} = \{t_{id} \mid t_{id}$ was generated during the $\mathsf{OReg}(id)$ call and $id$ appears on $BB_0\}$. Thus, this case is not possible because $t = t_{id_i}$ and $t_{id_i} \in \mathrm{RT}$.
2. $t \in \mathrm{RT} \setminus \mathrm{RLT}$: This case means that a commitment to $t$ was appended on $BB_0$ against some identifier $id$ but A does not obtain the voter card $va_{id}$ from either the $\mathsf{OLeak}$ oracle or the $\mathsf{OCast}$ oracle. Since commitments are perfectly hiding, $t$ is distributed identically to a uniform distribution over the large space $\mathbb{Z}_q$ from A's view, and the probability that A can guess it correctly is negligible.
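The auditor-side checks used in this argument (token distinctness and a count bound against the registration board) amount to very little code; the sketch below is a stand-alone illustration, with run_zkp_rsm_verifier left as an assumed hook into the Figure 4 verifier.

    def audit_checks(bb0_records, bb1_records, run_zkp_rsm_verifier):
        """bb0_records: [(id, C_id, pr_id)]; bb1_records: [(t_j, ev_j)] with (None, None) for absentees."""
        cast = [(t, ev) for (t, ev) in bb1_records if t is not None]
        tokens = [t for (t, _) in cast]
        if len(set(tokens)) != len(tokens):
            return False                      # duplicate published tokens: reject
        if len(cast) > len(bb0_records):
            return False                      # more ballots than registered voters: reject
        # every published token must be proven committed on BB_0 (Figure 4), without revealing which C_id
        return run_zkp_rsm_verifier(bb0_records, tokens)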
Theorem 3 (Participation privacy). Assuming Π PKE = (Keygen, Enc, Dec) is an IND-CPA secure encryption scheme, the scheme presented in Figure 3 protects participation privacy as per Definition 2 in the random oracle model.
Proof. For each b ∈ {0, 1}, let E b denote the experiment that is identical to Exp A PP (λ, id 0 , id 1 , b) except that (I) instead of running the real prover for the ZKP of reverse set membership Π ZKP-RSM , it runs its simulator S ZKP-RSM ; and (II) for pr generation, during the registration, it runs a simulator S NIZKPK for NIZKPK. E b is indistinguishable from Exp A PP (λ, id 0 , id 1 , b) by the zero-knowledgeness property of both Π ZKP-RSM (see Theorem 1) and NIZKPK.
We now show that if a PPT adversary A can distinguish between experiments $E_{b=0}$ and $E_{b=1}$ with non-negligible probability, then a PPT adversary B can break the IND-CPA security of encryption scheme $\Pi_{\mathsf{PKE}}$. The construction of B is shown in Figure 6: B obtains the polling officer's public key $pk_P$ from the IND-CPA challenger $C_{\mathsf{IND\text{-}CPA}}$, generates the key pairs of R and T itself, samples tokens $t_0, t_1 \leftarrow_{\$} \mathbb{Z}_q$, and submits the message pairs $M_0 := (t_0, t_1)$ and $M_1 := (t_1, t_0)$ to $C_{\mathsf{IND\text{-}CPA}}$, receiving $E_{b_0} = \mathsf{Enc}(pk_P, M_b[0])$ and $E_{b_1} = \mathsf{Enc}(pk_P, M_b[1])$. When answering A's $\mathsf{OReg}(id_\tau)$ oracle calls, B uses $et_{id_\tau} := E_{b_\tau}$ as the encrypted token while committing $C_{id_\tau}$ to $t_\tau$, and simulates $pr_{id_\tau}$ with $S_{\mathsf{NIZKPK}}$ and the audit with $S_{\mathsf{ZKP\text{-}RSM}}$; all other $\mathsf{OReg}$ and $\mathsf{OCast}$ calls are answered honestly. After the $\mathsf{OCastChal}(v^*)$ call, B publishes $(t_0, ev^*_b)$ on $BB_1^{b}$ and finally outputs A's guess $b'$.

Claim 1: If $C_{\mathsf{IND\text{-}CPA}}$ selects $b = 0$, then A's view (while interacting with B) is identical to its view in experiment $E_{b=0}$. A obtains from B:

- $va_{id_\tau} = (id_\tau, et_{id_\tau}, er_{id_\tau}, s_{id_\tau})$ and $pub_{id_\tau} = (C_{id_\tau}, pr_{id_\tau})$ from the $\mathsf{OReg}(id_\tau)$ oracle call, $\tau \in \{0, 1\}$, and
- $(t_0, ev^*_{b=0})$ when $BB_1^{b=0}$ is published, where $ev^*_{b=0} := \Pi_{\mathsf{E2E\text{-}V}}.\mathsf{Cast}(v^*)$ for the value $v^*$ supplied by A to the $\mathsf{OCastChal}$ oracle.
For $\tau \in \{0, 1\}$, the distribution of $[id_\tau, er_{id_\tau}, s_{id_\tau}, C_{id_\tau}]$ as part of B's response is identical to that of $E_{b=0}$. For $et_{id_0}$, the encrypted token component in $va_{id_0}$ of $id_0$, we need to show that it decrypts to the same token which has been revealed in $BB_1^{b=0}$. In our reduction, the token revealed in $BB_1^{b=0}$ is $t_0$. We now see that $et_{id_0}$ also decrypts to $t_0$. Indeed, for $\tau \in \{0, 1\}$, $et_{id_\tau} = et_\tau = E_{0_\tau} = \mathsf{Enc}(pk_P, M_0[\tau]) = \mathsf{Enc}(pk_P, t_\tau)$. Therefore, $et_{id_0} = \mathsf{Enc}(pk_P, t_0)$. Finally, it is easy to check that B simulates $\mathsf{OReg}(id)$ and $\mathsf{OCast}(id, va, v)$ correctly.

Claim 2: If $C_{\mathsf{IND\text{-}CPA}}$ selects $b = 1$, then A's view (while interacting with B) is identical to its view in experiment $E_{b=1}$. A obtains from B:

- $va_{id_\tau} = (id_\tau, et_{id_\tau}, er_{id_\tau}, s_{id_\tau})$ and $pub_{id_\tau} = (C_{id_\tau}, pr_{id_\tau})$ from the $\mathsf{OReg}(id_\tau)$ oracle call, $\tau \in \{0, 1\}$, and
- $(t_0, ev^*_{b=1})$ when $BB_1^{b=1}$ is published, where $ev^*_{b=1} := \Pi_{\mathsf{E2E\text{-}V}}.\mathsf{Cast}(v^*)$ for the value $v^*$ supplied by A to the $\mathsf{OCastChal}$ oracle.

For $\tau \in \{0, 1\}$, the distribution of $[id_\tau, er_{id_\tau}, s_{id_\tau}]$ as part of B's response is identical to that of $E_{b=1}$. In our reduction, the token revealed in $BB_1^{b=1}$ is also $t_0$. But we see that $et_{id_1}$ also decrypts to $t_0$. Indeed, for $\tau \in \{0, 1\}$, $et_{id_\tau} = et_\tau = E_{1_\tau} = \mathsf{Enc}(pk_P, M_1[\tau]) = \mathsf{Enc}(pk_P, t_{1-\tau})$. Therefore, $et_{id_1} = \mathsf{Enc}(pk_P, t_{1-1}) = \mathsf{Enc}(pk_P, t_0)$. Moreover, for $\tau \in \{0, 1\}$, the distribution of $[et_{id_\tau}, C_{id_\tau}]$ as part of B's response is identical to that of $E_{b=1}$. Indeed, since commitments are perfectly hiding, we have $[et_{id_\tau} = \mathsf{Enc}(pk_P, t_{1-\tau}),\ C_{id_\tau} = g_1^{t_\tau} h_1^{r_\tau}] \approx [et_{id_\tau} = \mathsf{Enc}(pk_P, t_{1-\tau}),\ C_{id_\tau} = g_1^{t_{1-\tau}} h_1^{r_{1-\tau}}]$. Finally, as in the $b = 0$ case, B simulates $\mathsf{OReg}(id)$ and $\mathsf{OCast}(id, va, v)$ in $E_{b=1}$ correctly.
Therefore, if A has non-negligible advantage in distinguishing between experiments $E_{b=0}$ and $E_{b=1}$, then $|\Pr[b' = 1 \mid b = 0] - \Pr[b' = 1 \mid b = 1]|$ is non-negligible, and thus B breaks the IND-CPA security of $\Pi_{\mathsf{PKE}}$.

Performance

We now briefly present some illustrative performance characteristics of our protocol, specifically, of our ZKP of reverse set membership. We implemented our ZKP using the Charm cryptographic library [3] with a PBC library [32] backend and chose the BN254 elliptic curve [9] to instantiate the pairing groups $(G_1, G_2)$. We ran our benchmarks on an Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz with 46 GB of RAM and 104 cores. Figure 7 shows space/time benchmarks for a dummy election with $n = 10^6$ voters. We model the worst case where each voter had cast their vote, so that the ZKP is to be given for $n$ tokens. Note from Figure 4 that each loop iteration is independent of every other iteration, which gives our ZKP an embarrassingly parallel character. We exploit this fact to distribute proof generation and verification among 100 cores. However, the signature-based nature of our ZKPs implies that they have to be given separately for each auditor. As the total verification takes about half an hour for the teller per auditor, it may be feasible to support independent audit only by some major stakeholders (such as civil society groups and the political parties). Nevertheless, against the signatures supplied by these trusted auditors in the preprocessing stage, anyone can verify individual tokens or a statistical sample of tokens rather efficiently (in a few seconds).

Figure 7: Benchmarks for n = 10^6 eligible voters when each voter had cast their vote, run on a single machine with 100 cores.

  (1) Time to generate n pr NIZKs (teller): 14 s
  (2) Time to verify n pr NIZKs (auditor): 56 s
  (3) Time to generate n BBS+ quasi-signatures (auditor): 51 s
  (4) Time to verify n BBS+ quasi-signatures (teller; optimised using the batch verification techniques suggested in [25]): 11 s
  (5) Time to generate n Π_PoK NIZKs (teller): 1725 s
  (6) Time to verify n Π_PoK NIZKs (auditor): 4500 s

  Total time for audit: auditor [(2)+(3)+(6)]: 4607 s; teller [(1)+(4)+(5)]: 1750 s.
  Size of n BBS+ signatures: 90 MB; size of n pr NIZK proofs: 89 MB; size of n Π_PoK NIZK proofs: 572 MB.
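Since the per-token work in Figure 4 is independent across tokens, proof generation parallelises trivially; the following is a minimal sketch of such a fan-out. Here prove_one_token is an assumed per-token prover, and serialised inputs are used because pairing-group elements generally need explicit serialisation before crossing process boundaries.

    from multiprocessing import Pool

    def prove_one_token(serialised_input):
        # Assumed stand-in: run stage 2 of Figure 4 for a single token and
        # return a serialised (B1, B2, Pi_PoK) transcript.
        ...

    def prove_all(serialised_inputs, workers=100):
        with Pool(processes=workers) as pool:
            return pool.map(prove_one_token, serialised_inputs)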
Conclusion and future work
We have proposed an eligibility audit protocol to detect ballot stuffing for pollsite voting while also protecting participation privacy from a remote coercer. Below we mention some ways in which our considered threat model could be strengthened in future work:
Correctness of voting access cards. To prevent R from producing cards containing incorrect tokens, we can employ a card audit protocol similar to the cast-or-challenge audits popular in E2E-V voting [12]. A statistically significant number of issued cards could be randomly audited by asking R to reveal the card's secrets and verifying that they match the published commitments. All audited cards should be marked as cancelled and fresh cards should be issued to the affected voters. Alternatively, R could also embed a NIZK proof in each card proving its correctness.

Multiple registrars and tellers. To prevent the registrar from leaking a voter's token, this trust can be distributed to multiple registrars using standard threshold cryptography techniques. For distributing trust to multiple tellers, a distributed version of our ZKP has to be developed.

Making P unable to prove who voted. Although protecting participation privacy from P may be impossible, currently P can also prove the participation of a voter to a remote coercer. To solve this problem, we can make use of designated-verifier signatures [33]. Specifically, R's signature on the voter's access card can be made a designated-verifier signature verifiable only by P. This allows P to verify that the card is authentic but does not enable it to prove its authenticity to any other party, even if it divulges all its secrets.
Fig. 1: An overview of our eligibility audit protocol.

Fig. 2: Security experiments for ballot stuffing and participation privacy.

Fig. 5: Forger F breaking the BBS+ signature unforgeability game. C denotes the challenger of the unforgeability game and P denotes the prover of the reverse set membership protocol.

Fig. 6: Adversary B breaking the IND-CPA security of Π_PKE, given an adversary A that distinguishes between experiments E_{b=0} and E_{b=1}. C_IND-CPA denotes the challenger of the IND-CPA game for Π_PKE.
Note that we do not address the concern of an eligible voter's vote being removed or changed during publication, because that is the concern of an E2E-V voting protocol.
A Akinyokun and Teague's scheme [4]

We briefly discuss the individually and universally verifiable poll attendance scheme due to Akinyokun and Teague (see Figure 8). The scheme begins by a set of registrars jointly generating an El Gamal public key whose secret key is distributed among themselves. During registration, a designated registrar provides each voter a 1-bit secret b (1 or -1) that they should keep confidential. The secret b is provided in a receipt-free fashion, which means the voters do not hold any provable record of having received a given value for it. b is then encrypted against the joint public key of the registrars and the resulting encryption eb is uploaded to a bulletin board BB, indexed by the voter id. After the election, the polling officer sends to the registrars a list ID_attended representing the voters who attended polling. For every registered voter on BB, say the i-th voter, the registrars compute a bit p_i = b_i if the voter attended and p_i = -b_i if they did not attend. Additionally, the registrars compute an encryption ea_i of the attendance bit a_i (1 if attended; -1 if not attended) using information supplied by the polling officer. All voters can check whether their attendance bit has been correctly registered by checking if p_i matches the bit b given to them if they attended and -b if they did not attend. Assuming a statistical number of voters perform this check, the number of voters who attended can be universally verified.

Although voters can check for themselves whether their attendance has been recorded correctly, they cannot prove this fact to anyone else because the bit b is given to them in a receipt-free fashion. This makes dispute resolution hard and weakens the guarantee against ballot stuffing, because voters could be falsely claiming ballot stuffing to discredit the election. Nevertheless, giving b in a receipt-free fashion is required to prevent the forced abstention attack (otherwise a coercer can derive whether the voter attended or not). In our scheme, we avoid this dichotomy by providing universal verification against ballot stuffing instead of providing guarantees to individual voters.

Fig. 8: Akinyokun and Teague's scheme.

Keygen(R_1, …, R_k): each R_j: x_j ←$ Z_q; h_j ← g^{x_j}. Output PK_R := h := Π_{j=1}^{k} h_j; SK_{R_j} := x_j.

Register(R_j⟨PK_R⟩, id):  (registration of a voter id by the voter's designated registrar R_j)
  b ←$ {1, -1}; r ←$ Z_q; eb := Enc(PK_R, b) := (g^r, g^b · h^r).
  Append (id, eb) to bulletin board BB and embed in the voter's ID card. Give b to the voter in a receipt-free fashion.

PostProcess(BB, ID_attended, (R_j⟨SK_{R_j}⟩)_{j ∈ {1, …, k}}):
  for each i ∈ [n]  (n is the total number of registered voters):
    (id_i, eb_i) := BB_i;  b_i := Dec(eb_i, (R_j⟨SK_{R_j}⟩)_{j ∈ {1, …, k}})   (joint decryption of eb_i)
    if id_i ∈ ID_attended: p_i ← b_i; else: p_i ← -b_i
    Compute an encryption ea_i of the attendance bit a_i (1 if attended; -1 otherwise):
      if p_i = 1:  ea_i ← eb_i          = (g^r, g^{b_i} · h^r)    = (g^r, g^{a_i} · h^r)
      if p_i = -1: ea_i ← (eb_i)^{q-1}  = (g^{-r}, g^{-b_i} · h^{-r}) = (g^{-r}, g^{a_i} · h^{-r})
    Append (p_i, ea_i) to BB_i.
  The ea_i's are homomorphically added and verifiably decrypted to obtain n_diff = n_attended - n_abstained.
  Since n_attended + n_abstained = n, n_attended ← (n + n_diff)/2 can be computed publicly.

IndVerify(V⟨id, b⟩):  (individual verification by the voter, whether they attended or not)
  Look up id on BB. If attended, check p_i = b; otherwise, check p_i = -b.
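The homomorphic tally at the end of PostProcess can be illustrated with a toy exponential-ElGamal computation; the sketch below uses small hard-coded parameters purely for illustration and a single authority, and is in no way a secure parameter choice.

    # Toy exponential ElGamal over Z_23* (illustrative parameters only, not secure).
    import random
    p, q, g = 23, 11, 4              # g generates the order-11 subgroup of Z_23*

    x = random.randrange(1, q)        # joint secret key (single authority here for simplicity)
    h = pow(g, x, p)                  # joint public key

    def enc(bit):                     # bit in {1, -1}, encoded in the exponent
        r = random.randrange(1, q)
        return (pow(g, r, p), pow(g, bit % q, p) * pow(h, r, p) % p)

    def add(c1, c2):                  # homomorphic addition of the encoded exponents
        return (c1[0] * c2[0] % p, c1[1] * c2[1] % p)

    def dec_small(c, bound):          # decrypt and brute-force the small exponent
        m = c[1] * pow(c[0], (q - x) % q, p) % p
        for s in range(-bound, bound + 1):
            if pow(g, s % q, p) == m:
                return s

    attendance = [1, 1, -1, 1, -1]            # a_i = +1 attended, -1 abstained
    total = enc(attendance[0])
    for a in attendance[1:]:
        total = add(total, enc(a))
    n = len(attendance)
    n_diff = dec_small(total, n)              # n_attended - n_abstained
    n_attended = (n + n_diff) // 2
    assert n_attended == attendance.count(1)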
Electing a university president using open-audit voting: Analysis of real-world use of helios. B Adida, O De Marneffe, O Pereira, J J Quisquater, EVT/WOTE. 910Adida, B., De Marneffe, O., Pereira, O., Quisquater, J.J., et al.: Electing a university president using open-audit voting: Analysis of real-world use of helios. EVT/WOTE 9(10) (2009)
Scratch & vote: Self-contained paper-based cryptographic voting. B Adida, R L Rivest, http:/doi.acm.org/10.1145/1179601.1179607Proceedings of the 5th ACM Workshop on Privacy in Electronic Society. the 5th ACM Workshop on Privacy in Electronic SocietyNew York, NY, USAACMWPES '06Adida, B., Rivest, R.L.: Scratch & vote: Self-contained paper-based cryptographic voting. In: Proceedings of the 5th ACM Work- shop on Privacy in Electronic Society. pp. 29-40. WPES '06, ACM, New York, NY, USA (2006). https://doi.org/10. 1145/1179601.1179607, http://doi.acm.org/10.1145/1179601.1179607
Charm: a framework for rapidly prototyping cryptosystems. J A Akinyele, C Garman, I Miers, M W Pagano, M Rushanan, M Green, A D Rubin, 10.1007/s13389-013-0057-3Journal of Cryptographic Engineering. 32Akinyele, J.A., Garman, C., Miers, I., Pagano, M.W., Rushanan, M., Green, M., Rubin, A.D.: Charm: a framework for rapidly prototyping cryptosystems. Journal of Cryptographic Engineering 3(2), 111-128 (2013). https://doi.org/10.1007/ s13389-013-0057-3, http://dx.doi.org/10.1007/s13389-013-0057-3
Receipt-free, universally and individually verifiable poll attendance. N Akinyokun, V Teague, Proceedings of the Australasian Computer Science Week Multiconference. the Australasian Computer Science Week MulticonferenceAkinyokun, N., Teague, V.: Receipt-free, universally and individually verifiable poll attendance. In: Proceedings of the Australasian Computer Science Week Multiconference. pp. 1-10 (2019)
Towards practical and secure coercion-resistant electronic elections. R Araújo, N B Rajeb, R Robbana, J Traoré, S Youssfi, International Conference on Cryptology and Network Security. SpringerAraújo, R., Rajeb, N.B., Robbana, R., Traoré, J., Youssfi, S.: Towards practical and secure coercion-resistant electronic elections. In: International Conference on Cryptology and Network Security. pp. 278-297. Springer (2010)
A practical coercion resistant voting scheme revisited. R Araújo, J Traoré, International Conference on E-Voting and Identity. SpringerAraújo, R., Traoré, J.: A practical coercion resistant voting scheme revisited. In: International Conference on E-Voting and Identity. pp. 193-209. Springer (2013)
Security and Cryptography for Networks. M H Au, W Susilo, Y Mu, De Prisco, R., Yung, M.SpringerBerlin Heidelberg; Berlin, HeidelbergConstant-Size Dynamic k-TAAAu, M.H., Susilo, W., Mu, Y.: Constant-Size Dynamic k-TAA. In: De Prisco, R., Yung, M. (eds.) Security and Cryptography for Networks. pp. 111-125. Springer Berlin Heidelberg, Berlin, Heidelberg (2006)
Security and Cryptography for Networks. M H Au, W Susilo, Y Mu, De Prisco, R., Yung, M.SpringerBerlin Heidelberg; Berlin, HeidelbergConstant-Size Dynamic k-TAAAu, M.H., Susilo, W., Mu, Y.: Constant-Size Dynamic k-TAA. In: De Prisco, R., Yung, M. (eds.) Security and Cryptography for Networks. pp. 111-125. Springer Berlin Heidelberg, Berlin, Heidelberg (2006)
Pairing-friendly elliptic curves of prime order. P S L M Barreto, M Naehrig, Preneel, B., Tavares, S.SpringerBerlin Heidelberg; Berlin, HeidelbergSelected Areas in CryptographyBarreto, P.S.L.M., Naehrig, M.: Pairing-friendly elliptic curves of prime order. In: Preneel, B., Tavares, S. (eds.) Selected Areas in Cryptography. pp. 319-331. Springer Berlin Heidelberg, Berlin, Heidelberg (2006)
Star-vote: A secure, transparent, auditable, and reliable voting system. S Bell, J Benaloh, M D Byrne, D Debeauvoir, B Eakin, P Kortum, N Mcburnett, O Pereira, P B Stark, D S Wallach, G Fisher, J Montoya, M Parker, M ; Winn, D C , 2013 Electronic Voting Technology Workshop/Workshop on Trustworthy Elections (EVT/WOTE 13). USENIX Association, Washington. Bell, S., Benaloh, J., Byrne, M.D., Debeauvoir, D., Eakin, B., Kortum, P., McBurnett, N., Pereira, O., Stark, P.B., Wallach, D.S., Fisher, G., Montoya, J., Parker, M., Winn, M.: Star-vote: A secure, transparent, auditable, and reliable voting system. In: 2013 Electronic Voting Technology Workshop/Workshop on Trustworthy Elections (EVT/WOTE 13). USENIX Association, Washing- ton, D.C. (2013), https://www.usenix.org/conference/evtwote13/workshop-program/presentation/ bell
Zerocash: Decentralized anonymous payments from bitcoin. E Ben Sasson, A Chiesa, C Garman, M Green, I Miers, E Tromer, M Virza, 10.1109/SP.2014.362014 IEEE Symposium on Security and Privacy. Ben Sasson, E., Chiesa, A., Garman, C., Green, M., Miers, I., Tromer, E., Virza, M.: Zerocash: Decentralized anonymous payments from bitcoin. In: 2014 IEEE Symposium on Security and Privacy. pp. 459-474 (2014). https://doi.org/10.1109/SP. 2014.36
Ballot casting assurance via voter-initiated poll station auditing. J Benaloh, Proceedings of the USENIX Workshop on Accurate Electronic Voting Technology. the USENIX Workshop on Accurate Electronic Voting TechnologyBerkeley, CA, USAEVT'07, USENIX AssociationBenaloh, J.: Ballot casting assurance via voter-initiated poll station auditing. In: Proceedings of the USENIX Workshop on Accu- rate Electronic Voting Technology. pp. 14-14. EVT'07, USENIX Association, Berkeley, CA, USA (2007), http://dl.acm. org/citation.cfm?id=1323111.1323125
Security proofs for participation privacy, receipt-freeness, ballot privacy, and verifiability against malicious bulletin board for the helios voting scheme. D Bernhard, O Kulyk, M Volkamer, Cryptology ePrint Archive. Paper 2016/431 (2016Bernhard, D., Kulyk, O., Volkamer, M.: Security proofs for participation privacy, receipt-freeness, ballot privacy, and verifiability against malicious bulletin board for the helios voting scheme. Cryptology ePrint Archive, Paper 2016/431 (2016), https:// eprint.iacr.org/2016/431, https://eprint.iacr.org/2016/431
Bingo voting: Secure and coercion-free voting using a trusted random number generator. J M Bohli, J Müller-Quade, S Röhrich, VOTE-ID'07Proceedings of the 1st International Conference on E-voting and Identity. the 1st International Conference on E-voting and IdentityBerlin, HeidelbergSpringer-VerlagBohli, J.M., Müller-Quade, J., Röhrich, S.: Bingo voting: Secure and coercion-free voting using a trusted random number generator. In: Proceedings of the 1st International Conference on E-voting and Identity. pp. 111-124. VOTE-ID'07, Springer-Verlag, Berlin, Heidelberg (2007), http://dl.acm.org/citation.cfm?id=1787456.1787470
Short signatures without random oracles. D Boneh, X Boyen, Advances in Cryptology -EUROCRYPT. Cachin, C., Camenisch, J.L.Berlin Heidelberg; Berlin, HeidelbergSpringerBoneh, D., Boyen, X.: Short signatures without random oracles. In: Cachin, C., Camenisch, J.L. (eds.) Advances in Cryptology - EUROCRYPT 2004. pp. 56-73. Springer Berlin Heidelberg, Berlin, Heidelberg (2004)
Efficient protocols for set membership and range proofs. J Camenisch, R Chaabouni, A Shelat, 10.1007/978-3-540-89255-7_15Proceedings of the 14th International Conference on the Theory and Application of Cryptology and Information Security: Advances in Cryptology. pp. 234-252. ASIACRYPT '08. the 14th International Conference on the Theory and Application of Cryptology and Information Security: Advances in Cryptology. pp. 234-252. ASIACRYPT '08Berlin, HeidelbergSpringer-VerlagCamenisch, J., Chaabouni, R., Shelat, A.: Efficient protocols for set membership and range proofs. In: Proceedings of the 14th Inter- national Conference on the Theory and Application of Cryptology and Information Security: Advances in Cryptology. pp. 234-252. ASIACRYPT '08, Springer-Verlag, Berlin, Heidelberg (2008). https://doi.org/10.1007/978-3-540-89255-7_15, http://dx.doi.org/10.1007/978-3-540-89255-7_15
Protecting Politics: Deterring the Influence of Organized Crime on Elections. Netherlands Institute of International Relations. D G Ivan Briscoe, Catalina Uribe BurcherClingendael InstituteCatalina Uribe Burcher (editor), Ivan Briscoe, D.G.: Protecting Politics: Deterring the Influence of Organized Crime on Elections. Netherlands Institute of International Relations (Clingendael Institute) (2016)
Scantegrity: End-to-end voter-verifiable optical-scan voting. D Chaum, A Essex, R Carback, J Clark, S Popoveniuc, A Sherman, P Vora, 10.1109/MSP.2008.70IEEE Security and Privacy. 63Chaum, D., Essex, A., Carback, R., Clark, J., Popoveniuc, S., Sherman, A., Vora, P.: Scantegrity: End-to-end voter-verifiable optical-scan voting. IEEE Security and Privacy 6(3), 40-46 (May 2008). https://doi.org/10.1109/MSP.2008.70
Scantegrity ii: End-to-end verifiability by voters of optical scan elections through confirmation codes. D Chaum, R T Carback, J Clark, A Essex, S Popoveniuc, R L Rivest, P Y A Ryan, E Shen, A T Sherman, P L Vora, 10.1109/TIFS.2009.2034919Trans. Info. For. Sec. 44Chaum, D., Carback, R.T., Clark, J., Essex, A., Popoveniuc, S., Rivest, R.L., Ryan, P.Y.A., Shen, E., Sherman, A.T., Vora, P.L.: Scantegrity ii: End-to-end verifiability by voters of optical scan elections through confirmation codes. Trans. Info. For. Sec. 4(4), 611-627 (Dec 2009). https://doi.org/10.1109/TIFS.2009.2034919, https://doi.org/10.1109/TIFS. 2009.2034919
Selections: Internet voting with over-the-shoulder coercion-resistance. J Clark, U Hengartner, International Conference on Financial Cryptography and Data Security. SpringerClark, J., Hengartner, U.: Selections: Internet voting with over-the-shoulder coercion-resistance. In: International Conference on Financial Cryptography and Data Security. pp. 47-61. Springer (2011)
Sok: Verifiability notions for e-voting protocols. V Cortier, D Galindo, R Küsters, J Müller, T Truderung, 2016 IEEE Symposium on Security and Privacy (SP). IEEECortier, V., Galindo, D., Küsters, R., Müller, J., Truderung, T.: Sok: Verifiability notions for e-voting protocols. In: 2016 IEEE Symposium on Security and Privacy (SP). pp. 779-798. IEEE (2016)
Proofs of partial knowledge and simplified design of witness hiding protocols. R Cramer, I Damgård, B Schoenmakers, Annual International Cryptology Conference. SpringerCramer, R., Damgård, I., Schoenmakers, B.: Proofs of partial knowledge and simplified design of witness hiding protocols. In: Annual International Cryptology Conference. pp. 174-187. Springer (1994)
Essex, A., Clark, J., Hengartner, U.: Cobra: Toward concurrent ballot authorization for internet voting. EVT/WOTE 12 (2012)
facilitators, A.: Publishing lists of the people who voted in an election (2013), https://aceproject.org/electoral-advice/archive/questions/replies/352111816 [Online; accessed 18th August 2022]
Practical short signature batch verification. A L Ferrara, M Green, S Hohenberger, M Ø Pedersen, Topics in Cryptology -CT-RSA 2009. Fischlin, M.Berlin Heidelberg; Berlin, HeidelbergSpringerFerrara, A.L., Green, M., Hohenberger, S., Pedersen, M.Ø.: Practical short signature batch verification. In: Fischlin, M. (ed.) Topics in Cryptology -CT-RSA 2009. pp. 309-324. Springer Berlin Heidelberg, Berlin, Heidelberg (2009)
How To Prove Yourself: Practical Solutions to Identification and Signature Problems. A Fiat, A Shamir, Advances in Cryptology -CRYPTO' 86. Odlyzko, A.M.Berlin Heidelberg; Berlin, HeidelbergSpringerFiat, A., Shamir, A.: How To Prove Yourself: Practical Solutions to Identification and Signature Problems. In: Odlyzko, A.M. (ed.) Advances in Cryptology -CRYPTO' 86. pp. 186-194. Springer Berlin Heidelberg, Berlin, Heidelberg (1987)
One-out-of-many proofs: Or how to leak a secret and spend a coin. J Groth, M Kohlweiss, Annual International Conference on the Theory and Applications of Cryptographic Techniques. SpringerGroth, J., Kohlweiss, M.: One-out-of-many proofs: Or how to leak a secret and spend a coin. In: Annual International Conference on the Theory and Applications of Cryptographic Techniques. pp. 253-280. Springer (2015)
Coercion-resistant electronic elections. A Juels, D Catalano, M Jakobsson, Proceedings of the 2005 ACM Workshop on Privacy in the Electronic Society. the 2005 ACM Workshop on Privacy in the Electronic SocietyJuels, A., Catalano, D., Jakobsson, M.: Coercion-resistant electronic elections. In: Proceedings of the 2005 ACM Workshop on Privacy in the Electronic Society. pp. 61-70 (2005)
Statistical detection of systematic election irregularities. P Klimek, Y Yegorov, R Hanel, S Thurner, https:/www.pnas.org/doi/abs/10.1073/pnas.1210722109Proceedings of the National Academy of Sciences. 10941Klimek, P., Yegorov, Y., Hanel, R., Thurner, S.: Statistical detection of systematic election irregularities. Proceedings of the Na- tional Academy of Sciences 109(41), 16469-16473 (2012). https://doi.org/10.1073/pnas.1210722109, https: //www.pnas.org/doi/abs/10.1073/pnas.1210722109
Extending helios towards private eligibility verifiability. O Kulyk, V Teague, M Volkamer, International Conference on E-Voting and Identity. SpringerKulyk, O., Teague, V., Volkamer, M.: Extending helios towards private eligibility verifiability. In: International Conference on E-Voting and Identity. pp. 57-73. Springer (2015)
Assessing electoral fraud in new democracies: a basic conceptual framework. Washington DC International Foundation for Electoral Systems. R López-Pintor, White Paper Series Electoral Fraud. López-Pintor, R.: Assessing electoral fraud in new democracies: a basic conceptual framework. Washington DC International Foundation for Electoral Systems: White Paper Series Electoral Fraud (2010)
B Lynn, PBC Library. pbc-0.5.14. AccessedLynn, B.: PBC Library (pbc-0.5.14). https://crypto.stanford.edu/pbc/ (2013), [Accessed June 10, 2019]
U Maurer, 10.1007/3-540-49677-7_301007/3-540-49677-7_30EUROCRYPT '96. Berlin Heidelberg; Berlin; HeidelbergSpringerMaurer, U.: EUROCRYPT '96, pp. 199-205. Springer Berlin Heidelberg, Berlin, Heidelberg (1998). https://doi.org/10. 1007/3-540-49677-7_30, https://doi.org/10.1007/3-540-49677-7_30
Non-interactive and information-theoretic secure verifiable secret sharing. T P Pedersen, Proceedings of the 11th Annual International Cryptology Conference on Advances in Cryptology. pp. 129-140. CRYPTO '91. the 11th Annual International Cryptology Conference on Advances in Cryptology. pp. 129-140. CRYPTO '91London, UK, UKSpringer-VerlagPedersen, T.P.: Non-interactive and information-theoretic secure verifiable secret sharing. In: Proceedings of the 11th Annual International Cryptology Conference on Advances in Cryptology. pp. 129-140. CRYPTO '91, Springer-Verlag, London, UK, UK (1992), http://dl.acm.org/citation.cfm?id=646756.705507
Prêtà voter: A voter-verifiable voting system. P Y A Ryan, D Bismark, J Heather, S Schneider, Z Xia, 10.1109/TIFS.2009.2033233Trans. Info. For. Sec. 44Ryan, P.Y.A., Bismark, D., Heather, J., Schneider, S., Xia, Z.: Prêtà voter: A voter-verifiable voting system. Trans. Info. For. Sec. 4(4), 662-673 (Dec 2009). https://doi.org/10.1109/TIFS.2009.2033233, http://dx.doi.org/10.1109/ TIFS.2009.2033233
A new approach towards coercion-resistant remote e-voting in linear time. O Spycher, R Koenig, R Haenni, M Schläpfer, International Conference on Financial Cryptography and Data Security. SpringerSpycher, O., Koenig, R., Haenni, R., Schläpfer, M.: A new approach towards coercion-resistant remote e-voting in linear time. In: International Conference on Financial Cryptography and Data Security. pp. 182-189. Springer (2011)
Nitrogen enhancements 440 Myr after the Big Bang: super-solar N/O, a tidal disruption event or a dense stellar cluster in GN-z11?

Alex J. Cameron★, Harley Katz, Martin P. Rey and Aayush Saxena
Department of Physics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH, UK
Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, UK (A. Saxena)
★ E-mail: [email protected]

MNRAS 000 (2023). Preprint 23 February 2023, compiled using the MNRAS LaTeX style file v3.0. DOI: 10.1093/mnras/stad1579; arXiv:2302.10142.

Key words: galaxies: abundances - galaxies: high-redshift - galaxies: ISM

ABSTRACT
Recent observations of GN-z11 with JWST/NIRSpec revealed numerous oxygen, carbon, nitrogen, and helium emission lines at z = 10.6. Using the measured line fluxes, we derive abundance ratios of individual elements within the interstellar medium (ISM) of this super-luminous galaxy. Driven by the unusually-bright N III] 1750 and N IV] 1486 emission lines (and by comparison faint O III] 1660, 1666 lines), our fiducial model prefers log(N/O) > -0.25, greater than four times solar and in stark contrast to lower-redshift star-forming galaxies. The derived log(C/O) > -0.78 (≈30 % solar) is also elevated with respect to galaxies of similar metallicity (12 + log(O/H) ≈ 7.82), although less at odds with lower-redshift measurements. Given the long timescale typically expected to enrich nitrogen with stellar winds, traditional scenarios require a very fine-tuned formation history to reproduce such an elevated N/O. We find no compelling evidence that nitrogen enhancement in GN-z11 can be explained by enrichment from metal-free Population III stars. Interestingly, yields from runaway stellar collisions in a dense stellar cluster or a tidal disruption event provide promising solutions to give rise to these unusual emission lines at z = 10.6, and explain the resemblance between GN-z11 and a nitrogen-loud quasar. These recent observations showcase the new frontier opened by JWST to constrain galactic enrichment and stellar evolution within 440 Myr of the Big Bang.
INTRODUCTION
Chemical abundance measurements provide powerful constraints on the physical mechanisms underlying galaxy formation and evolution. Elements heavier than hydrogen and helium (metals) are formed via processes associated with the stellar life cycle, and the assembly of each galaxy is hence inherently linked with chemical enrichment (see Maiolino & Mannucci 2019 for a review). One way this connection manifests is through the mass-metallicity relation. The correlation between metal enrichment (metallicity) and galaxy stellar mass has been well established across the history of the Universe, both for metals in the gas-phase (Lequeux et al. 1979;Tremonti et al. 2004;Mannucci et al. 2010;Andrews & Martini 2013;Steidel et al. 2016;Yates et al. 2020;Sanders et al. 2021) and for metals locked in stars (Gallazzi et al. 2005;Kirby et al. 2013;Zahid et al. 2017;Cullen et al. 2019;Kashino et al. 2022). These studies demonstrate a general trend whereby metal enrichment tracks star formation across the Universe, with more evolved galaxies being more enriched with metals, and higher-redshift galaxies having lower metallicities (e.g. Maiolino & Mannucci 2019 for a summary).
Studies of individual heavy elements and their relative abundance ratios with respect to each other can provide further constraints on galaxy evolution. Certain elements are formed by distinct astrophysical channels that occur on different timescales (see e.g. Kobayashi & Taylor 2023 for a review). The relative abundances of metals formed through these different channels will thus vary as a galaxy evolves. Hence, the relative abundances of heavy elements can reveal the underlying physical process that dominated the growth of a galaxy.
Of particular interest are oxygen, carbon and nitrogen. These three elements are amongst the most abundant metals in the Universe. They can be readily observed in the ISM of galaxies via prominent emission lines, and they are formed preferentially by specific astrophysical processes with distinct enrichment timescales. Oxygen is predominantly formed in core-collapse supernovae (CCSN) that occur on short timescales following the onset of star formation (see e.g. Nomoto et al. 2013 for a review). Moderate levels of carbon and nitrogen are formed in CCSN, but these two metals are also enriched via stellar winds during the asymptotic giant branch (AGB) phase of intermediate-mass stars (see e.g. Karakas & Lattanzio 2014 for a review). Since such intermediate-mass stars have longer main-sequence lifetimes before their giant phase, enrichment in carbon and nitrogen is expected to occur on longer timescales, with nitrogen potentially lagging behind carbon. Thus, the canonical picture of chemical evolution (see e.g. discussion in Kobayashi et al. 2020) is that galaxies are rapidly enriched with oxygen (and other α-elements) following a burst of star formation, while the nitrogen and carbon content of a galaxy slowly grows as the stellar populations age. Emission-line galaxies represent an ideal laboratory to quantitatively test these galactic chemical evolution models.

Collisionally-excited emission lines (CELs), particularly [O III] 5007 and [O II] 3726, 3729 arising from ionised oxygen, have been extensively used to derive gas-phase oxygen abundances (O/H) in the ISM of galaxies (e.g. Andrews & Martini 2013; Curti et al. 2020; Sanders et al. 2021) and now extend to z ∼ 7 thanks to the JWST (Curti et al. 2023; Tang et al. 2023; Nakajima et al. 2023). The gas-phase nitrogen abundance, and its ratio to oxygen (N/O), is typically probed with the [N II] 6583 emission line. Such studies have revealed that N/O is well-correlated with O/H both in the local Universe and at higher redshift (Pilyugin et al. 2012; Pérez-Montero et al. 2013; Masters et al. 2014; Amorín et al. 2017; Berg et al. 2020; Hayden-Pawson et al. 2022), with a characteristic shape showing a plateau below 12 + log(O/H) ≈ 8.1 and a steady increase toward higher metallicities. High N/O ratios are typically only found in galaxies with super-solar metallicities, consistent with the expectation of slow nitrogen enrichment over the course of the Universe, while low-metallicity galaxies tend to stay well below log(N/O) ≈ -0.5.

A similar method can be used to derive gas-phase carbon abundances using rest-frame ultraviolet (UV) emission to estimate the evolution of C/O over time (e.g. Garnett et al. 1995, 1999; Berg et al. 2016, 2019a; Steidel et al. 2016; Llerena et al. 2022, and references therein). Such studies find a comparable picture to that of nitrogen: galaxies with higher O/H also exhibit higher C/O. Measurements of C/O have been extended to z ≥ 6 galaxies with JWST, providing evidence that carbon enrichment has already proceeded beyond that expected from CCSN alone (Arellano-Córdova et al. 2022; Jones et al. 2023).

Recently, Bunker et al. (2023) (B23 hereafter) reported a JWST/NIRSpec spectrum for GN-z11. This galaxy was first identified as a high-redshift candidate in Oesch et al. (2015), was tentatively confirmed to have a redshift of z = 11.1 based on HST grism spectroscopy of the Lyman break (Oesch et al. 2016), and is now confirmed unambiguously at z = 10.60 (Bunker et al. 2023). GN-z11 is remarkably luminous compared to the z ∼ 10-11 luminosity function (see Robertson 2022 for a review and e.g. Bouwens et al. 2022; Finkelstein et al. 2022; Harikane et al. 2022; Pérez-González et al. 2023 for recent determinations with the JWST). Of further intrigue, the NIRSpec spectrum published in B23 also shows numerous emission lines, arising from oxygen as well as carbon and helium. Even more surprisingly, unusually-bright nitrogen emission is detected (N III] 1750, N IV] 1486), with the measured line fluxes being higher than those measured for oxygen in the rest-frame UV. N III] 1750 is rarely observed in galaxy spectra, and when detected it is typically much fainter than other nearby rest-frame UV lines (particularly O III] 1660, 1666). This is verified both in local analogues to high-redshift galaxies (e.g. Mingozzi et al. 2022), individual galaxies at z ∼ 2 (Berg et al. 2018), stacked spectra of z ∼ 2-4 galaxies (Le Fèvre et al. 2019; Saxena et al. 2022), and in a tentative detection at z > 7 with JWST/NIRISS (Roberts-Borsani et al. 2022). The question of a possible signature of AGN activity in GN-z11 is one we revisit in the discussion of Section 3.1.

A possible interpretation to explain such bright nitrogen emission lines is an unusually high nitrogen content, and an elevated N/O ratio. Indeed, this is suggested by B23. In this paper, we use the emission line flux ratios published by B23 to quantify the N/O and C/O abundance ratios in GN-z11 (Section 2). Our fiducial model implies super-solar N/O and elevated C/O at z = 10.6, where the age of the Universe is less than 500 Myr. We discuss in Section 3 that these abundance ratios are challenging to explain with traditional enrichment arguments where nitrogen and carbon are enriched by stellar evolution channels on long timescales, and review other more exotic scenarios that could explain such elevated values at z = 10.6. We present a summary in Section 4.
ABUNDANCE MEASUREMENTS
In this section we outline our methods for deriving limits on a series of ion abundance ratios in GN-z11. We use these calculations to place constraints on the N/O, C/O and O/H abundance ratios in GN-z11, which we summarise in Tables 1 and A1. Discussion of the implications of these derived values can be found in Section 3.
Emissivity calculations in this section are performed with PyNeb (Luridiana et al. 2015), using the atomic data from the CHIANTI database (version 10.0.2; Dere et al. 1997; Del Zanna et al. 2021).
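For readers who want to reproduce this style of calculation, the short sketch below shows how line emissivities can be queried with PyNeb. It is a minimal illustration rather than the authors' actual script, and it assumes a standard PyNeb installation whose default atomic data include the [O III] transitions used here (the CHIANTI version quoted above may differ from what ships with a default installation).

```python
import pyneb as pn

# Collisionally-excited O++ (the [O III] ion used throughout Section 2)
O3 = pn.Atom('O', 3)

# Line emissivities (erg cm^3 s^-1) at the fiducial conditions quoted in the text:
# T_e = 1.46e4 K and n_e = 100 cm^-3.
j_4363 = O3.getEmissivity(tem=1.46e4, den=100., wave=4363)
j_5007 = O3.getEmissivity(tem=1.46e4, den=100., wave=5007)

print(f"j(4363)/j(5007) at T_e = 1.46e4 K: {j_4363 / j_5007:.4f}")
```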
Electron temperature constraints

The electron temperature implied by the reported line ratios is lower than values typically found for galaxies at these redshifts (e.g. Curti et al. 2023; Katz et al. 2023), and also lower than that expected from the reported presence of N ] emission. This low electron temperature is however consistent with the galaxy being at higher metallicity compared to other high-redshift objects. We consider an alternative approach for estimating the electron temperature, making use of the [O ] 4363/[Ne ] 3869 ratio. Neon and oxygen are both α-elements and the Ne/O abundance ratio has been observed to be quite consistent across a large range of abundances and redshifts (e.g. Berg et al. 2019b, 2020; Arellano-Córdova et al. 2022). To bracket the range of temperatures implied by this spectrum, we perform our abundance analysis also at the temperatures implied by this alternative approach, and finally at 3.0 × 10^4 K as a final bounding case³, although this scenario is not favoured.
Throughout this calculation we assume these quoted temperatures apply to the 'high-ionisation' zone (i.e., T_e(O III) = T_e(N III) = T_e(C III)). We assume that O+ traces a different ionisation zone with a different T_e. We consider two conversions from T_e(O III) to T_e(O II): (i) the calibration provided in Pilyugin et al. (2009), and (ii) a more exaggerated case where T_e(O II) = 0.7 × T_e(O III). We initially adopt n_e = 100 cm^-3, although the effect of density variations is also discussed below. Abundance ratios derived for these different temperatures are provided in full in Table A1.
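As a concrete illustration of the two-zone bookkeeping just described, the snippet below tabulates the adopted high-ionisation temperatures together with the two low-ionisation prescriptions; the Pilyugin et al. (2009) values are simply copied from the numbers quoted later in this section, not re-derived here.

```python
# Adopted high-ionisation-zone temperatures (K), as listed in Section 2.1 / Table A1
T_high = [1.05e4, 1.46e4, 2.36e4, 3.00e4]

# T(O II) from the Pilyugin et al. (2009) calibration, as quoted in the text
T_low_pilyugin = [1.14e4, 1.48e4, 2.24e4, 2.77e4]

# More exaggerated bounding case used in the text: T(O II) = 0.7 x T(O III)
T_low_scaled = [0.7 * t for t in T_high]

for th, tp, ts in zip(T_high, T_low_pilyugin, T_low_scaled):
    print(f"T(O III) = {th:8.0f} K  ->  T(O II): Pilyugin {tp:8.0f} K,  0.7x case {ts:8.0f} K")
```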
Constraints on N/O in GN-z11
We now turn to exploring the possible N/O ratios implied by the emission line measurements reported by B23 for GN-z11. With measurements reported for the N ] 1750 and N ] 1486 emission lines, we can sample the N++ and N3+ ions. Fluxes (or limits) are reported for three oxygen lines: O ] 1660, 1666, [O ] 3726, 3729, and [O ] 4363. The [N ] 6583 emission line, which has historically been the most common way of determining nitrogen abundances (e.g. Pérez-Montero 2017), is not within the spectral coverage. Although it is not a widely used tracer, nitrogen abundance constraints have been reported from N ] 1750 at z ∼ 0 (Garnett et al. 1999) and z ∼ 2 (Berg et al. 2018).
We first consider N++/O++ from the N ] 1750 / O ] 1660, 1666 ratio. Since the O ] 1660, 1666 emission is reported as an upper limit, for a given adopted temperature this ratio provides a lower limit on the N++/O++ ratio. With our fiducial temperature we derive a value of log(N++/O++) > −0.07. Considering our range of temperatures (Section 2.1), we obtain ion abundance ratios between −0.12 ≤ log(N++/O++) ≤ 0.1, exhibiting only a mild T_e dependence. We can also derive an estimate of N++/O++ from the N ] 1750 / [O ] 4363 ratio. This has the disadvantage of a much larger wavelength difference, making the ratio sensitive to any wavelength-dependent dust corrections. Assuming there is no dust, we find log(N++/O++) = −0.19 for our fiducial temperature. We note that invoking the presence of dust preferentially increases the N ] 1750 / [O ] 4363 flux ratio, which serves to increase the inferred N++/O++. Considering our range of temperatures, we find that, derived from N ] 1750 / [O ] 4363, the N++/O++ ratio has a somewhat larger T_e-dependence, and that the dependence is opposite, varying between −0.48 ≤ log(N++/O++) ≤ 0.04 (Table A1).

³ Temperatures of ∼3 × 10^4 K have been reported in z > 8 galaxies (e.g. Katz et al. 2023).
The emissivity ratio of O ] 1660,1666 / N ] 1750 is essentially constant with density up to ∼10^5 cm^-3, suggesting that the impact of density variations is minor. A small density dependence appears at ∼10^6 cm^-3, although it is not significant enough to appreciably reduce the derived N++/O++. Furthermore, the density dependence of [O ] 4363 / N ] 1750 implies higher N++/O++ at high density, further disfavouring this solution.
Although N ++ and O ++ are both high-ionisation ions and will trace similar regions of the nebula, we cannot necessarily assume N/O = N ++ /O ++ . Since the N ] 1750 line has not been widely studied in the literature, ionisation correction factors (ICF) for the N ++ /O ++ ratio are typically not considered (e.g. Pérez-Montero 2017; Amayo et al. 2021). The second and third ionisation energies of nitrogen (29.6 eV and 47.4 eV) and oxygen (35.1 eV and 54.9 eV) imply that the nebular zone probed by emission from N ++ ions should contain both O + and O ++ ions (Kramida et al. 2022). We therefore assume that N ++ /(O + + O ++ ) provides a lower limit on the total N/O abundance ratio, and thus we derive the N ++ /O + ratio from the detected [O ] 3726, 3729 lines.
Unlike the N++/O++ calculation, we do not assume that the O+ ion shares the same temperature as N++. We do not have any direct constraints on the temperature of the low-ionisation zone, so we instead derive T_e(O II) for each assumed temperature using the relation from Pilyugin et al. (2009), yielding T_e(O II) = [1.14, 1.48, 2.24, 2.77] × 10^4 K.
Assuming this two-zone model, we find 0.38 ≤ log(N++/O+) ≤ 1.69, consistent with the finding from B23 that GN-z11 has a very highly ionised ISM. We note that such T_e(O II)–T_e(O III) calibrations are highly uncertain (Yates et al. 2020; Cameron et al. 2021). Given that a lower T_e(O II) leads to lower [O ] emissivity, and thus lower N++/O+, we repeat this calculation with an exaggerated case where T_e(O II) = 0.7 × T_e(O III). This yields slightly lower values, between 0.23 ≤ log(N++/O+) ≤ 0.96, which we include as part of our conservative model.
As for N++ and O++, dust corrections might need to be applied for GN-z11, but we note that they would only increase the derived N++/O+, and thus N/O. Furthermore, high densities are once again disfavoured. The derived N++/O+ shows limited density dependence up to ∼10^5 cm^-3, above which the emissivity of [O ] 3726, 3729 drops precipitously, dramatically decreasing the inferred N++/O+. However, such densities would make the emergence of the resonant Lyman-α and Mg II emission difficult to explain, except in the complete absence of dust or if we are conveniently looking down an optically thin channel, and would imply super-solar O/H. Similarly, as discussed above, high densities would not explain the high N++/O++.
To remain conservative, for each T_e value, we take the N++/O++ and N++/O+ values that yield the lowest nitrogen abundance, and combine these to obtain log N++/(O+ + O++), treating this as our lower limit on the total N/O. Our fiducial case thus implies log(N/O) > −0.25, more than four times higher than solar (log(N/O) = −0.86; Table 1 and Figure 1). We obtain log(N/O) > −0.13 and log(N/O) > −0.49 in our conservative low- and high-temperature cases, while our ultra-high-temperature bounding case still yields log(N/O) > −0.55, twice that of the solar ratio.

The calculations presented here have not included the N ] emission line. Considering this higher ionisation state only makes this picture more puzzling, since some N ] emission can originate from the O++ zone. The weak He 1486 emission and low derived N3+/N++ ratio (Table A1) would seem to imply the O3+ abundance is relatively low. Thus, considering the N3+ ion would likely only increase the inferred N/O ratio. In summary, from measurements and limits on the N ] 1750 / O ] 1660,1666, N ] 1750 / [O ] 4363, and N ] 1750 / [O ] 3726, 3729 emission line ratios, we infer lower limits on the total N/O abundance ratio that most conservatively suggest log(N/O) > −0.55, which is twice the solar value, or, with more realistic assumptions, log(N/O) > −0.25, which is four times the solar value. We will return to the surprising implications of this in Section 3 and now repeat these arguments to derive carbon abundances.
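The 'combine the two ionic ratios' step above is just the following arithmetic. In this minimal sketch, log(N++/O++) = −0.19 is the dust-free [O III] 4363 value quoted above, while the log(N++/O+) value is, for illustration only, set to the lowest value quoted across the adopted temperatures rather than the fiducial-temperature entry of Table A1.

```python
import math

def combine(log_x_over_o2, log_x_over_o1):
    """Combine X++/O++ and X++/O+ (both log10) into log10 of X++/(O+ + O++)."""
    return -math.log10(10 ** (-log_x_over_o2) + 10 ** (-log_x_over_o1))

log_n_o2 = -0.19  # N++/O++ from N III] 1750 / [O III] 4363, no dust (fiducial T_e)
log_n_o1 = 0.38   # illustrative: lowest N++/O+ quoted across the adopted temperatures

print(f"log N++/(O+ + O++) > {combine(log_n_o2, log_n_o1):.2f}")
# Using the fiducial-temperature entries of Table A1 instead, the text's fiducial
# combination gives the slightly higher lower limit log(N/O) > -0.25.
```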
Constraints on C/O
Given the detection of C ] 1909 reported by B23, we can derive constraints on the C/O abundance ratio following the same set of assumptions and procedure outlined in Section 2.2 for N/O. We reiterate that, for each temperature, the reported abundance ratios are lower limits (see Section 2.2).
As with nitrogen, the ionisation potentials of carbon are such that the C++ ionisation zone should overlap with the O+ and O++ zones. Therefore, we again assume that C++/(O+ + O++) provides a lower limit on the total C/O ratio, yielding log(C/O) > −0.78 in our fiducial case, and log(C/O) > −0.95 under more conservative assumptions (see Tables 1 and A1). These values are somewhat higher than previously reported in high-redshift galaxies (Arellano-Córdova et al. 2022; Jones et al. 2023), but are reasonably consistent with lower-redshift objects (Figure 1).

Constraints on O/H

To remain consistent with the approaches used in Sections 2.2 and 2.3, we derive T_e-based O/H values, adopting the range of temperatures assumed in Section 2.1 for T_e(O III), and converting these into T_e(O II) using the calibration from Pilyugin et al. (2009). We derive O++/H+ from the ratio of [O ] 4363 / H, and O+/H+ from the ratio of [O ] 3726, 3729 / H, and assume that the total oxygen abundance of GN-z11 is well approximated as O/H ≈ (O++ + O+)/H+.

Although the weak He emission reported by B23 could suggest some fraction of oxygen is present as O3+, the uncertainty in our measurement is more likely dominated by our inability to precisely constrain temperature. Table A1 demonstrates the large temperature dependence of the total O/H ratio, which changes by almost two orders of magnitude across our adopted range. Nonetheless, our fiducial temperature yields 12 + log(O/H) = 7.82, broadly consistent with the value inferred by B23 from both strong-line and SED fitting methods.
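The final sum is trivial but worth making explicit. In the sketch below the two ionic abundances are hypothetical placeholders (chosen only so that their sum reproduces the fiducial 12 + log(O/H) = 7.82 quoted above), standing in for the T_e-based values derived from the line-to-Balmer ratios.

```python
import math

# Hypothetical ionic abundances, chosen purely for illustration so that the
# total matches the fiducial 12 + log(O/H) = 7.82 quoted in the text.
O2_over_H = 6.2e-5   # O++/H+
O1_over_H = 4.0e-6   # O+/H+

total = O2_over_H + O1_over_H          # O/H ~ (O++ + O+)/H+
print(f"12 + log(O/H) = {12 + math.log10(total):.2f}")
```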
DISCUSSION
The abundance ratios inferred from the bright nitrogen emission lines in the spectrum of GN-z11 imply a highly nitrogen-enriched ISM. In this section, we argue that such super-solar N/O ratios are particularly peculiar at z = 10.6, and propose several scenarios that may explain this behaviour.
Could GN-z11 be powered by a massive black hole?
Although N ] 1750 has rarely been observed in star-forming galaxies, a 'nitrogen-loud' population of quasars has been identified in SDSS exhibiting strong N ] 1750 and N ] 1486 emission (Jiang et al. 2008). Furthermore, a recent spectrum of a z = 5.5 AGN (Übler et al. 2023) also shows these nitrogen lines. B23 found that rest-frame UV emission line ratios in GN-z11 are generally more consistent with star-formation models (e.g. Feltre et al. 2016; Hirschmann et al. 2019), but there is overlap with the parameter space inhabited by some AGN models (Nakajima & Maiolino 2022), and the possibility that GN-z11 hosts an AGN cannot conclusively be ruled out (see also Jiang et al. 2021). It is unclear whether applying the method as outlined in Section 2 to derive metal abundance ratios is valid in the case of an AGN, or how to interpret the emission line fluxes at hand if they arise from a high-density broad-line region, and we thus discuss here the possibility that GN-z11 is powered by a massive black hole.
Similar to GN-z11, nitrogen loud quasars have been suggested to arise due to enhanced nitrogen abundance and are rare, comprising only ∼ 1 % of the SDSS quasar sample (Jiang et al. 2008). However, they are observed at much lower redshift with longer possible metal enrichment timescales, and whether the elevated N/O is simply tracing an increase in O/H via secondary enrichment (Batra & Baldwin 2014) or whether nitrogen is specifically enriched (Araki et al. 2012;Kochanek 2016;Matsuoka et al. 2017) remains debated.
The equivalent width (EW) ratio of EW(N ])/EW(C ]) in GN-z11 would place it in the top ∼ 2% of the Jiang et al. (2008) sample (already sampling only ∼ 1% of SDSS quasars). Considering that AGN are expected to be rare at z > 10 given the drop in the AGN luminosity function (Kulkarni et al. 2019), it would be interesting if GN-z11 is part of such a rare subpopulation of AGN and would imply that nitrogen-loud quasars dominate the AGN population in the early Universe. Even if GN-z11 is an AGN, explaining the nitrogen-loud behaviour would likely require super-solar N/O and N/C ratios.
As we discuss below, explaining super-solar N/O and N/C is difficult with standard stellar nucleosynthesis models. One alternative to explain the rarity and the nitrogen-enrichment in such quasars is enrichment by tidal disruption events (TDEs, Kochanek 2016). As a star nears a supermassive black hole, it can be tidally disrupted. Since the cores of intermediate-mass stars are rich in light elements, such TDEs can result in abundance anomalies with high N/C ratios (e.g. Cenko et al. 2016; Yang et al. 2017; Brown et al. 2018; Sheng et al. 2021) and help explain the emission patterns of nitrogen-rich quasars (Kochanek 2016).
To summarize, we cannot conclusively determine whether GN-z11 is a nitrogen-loud quasar and/or powered by a TDE, although its emission properties would put it amongst the rarest objects known in this category. Furthermore, the likelihood of such scenarios would have to be confronted quantitatively against the expected demographics of super-massive black holes which are expected to plummet at high redshift (e.g. Volonteri 2010). Even in this case, GN-z11 may still require super-solar N/O ratios which are challenging to explain with typical galactic enrichment models (Section 3.2). We note that deep high resolution NIRSpec spectroscopy of this object could help reveal the presence or absence of broadened lines and shed light on its nature.
Is the over-abundance of light elements in GN-z11 from traditional evolved stars?
Under the traditional paradigm for galactic chemical evolution, oxygen (and other α-elements) is enriched first via core-collapse supernova (CCSN), while carbon appears on a slightly longer timescale through both CCSN and winds from asymptotic giant branch (AGB) stars, and nitrogen lags behind mainly as a product of AGB stars (see e.g. Nomoto et al. 2013; Kobayashi & Taylor 2023 for reviews). A massive progenitor for an AGB star (≈ 6 M⊙) requires ≈ 50 Myr to move off the main sequence and enter the giant phase, where its winds will deposit (primarily) carbon and nitrogen in the surrounding gas. Given that the age of the Universe is only 440 Myr at z = 10.6, this would put the birth of such AGB progenitors in a star formation burst at z ≥ 12. Requiring a significant contribution from intermediate-mass progenitors (≈ 3 − 4 M⊙) would push these requirements to even earlier times (z ≥ 14). This is possible given that observed high-redshift Balmer breaks may indicate star formation as early as 250 Myr after the Big Bang (Hashimoto et al. 2018); however, if the timescale gets moved too far back, we may enter a regime dominated by Population III (Pop. III) star formation (e.g. Bromm 2013) where the IMF and yields may be different (see below). While the timescales for AGB stars are reasonable, a crucial aspect of GN-z11 is the over-enrichment of nitrogen compared to oxygen. This implies efficient light element production, but also very inefficient oxygen enrichment by CCSNe given the fiducial metallicity of the object. This must apply both in the hypothetical first burst of star formation at z ≥ 14 giving rise to AGB progenitors, and in the current event at z = 10.6 powering the observed emission lines with star formation rate ∼ 20 M⊙ yr^-1 (B23, Tacchella et al. 2023). A coincidental sequence of events could provide a mechanism to maintain the observed N/O and C/O. For example, the older star formation event could have either expelled most of the early oxygen in a powerful outflow or failed to produce it by collapsing most SN progenitors directly into black holes. Next, AGBs enriched the gas in nitrogen over tens of Myr, and we are catching GN-z11 just before the most recent CCSNe at z = 10.6 enrich its ISM significantly in oxygen.
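As a rough numerical check of the timescale argument above, one can convert "an AGB progenitor needs ≈ 50 Myr" into a formation redshift with a standard cosmology package. The sketch below uses astropy's Planck 2018 parameters, which is an assumption on our part since the text does not specify a cosmology.

```python
from astropy.cosmology import Planck18, z_at_value
import astropy.units as u

z_obs = 10.6
age_at_obs = Planck18.age(z_obs)  # ~0.44 Gyr, close to the ~440 Myr quoted in the text
print(f"Age of the Universe at z = {z_obs}: {age_at_obs.to(u.Myr):.0f}")

# A ~6 Msun AGB progenitor needs ~50 Myr to leave the main sequence, so such
# stars must have formed at least this much earlier than z = 10.6.
z_form = z_at_value(Planck18.age, age_at_obs - 50 * u.Myr)
print(f"Redshift at which the Universe was 50 Myr younger: z ~ {z_form:.1f}")
```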
Such a fine-tuned scenario to explain the observed N/O and C/O at z = 10.6 through AGB enrichment is thus possible but rather contrived, and would need careful quantitative validation against models of galactic enrichment. In this case, most high-redshift luminous galaxies should be nitrogen-enriched. Stellar evolution models of AGB stars at low and very low metallicities exhibit shorter main-sequence lifetimes to reach the giant phase, and increased nitrogen and carbon production (e.g. Cristallo et al. 2015; Ventura et al. 2021; Gil-Pons et al. 2022). This could help quicken nitrogen and carbon enrichment timescales and relieve potential tensions, as would including more massive AGB and super-AGB progenitors that evolve quicker but whose yields remain uncertain (e.g. Ventura & Marigo 2010; Siess 2010; Doherty et al. 2014; Gil-Pons et al. 2022; see Karakas & Lattanzio 2014 for a review). Alternatively, stellar rotation and magnetic fields could also modify metal yields during the AGB phase (e.g. Meynet & Maeder 2002; Piersanti et al. 2013; den Hartogh et al. 2019), but a consensus on the respective importance of these mechanisms for galactic-scale enrichment remains lacking.
The spectrum of GN-z11 also presents a tentative detection of the He 1486 line that could be associated with young, massive stars in the Wolf-Rayet phase. Such stars evolve quickly off the main sequence (2 − 3 Myr at solar metallicity; Meynet 1995), bypassing the longer timescales associated with AGB stars, and power winds that could contribute significant carbon and nitrogen to the chemical enrichment of the galaxy (see e.g. Crowther 2007; Vink 2022 for reviews). Galaxies dominated by Wolf-Rayet features have been linked to elevated N/O ratios at lower redshifts (e.g. Brinchmann et al. 2008; Berg et al. 2011; Masters et al. 2014), although their reported N/O ratios (log(N/O) ≲ −0.5) remain much lower than reported here in GN-z11 (log(N/O) ≳ −0.3). However, the weak He line would imply limited contribution of Wolf-Rayet starlight to the integrated spectrum, and such helium-line ratios can also be explained by harder ionizing stellar populations at higher redshift (see e.g. discussion in Steidel et al. 2016), disfavouring this interpretation. It also remains unclear whether enough Wolf-Rayet stars could be present without CCSNe, which would rapidly enrich the ISM with oxygen and lower N/O and C/O. Nonetheless, if galaxies at z ≥ 10 commonly undergo a Wolf-Rayet-dominated phase, we would expect to see more systems with elevated N/O abundance ratios at high redshift, which are yet to be detected but could be probed by further JWST observations (e.g. Roberts-Borsani et al. 2022; Cameron et al. 2023).
To summarize, explaining the super-solar N/O and C/O abundance ratios in GN-z11 using traditional stellar evolutionary tracks is possible, but likely requires a very specific formation scenario.
Are we witnessing the chemical signatures of primordial or exotic stellar evolution channels?
Another possibility to explain the observed high N/O ratios in the ISM of GN-z11 is that the stars powering the bright emission lines at z = 10.6 are rapidly enriching the ISM in nitrogen. Such a production mechanism would necessitate unusual stellar evolution channels, likely to be rare or cease to operate at later times (or both), as no metal-poor galaxies exhibit this level of nitrogen enhancement at lower redshifts (Figure 1). Furthermore, if this channel were to be common, its chemical signatures would likely be imprinted on the abundances of low-metallicity halo stars around our Milky Way. Carbon enhancements are commonly detected in halo stars (e.g. Frebel & Norris 2015 for a review). In contrast, nitrogen enhancements are rare (e.g. Johnson et al. 2007; Pols et al. 2012; Simpson & Martell 2019), and are often attributed to binary mass-transfer at later times (e.g. Suda et al. 2004; Pols et al. 2012; Fernández-Trincado et al. 2019; Roriz et al. 2023) rather than being set by the birth environment of the star. However, there are important examples where binary evolution is not the preferred explanation. In particular, HE 1327-2326 (Frebel et al. 2008) and J0815+4729 (González Hernández et al. 2020) exhibit no signatures of mass transfer or binary companions, despite their drastic enhancements of carbon and nitrogen (Aoki et al. 2006). Many have thus proposed that the abundances of HE 1327-2326 were set at high redshift, possibly by Population III (Pop. III) primordial stars (Iwamoto et al. 2005; Frebel et al. 2005; Hirschi 2007; Heger & Woosley 2010; Ezzeddine et al. 2019).
Given the parallels with such nitrogen-enhanced metal-poor stars, we now assess the likelihood that in-situ Pop. III star formation could be responsible for the observed abundance ratios of GN-z11. To this end, we explore a compilation of Pop. III and low-metallicity SN yields, scanning across the available parameter space to identify models that produce abundance ratios close to our derived fiducial values. Namely, we look for (i) a carbon-to-oxygen ratio such that log(C/O) > −0.78, (ii) a high ratio of nitrogen to oxygen (log(N/O) > −0.25), (iii) more nitrogen than carbon (log(N/C) > 0.53), and (iv) a significant amount of nitrogen mass per event (at least 0.01 M⊙). These thresholds correspond to our fiducial model (see Table A1).
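The scan itself amounts to a simple filter over tabulated yields. The sketch below shows the selection logic with a couple of made-up yield entries (the ejected masses are placeholders, not values from the cited yield tables), converting ejected masses to number ratios with approximate atomic weights.

```python
import math

# Thresholds corresponding to the fiducial model described above
MIN_LOG_CO = -0.78
MIN_LOG_NO = -0.25
MIN_LOG_NC = 0.53
MIN_N_MASS = 0.01  # Msun of nitrogen per event

def log_number_ratio(m_x, a_x, m_y, a_y):
    """log10 of the number ratio X/Y from ejected masses and atomic weights."""
    return math.log10((m_x / a_x) / (m_y / a_y))

# Placeholder yield table (ejected masses in Msun) -- illustrative only
yields = {
    "model_A": {"C": 0.05, "N": 0.25, "O": 0.30},
    "model_B": {"C": 0.20, "N": 0.01, "O": 2.00},
}

for name, m in yields.items():
    passes = (log_number_ratio(m["C"], 12, m["O"], 16) > MIN_LOG_CO and
              log_number_ratio(m["N"], 14, m["O"], 16) > MIN_LOG_NO and
              log_number_ratio(m["N"], 14, m["C"], 12) > MIN_LOG_NC and
              m["N"] >= MIN_N_MASS)
    print(name, "passes" if passes else "fails")
```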
We consider Pop. III SN yields from Heger & Woosley (2010) that include metal production for various stellar mass progenitors, SN energies, piston locations, and mixing amounts. We include yields from Takahashi et al. (2018) that further account for stellar rotation and magnetic fields, in addition to progenitor mass. Finally, we search the yields of more metal-enriched rotating stars from Limongi & Chieffi (2018), where the rotation is known to enhance the nitrogen abundance. While this compilation of yields is by no means complete, it spans the range of relevant physical mechanisms that could help explain the observed abundances of GN-z11.
Within the Heger & Woosley (2010) data set we find that stars with a mass of 25 M⊙ or 39 M⊙, with explosion energies of 0.3 × 10^51 erg and 0.6 × 10^51 erg, respectively, exhibit the required abundance patterns. No models amongst those presented in Takahashi et al. (2018) satisfy our criteria. For the yields from Limongi & Chieffi (2018), we find two stars⁴: an 80 M⊙ non-rotating star at 0.1 Z⊙ and an 80 M⊙ star rotating at 300 km/s at 0.001 Z⊙. We emphasize here that by no means is this search a reflection of the accuracy of these stellar evolution calculations; rather, it is an exercise to determine whether Pop. III SN, rotating stars, or faint SN have the potential to explain GN-z11 or whether other physics is required.

⁴ If we switch our constraints to the most lenient region of our allowed parameter space, only one additional star, a 40 M⊙ star at 0.1 Z⊙ rotating at 150 km/s, fits our criteria. Likewise, adopting more conservative thresholds does not change our results for the Heger & Woosley (2010) or Takahashi et al. (2018) data sets.
Our search demonstrates that in certain cases, the abundance patterns observed in GN-z11 can be reproduced both by certain Pop. III SN as well as by more metal-enriched models. However, we note that within the parameter space of possible SN explosions, only a select few models are able to reproduce the abundances, which results in a fine-tuning problem. It is highly unlikely that every star present in the metal-enriched environment of GN-z11 is exactly 80 M⊙. Furthermore, why should the Pop. III stars that potentially enriched GN-z11 only appear at 25 M⊙ or 39 M⊙? GN-z11 is bright and a significant amount of nitrogen is required to produce the observed luminosity. Each of these explosions produces < 0.1 M⊙ of nitrogen, so a significant number are needed in order to enrich the galaxy to levels where emission lines are detectable. It is unclear how so many stars could form at a particular mass or how to maintain Pop. III star formation for such an extended period of time. Similarly, if Pop. III stars really formed at particular masses, nitrogen enrichment in the stellar halo might not be so uncommon. We disfavour solutions of similar ilk, e.g. Pop. III stellar winds that can similarly produce the correct abundances but require fine-tuning (Hirschi 2007).
In summary, similar to the AGB wind scenario presented above, faint Pop. III SN or low-metallicity rotating stars have the capability of producing the yields reported for GN-z11; however, the scenario is fine-tuned and significant deviations from the local stellar IMF would be required for such abundance ratios as observed in GN-z11 to manifest.
Are we observing stellar encounters in dense star clusters?
Following the apparent fine-tuning required to explain the abundance ratios of GN-z11 through specific stellar evolution mechanisms, we explore an alternative scenario where the metal content results from dynamical processes within the particular environment of GN-z11. More specifically, runaway stellar collisions in dense early star clusters could provide a high-redshift-only, rare mechanism to elevate nitrogen production that fits the compact morphology and high star-formation rate observed in GN-z11 (Tacchella et al. 2023).
Within dense stellar clusters, high-mass stars can rapidly sink to the centre due to mass segregation (e.g. Portegies Zwart & McMillan 2002;Gürkan et al. 2004) and collide. If the cluster is dense enough, multiple collisions can occur on time scales shorter than the mainsequence lifetimes of massive stars, thus before the first SNe explode, and form very massive stars in its centre (Portegies Zwart et al. 1999). This scenario is much more likely to occur in high-redshift galaxies due to the higher gas densities and increased merger rates and can provide a mechanism for the production of massive black hole seeds to explain the origins of high-redshift supermassive black holes (e.g. Katz et al. 2015).
In the case of GN-z11, the presence of such very massive stars embedded in a star cluster could help explain its abundance ratios. Massive stars produced by runaway collisions may be well-mixed (e.g. Gaburov et al. 2008), bringing light elements towards their surfaces. While at low metallicities the very massive star is either expected to collapse to a black hole with minimal mass loss or explode as a pair-instability SN (the outcome depends on mass), metal-enriched very massive stars are expected to host powerful stellar winds (Vink 2022). This can lead to fast and efficient enrichment of light elements such as carbon and nitrogen at the expense of oxygen (Glebbeek et al. 2009). Furthermore, the number of SNe is reduced in such a formation scenario as most SN progenitors merge quickly into a single object, reducing the production of carbon and oxygen and helping to increase N/O.
It remains unclear whether a single cluster undergoing collisional runaway produces enough metal and nitrogen mass to power the emission lines of GN-z11. However, multiple massive stellar clusters are expected to form simultaneously if the galaxy is undergoing an external trigger inducing strong compressive tides (e.g. a merger; Renaud et al. 2015;Li et al. 2017), a process best observed in the Antennae galaxies (Bastian et al. 2009). Furthermore, not all stellar clusters need to undergo runaway collisions to create the observed spectrum of GN-z11 -star clusters where collisional runaway is efficient could be responsible for the nitrogen emission lines, whereas the oxygen and carbon emission lines can be spread throughout the other, more classical star clusters of the galaxy. Subtly different conditions in each cluster could thus participate in driving enhanced N/O integrated over the galaxy.
While the winds of a collisionally-produced very massive star may produce the nitrogen observed in GN-z11, one of the limitations of this scenario is the fact that the very massive star must form quickly, before the stars can explode via SN. However, regardless of whether this process ensues, some massive stars in the cluster may collapse into stellar-mass black holes (i.e. < 100 M⊙). The dense environment of the stellar cluster could favour close encounters between stars and black holes resulting in TDEs, which could help explain the nitrogen and helium emission (e.g. Kochanek 2016 and discussion in Section 3.1). This parallels the TDE scenario for AGN but uses lower-mass black holes.
To summarize, this collisional runaway scenario evokes exotic dynamical processes much less likely to occur at the lower densities of the lower-redshift Universe, and thus fits the rarity and peculiarity of GN-z11 without modifying our base understanding of stellar evolution. There are however large remaining uncertainties associated with modelling stellar evolution during runaway collisions (e.g. mixing during stellar collisions, the nucleosynthesis and stellar winds associated to the central massive star), as well as how much nitrogen, carbon, and oxygen are released during a TDE. Nonetheless, these findings strongly advocate future theoretical studies exploring and testing these scenarios quantitatively.
SUMMARY
Based on its luminosity alone, GN-z11 is a remarkable object at z > 10 (Oesch et al. 2015, 2016). Its recent spectroscopic follow-up in B23 further revealed the extent of this peculiarity, highlighting emission lines from carbon, oxygen and nitrogen amongst others, and showcasing the power of JWST/NIRSpec spectroscopy to characterize the physical properties of galaxies less than 500 Myr after the Big Bang.
In particular, the presence of strong N ] 1750 and C ] 1909 emission lines allows for unprecedented constraints on chemical abundance ratios, and the high N ] 1750 / O ] 1660, 1666 ratio could imply unusually high N/O (B23). In this paper, we quantitatively derive the abundance ratios implied by these emission line fluxes and find log(N/O) > −0.25, log(C/O) > −0.78, and log(O/H) ≈ 7.82 for our fiducial model. This indicates super-solar nitrogen enrichment in GN-z11 within the first ∼440 Myr of cosmic history. More conservative assumptions in our modelling suggest log(N/O) > −0.49, log(C/O) > −0.95 and log(O/H) < 8.60, still yielding a super-solar N/O. We explore how our derived values vary with different assumptions of temperature, density, dust, and ionisation corrections, finding that none of these can reasonably explain the high N ] 1750 / O ] 1660, 1666 ratio without invoking a high N/O ratio. Given the longer enrichment timescales typically associated with nitrogen compared to oxygen, this over-enrichment is highly unexpected and seemingly at odds with the young age of the Universe at z = 10.6.
We review whether the emission pattern observed in GN-z11 could be powered by an AGN, a scenario disfavoured in B23 but one which could bias the inferred N/O. We find qualitative parallels between this object and the population of rare 'nitrogen-loud' quasars, although the emission line ratios observed in GN-z11 would put it as an outlier of this already-rare population. We cannot conclusively exclude this scenario, but note that the mechanisms invoked to explain these nitrogen-loud objects either involve significant nitrogen enrichment or tidal disruption events, both of which have interesting implications at z = 10.6.
Assuming instead that GN-z11 is indeed a star-forming galaxy, as preferred by B23, we then review stellar processes that could produce high N/O at such early cosmic times. Traditional models of nitrogen-enrichment from AGB winds would likely require a highly contrived formation scenario, which cannot be ruled out but requires extensive validation against quantitative galactic enrichment models. Similarly, metal yields from exotic stellar evolution channels, including rotating and Pop. III massive stars, generally disfavour high nitrogen-to-oxygen production. Individual progenitor models can lead to high N/O, but generalizing across the galaxy would require an extremely finely-tuned progenitor mass function and initial conditions.
Lastly, we explore whether exotic dynamical mechanisms operating at high redshift could explain the apparent nitrogen enhancement in GN-z11. Runaway stellar collisions in the cores of dense, high-redshift stellar clusters can lead to the formation of very massive stars, leading to rapid and abundant nitrogen production and an underproduction of oxygen. There are large quantitative uncertainties with this scenario, but it provides an avenue to simultaneously explain the high N/O in GN-z11 and the lack of low-redshift counterparts where gas densities become lower. These same star clusters would also be ideal sites to host TDEs, which could also lead to nitrogen enhancements as discussed for the AGN scenario.
Ultimately, we cannot conclusively distinguish between these scenarios and acknowledge our proposed list is unlikely to be exhaustive. Rather, the prominent and unusual N ] 1750 emission observed in GN-z11 should stimulate further studies that both quantitatively establish the likelihood of our proposed options and explore additional models that could explain such high N/O at = 10.6.
Nonetheless, the fact that one of the first emission spectra observed at z > 10 reveals such prominent nitrogen emission, uncommon at lower redshifts, suggests that we are only just opening a new frontier. JWST/NIRSpec spectroscopic programs targeting larger samples of high-redshift galaxies will allow us to quantify the frequency of such bright N ] 1750 emission amongst the z > 10 population and refine our understanding of how the first metals appeared in the Universe.
Table A1. Abundance ratios calculated for n_e = 100 cm^-3 under a range of temperature assumptions. Columns correspond to T_e = 1.05 × 10^4 K, 1.46 × 10^4 K, 2.36 × 10^4 K and 3.0 × 10^4 K (with n_e = 100 cm^-3 in each case), preceded by 'Row' and 'Abundance ratio' columns and followed by 'Notes'. The temperature given in the header row is the adopted T_e(N III), T_e(C III) and T_e(O III) for that column. T_e(O II) is inferred in two different ways, either adopting the calibration from Pilyugin et al. (2009), which yields T_e(O II) = [1.14, 1.48, 2.24, 2.77] × 10^4 K for each column, or taking the more exaggerated assumption that T_e(O II) = 0.7 × T_e(O III), which gives T_e(O II) = [0.74, 1.02, 1.65, 2.10] × 10^4 K. † Values in this row are a lower limit on the abundance ratio, since the O ] 1660, 1666 value used is the 2σ upper limit reported in B23. ‡ Values in this row can be thought of as a lower limit on the abundance ratio, since it was computed assuming no dust reddening, and invoking the presence of dust would preferentially boost the shorter-wavelength line in our calculation. * This row gives the minimum possible X++/(O+ + O++) ratio that can be obtained from summing the possible X++/O+ and X++/O++ values from each column (where X is nitrogen or carbon, as per the 'abundance ratio' column).
Figure 1. Pink shaded regions show the range of abundance ratios for GN-z11 implied by our fiducial model (dashed) and our more conservative assumptions (dotted). Left: Nitrogen-to-oxygen abundance ratio compared to total oxygen abundance. We show comparison samples of z ∼ 0 H II regions (green circles from Pilyugin et al. 2012; blue squares from Garnett et al. 1999; yellow diamonds from Esteban et al. 2014; orange triangles from Berg et al. 2020), z ∼ 2 galaxies from Berg et al. (2018) (orange star) and Hayden-Pawson et al. (2022) (brown pentagons), and the z = 2.4 composite spectrum from Steidel et al. (2016) (red hexagon). Our inferred N/O for GN-z11 is highly super-solar and unlike lower-redshift galaxies. Right: Carbon-to-oxygen abundance ratio compared to total oxygen abundance. We show comparison samples of z ∼ 0 H II regions (turquoise squares from Garnett et al. 1995; blue squares from Garnett et al. 1999; green triangles from García-Rojas et al. 2007; yellow diamonds from Esteban et al. 2014). The z = 2.4 composite from Steidel et al. (2016) is shown by the red hexagon. We show two measurements of C/O in individual galaxies with JWST/NIRSpec: a galaxy at z = 6.229 from Jones et al. (2023) (pink star) and a galaxy at z = 8.495 from Arellano-Córdova et al. (2022) (light blue star). For reference, solar values are: log(N/O) = −0.86, log(C/O) = −0.26, 12 + log(O/H) = 8.69 (Asplund et al. 2009).
¹ For brevity, we will hereafter refer to [C ] 1907 + C ] 1909 as C ] 1909.
² We note that this N ] emission feature consists of a quintet of emission lines between rest-frame 1746 Å and 1755 Å. Throughout this paper we refer to the sum of this complex as N ] 1750.

Table 1. Summary of the abundance limits derived in Section 2. The 'fiducial' column takes its values from the T_e = 1.46 × 10^4 K column of Table A1. The N/O and C/O ratios in the 'conservative' column are the lowest values obtained from any combination of assumptions in Table A1, excluding the strongly disfavoured T_e = 3 × 10^4 K. The 'conservative' O/H value adopts the highest O/H value from Table A1 as an upper limit. Note that O/H abundance ratios are much more sensitive to modelling assumptions than metal abundance ratios. For reference, solar values are: log(N/O) = −0.86, log(C/O) = −0.26, 12 + log(O/H) = 8.69 (Asplund et al. 2009).

Abundance ratio     Fiducial    Conservative
log(N/O)            > −0.25     > −0.49
log(C/O)            > −0.78     > −0.95
12 + log(O/H)       7.82        < 8.6
Emission line fluxes used in these calculations are taken from Table 1 in B23, adopting measurements from their medium-resolution grating spectra. However, the [O ] 4363 line is only detected in the prism spectrum, for which we note all reported fluxes are systematically lower. Thus, we scale the reported [O ] 4363 prism flux to match those of the grating using the nearby H line, which is reported in both the prism and the grating. Abundance ratios are calculated from reported flux ratios and estimated ratios of emission line emissivities. For example, for N++/O++ we have

N++/O++ = (N ] 1750 / O ] 1660,1666) × [ε(O ] 1660,1666) / ε(N ] 1750)],

where ε is the emissivity of each emission line, which depends on the electron temperature, T_e, and density, n_e.
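Written as code, the conversion above is a one-liner. In the sketch below the flux ratio and the two emissivities are placeholder numbers (they are not the B23 measurements), with the emissivities standing in for values that would in practice be computed with PyNeb/CHIANTI at the adopted T_e and n_e.

```python
import math

def ion_ratio(flux_ratio, eps_num_line, eps_den_line):
    """
    Ionic abundance ratio from a measured flux ratio and line emissivities:
    X/Y = [F(line_X)/F(line_Y)] * [eps(line_Y)/eps(line_X)].
    """
    return flux_ratio * (eps_den_line / eps_num_line)

# Placeholder inputs, purely to show the mechanics of the equation above
flux_N1750_over_O1666 = 1.3               # assumed flux ratio N III]1750 / O III]1660,1666
eps_N1750, eps_O1666 = 2.0e-23, 1.5e-23   # assumed emissivities at the adopted T_e, n_e

ratio = ion_ratio(flux_N1750_over_O1666, eps_N1750, eps_O1666)
print(f"log(N++/O++) = {math.log10(ratio):.2f}")
```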
ACKNOWLEDGEMENTS

We thank James Matthews for helpful discussions in relation to this work. AJC and AS have received funding from the European Research Council (ERC) under the European Union's Horizon 2020 Advanced Grant 789056 "First Galaxies". MR and HK are supported by the Beecroft Fellowship funded by Adrian Beecroft. CHIANTI is a collaborative project involving George Mason University, the University of Michigan (USA), University of Cambridge (UK) and NASA Goddard Space Flight Center (USA). For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.

DATA AVAILABILITY

Emission line fluxes measured in GN-z11 are publicly available in Bunker et al. (2023). Derived abundance ratios from these fluxes are available in Table A1.

APPENDIX A: FULL TABLE OF ABUNDANCE CALCULATION RESULTS

In Table A1 we present the full range of ion abundance ratios we obtain from the calculations presented in Section 2.

This paper has been typeset from a TeX/LaTeX file prepared by the author.
. A Amayo, G Delgado-Inglada, G Stasińska, 10.1093/mnras/stab1467MNRAS. 5052361Amayo A., Delgado-Inglada G., Stasińska G., 2021, MNRAS, 505, 2361
. R Amorín, 10.1038/s41550-017-0052Nature Astronomy. 152Amorín R., et al., 2017, Nature Astronomy, 1, 0052
. B H Andrews, P Martini, 10.1088/0004-637X/765/2/140ApJ. 765140Andrews B. H., Martini P., 2013, ApJ, 765, 140
. W Aoki, 10.1086/497906ApJ. 639897Aoki W., et al., 2006, ApJ, 639, 897
. N Araki, 10.1051/0004-6361/201118477A&A. 543143Araki N., et al., 2012, A&A, 543, A143
. K Z Arellano-Córdova, 10.3847/2041-8213/ac9ab2ApJ. 23Arellano-Córdova K. Z., et al., 2022, ApJ, 940, L23
. M Asplund, N Grevesse, A J Sauval, P Scott, 10.1146/annurev.astro.46.060407.145222ARA&A. 47481Asplund M., Grevesse N., Sauval A. J., Scott P., 2009, ARA&A, 47, 481
. N Bastian, G Trancho, I S Konstantopoulos, B W Miller, 10.1088/0004-637X/701/1/607ApJ. 701607Bastian N., Trancho G., Konstantopoulos I. S., Miller B. W., 2009, ApJ, 701, 607
. N D Batra, J A Baldwin, 10.1093/mnras/stu007MNRAS. 439771Batra N. D., Baldwin J. A., 2014, MNRAS, 439, 771
. D A Berg, E D Skillman, A R Marble, 10.1088/0004-637X/738/1/2ApJ. 7382Berg D. A., Skillman E. D., Marble A. R., 2011, ApJ, 738, 2
. D A Berg, E D Skillman, R B C Henry, D K Erb, L Carigi, 10.3847/0004-637X/827/2/126ApJ. 827126Berg D. A., Skillman E. D., Henry R. B. C., Erb D. K., Carigi L., 2016, ApJ, 827, 126
. D A Berg, D K Erb, M W Auger, M Pettini, G B Brammer, 10.3847/1538-4357/aab7faApJ. 859164Berg D. A., Erb D. K., Auger M. W., Pettini M., Brammer G. B., 2018, ApJ, 859, 164
. D A Berg, D K Erb, R B C Henry, E D Skillman, K B W Mcquinn, 10.3847/1538-4357/ab020aApJ. 87493Berg D. A., Erb D. K., Henry R. B. C., Skillman E. D., McQuinn K. B. W., 2019a, ApJ, 874, 93
. D A Berg, D K Erb, R B C Henry, E D Skillman, K B W Mcquinn, 10.3847/1538-4357/ab020aApJ. 87493Berg D. A., Erb D. K., Henry R. B. C., Skillman E. D., McQuinn K. B. W., 2019b, ApJ, 874, 93
. D A Berg, R W Pogge, E D Skillman, K V Croxall, J Moustakas, N S J Rogers, J Sun, 10.3847/1538-4357/ab7eabApJ. 89396Berg D. A., Pogge R. W., Skillman E. D., Croxall K. V., Moustakas J., Rogers N. S. J., Sun J., 2020, ApJ, 893, 96
. R J Bouwens, 10.48550/arXiv.2211.02607arXiv:2211.02607arXiv e-printsBouwens R. J., et al., 2022, arXiv e-prints, p. arXiv:2211.02607
. J Brinchmann, D Kunth, F Durret, 10.1051/0004-6361:200809783A&A. 485657Brinchmann J., Kunth D., Durret F., 2008, A&A, 485, 657
. V Bromm, 10.1088/0034-4885/76/11/112901Reports on Progress in Physics. 76112901Bromm V., 2013, Reports on Progress in Physics, 76, 112901
. J S Brown, 10.1093/mnras/stx2372MNRAS. 4731130Brown J. S., et al., 2018, MNRAS, 473, 1130
. A J Bunker, arXiv:2302.07256arXiv e-printsBunker A. J., et al., 2023, arXiv e-prints, p. arXiv:2302.07256
. A J Cameron, T Yuan, M Trenti, D C Nicholls, L J Kewley, 10.1093/mnras/staa3757MNRAS. 5013695Cameron A. J., Yuan T., Trenti M., Nicholls D. C., Kewley L. J., 2021, MNRAS, 501, 3695
. A J Cameron, 10.48550/arXiv.2302.04298arXiv:2302.04298arXiv e-printsCameron A. J., et al., 2023, arXiv e-prints, p. arXiv:2302.04298
. S B Cenko, 10.3847/2041-8205/818/2/L32ApJ. 81832Cenko S. B., et al., 2016, ApJ, 818, L32
. S Cristallo, O Straniero, L Piersanti, D Gobrecht, 10.1088/0067-0049/219/2/40ApJS. 21940Cristallo S., Straniero O., Piersanti L., Gobrecht D., 2015, ApJS, 219, 40
. P A Crowther, 10.1146/annurev.astro.45.051806.110615ARA&A. 45177Crowther P. A., 2007, ARA&A, 45, 177
. F Cullen, 10.1093/mnras/stz1402MNRAS. 4872038Cullen F., et al., 2019, MNRAS, 487, 2038
. M Curti, F Mannucci, G Cresci, R Maiolino, 10.1093/mnras/stz2910MNRAS. 491944Curti M., Mannucci F., Cresci G., Maiolino R., 2020, MNRAS, 491, 944
. M Curti, 10.1093/mnras/stac2737MNRAS. 518425Curti M., et al., 2023, MNRAS, 518, 425
. Del Zanna, G Dere, K P Young, P R Landi, E , 10.3847/1538-4357/abd8ceApJ. 90938Del Zanna G., Dere K. P., Young P. R., Landi E., 2021, ApJ, 909, 38
. K P Dere, E Landi, H E Mason, B C Monsignori Fossi, P R Young, 10.1051/aas:1997368A&AS. 125149Dere K. P., Landi E., Mason H. E., Monsignori Fossi B. C., Young P. R., 1997, A&AS, 125, 149
. C L Doherty, P Gil-Pons, H H B Lau, J C Lattanzio, L Siess, S W Campbell, 10.1093/mnras/stu571MNRAS. 441582Doherty C. L., Gil-Pons P., Lau H. H. B., Lattanzio J. C., Siess L., Campbell S. W., 2014, MNRAS, 441, 582
. C Esteban, J García-Rojas, L Carigi, M Peimbert, F Bresolin, A R López-Sánchez, A Mesa-Delgado, 10.1093/mnras/stu1177MNRAS. 443624Esteban C., García-Rojas J., Carigi L., Peimbert M., Bresolin F., López- Sánchez A. R., Mesa-Delgado A., 2014, MNRAS, 443, 624
. R Ezzeddine, 10.3847/1538-4357/ab14e7ApJ. 87697Ezzeddine R., et al., 2019, ApJ, 876, 97
. A Feltre, S Charlot, J Gutkin, 10.1093/mnras/stv2794MNRAS. 4563354Feltre A., Charlot S., Gutkin J., 2016, MNRAS, 456, 3354
. J G Fernández-Trincado, 10.1051/0004-6361/201935369A&A. 631Fernández-Trincado J. G., et al., 2019, A&A, 631, A97
. S L Finkelstein, 10.48550/arXiv.2211.05792arXiv:2211.05792arXiv e-printsFinkelstein S. L., et al., 2022, arXiv e-prints, p. arXiv:2211.05792
. A Frebel, J E Norris, 10.1146/annurev-astro-082214-122423ARA&A. 53631Frebel A., Norris J. E., 2015, ARA&A, 53, 631
. A Frebel, 10.1038/nature03455Nature. 434871Frebel A., et al., 2005, Nature, 434, 871
. A Frebel, R Collet, K Eriksson, N Christlieb, W Aoki, 10.1086/590327ApJ. 684588Frebel A., Collet R., Eriksson K., Christlieb N., Aoki W., 2008, ApJ, 684, 588
. E Gaburov, J C Lombardi, 10.1111/j.1745-3933.2007.00399.xPortegies Zwart S. 3835MNRASGaburov E., Lombardi J. C., Portegies Zwart S., 2008, MNRAS, 383, L5
. A Gallazzi, S Charlot, J Brinchmann, S D M White, C A Tremonti, 10.1111/j.1365-2966.2005.09321.xMNRAS. 36241Gallazzi A., Charlot S., Brinchmann J., White S. D. M., Tremonti C. A., 2005, MNRAS, 362, 41
. J García-Rojas, C Esteban, A Peimbert, M Rodríguez, M Peimbert, M T Ruiz, 10.48550/arXiv.astro-ph/0610065Rev. Mex. Astron. Astrofis. 433García-Rojas J., Esteban C., Peimbert A., Rodríguez M., Peimbert M., Ruiz M. T., 2007, Rev. Mex. Astron. Astrofis., 43, 3
. D R Garnett, E D Skillman, R J Dufour, M Peimbert, S Torres-Peimbert, R Terlevich, E Terlevich, G A Shields, 10.1086/175503ApJ. 44364Garnett D. R., Skillman E. D., Dufour R. J., Peimbert M., Torres-Peimbert S., Terlevich R., Terlevich E., Shields G. A., 1995, ApJ, 443, 64
. D R Garnett, G A Shields, M Peimbert, S Torres-Peimbert, E D Skillman, R J Dufour, E Terlevich, R J Terlevich, 10.1086/306860ApJ. 513168Garnett D. R., Shields G. A., Peimbert M., Torres-Peimbert S., Skillman E. D., Dufour R. J., Terlevich E., Terlevich R. J., 1999, ApJ, 513, 168
. P Gil-Pons, C L Doherty, S W Campbell, J Gutiérrez, 10.1051/0004-6361/202244062A&A. 668100Gil-Pons P., Doherty C. L., Campbell S. W., Gutiérrez J., 2022, A&A, 668, A100
. E Glebbeek, E Gaburov, S E De Mink, O R Pols, 10.1051/0004-6361/200810425Portegies Zwart S. F. 497255A&AGlebbeek E., Gaburov E., de Mink S. E., Pols O. R., Portegies Zwart S. F., 2009, A&A, 497, 255
. González Hernández, J I Aguado, D S , Allende Prieto, C Burgasser, A J Rebolo, R , 10.3847/2041-8213/ab62aeApJ. 88913González Hernández J. I., Aguado D. S., Allende Prieto C., Burgasser A. J., Rebolo R., 2020, ApJ, 889, L13
. M A Gürkan, M Freitag, F A Rasio, 10.1086/381968ApJ. 604632Gürkan M. A., Freitag M., Rasio F. A., 2004, ApJ, 604, 632
. Y Harikane, 10.48550/arXiv.2208.01612arXiv:2208.01612arXiv e-printsHarikane Y., et al., 2022, arXiv e-prints, p. arXiv:2208.01612
. T Hashimoto, 10.1038/s41586-018-0117-zNature. 557392Hashimoto T., et al., 2018, Nature, 557, 392
. C Hayden-Pawson, 10.1093/mnras/stac584MNRAS. 5122867Hayden-Pawson C., et al., 2022, MNRAS, 512, 2867
. A Heger, S E Woosley, 10.1088/0004-637X/724/1/341ApJ. 724341Heger A., Woosley S. E., 2010, ApJ, 724, 341
. R Hirschi, 10.1051/0004-6361:20065356A&A. 461571Hirschi R., 2007, A&A, 461, 571
. M Hirschmann, S Charlot, A Feltre, T Naab, R S Somerville, E Choi, 10.1093/mnras/stz1256MNRAS. 487333Hirschmann M., Charlot S., Feltre A., Naab T., Somerville R. S., Choi E., 2019, MNRAS, 487, 333
. N Iwamoto, H Umeda, N Tominaga, K Nomoto, K Maeda, 10.1126/science.1112997Science. 309451Iwamoto N., Umeda H., Tominaga N., Nomoto K., Maeda K., 2005, Science, 309, 451
. L Jiang, X Fan, M Vestergaard, 10.1086/587868ApJ. 679962Jiang L., Fan X., Vestergaard M., 2008, ApJ, 679, 962
. L Jiang, 10.1038/s41550-020-01275-yNature Astronomy. 5256Jiang L., et al., 2021, Nature Astronomy, 5, 256
. J A Johnson, F Herwig, T C Beers, N Christlieb, 10.1086/510114ApJ. 6581203Johnson J. A., Herwig F., Beers T. C., Christlieb N., 2007, ApJ, 658, 1203
. T Jones, 10.48550/arXiv.2301.07126arXiv:2301.07126arXiv e-printsJones T., et al., 2023, arXiv e-prints, p. arXiv:2301.07126
. A I Karakas, J C Lattanzio, 10.1017/pasa.2014.21Publ. Astron. Soc. Australia3130Karakas A. I., Lattanzio J. C., 2014, Publ. Astron. Soc. Australia, 31, e030
. D Kashino, 10.3847/1538-4357/ac399eApJ. 92582Kashino D., et al., 2022, ApJ, 925, 82
. H Katz, D Sijacki, M G Haehnelt, 10.1093/mnras/stv1048MNRAS. 4512352Katz H., Sijacki D., Haehnelt M. G., 2015, MNRAS, 451, 2352
. H Katz, 10.1093/mnras/stac2657MNRAS. 518592Katz H., et al., 2023, MNRAS, 518, 592
. E N Kirby, J G Cohen, P Guhathakurta, L Cheng, J S Bullock, A Gallazzi, 10.1088/0004-637X/779/2/102ApJ. 779102Kirby E. N., Cohen J. G., Guhathakurta P., Cheng L., Bullock J. S., Gallazzi A., 2013, ApJ, 779, 102
. C Kobayashi, P Taylor, 10.48550/arXiv.2302.07255arXiv:2302.07255arXiv e-printsKobayashi C., Taylor P., 2023, arXiv e-prints, p. arXiv:2302.07255
. C Kobayashi, A I Karakas, M Lugaro, 10.3847/1538-4357/abae65ApJ. 900179Kobayashi C., Karakas A. I., Lugaro M., 2020, ApJ, 900, 179
. C S Kochanek, 10.1093/mnras/stw267MNRAS. 458127Kochanek C. S., 2016, MNRAS, 458, 127
A Kramida, Yu, J Reader, Team, National Institute of Standards and Technology. Gaithersburg, MDNIST Atomic Spectra Database (ver. 5.10)Kramida A., Yu. Ralchenko Reader J., and NIST ASD Team 2022, NIST Atomic Spectra Database (ver. 5.10), [Online]. Available: https://physics.nist.gov/asd [2023, February 17]. National In- stitute of Standards and Technology, Gaithersburg, MD.
. G Kulkarni, G Worseck, J F Hennawi, 10.1093/mnras/stz1493MNRAS. 4881035Kulkarni G., Worseck G., Hennawi J. F., 2019, MNRAS, 488, 1035
. Le Fèvre, O , 10.1051/0004-6361/201732197A&A. 62551Le Fèvre O., et al., 2019, A&A, 625, A51
. J Lequeux, M Peimbert, J F Rayo, A Serrano, S Torres-Peimbert, A&A. 80155Lequeux J., Peimbert M., Rayo J. F., Serrano A., Torres-Peimbert S., 1979, A&A, 80, 155
. H Li, O Y Gnedin, N Y Gnedin, X Meng, V A Semenov, A V Kravtsov, 10.3847/1538-4357/834/1/69ApJ. 83469Li H., Gnedin O. Y., Gnedin N. Y., Meng X., Semenov V. A., Kravtsov A. V., 2017, ApJ, 834, 69
. M Limongi, A Chieffi, 10.3847/1538-4365/aacb24ApJS. 23713Limongi M., Chieffi A., 2018, ApJS, 237, 13
. M Llerena, 10.1051/0004-6361/202141651A&A. 65916Llerena M., et al., 2022, A&A, 659, A16
. V Luridiana, C Morisset, R A Shaw, 10.1051/0004-6361/201323152A&A. 57342Luridiana V., Morisset C., Shaw R. A., 2015, A&A, 573, A42
. R Maiolino, F Mannucci, 10.1007/s00159-018-0112-2A&ARv. 273Maiolino R., Mannucci F., 2019, A&ARv, 27, 3
. F Mannucci, G Cresci, R Maiolino, A Marconi, A Gnerucci, 10.1111/j.1365-2966.2010.17291.x4082115MN-RASMannucci F., Cresci G., Maiolino R., Marconi A., Gnerucci A., 2010, MN- RAS, 408, 2115
. D Masters, 10.1088/0004-637X/785/2/153ApJ. 785153Masters D., et al., 2014, ApJ, 785, 153
. K Matsuoka, T Nagao, R Maiolino, A Marconi, D Park, Y Taniguchi, 10.1051/0004-6361/201629878A&A. 60890Matsuoka K., Nagao T., Maiolino R., Marconi A., Park D., Taniguchi Y., 2017, A&A, 608, A90
. G Meynet, A&A. 298767Meynet G., 1995, A&A, 298, 767
. G Meynet, A Maeder, 10.1051/0004-6361:20020755A&A. 390561Meynet G., Maeder A., 2002, A&A, 390, 561
. M Mingozzi, 10.3847/1538-4357/ac952cApJ. 939110Mingozzi M., et al., 2022, ApJ, 939, 110
. K Nakajima, R Maiolino, 10.1093/mnras/stac1242MNRAS. 5135134Nakajima K., Maiolino R., 2022, MNRAS, 513, 5134
. K Nakajima, M Ouchi, Y Isobe, Y Harikane, Y Zhang, Y Ono, H Umeda, M Oguri, 10.48550/arXiv.2301.12825arXiv:2301.12825Nakajima K., Ouchi M., Isobe Y., Harikane Y., Zhang Y., Ono Y., Umeda H., Oguri M., 2023, arXiv e-prints, p. arXiv:2301.12825
. K Nomoto, C Kobayashi, N Tominaga, 10.1146/annurev-astro-082812-140956ARA&A. 51457Nomoto K., Kobayashi C., Tominaga N., 2013, ARA&A, 51, 457
. P A Oesch, 10.1088/2041-8205/804/2/L30ApJ. 80430Oesch P. A., et al., 2015, ApJ, 804, L30
. P A Oesch, 10.3847/0004-637X/819/2/129ApJ. 819129Oesch P. A., et al., 2016, ApJ, 819, 129
. P G Pérez-González, 10.48550/arXiv.2302.02429arXiv:2302.02429arXiv e-printsPérez-González P. G., et al., 2023, arXiv e-prints, p. arXiv:2302.02429
. E Pérez-Montero, 10.1088/1538-3873/aa5abbPASP. 12943001Pérez-Montero E., 2017, PASP, 129, 043001
. E Pérez-Montero, 10.1051/0004-6361/201220070A&A. 54925Pérez-Montero E., et al., 2013, A&A, 549, A25
. L Piersanti, S Cristallo, O Straniero, 10.1088/0004-637X/774/2/98ApJ. 77498Piersanti L., Cristallo S., Straniero O., 2013, ApJ, 774, 98
. L S Pilyugin, L Mattsson, J M Vílchez, B Cedrés, 10.1111/j.1365-2966.2009.15182.xMNRAS. 398485Pilyugin L. S., Mattsson L., Vílchez J. M., Cedrés B., 2009, MNRAS, 398, 485
. L S Pilyugin, J M Vílchez, L Mattsson, T X Thuan, 10.1111/j.1365-2966.2012.20420.xMNRAS. 4211624Pilyugin L. S., Vílchez J. M., Mattsson L., Thuan T. X., 2012, MNRAS, 421, 1624
. O R Pols, R G Izzard, R J Stancliffe, E Glebbeek, 10.1051/0004-6361/201219597A&A. 54776Pols O. R., Izzard R. G., Stancliffe R. J., Glebbeek E., 2012, A&A, 547, A76
. Portegies Zwart, S F Mcmillan, S L W , 10.1086/341798ApJ. 576899Portegies Zwart S. F., McMillan S. L. W., 2002, ApJ, 576, 899
. Portegies Zwart, S F Makino, J Mcmillan, S L W , 10.48550/arXiv.astro-ph/9812006Hut P. 348117A&APortegies Zwart S. F., Makino J., McMillan S. L. W., Hut P., 1999, A&A, 348, 117
. F Renaud, F Bournaud, P.-A Duc, 10.1093/mnras/stu2208MNRAS. 4462038Renaud F., Bournaud F., Duc P.-A., 2015, MNRAS, 446, 2038
. G Roberts-Borsani, 10.3847/2041-8213/ac8e6eApJ. 93813Roberts-Borsani G., et al., 2022, ApJ, 938, L13
. B E Robertson, 10.1146/annurev-astro-120221-044656ARA&A. 60121Robertson B. E., 2022, ARA&A, 60, 121
. M P Roriz, C B Pereira, S Junqueira, M Lugaro, N A Drake, C Sneden, 10.1093/mnras/stac3378MNRAS. 5185414Roriz M. P., Pereira C. B., Junqueira S., Lugaro M., Drake N. A., Sneden C., 2023, MNRAS, 518, 5414
. R L Sanders, 10.3847/1538-4357/abf4c1ApJ. 91419Sanders R. L., et al., 2021, ApJ, 914, 19
. A Saxena, 10.1093/mnras/stac2742MNRAS. 5171098Saxena A., et al., 2022, MNRAS, 517, 1098
. Z Sheng, T Wang, G Ferland, X Shu, C Yang, N Jiang, Y Chen, 10.3847/2041-8213/ac2251ApJ. 92025Sheng Z., Wang T., Ferland G., Shu X., Yang C., Jiang N., Chen Y., 2021, ApJ, 920, L25
. L Siess, 10.1051/0004-6361/200913556A&A. 51210Siess L., 2010, A&A, 512, A10
. J D Simpson, S L Martell, 10.1093/mnras/stz2611MNRAS. 490741Simpson J. D., Martell S. L., 2019, MNRAS, 490, 741
. C C Steidel, A L Strom, M Pettini, G C Rudie, N A Reddy, R F Trainor, 10.3847/0004-637X/826/2/159ApJ. 826159Steidel C. C., Strom A. L., Pettini M., Rudie G. C., Reddy N. A., Trainor R. F., 2016, ApJ, 826, 159
. T Suda, M Aikawa, M N Machida, M Y Fujimoto, 10.1086/422135Iben Icko J. 611476ApJSuda T., Aikawa M., Machida M. N., Fujimoto M. Y., Iben Icko J., 2004, ApJ, 611, 476
. S Tacchella, 10.48550/arXiv.2302.07234arXiv:2302.07234arXiv e-printsTacchella S., et al., 2023, arXiv e-prints, p. arXiv:2302.07234
. K Takahashi, T Yoshida, H Umeda, 10.3847/1538-4357/aab95fApJ. 857111Takahashi K., Yoshida T., Umeda H., 2018, ApJ, 857, 111
. M Tang, 10.48550/arXiv.2301.07072arXiv:2301.07072arXiv e-printsTang M., et al., 2023, arXiv e-prints, p. arXiv:2301.07072
. C A Tremonti, 10.1086/423264ApJ. 613898Tremonti C. A., et al., 2004, ApJ, 613, 898
. H Übler, 10.48550/arXiv.2302.06647arXiv:2302.06647arXiv e-printsÜbler H., et al., 2023, arXiv e-prints, p. arXiv:2302.06647
. P Ventura, P Marigo, 10.1111/j.1365-2966.2010.17304.xMNRAS. 4082476Ventura P., Marigo P., 2010, MNRAS, 408, 2476
. P Ventura, 10.1051/0004-6361/202141017A&A. 6556Ventura P., et al., 2021, A&A, 655, A6
. J S Vink, 10.1146/annurev-astro-052920-094949ARA&A. 60203Vink J. S., 2022, ARA&A, 60, 203
. M Volonteri, 10.1007/s00159-010-0029-xA&ARv. 18279Volonteri M., 2010, A&ARv, 18, 279
. J Witstok, R Smit, R Maiolino, M Curti, N Laporte, R Massey, J Richard, M Swinbank, 10.1093/mnras/stab2591MNRAS. 5081686Witstok J., Smit R., Maiolino R., Curti M., Laporte N., Massey R., Richard J., Swinbank M., 2021, MNRAS, 508, 1686
. C Yang, T Wang, G J Ferland, L Dou, H Zhou, N Jiang, Z Sheng, 10.3847/1538-4357/aa8598ApJ. 846150Yang C., Wang T., Ferland G. J., Dou L., Zhou H., Jiang N., Sheng Z., 2017, ApJ, 846, 150
. R M Yates, P Schady, T W Chen, T Schweyer, P Wiseman, 10.1051/0004-6361/201936506A&A. 634107Yates R. M., Schady P., Chen T. W., Schweyer T., Wiseman P., 2020, A&A, 634, A107
. H J Zahid, R.-P Kudritzki, C Conroy, B Andrews, I T Ho, 10.3847/1538-4357/aa88aeApJ. 84718Zahid H. J., Kudritzki R.-P., Conroy C., Andrews B., Ho I. T., 2017, ApJ, 847, 18
. J W Den Hartogh, R Hirschi, M Lugaro, C L Doherty, U Battino, F Herwig, M Pignatari, P Eggenberger, 10.1051/0004-6361/201935476A&A. 629123den Hartogh J. W., Hirschi R., Lugaro M., Doherty C. L., Battino U., Herwig F., Pignatari M., Eggenberger P., 2019, A&A, 629, A123
| []
|
[
"Schwinger modelà la Very Special Relativity",
"Schwinger modelà la Very Special Relativity"
]
| [
"Jorge Alfaro \nInstituto de Física\nPontificia Universidad de Católica de Chile\nAv. Vicuña Mackenna 4860SantiagoChile\n",
"Alex Soto \nInstituto de Física\nPontificia Universidad de Católica de Chile\nAv. Vicuña Mackenna 4860SantiagoChile\n"
]
| [
"Instituto de Física\nPontificia Universidad de Católica de Chile\nAv. Vicuña Mackenna 4860SantiagoChile",
"Instituto de Física\nPontificia Universidad de Católica de Chile\nAv. Vicuña Mackenna 4860SantiagoChile"
]
| []
| In this work, we show that Lorentz invariant theories in 1+1 dimensions admit new terms inspired by Very Special Relativity (VSR) theories. We have studied the Schwinger model in VSR. We show the axial current is classically conserved in the presence of a mass term coming from the VSR invariant terms but without standard Lorentz invariant mass. Furthermore, it is shown that both the vector current as well as the axial current are modified with respect to the free case when the fermion is coupled to an external electromagnetic field due to the nonlocal operator present in the theory. The axial anomaly is computed, and we found the same standard topological invariant with a modification in the coefficient. | 10.1016/j.physletb.2019.134923 | [
"https://export.arxiv.org/pdf/1907.06273v2.pdf"
]
| 196,621,746 | 1907.06273 | 1d6b47fad879a91b63eaa2a85a7e6c6a82dcdfd2 |
Schwinger modelà la Very Special Relativity
Jorge Alfaro
Instituto de Física
Pontificia Universidad de Católica de Chile
Av. Vicuña Mackenna 4860SantiagoChile
Alex Soto
Instituto de Física
Pontificia Universidad de Católica de Chile
Av. Vicuña Mackenna 4860SantiagoChile
Schwinger modelà la Very Special Relativity
(Dated: March 18, 2022)
In this work, we show that Lorentz invariant theories in 1+1 dimensions admit new terms inspired by Very Special Relativity (VSR) theories. We have studied the Schwinger model in VSR. We show the axial current is classically conserved in the presence of a mass term coming from the VSR invariant terms but without standard Lorentz invariant mass. Furthermore, it is shown that both the vector current as well as the axial current are modified with respect to the free case when the fermion is coupled to an external electromagnetic field due to the nonlocal operator present in the theory. The axial anomaly is computed, and we found the same standard topological invariant with a modification in the coefficient.
I. INTRODUCTION
Quantum Electrodynamics in 1 + 1 dimensions (QED 2 ) has been studied, and it has an exact solution discovered by Schwinger [1] when the fermion remains massless. In this model, called Schwinger Model, the photon acquires a mass e 2 /π. This model has been studied and reviewed extensively (see for example [2][3][4][5]) since this model presents confinement, because in two-dimensional space-time the Coulomb potential increases linearly, and it prevents the fermions to become free [6,7]. Interesting properties as the nontrivial vacuum structure were studied in the work of Lowenstein and Swieca [8]. Moreover, instantons in this model have been analyzed by Smilga [9]. In addition, despite the massive Schwinger model is not exactly solvable, it has been reviewed too [10,11]. Another essential feature in the Schwinger model is the presence of the chiral anomaly, which is easier to compute than in four dimensions. The axial vector current, which classically is conserved, gets a new term after radiative corrections. Thus,
\[ \partial_\mu j^{\mu 5} = \frac{e}{2\pi}\,\epsilon^{\mu\nu}F_{\mu\nu}. \tag{1} \]
For good reviews of this anomaly using a perturbative treatment see [12,13].
Anomalies have not been studied yet in the context of SIM (2) invariant theories. These theories began with the claim of Cohen and Glashow that nature could be described only with the Lorentz subgroup SIM (2) [14]. This theory, studied in four dimensions and called Very Special Relativity (VSR), does not have invariant tensors, and it has the same important features of Special Relativity, like time dilation, velocity addition, and maximum attainable velocity. Under this framework, a fixed null vector n transforms with a phase. Hence, new invariant terms like n · p/n · q can be constructed in the lagrangian. As a consequence of this, the neutrino gets mass without new particles or violation of leptonic number [15]. In four dimensions VSR has been studied in electrodynamics [16], and an important feature is the possibility and new consequences of a non-violating gauge invariant photon mass [17]. Also, the electroweak model under this formalism has been reviewed [18]. In addition, we found VSR studies in locally anisotropic cosmology [19], considering two-time physics [20], curved space-time [21] and using gaugeon formalism [22].
In this work, we will focus our attention on two dimensions instead of four, and we will analyze the chiral anomaly in the VSR-QED without the standard Lorentz invariant mass. The anomaly computation is easier than in four dimensions and it is simpler to test the new VSR-like terms here before we apply them in four-dimensional models. Meanwhile we have to keep in mind that in lower dimensions we have less degrees of freedom than in real life. In fact there are not true dynamical degrees of freedom associated with the electromagnetic field in two dimensions [11].
Thus, the outline of this work is as follows. In section II we will analyze the Lorentz group for two dimensions and the connection with VSR theories. In section III we will review the classical conservation of the free vector and axial current under the VSR formalism. In section IV we will derive the vector and axial current when the fermion is coupled with an external electromagnetic field. Section V presents the path integral view of the lagrangian reviewed in section IV, and we will compute the vacuum polarization of the photon. In section VI, we will compute the expectation value of the vector current. In section VII, we present the result of the axial anomaly in the VSR two dimensional model, and finally, in section VIII, we will summarize and highlight the results.

II. LORENTZ GROUP IN 1+1 DIMENSIONS

We recall the Lorentz group as the group of transformations that leave invariant the metric
\[ g^{\mu\nu} = g^{\rho\sigma}\,\Lambda^{\mu}{}_{\rho}\,\Lambda^{\nu}{}_{\sigma}. \tag{2} \]
In two dimensions we choose the metric as g = diag(1, −1). If we consider an infinitesimal transformation Λ µ ρ = δ µ ρ +ω µ ρ , it is easy to see that ω µν = ω µ ρ g ρν is antisymmetric. Thus, due to the antisymmetry, the Lorentz group in 1 + 1 dimensions has only one parameter. It means we have one generator of the Lorentz group. We define it as
\[ K = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}. \tag{3} \]
We notice that we can construct the Lorentz transformations applying successive transformations to the identity
\[ \Lambda(\theta) = \exp(K\theta), \tag{4} \]
where θ is a parameter. With this, the transformation reads
\[ \Lambda = \begin{pmatrix} \cosh\theta & \sinh\theta \\ \sinh\theta & \cosh\theta \end{pmatrix}. \tag{5} \]
We can check Λ effectively satisfies the identity (2). Moreover, is easy to see there are not invariant tensors under this transformation. However, if we observe the following null vector
\[ n = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \tag{6} \]
it transforms with a phase under this transformation, Λn = e θ n. Therefore, we can add to the lagrangian terms with fractions which contain n as in the numerator as in the denominator, because they are invariant under the Lorentz transformation. This kind of terms have been studied in four dimensional VSR theories (see for instance [15][16][17][18]), where the null vector (1, 0, 0, 1) transforms in the same way under SIM (2) group transformations. Nevertheless, these terms have not been incorporated in two dimensional works in Lorentz invariant theories and they could be added.
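A short symbolic sketch (not part of the paper) of these two statements, checking Eqs. (2), (5) and (6) with sympy:

import sympy as sp

theta = sp.symbols('theta', real=True)
g = sp.diag(1, -1)
K = sp.Matrix([[0, 1], [1, 0]])
Lam = (K * theta).exp()                     # matrix exponential, Eq. (4); equals Eq. (5)

# Invariance of the metric, Eq. (2): Lambda^T g Lambda = g
assert all(sp.simplify(e) == 0 for e in (Lam.T * g * Lam - g))

# The null vector n = (1, 1) of Eq. (6) is only rescaled: Lambda n = e^theta n
n = sp.Matrix([1, 1])
assert all(sp.simplify(e) == 0 for e in (Lam * n - sp.exp(theta) * n))

print("Lambda(theta) =", sp.simplify(Lam))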
III. CLASSICAL AXIAL AND VECTOR CURRENTS FOR FREE VSR FERMIONS
Since the two dimensional Lorentz theories admit terms with the null vector n = (1, 1) we proceed as in the VSR theories. The general VSR lagrangian for a free fermion is given by
\[ \mathcal{L} = \bar\psi\left( i\slashed{\partial} - M + \frac{i\,m^2}{2}\,\frac{\slashed{n}}{n\cdot\partial} \right)\psi. \tag{7} \]
Here, we use the slash notation / n = γ µ n µ and / ∂ = γ µ ∂ µ . The γ µ are the gamma matrices, which we have chosen in two dimensions as
\[ \gamma^0 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad \gamma^1 = \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix}. \tag{8} \]
Also, we can define γ 5 = γ 0 γ 1 , which will be useful later. In this representation, γ 5 is given by
\[ \gamma^5 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \tag{9} \]
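A short sympy sketch (not part of the paper) confirming that the representation (8)-(9) satisfies the two-dimensional Clifford algebra {γ^µ, γ^ν} = 2 g^{µν} with g = diag(1, −1):

import sympy as sp

I2 = sp.eye(2)
g0 = sp.Matrix([[0, -sp.I], [sp.I, 0]])
g1 = sp.Matrix([[0, sp.I], [sp.I, 0]])
gamma = [g0, g1]
metric = sp.diag(1, -1)

# Clifford algebra: {gamma^mu, gamma^nu} = 2 g^{mu nu} * identity
for mu in range(2):
    for nu in range(2):
        anticomm = gamma[mu] * gamma[nu] + gamma[nu] * gamma[mu]
        assert anticomm == 2 * metric[mu, nu] * I2

g5 = g0 * g1
assert g5 == sp.diag(1, -1)     # Eq. (9)
print("gamma^5 =", g5)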
In addition m is a mass parameter in the SIM(2) theory which parametrizes the deviations of the Lorentz Symmetry. Also, we can associate this parameter with the neutrino mass (see [15]). We notice if m → 0, we recover the standard result. In addition, M is the standard Lorentz invariant fermion mass. We will consider from here M = 0 to study the Schwinger model in VSR. Nevertheless, under this assumption, our model is a fermion with a small mass m.
Although there is not standard mass, our fermion is massive due to the VSR term. Thus, in VSR we have a slightly different massive Schwinger model. Also, ψ is a two component spinor. As M = 0, the lagrangian is
\[ \mathcal{L}_0 = \bar\psi\left( i\slashed{\partial} + \frac{i\,m^2}{2}\,\frac{\slashed{n}}{n\cdot\partial} \right)\psi. \tag{10} \]
From the equation (10) we compute the equations of motion forψ and ψ
\[ \bar\psi\left( -i\overleftarrow{\slashed{\partial}} - \frac{i\,m^2}{2}\,\frac{\slashed{n}}{n\cdot\overleftarrow{\partial}} \right) = 0, \tag{11} \]
\[ \left( i\slashed{\partial} + \frac{i\,m^2}{2}\,\frac{\slashed{n}}{n\cdot\partial} \right)\psi = 0, \tag{12} \]
where ← ∂ indicates the derivative acts over the object in the left. If we multiply by ψ in the right in (11) an byψ in the left in (12) and we sum both we get
\[ \partial_\mu\left[ \bar\psi\gamma^\mu\psi + \frac{m^2}{2}\left(\frac{1}{n\cdot\partial}\bar\psi\right)\slashed{n}\,n^\mu\left(\frac{1}{n\cdot\partial}\psi\right) \right] = 0. \tag{13} \]
Therefore, we define the expression in the bracket as the free vector current
\[ j^\mu_{free} = \bar\psi\gamma^\mu\psi + \frac{m^2}{2}\left(\frac{1}{n\cdot\partial}\bar\psi\right)\slashed{n}\,n^\mu\left(\frac{1}{n\cdot\partial}\psi\right). \tag{14} \]
We can proceed in an analogue way to define the axial current j µ5 as
\[ j^{\mu 5}_{free} = \bar\psi\gamma^\mu\gamma^5\psi + \frac{m^2}{2}\left(\frac{1}{n\cdot\partial}\bar\psi\right)\slashed{n}\,\gamma^5 n^\mu\left(\frac{1}{n\cdot\partial}\psi\right). \tag{15} \]
We observe that ∂ µ j µ f ree = 0 and ∂ µ j µ5 f ree = 0; thus, both currents are conserved at the classical level.
In two dimensions the relation γ^µ γ^5 = −ε^{µν} γ_ν holds, where ε^{µν} is the two-dimensional Levi-Civita symbol. Hence, we apply this relation in the equation (15) and we use the equation (14) to get
\[ j^{\mu 5}_{free} = -\epsilon^{\mu\nu} j^{free}_{\nu} + \frac{m^2}{2}\left(\frac{1}{n\cdot\partial}\bar\psi\right)\left( \epsilon^{\mu\nu}\slashed{n}\,n_\nu + \slashed{n}\,\gamma^5 n^\mu \right)\left(\frac{1}{n\cdot\partial}\psi\right). \tag{16} \]
Since n^0 = n^1 = 1 and n_0 = −n_1 = 1, we find ε^{µν} /n n_ν + /n γ^5 n^µ = 0. Thus
\[ j^{\mu 5}_{free} = -\epsilon^{\mu\nu} j^{free}_{\nu}, \tag{17} \]
as in the standard case.
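As a quick symbolic cross-check (not part of the paper), the cancellation used between Eqs. (16) and (17) can be verified with the explicit matrices (8)-(9). The Levi-Civita symbol is taken here with ε^{01} = −1, the sign for which the two terms cancel; the convention-independent fact /n γ^5 = −/n is checked as well.

import sympy as sp

g0 = sp.Matrix([[0, -sp.I], [sp.I, 0]])
g1 = sp.Matrix([[0, sp.I], [sp.I, 0]])
g5 = g0 * g1
metric = sp.diag(1, -1)

n_up = sp.Matrix([1, 1])                 # n^mu
n_dn = metric * n_up                     # n_mu = (1, -1)
nslash = g0 * n_dn[0] + g1 * n_dn[1]     # gamma^mu n_mu

eps = sp.Matrix([[0, -1], [1, 0]])       # Levi-Civita symbol with eps^{01} = -1 (assumed sign)

# Convention-independent fact used in the text: nslash gamma^5 = -nslash
assert nslash * g5 + nslash == sp.zeros(2, 2)

# The combination in Eq. (16) vanishes component by component
for mu in range(2):
    term = sum(eps[mu, nu] * n_dn[nu] for nu in range(2)) * nslash \
           + nslash * g5 * n_up[mu]
    assert term == sp.zeros(2, 2)
print("eps^{mu nu} nslash n_nu + nslash gamma^5 n^mu = 0 verified")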
IV. CURRENTS FOR FERMIONS IN AN EXTERNAL ELECTROMAGNETIC FIELD
Let us consider the lagrangian for a fermion with only a VSR mass term coupled with an external electromagnetic field A µ .
\[ \mathcal{L} = \bar\psi\left( i\slashed{D} + \frac{i\,m^2}{2}\,\frac{\slashed{n}}{n\cdot D} \right)\psi, \tag{18} \]
where D µ = ∂ µ + ieA µ . From this lagrangian we get the equations of motion for ψ andψ
\[ \left( i\slashed{D} + \frac{i\,m^2}{2}\,\frac{\slashed{n}}{n\cdot D} \right)\psi = 0, \tag{19} \]
\[ (D^\dagger_\mu\bar\psi)\gamma^\mu + \frac{m^2}{2}\left(\frac{1}{n\cdot D^\dagger}\bar\psi\right)\slashed{n} = 0, \tag{20} \]
where the operator D † µ is defined as D † µ = ∂ µ − ieA µ .
We proceed similarly as in the free case, multiplying byψ in the left in (19), by ψ on the right in (20) and sum both. It results in
\[ \partial_\mu(\bar\psi\gamma^\mu\psi) + \frac{m^2}{2}\left[ \bar\psi\,\slashed{n}\left(\frac{1}{n\cdot D}\psi\right) + \left(\frac{1}{n\cdot D^\dagger}\bar\psi\right)\slashed{n}\,\psi \right] = 0. \tag{21} \]
We can rewrite this expression as
\[ \partial_\mu\left[ \bar\psi\gamma^\mu\psi + \frac{m^2}{2}\left(\frac{1}{n\cdot D^\dagger}\bar\psi\right)\slashed{n}\,n^\mu\left(\frac{1}{n\cdot D}\psi\right) \right] = 0. \tag{22} \]
Thus, the current in this case is given by
\[ j^\mu = \bar\psi\gamma^\mu\psi + \frac{m^2}{2}\left(\frac{1}{n\cdot D^\dagger}\bar\psi\right)\slashed{n}\,n^\mu\left(\frac{1}{n\cdot D}\psi\right). \tag{23} \]
We notice the current for the fermion in the electromagnetic field is modified. Due to the non local term (n · D) −1 the current j µ acquires a new dependence on A. We proceed in an analogue way to get j µ5 and we get:
\[ j^{\mu 5} = \bar\psi\gamma^\mu\gamma^5\psi + \frac{m^2}{2}\left(\frac{1}{n\cdot D^\dagger}\bar\psi\right)\slashed{n}\,n^\mu\gamma^5\left(\frac{1}{n\cdot D}\psi\right). \tag{24} \]
Despite the modification of the current from its free counterpart, for the axial and vector currents the relation
\[ j^{\mu 5} = -\epsilon^{\mu\nu} j_{\nu}, \tag{25} \]
still holds, and they are classically conserved (∂ µ j µ = ∂ µ j µ5 = 0).
Notice that both currents are gauge invariant under ψ → e −ieχ ψ and A µ → A µ + ∂ µ χ. Then we can work in the gauge n · A = 0. In this gauge both currents reduce to the free case currents.
V. PATH INTEGRAL, FEYNMAN RULES AND PHOTON SELF-ENERGY
In this section we will analyze the fermion coupled to an external electromagnetic field using the path integral formalism. The generating functional for this situation is given by
\[ Z = \int \mathcal{D}\psi\,\mathcal{D}\bar\psi\;\exp\left( i\int d^2x\;\bar\psi\left( i\slashed{D} + \frac{i\,m^2}{2}\,\frac{\slashed{n}}{n\cdot D} \right)\psi \right). \tag{26} \]
From the equation (26) we get the Feynman rules for VSR-QED. Due to the nonlocal term (n · D) −1 there will be an infinite number of vertices in the series. However, if we work in the Light Cone Gauge (LCG) those vertices with two or more photon legs will not contribute. Nevertheless, for the general case, we list in figure 1 the Feynman rules including the first new vertex.
To see how it works, we will compute the lowest order vacuum polarization Π µν in two dimensions. We observe a new diagram appears due to the new vertex coming from the non-local operator expansion, which is displayed in the figure 2.
As we mentioned before, in the light cone gauge the operator (n · D)^{-1} reduces to (n · ∂)^{-1}. Here, we will compute the diagrams without working in a specific gauge to show that the result satisfies the Ward identity. In the next section, when we compute the currents, for the sake of simplicity, we will use the light cone gauge. Therefore, using the Feynman rules in figure 1 the vacuum polarization is written as
\[ i\Pi_1^{\mu\nu} = -e^2\int \frac{d^2p}{(2\pi)^2}\, \frac{1}{(p^2-m^2+i\varepsilon)\,((p-q)^2-m^2+i\varepsilon)}\; \mathrm{tr}\!\left[ \left( \gamma^\mu + \frac{m^2}{2}\,\frac{\slashed{n}\,n^\mu}{n\cdot p\;n\cdot(p-q)} \right)\!\left( \slashed{p} - \frac{m^2}{2}\,\frac{\slashed{n}}{n\cdot p} \right)\! \left( \gamma^\nu + \frac{m^2}{2}\,\frac{\slashed{n}\,n^\nu}{n\cdot p\;n\cdot(p-q)} \right)\!\left( \slashed{p}-\slashed{q} - \frac{m^2}{2}\,\frac{\slashed{n}}{n\cdot(p-q)} \right) \right], \tag{27} \]
\[ i\Pi_2^{\mu\nu} = -\frac{1}{2}\,e^2 m^2\, n^\mu n^\nu \int \frac{d^2p}{(2\pi)^2}\, \frac{1}{(n\cdot p)^2}\left( \frac{1}{n\cdot(p+q)} + \frac{1}{n\cdot(p-q)} \right)\frac{1}{p^2-m^2+i\varepsilon}\; \mathrm{tr}\!\left[ \slashed{n}\left( \slashed{p} - \frac{m^2}{2}\,\frac{\slashed{n}}{n\cdot p} \right) \right]. \tag{28} \]
We proceed using dimensional regularization. In order to compute the SIM(2) integrals we use the following decomposition formula
\[ \frac{1}{(n\cdot(p+k_i))\,(n\cdot(p+k_j))} = \frac{1}{n\cdot(k_i-k_j)}\left( \frac{1}{n\cdot(p+k_j)} - \frac{1}{n\cdot(p+k_i)} \right). \tag{29} \]
Next, we make a change of variables wherever is necessary, and the integrals with (n · p) −1 are computed using the Mandelstam-Leibbrandt prescription. The important formulas were derived in [25] starting from a symmetry property of n and a new null vectorn introduced to regulate the integrals. These integrals are listed in the Appendix A. The introduction of the new null vectorn breaks the SIM(2) invariance. To recover it we follow the reference [26] tradinḡ n with a linear combination of n and the external momentum as
\[ \bar n^\mu = -\frac{q^2}{2(n\cdot q)^2}\,n^\mu + \frac{q^\mu}{n\cdot q}. \tag{30} \]
After the calculation we get
\[ i\Pi^{\mu\nu} = \alpha(q^2)\left( q^2 g^{\mu\nu} - q^\mu q^\nu \right) + \beta(q^2)\left( -g^{\mu\nu} + \frac{n^\nu q^\mu + n^\mu q^\nu}{n\cdot q} - \frac{q^2\, n^\mu n^\nu}{(n\cdot q)^2} \right), \tag{31} \]
where
\[ \alpha(q^2) = -\frac{ie^2}{\pi}\int_0^1 dx\;\frac{x(1-x)}{m^2 - x(1-x)q^2 - i\varepsilon}, \tag{32} \]
\[ \beta(q^2) = \frac{ie^2 m^2}{2\pi}\int_0^1 dx\;\frac{x\,q^2}{(m^2 - x q^2 - i\varepsilon)\,(m^2 - x(1-x)q^2 - i\varepsilon)}. \tag{33} \]
The equation (31) satisfies the Ward identity q µ Π µν = 0 as it is required by the gauge invariance. From the expressions (32) and (33) we see that α(0) is finite and β(0) = 0. It means in the two dimensional QED-VSR the photon does not get mass, as in the standard QED.
When m 2 goes to zero in (32) and (33) we recover the standard result. Here, as the VSR result contains the mass term, when we perform the limit ε → 0 we observe in α(q 2 ) there is a branch cut where m 2 − x(1 − x)q 2 < 0. The product x(1 − x) is at most 1/4. Hence, the branch cut begins at q 2 = 4m 2 , which corresponds to the threshold for the creation of an electron-positron pair. So, the theory admits pair production in two dimensions.
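A crude numerical illustration (not part of the paper) of the threshold just described: the integral in Eq. (32) is essentially real for q² < 4m² and develops an imaginary part above q² = 4m² once the iε prescription is taken into account (units with e = 1, m² = 1).

import cmath

def integral_32(q2, m2, eps=1e-3, npts=100000):
    """Midpoint approximation of the integral appearing in Eq. (32)."""
    total = 0.0 + 0.0j
    for k in range(npts):
        x = (k + 0.5) / npts
        total += x * (1.0 - x) / (m2 - x * (1.0 - x) * q2 - 1j * eps)
    return total / npts

m2 = 1.0
for q2 in (1.0, 3.5, 4.5, 10.0):
    I = integral_32(q2, m2)
    alpha = -1j * I / cmath.pi          # alpha(q^2) of Eq. (32) with e = 1
    print(f"q2 = {q2:5.1f}   Im(integral) = {I.imag:+.4f}   "
          f"alpha = {alpha.real:+.4f} {alpha.imag:+.4f}i")
# Expected: Im(integral) is tiny below q2 = 4 m2 and of order one above it.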
We will leave the expressions for α(q 2 ) and β(q 2 ) in this way to move on to the computations of the currents and next, the axial anomaly, where only α(q 2 ) will be necessary. The integrals are computed in the Appendix B.
VI. VECTOR CURRENT
Now, we will compute expectation value for the current j µ . From the generating functional in the equation (26) we can get the effective action Γ using Γ = log Z. The current is obtained deriving respect to A µ ,
\[ j^\mu = \frac{\delta\Gamma}{\delta A_\mu}. \tag{34} \]
Thus, we have
\[ j^\mu = \frac{1}{Z}\int \mathcal{D}\psi\,\mathcal{D}\bar\psi\;\left[ \bar\psi\gamma^\mu\psi + \frac{m^2}{2}\left(\frac{1}{n\cdot D^\dagger}\bar\psi\right)\slashed{n}\,n^\mu\left(\frac{1}{n\cdot D}\psi\right) \right]\exp\left( i\int d^2x\;\mathcal{L} \right). \tag{35} \]
We will work in the LCG and we get
\[ j^\mu = \frac{1}{Z}\int \mathcal{D}\psi\,\mathcal{D}\bar\psi\;\bar\psi\left( \gamma^\mu + \frac{m^2}{2}\,\frac{\slashed{n}\,n^\mu}{(n\cdot\overleftarrow{\partial})(n\cdot\partial)} \right)\psi\;\exp\left( i\int d^2x\;\mathcal{L}_0 \right)\exp\left( -ie\int d^2x\;\bar\psi\slashed{A}\psi \right), \tag{36} \]
where we recall L 0 is the free fermion lagrangian in the equation (10). We proceed perturbatively only to one loop, because following the argument in [13], the chiral anomaly is exact at one loop. Although the argument was used in the standard computation, it holds here, since we can interpret the anomaly topologically. Thus, as topological quantities cannot change continuously, perturbative corrections at higher than one loop should not appear. Hence
\[ j^\mu(x) = \lim_{x'\to x}\,\mathrm{tr}\!\left[ \left( \gamma^\mu + \frac{m^2}{2}\,\frac{1}{n\cdot\partial_x}\,\slashed{n}\,n^\mu\,\frac{1}{n\cdot\partial_{x'}} \right) S_F(x-x') \right] - \lim_{x'\to x}\, ie\int d^2y\;\mathrm{tr}\!\left[ \left( \gamma^\mu + \frac{m^2}{2}\,\frac{1}{n\cdot\partial_x}\,\slashed{n}\,n^\mu\,\frac{1}{n\cdot\partial_{x'}} \right) S_F(x-y)\,\slashed{A}(y)\,S_F(y-x') \right]. \tag{37} \]
Where, we have replaced the dependence on x inψ with the limit x → x to distinguish where the non-local operator n · ∂ acts, and we indicate with a subscript in the partial derivatives the variable to be derived. We write the above expression in the Fourier space and after some algebra we get
\[ j^\mu(q) = (-ie)\int\frac{d^2p}{(2\pi)^2}\;\mathrm{tr}\!\left[ \left( \gamma^\mu + \frac{m^2}{2}\,\frac{\slashed{n}\,n^\mu}{(n\cdot(p-q))(n\cdot p)} \right)\frac{i}{\slashed{p}-\frac{m^2}{2}\frac{\slashed{n}}{n\cdot p}}\;\gamma^\nu\;\frac{i}{\slashed{p}-\slashed{q}-\frac{m^2}{2}\frac{\slashed{n}}{n\cdot(p-q)}} \right]A_\nu(q). \tag{38} \]
This object is the same vacuum polarization after using the condition n · A = 0, except an i/e factor. Thus, the expectation value for the current is given by
\[ j^\mu(q) = \frac{i}{e}\,\alpha(q^2)\left( q^2 A^\mu - q^\mu\, q\cdot A \right) + \frac{i}{e}\,\beta(q^2)\left( -A^\mu + \frac{n^\mu\, q\cdot A}{n\cdot q} \right). \tag{39} \]
We notice the expectation value of the current is conserved, q µ j µ (q) = 0.
VII. AXIAL ANOMALY
We will compute the expectation value for j µ5 . We use the equation (25), which relates j µ with j µ5 , and we use the equation (39) to get
\[ j^{\mu 5} = -\frac{i}{e}\,\alpha(q^2)\,\epsilon^{\mu\nu}\left( q^2 A_\nu - q_\nu\, q\cdot A \right) - \frac{i}{e}\,\beta(q^2)\,\epsilon^{\mu\nu}\left( -A_\nu + \frac{n_\nu\, q\cdot A}{n\cdot q} \right). \tag{40} \]
We contract (40) with q µ and we write it in terms of F µν . Therefore,
\[ q_\mu j^{\mu 5} = -\frac{i}{e}\left[ \left( \alpha(q^2)q^2 - \beta(q^2) \right)\frac{1}{2}\,\epsilon^{\mu\nu}F_{\mu\nu}(q) - \beta(q^2)\,\frac{n_\alpha q_\beta F^{\alpha\beta}}{(n\cdot q)^2}\,\epsilon^{\mu\nu}q_\mu n_\nu \right]. \tag{41} \]
Seemingly, a new anomaly term appears. Nevertheless, since we are working in two dimensions is easy to see that n α q β F αβ = 1 2 ε µν n µ q ν ε αβ F αβ . Thus,
\[ q_\mu j^{\mu 5} = -\frac{i}{e}\left[ \left( \alpha(q^2)q^2 - \beta(q^2) \right)\frac{1}{2}\,\epsilon^{\mu\nu}F_{\mu\nu}(q) + \beta(q^2)\,\frac{(\epsilon^{\alpha\beta}n_\alpha q_\beta)^2}{2(n\cdot q)^2}\,\epsilon^{\mu\nu}F_{\mu\nu} \right]. \tag{42} \]
In addition, using n_0 = −n_1 = 1 we get as result
\[ \frac{\epsilon^{\alpha\beta}n_\alpha q_\beta}{n\cdot q} = 1. \tag{43} \]
Therefore, the terms with β(q 2 ) cancel out and we get
\[ q_\mu j^{\mu 5} = -\frac{i}{2e}\,\alpha(q^2)\,q^2\,\epsilon^{\mu\nu}F_{\mu\nu}(q). \tag{44} \]
We use the result for α(q 2 ) in the appendix B and finally
\[ q_\mu j^{\mu 5} = \left[ \frac{e}{2\pi} + \frac{e\,m^2}{\pi q^2\sqrt{1-\frac{4m^2-i\varepsilon}{q^2}}}\;\log\!\left( \frac{1+\sqrt{\frac{q^2-4m^2+i\varepsilon}{q^2}}}{-1+\sqrt{\frac{q^2-4m^2+i\varepsilon}{q^2}}} \right) \right]\epsilon^{\mu\nu}F_{\mu\nu}(q). \tag{45} \]
In the limit m → 0, we recover the standard result. In that case, the equation (45) reduces to the result displayed in the equation (1). In the limit q 2 → 0 and ε → 0 the result is zero. So, the anomaly vanishes. The standard result reached in the limit m → 0 is equivalent to take q → ∞. Thus, our result interpolates between large momentum (short distances) where an anomaly is appreciated and the low momentum (large distances) where there is not anomaly. Furthermore, we see in the equation (45) that the same topological invariant appears as in the standard computation. However, there is a modification in the coefficient next to the anomaly term due to the VSR term.
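A minimal numerical sketch (not part of the paper) of the two limits just discussed, using the coefficient of ε^{µν}F_{µν}(q) in Eq. (45) as reconstructed above; the placement of the square roots is our reading of the extracted formula, and e is set to 1.

import cmath

def anomaly_coefficient(q2, m2, eps=1e-12):
    """Bracketed coefficient in Eq. (45), with e = 1."""
    root1 = cmath.sqrt(1.0 - (4.0 * m2 - 1j * eps) / q2)
    root2 = cmath.sqrt((q2 - 4.0 * m2 + 1j * eps) / q2)
    log_term = cmath.log((1.0 + root2) / (-1.0 + root2))
    return 1.0 / (2.0 * cmath.pi) + m2 / (cmath.pi * q2 * root1) * log_term

print("standard anomaly e/(2 pi)  :", 1.0 / (2.0 * cmath.pi))
print("m^2 -> 0 at fixed q^2 = 1  :", anomaly_coefficient(1.0, 1e-8))
print("q^2 -> 0 at fixed m^2 = 1  :", anomaly_coefficient(1e-8, 1.0))
# Expected: the first two numbers agree, and the last one is numerically ~ 0.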
VIII. CONCLUSIONS
The existence of the null vector n = (1, 1) that transforms by a phase in the two dimensional Lorentz group allowed us to study the Schwinger model under the VSR framework. Terms which contains n in fractions could be incorporated elsewhere.
We have found in section V that the photon does not receive a mass since α(0) is finite and β(0) = 0. This result is completely different from the Schwinger work in the standard QED 2 , where the photon acquires a mass e 2 /π. In addition, from the vacuum polarization computation, we observe pair production in this model.
Both the free vector current and the free axial current change when we couple the fermion with only a VSR mass to an external electromagnetic field due to the non-local operator (n · D) −1 . However, it reduces to the free case in the light cone gauge n · A = 0. Since the VSR current is different from the standard case, our calculation of the axial anomaly presents a difference in the coefficient which accompanies the anomaly term respect to the standard result.
Appendix B: Solution of the integrals in the vacuum polarization

Here we will compute α(q²) and β(q²). We recall the expressions (32) and (33):
\[ \alpha(q^2) = -\frac{ie^2}{\pi}\int_0^1 dx\;\frac{x(1-x)}{m^2-x(1-x)q^2-i\varepsilon}, \tag{B1} \]
\[ \beta(q^2) = \frac{ie^2 m^2}{2\pi}\int_0^1 dx\;\frac{x\,q^2}{(m^2-xq^2-i\varepsilon)\,(m^2-x(1-x)q^2-i\varepsilon)}. \tag{B2} \]
To simplify the computation we split in partial fractions as α(q 2 ) as β(q 2 ). To do this in β(q 2 ) is essential to keep m 2 = 0. After this, we have
\[ \alpha(q^2) = \frac{ie^2}{\pi q^2} + \frac{ie^2 m^2}{\pi (q^2)^2\sqrt{1-\frac{4m^2-i\varepsilon}{q^2}}}\,(I_1 - I_2), \tag{B3} \]
\[ \beta(q^2) = \frac{ie^2}{4\pi}\left( \frac{1}{\sqrt{1-\frac{4m^2-i\varepsilon}{q^2}}} + 1 \right) I_1 - \frac{ie^2}{4\pi}\left( \frac{1}{\sqrt{1-\frac{4m^2-i\varepsilon}{q^2}}} - 1 \right) I_2 - \frac{ie^2}{2\pi}\,I_3, \tag{B4} \]
where the terms I 1 , I 2 and I 3 are defined as
\[ I_1 = \int_0^1 \frac{dx}{x - \frac{1}{2}\left(1 - \sqrt{1-\frac{4m^2-i\varepsilon}{q^2}}\right)}, \tag{B5} \]
\[ I_2 = \int_0^1 \frac{dx}{x - \frac{1}{2}\left(1 + \sqrt{1-\frac{4m^2-i\varepsilon}{q^2}}\right)}, \tag{B6} \]
\[ I_3 = \int_0^1 \frac{dx}{x - \frac{m^2-i\varepsilon}{q^2}}. \tag{B7} \]
We solve the integrals and we get
\[ I_1 = \log\!\left( \frac{1+\sqrt{\frac{q^2-4m^2+i\varepsilon}{q^2}}}{-1+\sqrt{\frac{q^2-4m^2+i\varepsilon}{q^2}}} \right), \tag{B8} \]
\[ I_2 = \log\!\left( \frac{-1+\sqrt{\frac{q^2-4m^2+i\varepsilon}{q^2}}}{1+\sqrt{\frac{q^2-4m^2+i\varepsilon}{q^2}}} \right), \tag{B9} \]
\[ I_3 = \log(q^2-m^2+i\varepsilon) - \log(-m^2+i\varepsilon). \tag{B10} \]
and the values of α(q 2 ) and β(q 2 ) are
\[ \alpha(q^2) = \frac{ie^2}{\pi q^2} + \frac{2ie^2 m^2}{\pi (q^2)^2\sqrt{1-\frac{4m^2-i\varepsilon}{q^2}}}\,\log\!\left( \frac{1+\sqrt{\frac{q^2-4m^2+i\varepsilon}{q^2}}}{-1+\sqrt{\frac{q^2-4m^2+i\varepsilon}{q^2}}} \right), \tag{B11} \]
\[ \beta(q^2) = -\frac{ie^2}{2\pi}\left[ \log(q^2-m^2+i\varepsilon) - \log(-m^2+i\varepsilon) \right] + \frac{ie^2}{2\pi\sqrt{1-\frac{4m^2}{q^2}}}\,\log\!\left( \frac{1+\sqrt{\frac{q^2-4m^2+i\varepsilon}{q^2}}}{-1+\sqrt{\frac{q^2-4m^2+i\varepsilon}{q^2}}} \right). \tag{B12} \]
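A small numerical cross-check (not part of the paper) of the closed forms (B8)-(B10) against direct integration of (B5)-(B7). A finite iε keeps the integrands away from the real-axis poles, so the agreement is only up to corrections of order ε; e = 1 and q² > 4m².

import cmath

def riemann(f, npts=200000):
    """Crude midpoint integration on [0, 1] for complex-valued integrands."""
    return sum(f((k + 0.5) / npts) for k in range(npts)) / npts

q2, m2, eps = 10.0, 1.0, 1e-2
root = cmath.sqrt(1.0 - (4.0 * m2 - 1j * eps) / q2)

I1_num = riemann(lambda x: 1.0 / (x - 0.5 * (1.0 - root)))
I2_num = riemann(lambda x: 1.0 / (x - 0.5 * (1.0 + root)))
I3_num = riemann(lambda x: 1.0 / (x - (m2 - 1j * eps) / q2))

s = cmath.sqrt((q2 - 4.0 * m2 + 1j * eps) / q2)
I1_closed = cmath.log((1.0 + s) / (-1.0 + s))                      # (B8)
I2_closed = cmath.log((-1.0 + s) / (1.0 + s))                      # (B9)
I3_closed = cmath.log(q2 - m2 + 1j * eps) - cmath.log(-m2 + 1j * eps)  # (B10)

for name, num, closed in (("I1", I1_num, I1_closed),
                          ("I2", I2_num, I2_closed),
                          ("I3", I3_num, I3_closed)):
    print(name, "numeric:", num, " closed form:", closed)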
FIG. 1: Useful Feynman rules in VSR-QED.
FIG. 2: Photon vacuum polarization diagrams in VSR.
Acknowledgments

The work of A. Soto is supported by the CONICYT-PFCHA/Doctorado Nacional/2017-21171194 and Fondecyt 1150390. The work of J. Alfaro is partially supported by Fondecyt 1150390 and CONICYT-PIA-ACT1417.

Appendix A: Integration with (n · p)^{-1}

In this appendix we list the main integrals needed to compute our expressions with (n · p)^{-1} in the paper, which are obtained from ref. [25]: they are equation (A1) together with the integrals (A2) and (A3) obtained from it by taking a first and a second derivative with respect to q (the explicit formulas can be found in ref. [25]).
[1] J. S. Schwinger, "Gauge Invariance and Mass. 2," Phys. Rev. 128, 2425 (1962).
[2] E. Abdalla, "Two-dimensional quantum field theory: Examples and applications," hep-th/9704192.
[3] E. Abdalla, M. C. B. Abdalla and K. D. Rothe, "Nonperturbative methods in two-dimensional quantum field theory," Singapore: World Scientific (1991), 728 p.
[4] W. Dittrich and M. Reuter, "Selected Topics in Gauge Theories," Lect. Notes Phys. 244, 1 (1986).
[5] N. S. Manton, "The Schwinger Model and Its Axial Anomaly," Annals Phys. 159, 220 (1985).
[6] D. Wolf and J. Zittartz, "Physics of the Schwinger Model," Z. Physik B - Condensed Matter 59, 117 (1985).
[7] A. Casher, J. B. Kogut and L. Susskind, "Vacuum polarization and the absence of free quarks," Phys. Rev. D 10, 732 (1974).
[8] J. H. Lowenstein and J. A. Swieca, "Quantum electrodynamics in two-dimensions," Annals Phys. 68, 172 (1971).
[9] A. V. Smilga, "Instantons in Schwinger model," Phys. Rev. D 49, 5480 (1994) [hep-th/9312110].
[10] S. R. Coleman, R. Jackiw and L. Susskind, "Charge Shielding and Quark Confinement in the Massive Schwinger Model," Annals Phys. 93, 267 (1975).
[11] S. R. Coleman, "More About the Massive Schwinger Model," Annals Phys. 101, 239 (1976).
[12] M. E. Peskin and D. V. Schroeder, "An Introduction to Quantum Field Theory," Addison-Wesley (1995), chapter 19.1.
[13] J. A. Harvey, "TASI 2003 lectures on anomalies," hep-th/0509097.
[14] A. G. Cohen and S. L. Glashow, "Very special relativity," Phys. Rev. Lett. 97, 021601 (2006) [hep-ph/0601236].
[15] A. G. Cohen and S. L. Glashow, "A Lorentz-Violating Origin of Neutrino Mass?," hep-ph/0605036.
[16] S. Cheon, C. Lee and S. J. Lee, "SIM(2)-invariant Modifications of Electrodynamic Theory," Phys. Lett. B 679, 73 (2009) [arXiv:0904.2065 [hep-th]].
[17] J. Alfaro and A. Soto, "On the photon mass in Very Special Relativity," arXiv:1901.08011 [hep-th].
[18] J. Alfaro, P. Gonzalez and R. Avila, "Electroweak standard model with very special relativity," Phys. Rev. D 91, 105007 (2015); Addendum: Phys. Rev. D 91, no. 12, 129904 (2015) [arXiv:1504.04222 [hep-ph]].
[19] A. P. Kouretsis, M. Stathakopoulos and P. C. Stavrinos, "The General Very Special Relativity in Finsler Cosmology," Phys. Rev. D 79, 104011 (2009) [arXiv:0810.3267 [gr-qc]].
[20] J. M. Romero, E. Escobar and E. Vazquez, "Free particle in very special relativity, gauge symmetry and two-time physics," Mod. Phys. Lett. A 28, 1350004 (2013) [arXiv:1203.2642 [hep-th]].
[21] W. Muck, "Very Special Relativity in Curved Space-Times," Phys. Lett. B 670, 95 (2008) [arXiv:0806.0737 [hep-th]].
[22] S. Upadhyay and P. K. Panigrahi, "Quantum Gauge Freedom in Very Special Relativity," Nucl. Phys. B 915, 168 (2017) [arXiv:1608.03947 [hep-th]].
[23] S. Mandelstam, "Light Cone Superspace and the Ultraviolet Finiteness of the N=4 Model," Nucl. Phys. B 213, 149 (1983).
[24] G. Leibbrandt, "The Light Cone Gauge in Yang-Mills Theory," Phys. Rev. D 29, 1699 (1984).
[25] J. Alfaro, "Mandelstam-Leibbrandt prescription," Phys. Rev. D 93, no. 6, 065033 (2016); Erratum: Phys. Rev. D 94, no. 4, 049901 (2016) [arXiv:1603.06453 [hep-th]].
[26] J. Alfaro, "A Sim(2) invariant dimensional regularization," Phys. Lett. B 772, 100 (2017) [arXiv:1704.02299 [hep-th]].
| []
|
[]
| [
"\nInstitute of Automation and Control Processes\nFar Eastern Branch of Russian Academy of Sciences\n5 Radio St690041VladivostokRussia\n"
]
| [
"Institute of Automation and Control Processes\nFar Eastern Branch of Russian Academy of Sciences\n5 Radio St690041VladivostokRussia"
]
| [
"Bulletin of the Russian Academy of Sciences: Physics"
]
| Possibility of anapole state in dielectric nanohole array metasurfaces with different hole shapesA. V. Panov(https://orcid.org/0000-0002-9624-4303) a, * Abstract-The optical anapole resonances in nanostructures display strong field confinement and substantially suppressed scattering. In this study using three-dimensional finite-difference timedomain simulations, it is shown that high refractive index dielectric nanohole array metasurfaces having different profiles of the holes can possess the anapole state. The multipole decomposition including the dipole electric toroidal moment for the lattice elements of the arrays is provided. The anapole state in the lattice elements is illustrated by time-averaged distributions of the energy. As a result, high-index freestanding metasurfaces with anapole state can be designed. | 10.3103/s1062873822700605 | [
"https://export.arxiv.org/pdf/2301.05418v1.pdf"
]
| 255,825,686 | 2301.05418 | a80e5858361c1ed8d16fda040ad9b1d088645685 |
2022
Institute of Automation and Control Processes
Far Eastern Branch of Russian Academy of Sciences
5 Radio St690041VladivostokRussia
Bulletin of the Russian Academy of Sciences: Physics
86, 2022. 10.3103/S1062873822700605. Received September 20, 2022; revised October 14, 2022; accepted October 20, 2022. Keywords: anapole state; dielectric metasurface; freestanding metasurface; silicon.
Possibility of anapole state in dielectric nanohole array metasurfaces with different hole shapes
A. V. Panov (https://orcid.org/0000-0002-9624-4303) a,*
Abstract: The optical anapole resonances in nanostructures display strong field confinement and substantially suppressed scattering. In this study using three-dimensional finite-difference time-domain simulations, it is shown that high refractive index dielectric nanohole array metasurfaces having different profiles of the holes can possess the anapole state. The multipole decomposition including the dipole electric toroidal moment for the lattice elements of the arrays is provided. The anapole state in the lattice elements is illustrated by time-averaged distributions of the energy. As a result, high-index freestanding metasurfaces with anapole state can be designed.
During the past years, significant attention by researchers has been given to the anapole states of high-index all-dielectric nanostructures. In the nanostructures, the anapole modes arise from destructive interference of the electric and toroidal dipole resonant modes of a specific chargecurrent distribution which display strong field enhancement along with reduced scattering [1]. Typically, the anapole states are observed in standalone dielectric nanoobjects (disks, spheres, cuboids etc.). Recently, high-index dielectric metasurfaces with circular nanopores were shown by the three-dimensional finite-difference time-domain (FDTD) simulations to possess the anapole state resulting in the effective optical Kerr nonlinearity increase by orders of magnitude [2]. This metasurface can be realized as a freestanding dielectric metasurface (membrane) that can be elaborated now for visible and near-infrared ranges [3,4]. Particularly, for near-infrared range this freestanding metasurface may be fabricated from silicon.
As shown in [2] by the multipole decomposition of scattering cross sections, for the specific geometric parameters of the circular nanohole array metasurface the maximum of the dipole electric toroidal moment is observed. This peak coincides with the minimum of the total scattering and can be illustrated by the double toroidal distribution of electric field energy. The transmission spectra of the circular nanohole arrays display high transmission in the vicinity to the anapole state. The optical nonlinearity is enhanced by two orders of magnitude than that of the unstructured material in proximity to the anapole state due to the energy confinement within the nanostructure.
In this study, the influence of the nanopore shape on the anapole state of the array lattice element is investigated. The base wavelength for the simulations is selected as λ=1034 nm that is typical for the Yb:YAG lasers. Figure 1 depicts schematics of the the nanohole square and hexagonal array metasurface. Here b 4 or b 6 are the sides of the nanohole, a=500 nm is the lattice constant, h is the thickness of the metasurface which is equal to 200 nm in this work. The simulated linearly polarized Gaussian beam falls perpendicularly on the metasurface. The electric field of the incident beam is polarized along the x-axis. After FDTD simulations, there were found geometric parameters for elements of the lattice with a=500 nm and h=200 nm for the square, hexagonal and octagonal nanopores. The lattice elements are delineated by the blue-lined square in Fig. 1. Figure 2 illustrates the time-averaged electric | E | 2 and magnetic | H | 2 energy distributions at transverse section of the lattice element at h/2 thickness. The electric energy distributions have shapes consisting of two loops giving rise to the anapole state. These energy distributions are like those obtained for the circular nanohole arrays in Ref. [2]. The lattice elements of the square and octagonal shapes of the nanoholes have approximately two times larger electric energy enhancement near the edges of the pore than that of the hexagonal shape. But it seems that this could not be realized in practice because of the imperfectness of the nanotechnology.
The existence of the anapole mode can be confirmed by the multipole analysis. A detailed multipole decomposition obtained after three-dimensional FDTD simulations of the scattered fields is presented in Fig. 3. The multipole decomposition of scattering cross sections for a lattice element reveals that the dipole electric toroidal moment T and its intensity C T has a maximum in the scattering cross section spectrum near the wavelength of interest 1034 nm at these sizes. Simultaneously, the total scattering cross section C sca tot and the electric dipole cross section C sca p have minima at λ=1034 nm that is the scattering from the lattice element is suppressed. The description of the decomposition moments and the procedure is given in [2]. All the spectra are very similar.
The transmission spectra of the Si nanohole arrays with a=500 nm and h=200 nm are depicted in Fig. 4. For wavelengths above the anapole state the transmission rapidly arises. This is a typical behavior for the anapole mode. The similar behavior of the transmission was observed for the lattices of circular nanoholes [2]. The dip in proximity to the anapole state corresponds to a spike in the scattering and the negative second order refractive index [2]. Since the transmission near the anapole state for Si nanohole array is below unity the optimal geometric parameters of the metasurface are yet to be determined before its implementation on the experiment. For example, GaP nanohole arrays show sharper increase in the transmission at the anapole state [2].
As can be seen from Figures 2-4, the shape of the nanopore has a minor effect on the existence of the anapole mode. Mostly, the geometric parameters for the anapole state are changed. The areas of the nanohole transverse sections S for all of the shapes at the anapole state are close: for square S = 0.085 m 2 , for hexagon S = 0.102 m 2 , for octagon S = 0.098 m 2 . Thus, it can be implemented in the manufactured metasurfaces which may be far from ideal geometric shapes due to the imperfectness of the technology.
In conclusion, the effect of the nanopore shape of the silicon nanohole lattice array on the existence of the electric dipole toroidal mode is studied. It is shown that all the investigated nanohole shapes (square, hexagonal and octagonal) allow the possibility of the anapole state with high electromagnetic energy confinement.
The results were obtained with the use of IACP FEB RAS Shared Resource Center "Far Eastern Computing Resource" equipment (https://www.cc.dvo.ru).

Fig. 2. Time-average distributions of electric |E|^2 (left parts, red color) and magnetic |H|^2 (right parts, blue color) energy densities in the standalone Si lattice elements at the anapole modes (a = 500 nm, h = 200 nm, λ = 1034 nm) for the different types of the nanopores. The types of the nanohole and the polygon side sizes are displayed above. The distributions are calculated within the plane intersecting the geometric center of the disk and parallel to its base. The incident light beam is polarized along the vertical direction.

Fig. 3. Scattering cross section spectra for the multipole contributions (electric dipole C_sca^p, magnetic dipole C_sca^m, electric quadrupole C_sca^Qe, magnetic quadrupole C_sca^Qm), their sum C_sca^tot and the intensity of the electric dipole toroidal moment C_T for different Si lattice elements with a = 500 nm, h = 200 nm. The types of the nanohole and the nanohole transverse section side sizes are displayed above.

Fig. 4. Transmission spectra for the Si nanohole arrays of the different shape with a = 500 nm and h = 200 nm. The types of the nanohole and the side sizes are displayed above.
CONFLICT OF INTEREST

The authors declare that they have no conflicts of interest.
[1] R. Alaee, C. Rockstuhl, and I. Fernandez-Corbaton, Opt. Comm. 407, 17 (2018).
[2] A. V. Panov, Opt. Lett. 47, 2866 (2022).
[3] A. Karvounis, V. Nalla, K. F. MacDonald, and N. I. Zheludev, Adv. Mater. 30, 1707354 (2018).
[4] S. W. D. Lim, M. L. Meretska, and F. Capasso, Nano Lett. 21, 8642 (2021).
Fig. 1. Schematics of the simulated metasurfaces comprising lattice arrays of square or hexagonal nanoholes in a high refractive index slab. The Gaussian beam is incident normally on the metasurface.
| []
|
[
"Interpolation Macdonald operators at infinity",
"Interpolation Macdonald operators at infinity"
]
| [
"Cesar Cuenca "
]
| []
| []
| We study the interpolation Macdonald functions, remarkable inhomogeneous generalizations of Macdonald functions, and a sequence A 1 , A 2 , . . . of commuting operators that are diagonalized by them. Such a sequence of operators arises in the projective limit of finite families of commuting q-difference operators studied by Okounkov, Knop and Sahi. The main theorem is an explicit formula for the operators A k . Our formula involves the family of Hall-Littlewood functions and a new family of inhomogeneous Hall-Littlewood functions, for which we give an explicit construction and identify as a degeneration of the interpolation Macdonald functions in the regime q → 0. This article is inspired by the recent papers of Nazarov-Sklyanin on Macdonald and Sekiguchi-Debiard operators, and our main theorem is an extension of their results.where (Q HL λ (·; t −1 )) * is the adjoint of the operator of multiplication by Q HL λ (·; t −1 ), with respect to the Macdonald inner product, see Theorem 3.2 in the text. | 10.1016/j.aam.2018.07.003 | [
"https://arxiv.org/pdf/1712.08014v1.pdf"
]
| 119,157,001 | 1712.08014 | bdff25a1c34c807df963d5107642d5054cc6cd96 |
Interpolation Macdonald operators at infinity
21 Dec 2017
Cesar Cuenca
Interpolation Macdonald operators at infinity
21 Dec 2017
We study the interpolation Macdonald functions, remarkable inhomogeneous generalizations of Macdonald functions, and a sequence A 1 , A 2 , . . . of commuting operators that are diagonalized by them. Such a sequence of operators arises in the projective limit of finite families of commuting q-difference operators studied by Okounkov, Knop and Sahi. The main theorem is an explicit formula for the operators A k . Our formula involves the family of Hall-Littlewood functions and a new family of inhomogeneous Hall-Littlewood functions, for which we give an explicit construction and identify as a degeneration of the interpolation Macdonald functions in the regime q → 0. This article is inspired by the recent papers of Nazarov-Sklyanin on Macdonald and Sekiguchi-Debiard operators, and our main theorem is an extension of their results.where (Q HL λ (·; t −1 )) * is the adjoint of the operator of multiplication by Q HL λ (·; t −1 ), with respect to the Macdonald inner product, see Theorem 3.2 in the text.
Introduction
The theory of Macdonald functions and their associated operators has recently enjoyed considerable attention and found applications in various areas, see for example, [1,4,5,15,12,13] and references therein. In the present work, we study an inhomogeneous version of the Macdonald functions, called the interpolation Macdonald functions. More explicitly, for any partition λ, one can construct a symmetric function I λ (·; q, t) with coefficients in the field Q(q, t) and such that its top-degree homogeneous term is the Macdonald function parametrized by λ. The interpolation Macdonald function I λ (·; q, t) is certain limit of the interpolation (or shifted ) Macdonald polynomials I λ|N (x 1 , . . . , x N ; q, t) considered previously by Knop [6], Okounkov [16,18,17], Olshanski [19,20], Sahi [22], among others.
In [17], a finite hierarchy of q-difference operators diagonalizing the interpolation Macdonald polynomials I λ|N was exhibited; we prefer to use the notation from [20]. Explicitly, D 0 N = 1, D 1 N , . . . , D N N are linear operators, in the algebra of symmetric polynomials on x 1 , . . . , x N with coefficients in Q(q, t), given by
\[ \sum_{k=0}^{N} D^k_N z^k := \frac{1}{\prod_{1\le i<j\le N}(x_i-x_j)}\;\det_{1\le i,j\le N}\Big[\, x_j^{N-i-1}\big( (x_j t^{1-N}-1)\,t^{N-i}\,z\,T_{q,x_j} + (x_j+z) \big) \Big], \]
where T_{q,x_j} are the q-shift operators. They diagonalize the interpolation Macdonald polynomials I_{λ|N}(x_1, \dots, x_N; q, t), ℓ(λ) ≤ N; in fact,
\[ D^k_N\, I_{\lambda|N}(x_1,\dots,x_N;q,t) = e_k\big(q^{\lambda_1},\, q^{\lambda_2}t^{-1},\, \dots,\, q^{\lambda_N}t^{1-N}\big)\, I_{\lambda|N}(x_1,\dots,x_N;q,t), \]
where e k (y 1 , . . . , y N ) is an elementary symmetric polynomial.
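As a sanity check (not part of the paper), the eigenvalue relation can be verified directly with sympy in the smallest case N = 1, where the determinant formula above reduces to D^1_1 = (1 − 1/x) T_q + 1/x and, by the defining vanishing conditions recalled later in Section 2, I_{(r)|1}(x) = (x − 1)(x − q^{-1}) ··· (x − q^{1-r}).

import sympy as sp

x, q = sp.symbols('x q')

def T_q(f):
    """q-shift operator in the single variable x."""
    return f.subs(x, q * x)

def D1_1(f):
    # N = 1 specialization of the determinant formula: D^1_1 = (1 - 1/x) T_q + 1/x
    return (1 - 1 / x) * T_q(f) + f / x

def I_r(r):
    # monic degree-r polynomial vanishing at 1, q^{-1}, ..., q^{-(r-1)}
    return sp.Mul(*[x - q**(-s) for s in range(r)])

for r in range(4):
    I = I_r(r)
    assert sp.cancel(sp.together(D1_1(I) - q**r * I)) == 0
    print(f"D^1_1 I_({r})|1 = q^{r} I_({r})|1   checked")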
It is natural to ask if there is some kind of limit operator D^k = lim_{N→∞} D^k_N acting on symmetric functions that diagonalizes the interpolation Macdonald functions I_λ(·; q, t), and to ask for an explicit formula for such operator. The analogous problems for Macdonald and Sekiguchi-Debiard operators were answered by Nazarov-Sklyanin in [11, 12]. Their work is our starting point; we wanted to know if there is an extension of their formulas to the inhomogeneous setting of interpolation Macdonald operators. In this paper, using ideas from Nazarov-Sklyanin, we answer the question for interpolation Macdonald operators in the affirmative. By considering a simple renormalization of the operators D^k_N, to be denoted A^k_N, we can take a natural projective limit A^k := lim_{N→∞} A^k_N that is an operator acting on the ring of symmetric functions. Explicitly, these operators can be described nicely by the action of the generating function
\[ A_\infty(u;q,t) := 1 + \frac{A^1}{1-u} + \frac{A^2}{(1-u)(1-ut)} + \frac{A^3}{(1-u)(1-ut)(1-ut^2)} + \cdots \]
on the interpolation Macdonald functions:
\[ A_\infty(u;q,t)\,I_\mu(\cdot;q,t) = \prod_{i=1}^{\infty} \frac{q^{\mu_i}-t^{i-1}u}{1-t^{i-1}u}\cdot I_\mu(\cdot;q,t), \]
for all partitions µ = (µ 1 ≥ µ 2 ≥ . . . ). The definition of the operators {A k N } k in terms of {D k N } k , as well as the formal definition of the operators A k as limits of the finite ones A k N is made explicit in the text below, see Section 3.
The main result of this article is an explicit formula for the operators {A^k}_{k∈N}. The answer is given in terms of the dual Hall-Littlewood functions Q^{HL}_λ(·; t) and a new family of inhomogeneous Hall-Littlewood functions, that we denote by F^{HL}_λ(·; t); it involves the adjoints (Q^{HL}_λ(·; t^{-1}))* of the corresponding multiplication operators with respect to the Macdonald inner product, and is stated precisely as Theorem 3.2 in the text.
The symmetric function F HL λ (·; t) is a limit of a sequence of (a new family of) inhomogeneous Hall-Littlewood polynomials F HL λ (x 1 , . . . , x N ; t). These polynomials are defined by a formula involving a sum over the symmetric group, and resemble very much the usual Hall-Littlewood polynomials.
We also study these polynomials in more depth and prove that they are degenerations of interpolation Macdonald polynomials: F HL λ (x 1 , . . . , x N ; t) = lim q→0 I λ|N (x 1 , . . . , x N ; 1/q, 1/t).
Thus one can derive further properties for our polynomials F HL λ (x 1 , . . . , x N ; t), and for the corresponding functions F HL λ (·; t), as degenerations of known results for interpolation Macdonald polynomials. As examples, we derive closed-form formulas for the generating functions of onerow and one-column inhomogeneous Hall-Littlewood functions. As a consequence of the closed expression for the generating function of one-row inhomogeneous Hall-Littlewood functions, we obtain the following vertex operator form for A 1 :
\[ A^1 = \frac{t}{t-1}\oint_{|z|\ll 1}\frac{dz}{2\pi\sqrt{-1}\,z}\;\exp\!\left( \sum_{n=1}^{\infty}\frac{z^n(1-t^{-n})}{n}\,p_n \right)\exp\!\left( \sum_{n=1}^{\infty} z^{-n}(q^n-1)\frac{\partial}{\partial p_n} \right) - \frac{t}{t-1} + \frac{1-q}{1-t}\,\frac{\partial}{\partial p_1}, \]
where p_1, p_2, \dots are the Newton power sums, also known as the collective variables in the physics literature. The Macdonald polynomial M_{λ|N}(x_1, \dots, x_N; q, t) is the top-degree homogeneous component of the interpolation Macdonald polynomial I_{λ|N}(x_1, \dots, x_N; q, t), i.e.,
\[ M_{\lambda|N}(x_1,\dots,x_N;q,t) = \lim_{a\to+\infty} \frac{I_{\lambda|N}(a x_1,\dots,a x_N;q,t)}{a^{\deg I_{\lambda|N}}}. \tag{1.1} \]
As mentioned above, Nazarov-Sklyanin studied the same problem we are dealing with, but for Macdonald and Sekiguchi-Debiard operators, degenerations corresponding to Macdonald and Jack functions, see [10,11,12,13,14]. From the relation (1.1) between the Macdonald and interpolation Macdonald polynomials, we can deduce the main theorem of [12] (as well as its degeneration in [11]) from Theorem 3.2. At the combinatorial level, our main theorem is a refined Cauchy identity for interpolation Macdonald functions; the arguments to prove this combinatorial identity differ from those in the articles of Nazarov-Sklyanin. Lastly, let us mention that the operators considered in [10,13,14], form a different hierarchy of operators than the ones in papers [11,12], and they admit an explicit formula via a Lax matrix. It would be interesting to see if one can apply similar techniques to produce such a different hierarchy of operators for interpolation Macdonald functions. We end this introduction with the organization of this article. In Section 2, we recall some basics of symmetric functions from [8], and some facts about interpolation functions from [20]. The new material is Subsection 2.5, where we construct an inhomogeneous analogue to the Hall-Littlewood function. Next, in Section 3, we discuss the operators of Okounkov that diagonalize the interpolation Macdonald polynomials, define the limit of such operator and state the main theorem. The proof of the main theorem is based on the symbol of our operators with respect to the Macdonald inner product, and it reduces to a combinatorial identity, like in [11,12]. However, our proof of the resulting identity differs from these papers, and is carried out in Section 4. As a complement to the main result, in Section 5 we study the inhomogeneous Hall-Littlewood polynomials; our second main result states that they are limits of interpolation Macdonald polynomials as q → 0. As an application, we also find a vertex operator representation of the first operator A 1 of the hierarchy.
Acknowledgements
The author is grateful to Alexei Borodin, and especially to Grigori Olshanski for helpful conversations. In particular, Grigori Olshanski conjectured Theorem 5.11; sections 5.1 and 5.2 of the text are largely based on my discussions with him.
Symmetric Functions
Preliminaries
We assume that the reader is familiar with the language of symmetric polynomials and symmetric functions, as in [8,Ch. I]. We recall only a few notions and set our terminology.
For any N ∈ N, let Λ N,F be the algebra of symmetric polynomials on N variables x 1 , . . . , x N and with coefficients in the field F (also Λ 0,F := F). In what follows we only need F = Q, Q(t), or Q(q, t), for two formal parameters q, t. The substitution x N = 0 gives a homomorphism π N : Λ N,F → Λ N −1,F for any N ≥ 1. The projective limit of the chain {Λ N,F , π N }, in the category of graded algebras, is denoted Λ F and called the algebra of symmetric functions. Equivalently, Λ F is the polynomial ring on infinitely many variables p 1 , p 2 , . . . that are identified with the Newton power sums.
For any N ∈ N, the set of partitions λ of length ℓ(λ) ≤ N is denoted Y(N ). We also let Y(0) be the singleton containing only the empty partition, and Y := ∪ N ≥0 Y(N ) the set of all partitions. All familiar bases of Λ N,F are parametrized by Y(N ), for example the set
{m_λ(x_1, \dots, x_N) : λ ∈ Y(N)} of monomial symmetric polynomials,
\[ m_\lambda(x_1,\dots,x_N) := \sum_{1\le i_1<\dots<i_k\le N}\;\sum_{\sigma\in S_k} c_\lambda^{-1}\, x_{i_{\sigma(1)}}^{\lambda_1}\cdots x_{i_{\sigma(k)}}^{\lambda_k}, \tag{2.1} \]
is a basis of Λ N,Q . In the display (2.1), we used k instead of ℓ(λ), and c λ := m 1 (λ)! · m 2 (λ)! · · · , where m i = m i (λ) denotes the multiplicity of i in the partition λ. With this notation, we usually write λ = (1 m1 2 m2 · · · ). The monomial symmetric polynomials are stable: m λ (x 1 , . . . , x N , 0) = m λ (x 1 , . . . , x N ). This property leads us to consider the monomial symmetric function m λ = m λ (x 1 , x 2 , . . .) in Λ Q . Clearly {m λ : λ ∈ Y} is a basis of Λ Q . A different basis of Λ Q is given by {p µ := i≥1 p µi ∀µ ∈ Y}. In the next subsections, we recall other bases of Λ F , for F = Q(t) and Q(q, t), including those given by Hall-Littlewood and Macdonald functions.
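A small sympy illustration (not part of the paper) of formula (2.1) and of the stability property just mentioned, for λ = (2, 1):

from itertools import combinations, permutations
from math import factorial
import sympy as sp

def m_lambda(lam, xs):
    """Monomial symmetric polynomial from formula (2.1)."""
    k = len(lam)
    mult = {}
    for part in lam:
        mult[part] = mult.get(part, 0) + 1
    c = 1
    for v in mult.values():
        c *= factorial(v)
    total = 0
    for idx in combinations(range(len(xs)), k):
        for sigma in permutations(range(k)):
            term = 1
            for a in range(k):
                term *= xs[idx[sigma[a]]] ** lam[a]
            total += term
    return sp.expand(total / c)

x1, x2, x3 = sp.symbols('x1 x2 x3')
m3 = m_lambda((2, 1), [x1, x2, x3])
m2 = m_lambda((2, 1), [x1, x2])
print("m_(2,1)(x1,x2,x3) =", m3)
assert sp.expand(m3.subs(x3, 0) - m2) == 0      # stability: m_lambda(x1,...,xN,0) = m_lambda(x1,...,xN)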
Hall-Littlewood functions
Let N ∈ N, λ ∈ Y(N ). The Hall-Littlewood (to be abbreviated HL henceforth) polynomial
P^{HL}_λ(x_1, \dots, x_N; t) ∈ Λ_{N,Q(t)} is defined by, see [8, Ch. III],
\[ P^{HL}_\lambda(x_1,\dots,x_N;t) := \prod_{i\ge 0}\prod_{j=1}^{m_i}\frac{1-t}{1-t^j}\;\sum_{w\in S_N} w\!\left( \prod_{i=1}^{N} x_i^{\lambda_i} \prod_{1\le i<j\le N}\frac{x_i-t x_j}{x_i-x_j} \right), \tag{2.2} \]
where wf (x 1 , . . . , x N ) = f (x w(1) , . . . , x w(N ) ) for any permutation w ∈ S N and any function f (x 1 , . . . , x N ), λ = (1 m1 2 m2 . . . ), and m 0 = m 0 (λ) = N −ℓ(λ). If ℓ(λ) > N , we set P HL λ (x 1 , . . . , x N ; t) := 0.
It is known that P HL λ (x 1 , . . . , x N ; t) is a polynomial in the variables x 1 , . . . , x N , it has degree |λ| := i λ i , and has coefficients in Z[t]. It is also known that P HL ∅ (x 1 , . . . , x N ; t) = 1, which implies
\[ \sum_{w\in S_N} w\!\left( \prod_{1\le i<j\le N}\frac{x_i-t x_j}{x_i-x_j} \right) = \prod_{i=1}^{N}\frac{1-t^i}{1-t}. \tag{2.3} \]
The HL polynomials also have the stability property
\[ P^{HL}_\lambda(x_1,\dots,x_N,0;t) = \begin{cases} P^{HL}_\lambda(x_1,\dots,x_N;t), & \text{if } \lambda\in\mathbb{Y}(N),\\[2pt] 0, & \text{if } \lambda\notin\mathbb{Y}(N). \end{cases} \tag{2.4} \]
This stability allows one to define the HL function P HL λ (·; t) = P HL λ (x 1 , x 2 , . . . ; t) ∈ Λ Q(t) . The set {P HL λ (·; t) : λ ∈ Y} is a basis of Λ Q(t) . We shall also need the dual basis given by
Q HL λ (·; t) := b λ (t)P HL λ (·; t), ∀λ ∈ Y, where b λ (t) := i≥1 mi j=1 (1 − t j ). (2.5)
The basis {Q HL λ (·; t) : λ ∈ Y} is dual to {P HL λ (·; t) : λ ∈ Y} with respect to the inner product in (2.8) with q = 0; we shall not make use of this fact. This duality is equivalent to the following combinatorial identity, known as the Cauchy identity for HL polynomials:
\[ \sum_{\lambda\in\mathbb{Y}} P^{HL}_\lambda(x_1,x_2,\dots;t)\,Q^{HL}_\lambda(y_1,y_2,\dots;t) = \prod_{i\ge 1}\prod_{j\ge 1}\frac{1-t x_i y_j}{1-x_i y_j}. \tag{2.6} \]
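A brute-force sympy check (not part of the paper) of formula (2.2) for N ≤ 3, together with the normalization (2.3) (equivalently P^{HL}_∅ = 1) and the stability (2.4):

from itertools import permutations
import sympy as sp

t = sp.symbols('t')

def P_HL(lam, xs):
    """Hall-Littlewood polynomial from formula (2.2); lam is padded with zeros."""
    N = len(xs)
    lam = tuple(lam) + (0,) * (N - len(lam))
    mult = {}
    for part in lam:
        mult[part] = mult.get(part, 0) + 1
    prefactor = sp.Integer(1)
    for v in mult.values():
        for j in range(1, v + 1):
            prefactor *= (1 - t) / (1 - t**j)
    total = 0
    for w in permutations(range(N)):
        term = sp.Integer(1)
        for i in range(N):
            term *= xs[w[i]] ** lam[i]
        for i in range(N):
            for j in range(i + 1, N):
                term *= (xs[w[i]] - t * xs[w[j]]) / (xs[w[i]] - xs[w[j]])
        total += term
    return sp.cancel(prefactor * sp.together(total))

x1, x2, x3 = sp.symbols('x1 x2 x3')

assert sp.cancel(P_HL([], [x1, x2, x3]) - 1) == 0                 # Eq. (2.3)
P21_3 = P_HL([2, 1], [x1, x2, x3])
P21_2 = P_HL([2, 1], [x1, x2])
assert sp.cancel(P21_3.subs(x3, 0) - P21_2) == 0                  # stability (2.4)
print("P_(2,1)(x1,x2;t) =", sp.expand(P21_2))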
Macdonald functions
The Macdonald polynomials M_{λ|N}(x_1, \dots, x_N; q, t) ∈ Λ_{N,Q(q,t)}, λ ∈ Y(N), can be characterized by means of the Macdonald q-difference operators
\[ H^0_N := 1, \qquad H^r_N := t^{\binom r2}\sum_{\substack{I\subset\{1,\dots,N\}\\ |I|=r}}\;\prod_{i\in I}\prod_{j\notin I}\frac{t x_i - x_j}{x_i - x_j}\;\prod_{i\in I} T_{q,x_i}, \quad 1\le r\le N. \tag{2.7} \]
For any λ ∈ Y(N), the Macdonald polynomial M_{λ|N}(x_1, \dots, x_N; q, t) is the unique element of Λ_{N,Q(q,t)} of the form
\[ m_\lambda(x_1,\dots,x_N) + \sum_{\mu} c_\mu\, m_\mu(x_1,\dots,x_N), \]
where µ ranges over partitions in Y(N) with |µ| = |λ| and µ < λ in the dominance order, and moreover
\[ H^r_N\, M_{\lambda|N}(x_1,\dots,x_N;q,t) = e_r\big(q^{\lambda_1}t^{N-1},\dots,q^{\lambda_N}\big)\, M_{\lambda|N}(x_1,\dots,x_N;q,t), \qquad r=0,1,\dots,N, \]
where e_r(y_1, \dots, y_N) = \sum_{1\le i_1<\dots<i_r\le N} y_{i_1}\cdots y_{i_r} is an elementary symmetric polynomial.
For each N ∈ N, the set {M λ|N (x 1 , . . . , x N ; q, t) : λ ∈ Y(N )} is a basis of Λ N,Q(q,t) . The Macdonald polynomials are stable, i.e., M λ|N +1 (x 1 , . . . , x N , 0; q, t) = M λ|N (x 1 , . . . , x N ; q, t), which allows us to define the Macdonald functions M λ (·; q, t) ∈ Λ Q(q,t) . It follows that {M λ (·; q, t) : λ ∈ Y} is a basis of Λ Q(q,t) ; it is known that this basis is orthogonal with respect to the inner product (·, ·) q,t , defined by declaring
\[ (p_\mu, p_\nu)_{q,t} := \delta_{\mu,\nu}\,\prod_{i\ge 1}\big( i^{\,m_i(\mu)}\, m_i(\mu)! \big)\cdot \prod_{i=1}^{\ell(\mu)} \frac{1-q^{\mu_i}}{1-t^{\mu_i}} \qquad \forall \mu,\nu\in\mathbb{Y}. \tag{2.8} \]
Macdonald functions generalize HL functions, since the specialization q = 0, Λ_{Q(q,t)} → Λ_{Q(t)}, maps M_λ(·; q, t) to P^{HL}_λ(·; t).
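A minimal sympy check (not part of the paper) of the eigenvalue relation involving (2.7): for N = 2 and λ = (1) one has M_{(1)|2} = x_1 + x_2, and H^1_2 should act on it by e_1(q t, 1) = q t + 1.

import sympy as sp

x1, x2, q, t = sp.symbols('x1 x2 q t')

def H1_2(f):
    """First Macdonald q-difference operator for N = 2, from formula (2.7)."""
    term1 = (t * x1 - x2) / (x1 - x2) * f.subs(x1, q * x1)
    term2 = (t * x2 - x1) / (x2 - x1) * f.subs(x2, q * x2)
    return sp.cancel(sp.together(term1 + term2))

M = x1 + x2                                   # M_{(1)|2}
assert sp.cancel(sp.together(H1_2(M) - (q * t + 1) * M)) == 0
print("H^1_2 (x1 + x2) =", sp.expand(H1_2(M)))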
Interpolation Macdonald functions
We define the interpolation Macdonald functions in the notation of Olshanski [19,20]. They also appear in papers by Knop [6], Okounkov [16,17,18], and Sahi [22] 1 . Unlike the functions from previous subsections, the interpolation Macdonald functions are inhomogeneous. For any λ ∈ Y(N ), the interpolation Macdonald polynomial I λ|N (x 1 , . . . , x N ; q, t) is the unique element of Λ N,Q(q,t) satisfying the following conditions:
1. deg I_{λ|N}(x_1, \dots, x_N; q, t) = |λ|;
2. I_{λ|N}(q^{-µ_1}, q^{-µ_2}t, \dots, q^{-µ_N}t^{N-1}; q, t) = 0 for all µ ≠ λ with |µ| ≤ |λ|;
3. I_{λ|N}(q^{-λ_1}, q^{-λ_2}t, \dots, q^{-λ_N}t^{N-1}; q, t) ≠ 0;
4. I_{λ|N}(x_1, \dots, x_N; q, t) = m_λ(x_1, \dots, x_N) + \cdots,
where the dots in the last condition stand for a linear combination of monomial symmetric polynomials m µ (x 1 , · · · , x N ), with either |µ| < |λ|, or |µ| = |λ| and µ ≤ λ in the dominance order of partitions.
Items 3 and 4 of the definition can be replaced by
\[ I_{\lambda|N}(q^{-\lambda_1}, q^{-\lambda_2}t, \dots, q^{-\lambda_N}t^{N-1}; q, t) = C(\lambda;q,t), \tag{2.9} \]
where the normalization constant C(λ; q, t) is
\[ C(\lambda;q,t) := q^{-2n(\lambda')-|\lambda|}\, t^{\,n(\lambda)} \prod_{s\in\lambda}\big(1-q^{a(s)+1}t^{l(s)}\big). \tag{2.10} \]
In formula (2.10), we need to recall some notation: for any partition κ = (κ 1 , κ 2 , . . . ), we denote
\[ n(\kappa) := \sum_{i\ge 1}(i-1)\kappa_i = \sum_{j\ge 1}\binom{\kappa'_j}{2}, \]
and for any square in the Young diagram s = (i, j) ∈ κ, we let
a(s) := λ i − j, l(s) := λ ′ j − i, a ′ (s) := j − 1, l ′ (s) = i − 1,
be the arm length, leg length, coarm length and coleg length of s = (i, j), respectively. It turns out that the top-degree homogeneous component of I λ|N (x 1 , . . . , x N ; q, t) is the Macdonald polynomial M λ|N (x 1 , . . . , x N ; q, t). As a consequence, {I λ|N (x 1 , . . . , x N ; q, t) : λ ∈ Y(N )} is a basis of Λ N,Q(q,t) .
The relation between Okounkov's notation P * λ (x 1 , . . . , x N ; q, t) and ours is
I λ|N (x 1 , . . . , x N ; q, t) = P * λ (x 1 , x 2 /t, . . . , x N /t N −1 ; 1/q, 1/t).
Thus the translation of [16, (1.9)] to our notation is the special value
\[ I_{\mu|N}(a, at, \dots, at^{N-1}; q, t) = t^{\,n(\mu)}\cdot \prod_{s\in\mu} \frac{\big(1-q^{a'(s)}t^{\,N-l'(s)}\big)\big(a - q^{-a'(s)}t^{\,l'(s)}\big)}{1-q^{a(s)}t^{\,l(s)+1}}. \]
In particular, by using
s∈µ a ′ (s) = n(µ ′ ), s∈µ l ′ (s) = n(µ),
we obtain
\[ I_{\mu|N}(0^N; q, t) = I_{\mu|N}(\underbrace{0,\dots,0}_{N\ \text{zeroes}}; q, t) = (-1)^{|\mu|}\, q^{-n(\mu')}\, t^{\,2n(\mu)}\cdot \prod_{s\in\mu}\frac{1-q^{a'(s)}t^{\,N-l'(s)}}{1-q^{a(s)}t^{\,l(s)+1}}. \tag{2.11} \]
In the spirit of the classical theory of symmetric functions, we want to define an element of Λ_{Q(q,t)} that is a limit of the interpolation Macdonald polynomials I_{λ|N}(x_1, \dots, x_N; q, t), as N tends to infinity. The suitable stability property is the following:
\[ I_{\lambda|N+1}(x_1,\dots,x_N, t^N; q, t) = I_{\lambda|N}(x_1,\dots,x_N; q, t). \tag{2.12} \]
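A small sympy illustration (not part of the paper): for N = 2 and λ = (1) the defining conditions force I_{(1)|2} = x_1 + x_2 − (1 + t), since it is m_{(1)} plus a constant and the constant is fixed by the vanishing at (1, t); the values (2.9)-(2.11) and the stability (2.12) can then be checked directly.

import sympy as sp

x1, x2, q, t = sp.symbols('x1 x2 q t')

I_12 = x1 + x2 - (1 + t)          # I_{(1)|2}, as explained above
I_11 = x1 - 1                     # I_{(1)|1}

# vanishing at the point labelled by mu = (0, 0)
assert sp.expand(I_12.subs({x1: 1, x2: t})) == 0

# normalization (2.9): value at (q^{-1}, t) equals C((1); q, t) = q^{-1}(1 - q) from (2.10)
C = q**(-1) * (1 - q)
assert sp.simplify(I_12.subs({x1: 1/q, x2: t}) - C) == 0

# special value (2.11): for mu = (1), N = 2 it gives -(1 - t^2)/(1 - t) = -(1 + t)
assert sp.simplify(I_12.subs({x1: 0, x2: 0}) + (1 - t**2) / (1 - t)) == 0

# stability (2.12): substituting x2 = t recovers the N = 1 polynomial
assert sp.simplify(I_12.subs(x2, t) - I_11) == 0
print("all checks passed for I_(1)|2 =", I_12)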
Now consider the chain of algebra homomorphisms
\[ \mathbb{Q}(q,t) \xleftarrow{\;x_1=1\;} \Lambda_{1,\mathbb{Q}(q,t)} \xleftarrow{\;x_2=t\;} \Lambda_{2,\mathbb{Q}(q,t)} \leftarrow \cdots \leftarrow \Lambda_{N-1,\mathbb{Q}(q,t)} \xleftarrow{\;x_N=t^{N-1}\;} \Lambda_{N,\mathbb{Q}(q,t)} \xleftarrow{\;x_{N+1}=t^{N}\;} \cdots \]
and let Λ ′ Q(q,t) be the projective limit in the category of filtered algebras. Because of the stability property (2.12), the sequence {I λ|N (x 1 , . . . , x N ; q, t) : N ≥ ℓ(λ)} defines an element of Λ ′ Q(q,t) , for each λ ∈ Y.
A natural algebra isomorphism Λ Q(q,t) → Λ ′ Q(q,t) is defined by sending each Newton power sum p n , n ≥ 1, to some "regularization" of {p n (x 1 , . . . , x N ) + t nN + t n(N +1) + · · · : N ≥ 1}; formally, the map is the unique algebra homomorphism such that
Λ_{Q(q,t)} ∋ p_n \mapsto \Bigl\{ p_n(x_1, . . . , x_N) + \frac{t^{nN}}{1 − t^n} = x_1^n + . . . + x_N^n + \frac{t^{nN}}{1 − t^n} \Bigr\}_{N≥1} ∈ Λ′_{Q(q,t)}.
It is easily shown to be an isomorphism, see the proof of Proposition 2.7 for a similar statement. The element of Λ Q(q,t) that corresponds to
{I λ|N (x 1 , . . . , x N ; q, t) : N ≥ ℓ(λ)} ∈ Λ ′ Q(q,t)
is denoted by I λ (·; q, t) = I λ (x 1 , x 2 , . . . ; q, t) and called the interpolation Macdonald function parametrized by λ ∈ Y. The set {I λ (·; q, t) : λ ∈ Y} is an (inhomogeneous) basis of Λ Q(q,t) .
Inhomogeneous Hall-Littlewood functions
To write down an expression for the limits of interpolation Macdonald operators, we need a new family of symmetric functions, which are a sort of inhomogeneous version of HL functions. We study some of their properties, from a purely combinatorial point of view.
Definition 2.1. Let N ∈ N and λ ∈ Y(N) be a partition of length 0 ≤ ℓ(λ) = k ≤ N. Define the inhomogeneous Hall-Littlewood polynomial F^{HL}_λ(x_1, . . . , x_N; t) ∈ Λ_{N,Q(t)} by
F^{HL}_λ(x_1, . . . , x_N; t) := \prod_{i≥0} \prod_{j=1}^{m_i(λ)} \frac{1 − t}{1 − t^j} \sum_{w∈S_N} w\Biggl( \prod_{i=1}^{k} (1 − t^{1−N} x_i^{−1}) x_i^{λ_i} \prod_{1≤i<j≤N} \frac{x_i − t x_j}{x_i − x_j} \Biggr), (2.13)
where m_0(λ) = N − ℓ(λ). If λ ∈ Y \ Y(N), i.e. if ℓ(λ) > N, then set
F^{HL}_λ(x_1, . . . , x_N; t) := 0. (2.14)
Remark 2.2. When ℓ(λ) = 0, i.e., when λ = ∅, then F^{HL}_∅(x_1, . . . , x_N; t) = 1. When ℓ(λ) = N, then (2.2) and (2.13) imply
F^{HL}_λ(x_1, . . . , x_N; t) = \prod_{i=1}^{N} (1 − t^{1−N} x_i^{−1}) P^{HL}_λ(x_1, . . . , x_N; t).
Lemma 2.5. Let N ∈ N and λ ∈ Y(N) be a partition of length 0 ≤ ℓ(λ) = k ≤ N. Then
F^{HL}_λ(x_1, . . . , x_N; t) = \sum_{w∈S_N/S^λ_N} w\Biggl( \prod_{i=1}^{k} (1 − t^{1−N} x_i^{−1}) x_i^{λ_i} \prod_{\substack{1≤i<j≤N \\ λ_i>λ_j}} \frac{x_i − t x_j}{x_i − x_j} \Biggr),
where S λ N := {w ∈ S N : λ w(i) = λ i ∀i = 1, 2, . . . , N } and the sum above is over representatives w of the coset space S N /S λ N .
Proof. The proof follows exactly the argument of [8, Ch. III, (1.5)]. Details are left to the reader.
Proposition 2.6. Let N ∈ N, N ≥ 2, and λ ∈ Y. Then
F^{HL}_λ(x_1, . . . , x_{N−1}, t^{1−N}; t) = F^{HL}_λ(x_1, . . . , x_{N−1}; t). (2.15)
Proof. If ℓ(λ) > N , then both sides of the equality are zero, by definition (2.14).
If ℓ(λ) = N , then the right side of the equality is zero by (2.14). On the other hand, since ℓ(λ) = N , then the product over i inside the brackets of (2.13) is invariant under permutations of its variables by w ∈ S N , so that product can be factored out of the sum. It follows that
F^{HL}_λ(x_1, . . . , x_N; t) = \prod_{i≥0} \prod_{j=1}^{m_i(λ)} \frac{1 − t}{1 − t^j} \prod_{i=1}^{N} (1 − t^{1−N} x_i^{−1}) x_i^{λ_i} \sum_{w∈S_N} w\Biggl( \prod_{1≤i<j≤N} \frac{x_i − t x_j}{x_i − x_j} \Biggr).
It is clear from the formula above that the factor
1 − t 1−N x −1 N vanishes if x N = t 1−N , and therefore F HL λ (x 1 , . . . , x N −1 , t 1−N ; t) = 0. Finally assume 0 ≤ ℓ(λ) = k ≤ N − 1.
In this case, we can apply Lemma 2.5 to both sides of (2.15). When we apply it to the left side, we note that some terms in the sum over w ∈ S N /S λ N are zero. In fact, those terms corresponding to w ∈ S N /S λ N such that w −1 (N ) ∈ {1, 2, . . . , k} vanish because the factor
k i=1 ((1 − t 1−N x −1 w(i) )x λi w(i) ) equals zero when x N = t 1−N .
Thus the sum is actually over those w ∈ S N /S λ N with w −1 (N ) ∈ {k + 1, . . . , N }. Note that permutations of {k + 1, . . . , N } belong to S λ N since λ k+1 = . . . = λ N = 0. This means that each coset representative w ∈ S N /S λ N can be chosen so that w(N ) = N , and therefore the restriction w| SN−1 runs naturally over elements of
S_{N−1}/S^λ_{N−1}, as w runs over S_N/S^λ_N. Finally observe that w(N) = N implies
\prod_{i=1}^{k} (1 − t^{1−N} x_{w(i)}^{−1}) x_{w(i)}^{λ_i} \prod_{\substack{1≤i<j≤N \\ λ_i>λ_j}} \frac{x_{w(i)} − t x_{w(j)}}{x_{w(i)} − x_{w(j)}} \Bigg|_{x_N = t^{1−N}} = \prod_{i=1}^{k} (1 − t^{1−N} x_{w(i)}^{−1}) x_{w(i)}^{λ_i} \prod_{\substack{1≤i<j≤N−1 \\ λ_i>λ_j}} \frac{x_{w(i)} − t x_{w(j)}}{x_{w(i)} − x_{w(j)}} \prod_{i=1}^{k} \frac{x_{w(i)} − t^{2−N}}{x_{w(i)} − t^{1−N}} = \prod_{i=1}^{k} (1 − t^{2−N} x_{w(i)}^{−1}) x_{w(i)}^{λ_i} \prod_{\substack{1≤i<j≤N−1 \\ λ_i>λ_j}} \frac{x_{w(i)} − t x_{w(j)}}{x_{w(i)} − x_{w(j)}}
and we end up with the formula of Lemma 2.5 for the right side of (2.15).
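As a small illustration of the definition and of the stability property (added here; not part of the original proof), formula (2.13) with λ = (1) gives
F^{HL}_{(1)}(x_1, x_2; t) = x_1 + x_2 − 1 − t^{−1}, \qquad F^{HL}_{(1)}(x_1; t) = x_1 − 1,
the prefactor being equal to 1 in both cases; substituting x_2 = t^{1−2} = t^{−1} into the first polynomial indeed recovers the second, as predicted by (2.15).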
Using Proposition 2.6, we construct for each λ ∈ Y an element of Λ Q(t) that uniquely corresponds to the coherent sequence {F HL λ (x 1 , . . . , x N ; t) : N ≥ ℓ(λ)}. For any N ∈ N, consider the map
π^N_{N−1} : Λ_{N,Q(t)} \xrightarrow{x_N = t^{1−N}} Λ_{N−1,Q(t)} (2.16)
given by the specialization x N = t 1−N . Also, for any N ∈ N, consider the following (unital) algebra homomorphism specified by the action on the set of Newton power sums {p m : m ≥ 1}, which is a generator set for Λ Q(t) :
π^∞_N : Λ_{Q(t)} −→ Λ_{N,Q(t)}, \qquad π^∞_N(p_m) = p_m(x_1, . . . , x_N) + \frac{t^{−mN}}{1 − t^{−m}}, \quad \text{for all } m ≥ 1, (2.17)
where p_m(x_1, . . . , x_N) = x_1^m + . . . + x_N^m. The expression t^{−mN}/(1 − t^{−m}) in (2.17) comes from the geometric sum (t^{−N})^m + (t^{−N−1})^m + . . . , if we assume that t > 1.
Proposition/Definition 2.7. Let λ ∈ Y be arbitrary. There exists a unique F^{HL}_λ(·; t) ∈ Λ_{Q(t)} such that π^∞_N F^{HL}_λ(·; t) = F^{HL}_λ(x_1, . . . , x_N; t) for all N ∈ N. We call such unique element the inhomogeneous Hall-Littlewood function F^{HL}_λ(·; t) parametrized by λ. The set {F^{HL}_λ(·; t) : λ ∈ Y} is a basis of Λ_{Q(t)}.
Proof. The proof is similar to that of [20,Prop. 2.8]. We repeat the proof, with necessary modifications, for the reader's convenience.
Let Λ ′ Q(t) be the projective limit of the chain {Λ N,Q(t) , π N +1 N } in the category of filtered algebras, and for each
N ∈ N, let (π ∞ N ) ′ : Λ ′ Q(t) → Λ N,Q(t) be the projection. Next, define a homomorphism π : Λ Q(t) → Λ ′ Q(t)
specified by the action on the set of Newton power sums {p n : n ≥ 1}, as follows:
Λ_{Q(t)} ∋ p_m \mapsto \Bigl\{ p_m(x_1, . . . , x_N) + \frac{t^{−mN}}{1 − t^{−m}} = x_1^m + . . . + x_N^m + \frac{t^{−mN}}{1 − t^{−m}} \Bigr\}_{N≥1} ∈ Λ′_{Q(t)}.
Note that the sequence above determines an element of Λ ′ Q(t) because of the coherence relation
\Bigl( x_1^m + . . . + x_{N−1}^m + (t^{1−N})^m \Bigr) + \frac{t^{−mN}}{1 − t^{−m}} = \bigl( x_1^m + . . . + x_{N−1}^m \bigr) + \frac{t^{−m(N−1)}}{1 − t^{−m}}.
Observe that for any fixed d ∈ N, the map π^{N+1}_N induces a linear isomorphism between the subspaces of degree ≤ d in Λ_{N,Q(t)} and Λ_{N−1,Q(t)}, provided that N > d. It follows that π : Λ_{Q(t)} → Λ′_{Q(t)} is an algebra isomorphism. Lastly note that (π^∞_N)′ ∘ π : Λ_{Q(t)} → Λ_{N,Q(t)} maps each p_m to p_m(x_1, . . . , x_N) + t^{−mN}/(1 − t^{−m}), meaning that the composition coincides with the map π^∞_N in (2.17). Due to the stability property of Proposition 2.6, the sequence {F^{HL}_λ(x_1, . . . , x_N; t)}_N defines an element of Λ′_{Q(t)}. Let F^{HL}_λ(·; t) be the corresponding element of Λ_{Q(t)}. From the discussion above, it is clear that it has the required property. The uniqueness is a consequence of the fact that π :
Λ Q(t) → Λ ′ Q(t) is an isomorphism. Finally, recall that for each N ∈ N the top homogeneous part of F HL λ (x 1 , . . . , x N ; t) is the HL polynomial P HL λ (x 1 , . . . , x N ; t); by the construction it follows that the top-degree homogeneous component of F HL λ (·; t) is the HL function P HL λ (·; t). Since {P HL λ (·; t) : λ ∈ Y} is a basis of Λ Q(t)
, then so is the set {F HL λ (·; t) : λ ∈ Y}, thus the last statement is proved.
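For instance (an illustration added here), for λ = (1) one finds F^{HL}_{(1)}(·; t) = p_1 + \frac{t}{1 − t}, because
π^∞_N\Bigl( p_1 + \frac{t}{1 − t} \Bigr) = p_1(x_1, . . . , x_N) + \frac{t^{−N}}{1 − t^{−1}} + \frac{t}{1 − t} = p_1(x_1, . . . , x_N) − (1 + t^{−1} + · · · + t^{1−N}) = F^{HL}_{(1)}(x_1, . . . , x_N; t)
for every N ∈ N, the last expression being the N-variable analogue of the computation recorded after Proposition 2.6.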
Remark 2.8. Instead of π N N −1 and π ∞ N , consider the maps π N N −1 : Λ N,Q(t) → Λ N −1,Q(t) , x N = t N −1 , and π ∞ N : Λ Q(t) → Λ N,Q(t) , p m → p m (x 1 , . . . , x N ) + t mN /(1 − t m ) for all m ≥ 1.
A similar construction as that of Proposition/Definition 2.7 gives a new family of symmetric functions that we naturally denote by F HL λ (·; t −1 ).
Interpolation Macdonald operators and their limits 3.1 Operators for interpolation Macdonald polynomials
We recall the operators of Okounkov [17] in the notation of [20]. The interpolation Macdonald
operators D^1_N, D^2_N, . . . , D^N_N on Λ_{N,Q(q,t)} are defined by the equations
D_N(z; q, t) := 1 + \sum_{k=1}^{N} D^k_N z^k, \qquad D_N(z; q, t) := \frac{1}{V(x_1, . . . , x_N)} \det_{1≤i,j≤N} \Bigl[ x_j^{N−i−1} \bigl( (x_j t^{1−N} − 1) t^{N−i} z T_{q,x_j} + (x_j + z) \bigr) \Bigr], (3.1)
where V(x_1, . . . , x_N) := \prod_{1≤i<j≤N} (x_i − x_j) is the Vandermonde determinant, and {T_{q,x_j}}_{1≤j≤N} are the q-shift operators, given by (T_{q,x_j} f)(x_1, . . . , x_N) := f(x_1, . . . , x_{j−1}, q x_j, x_{j+1}, . . . , x_N).
Observe that D 1 N , . . . , D N N depend on q, t, but we suppress them from the notation. They diagonalize the interpolation Macdonald polynomials
{I_{µ|N}(x_1, . . . , x_N; q, t) : µ ∈ Y(N)}; in fact,
D_N(z; q, t) I_{µ|N}(x_1, . . . , x_N; q, t) = \prod_{i=1}^{N} (1 + q^{µ_i} t^{1−i} z) · I_{µ|N}(x_1, . . . , x_N; q, t) \qquad ∀ µ ∈ Y(N). (3.2)
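As a small check (added here, not in the original), take N = 1 and µ = (1): formula (3.1) reads D_1(z; q, t) f(x_1) = x_1^{−1}\bigl[ (x_1 − 1) z f(q x_1) + (x_1 + z) f(x_1) \bigr], and applying it to I_{(1)|1}(x_1; q, t) = x_1 − 1 yields (1 + qz)(x_1 − 1), in agreement with the eigenvalue \prod_{i=1}^{1}(1 + q^{µ_i} t^{1−i} z) = 1 + qz in (3.2).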
In particular, {D k N : 1 ≤ k ≤ N } is a pairwise commuting family of q-difference operators. We prefer to consider a renormalization of these operators, namely
A^1_N, A^2_N, . . . , A^N_N, given by
1 + \sum_{k=1}^{N} \frac{A^k_N}{(u; t)_k} = A_N(u; q, t) := \frac{D_N(−u^{−1}; q, t) (−u)^N t^{N(N−1)/2}}{(u; t)_N},
where (u; t)_k := \prod_{i=0}^{k−1} (1 − t^{i} u) is the usual Pochhammer symbol. From (3.1), we deduce
A_N(u; q, t) = \frac{1}{V(x_1, . . . , x_N) · (u; t)_N} \det_{1≤i,j≤N} \Bigl[ x_j^{N−i−1} \bigl( (x_j − t^{N−1}) T_{q,x_j} + t^{i−1} (1 − x_j u) \bigr) \Bigr]. (3.3)
They also diagonalize each I_{µ|N}(x_1, . . . , x_N; q, t); in fact, (3.2) yields
A_N(u; q, t) I_{µ|N}(x_1, . . . , x_N; q, t) = \prod_{i=1}^{N} \frac{q^{µ_i} − t^{i−1} u}{1 − t^{i−1} u} I_{µ|N}(x_1, . . . , x_N; q, t). (3.4)
As before, {A k N : 1 ≤ k ≤ N } is a pairwise commuting family of q-difference operators. By treating u as a complex variable and expanding the fractions 1/(1−t i−1 u) near u = 0, we obtain, in the right side of (3.4), an element of
Λ N,Q(q,t) [[u]]. Since {I µ|N (x 1 , . . . , x N ; q, t) : µ ∈ Y(N )} is a basis of Λ N,Q(q,t) , it follows that A N (u; q, t) : Λ N,Q(q,t) → Λ N,Q(q,t) [[u]]
is a well-defined operator.
Operators for interpolation Macdonald functions
We normalized D N (z; q, t) into A N (u; q, t) to obtain equation (3.4). The key property of this eigenrelation is that the factors entering the eigenvalue, namely (q µi − t i−1 u)/(1 − t i−1 u), equal 1 when µ i = 0. We can deduce, by using also that {I µ|N (x 1 , . . . , x N ; q, t) : µ ∈ Y(N )} is a basis of Λ N,Q(q,t) , the following coherence property
A_{N−1}(u; q, t) π^N_{N−1} = π^N_{N−1} A_N(u; q, t) : Λ_{N,Q(q,t)} → Λ_{N−1,Q(q,t)}[[u]]. (3.5)
Lemma 3.1. There exists a unique linear operator A_∞(u; q, t) : Λ_{Q(q,t)} → Λ_{Q(q,t)}[[u]] such that
A_N(u; q, t) π^∞_N = π^∞_N A_∞(u; q, t) : Λ_{Q(q,t)} → Λ_{N,Q(q,t)}, (3.6)
for all N ≥ 1. It is given by
A_∞(u; q, t) : Λ_{Q(q,t)} → Λ_{Q(q,t)}[[u]], \qquad I_µ(·; q, t) \mapsto \prod_{i=1}^{∞} \frac{q^{µ_i} − t^{i−1} u}{1 − t^{i−1} u} I_µ(·; q, t) \qquad ∀ µ ∈ Y. (3.7)
Proof. Observe that for any µ ∈ Y, the product in the display (3.7) is finite; in fact, the only terms unequal to 1 are those ranging from i = 1 to i = ℓ(µ). Since {I µ (·; q, t) : µ ∈ Y} is a basis of Λ Q(q,t) , the definition above completely determines the operator. Moreover, (3.7) and the definition of the interpolation Macdonald function I µ (·; q, t) easily imply
A N (u; q, t)π ∞ N I µ (·; q, t) = π ∞ N A ∞ (u; q, t)I µ (·; q, t) ∀µ ∈ Y.
Again, since {I µ (·; q, t) : µ ∈ Y} is a basis of Λ Q(q,t) , then the equality of operators (3.6) ensues.
The following explicit formula for A ∞ (u; q, t) is the main result of this paper.
Theorem 3.2. We can write
A_∞(u; q, t) = 1 + \frac{A_1}{(u; t)_1} + \frac{A_2}{(u; t)_2} + . . . , (3.8)
where A_1, A_2, · · · : Λ_{Q(q,t)} → Λ_{Q(q,t)} are given by
A_k = \sum_{ℓ(λ)=k} t^{λ_1+λ_2+...} F^{HL}_λ(·; t^{−1}) \bigl( Q^{HL}_λ(·; t^{−1}) \bigr)^{*}. (3.9)
In the formula above, (Q λ (·; t −1 )) * is the adjoint of the operator Λ Q(q,t) → Λ Q(q,t) of multiplication by Q λ (·; t −1 ) with respect to the Macdonald inner product (·, ·) q,t , and F λ (·; t −1 ) is the operator of multiplication by the function defined in Remark 2.8 (see also Proposition/Definition 2.7).
Remark 3.3. From (3.1) and [12, (1.16)], the top-degree (of degree zero) of the interpolation Macdonald operator D k N is the Macdonald q-difference operator H k N . It follows that the degree zero part of A k diagonalizes the Macdonald functions and is given by
A_k^{Macdonald} = \sum_{ℓ(λ)=k} t^{λ_1+λ_2+...} P^{HL}_λ(·; t^{−1}) \bigl( Q^{HL}_λ(·; t^{−1}) \bigr)^{*},
because the top homogeneous part of F^{HL}_λ(·; t^{−1}) is P^{HL}_λ(·; t^{−1})
. The Macdonald operators at infinity, in [12], are expressed slightly differently. Let us show how to obtain their formula from (3.8), (3.9). Since the Macdonald function M λ (·; q, t) is invariant under the simultaneous change of parameters (q, t) ↔ (1/q, 1/t), the top-degree of the operator A ∞ (u; 1/q, 1/t) also diagonalizes the Macdonald functions. These are, in fact, the operators considered in [12]. From Theorem 3.2, we can write the top-degree part of A ∞ (u; 1/q, 1/t) as
1 + \frac{A_1^{Macdonald}}{(u; t^{−1})_1} + \frac{A_2^{Macdonald}}{(u; t^{−1})_2} + . . . , \qquad \text{where} \qquad A_k^{Macdonald} = \sum_{ℓ(λ)=k} t^{−λ_1−λ_2−...} P^{HL}_λ(·; t) \bigl( Q^{HL}_λ(·; t) \bigr)^{−*}, (3.10)
and f − * is the adjoint of multiplication by f ∈ Λ Q(q,t) with respect to the inner product (·, ·) 1/q,1/t determined by
(p_λ, p_µ)_{1/q,1/t} = δ_{λ,µ} \prod_{i≥1} \bigl( i^{m_i(µ)} m_i(µ)! \bigr) · \prod_{i=1}^{ℓ(µ)} \frac{1 − (1/q)^{µ_i}}{1 − (1/t)^{µ_i}} = (t q^{−1})^{µ_1+µ_2+...} (p_λ, p_µ)_{q,t} \qquad ∀ λ, µ ∈ Y.
It follows that p − * n = (t/q) n p * n for all n ≥ 1 and, more generally,
f^{−*} = (t/q)^{\deg f} f^{*} for all homogeneous f ∈ Λ_{Q(q,t)}. It follows that (3.10) equals
A_k^{Macdonald} = \sum_{ℓ(λ)=k} q^{−λ_1−λ_2−...} P^{HL}_λ(·; t) \bigl( Q^{HL}_λ(·; t) \bigr)^{*},
which is exactly the formula in the Theorem of [12].
Proof of the Main Theorem
In this section, we prove Theorem 3.2. As in [11,12], the statement of Theorem 3.2 reduces to a combinatorial identity between the families of symmetric polynomials/functions {Q HL λ } λ and {F HL λ } λ . The identity to prove is the refined Cauchy identity in Proposition 4.2. Our proof of this proposition is new and yields, as a corollary, the main results of the aforementioned papers.
Formalities on completed tensor products
We have been using the sequence of variables X = (x 1 , x 2 , . . .) and Λ F for the algebra of symmetric functions on the set of variables X. We shall need a second sequence of variables Y = (y 1 , y 2 , . . .), which is why we write Λ X,F and Λ Y,F to distinguish the corresponding algebras of symmetric functions. We also write Λ X,N,F for the algebra of polynomials on x 1 , . . . , x N .
As it is usual, Λ X,F ⊗ Λ Y,F is the tensor product of these algebras, whose elements are of the form
\sum_{λ∈Y} c_λ \bigl( a_λ(x_1, x_2, . . .) ⊗ b_λ(y_1, y_2, . . .) \bigr), \qquad c_λ ∈ F, (4.1)
Here {a_λ(x_1, x_2, . . .) : λ ∈ Y} and {b_λ(y_1, y_2, . . .)
: λ ∈ Y} are bases of Λ X,F and Λ Y,F , and c λ = 0 for all but finitely many λ ∈ Y. Moreover we need the completed tensor product Λ X,F ⊗Λ Y,F , which is the algebra whose elements are of the form (4.1), except that now c λ can be nonzero for infinitely many λ ∈ Y. The most important element of Λ X,F ⊗Λ Y,F , for our purposes, is the (Macdonald) reproducing kernel
Π := \prod_{i≥1} \prod_{j≥1} \frac{(t x_i y_j; q)_∞}{(x_i y_j; q)_∞}, \qquad \text{where } (z; q)_∞ := (1 − z)(1 − zq) · · · . It satisfies
Π = \sum_{λ∈Y} P_λ(x_1, x_2, . . . ; q, t) Q_λ(y_1, y_2, . . . ; q, t).
Given an operator A on Λ F = Λ X,F , we can easily extend it to the tensor product Λ X,F ⊗ Λ Y,F and to the completed tensor product Λ X,F ⊗Λ Y,F , by making the operator act on the first coordinate. Similarly we can extend an operator B on Λ F = Λ Y,F to Λ X,F ⊗ Λ Y,F and Λ X,F ⊗Λ Y,F by making the operator act on the second coordinate. The new operators are denoted by the same letter A or B. This construction is used many times below.
Reduction to an identity of symmetric functions
Lemma 4.1 (Lemma from [12]). Let f = f (x 1 , x 2 , . . .) ∈ Λ Q(q,t) be arbitrary. Also let f denote the operator Λ Q(q,t) → Λ Q(q,t) of multiplication by f , and let f * be the adjoint of f with respect to the Macdonald inner product (·, ·) q,t in (2.8). Then
f * (Π) = f (y 1 , y 2 , . . .) · Π.
In the above equation, the operator f * and the operator of multiplication by f (y 1 , y 2 , . . .) act on Λ X,Q(q,t) ⊗Λ Y,Q(q,t) , as explained at the end of subsection 4.1.
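As an illustration (added here; not in the original), take f = p_n. With respect to (2.8) the adjoint of multiplication by p_n is p_n^{*} = n \frac{1 − q^n}{1 − t^n} \frac{∂}{∂p_n}, and applying it to the expansion Π = \exp\bigl( \sum_{n≥1} \frac{1 − t^n}{n(1 − q^n)} p_n(x_1, x_2, . . .) p_n(y_1, y_2, . . .) \bigr) (used again in (4.4) below) produces exactly p_n(y_1, y_2, . . .) · Π, as the lemma asserts.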
Assume that we have an operator
A on Λ_{Q(q,t)} = Λ_{X,Q(q,t)} such that A(Π) = 0. This implies
A\Bigl( \sum_{λ∈Y} P_λ(x_1, x_2, . . . ; q, t) Q_λ(y_1, y_2, . . . ; q, t) \Bigr) = \sum_{λ∈Y} A\bigl( P_λ(x_1, x_2, . . . ; q, t) \bigr) Q_λ(y_1, y_2, . . . ; q, t) = 0,
and so A P_λ(·; q, t) = 0, for all λ ∈ Y. Since {P_λ(·; q, t) : λ ∈ Y} is a basis of Λ_{Q(q,t)}, then A = 0.
From the previous discussion and Lemma 4.1, the main theorem is reduced to
A_k(Π) = \sum_{ℓ(λ)=k} t^{λ_1+λ_2+...} F^{HL}_λ(x_1, x_2, . . . ; t^{−1}) Q^{HL}_λ(y_1, y_2, . . . ; t^{−1}) · Π, (4.2)
as an equality in Λ X,Q(q,t) ⊗Λ Y,Q(q,t) . Next, multiply both sides of (4.2) by (u; t) −1 k = ((1 − u)(1 − tu) · · · (1 − t k−1 u)) −1 and add from k = 0 to ∞. It is then clear that (4.2) holds if and only if
A_∞(u; q, t)(Π) = \sum_{k=0}^{∞} \frac{1}{(u; t)_k} \sum_{ℓ(λ)=k} t^{λ_1+λ_2+...} F^{HL}_λ(x_1, x_2, . . . ; t^{−1}) Q^{HL}_λ(y_1, y_2, . . . ; t^{−1}) · Π (4.3)
holds as an equality in Λ X,Q(q,t) ⊗Λ Y,Q(q,t) [[u]] (after Taylor expanding each 1/(u; t) k near u = 0). Our goal is to prove (4.3). In the next subsection, we make a further reduction of this equality to a combinatorial identity involving only HL functions and the inhomogeneous HL polynomials.
Reduction to a refined Cauchy identity
Let us begin with an observation. If G ∈ Λ_{Q(t)} is such that π^∞_N G = 0 holds for all N ∈ N, then G = 0 (recall the maps π^∞_N : Λ_{Q(t)} → Λ_{N,Q(t)} were defined in (2.17)). Indeed, this can be deduced from the proof of Proposition/Definition 2.7. As discussed at the end of subsection 4.1, each operator π^∞_N can be extended to an operator of the form Λ_{X,Q(t)} ⊗ Λ_{Y,Q(t)}[[u]] → Λ_{X,N,Q(t)} ⊗ Λ_{Y,Q(t)}[[u]] by acting on the first coordinate (of each coefficient of a power u^k). A similar statement holds: if G ∈ Λ_{X,Q(t)} ⊗ Λ_{Y,Q(t)}[[u]] is such that π^∞_N G = 0 for all N ∈ N, then G = 0.
Therefore it follows that (4.3) holds if, for each N ∈ N, it also holds after applying to it the operator π ∞ N . As for the left side, we have π ∞ N A ∞ (u; q, t)(Π) = A N (u; q, t)π ∞ N (Π), by Lemma 3.1. Next, using the well-known identity [8, Ch. VI, (2.6)], we have 4) where Π N denotes
π ∞ N (Π) = π ∞ N exp ∞ n=1 1 − t n n(1 − q n ) p n (x 1 , x 2 , . . .)p n (y 1 , y 2 , . . .) = exp ∞ n=1 1 − t n n(1 − q n ) (p n (x 1 , . . . , x N ) + t −nN 1 − t −n )p n (y 1 , y 2 , . . .) = exp ∞ n=1 1 − t n n(1 − q n ) p n (x 1 , . . . , x N )p n (y 1 , y 2 , . . .) exp − ∞ n=1 p n (t 1−N y 1 , t 1−N y 2 , . . .) n(1 − q n ) = Π N · exp − ∞ n=1 p n (t 1−N y 1 , t 1−N y 2 , . . .) n(1 − q n ) = Π N · N i=1 1 (t 1−N y i ; q) ∞ ,(4.Π N := N i=1 j≥1 (tx i y j ; q) ∞ (x i y j ; q) ∞ .
We note that the second to last equality in the display (4.4) follows from [8, Ch. VI, (2.6)] after setting x N +1 = x N +2 = . . . = 0, whereas the last equality in display (4.4) follows from the same identity but now after setting t = 0, and then using the variables t 1−N y i instead of y i . Next we find an expression for the right side of (4.4) after applying π ∞ N to it. Since π ∞ N is a homomorphism of algebras, we obtain the same right side, except that we replace F HL λ (x 1 , x 2 , . . . ; t −1 ) and Π by
π ∞ N F HL λ (x 1 , x 2 , . . . ; t −1 ) = F HL λ (x 1 , . . . , x N ; t −1 ), π ∞ N (Π) = Π N · N i=1 1 (t 1−N y i ; q) ∞ .
After factoring out the factor N i=1 (t 1−N y i ; q) −1 ∞ from both sides, we have that the result of applying π ∞ N to (4.15) is equivalent to
Π −1 N A N (u; q, t)(Π N ) = ∞ k=0 1 (u; t) k ℓ(λ)=k t λ1+λ2+... F HL λ (x 1 , . . . , x N ; t −1 )Q HL λ (y 1 , y 2 , . . . ; t −1 ).
(4.5) We can still simplify the left side of (4.5). After expanding the determinant in (3.3), we have
A N (u; q, t) = 1 V (x 1 , . . . , x N )(u; t) N w∈SN ǫ(w) N i=1 x N −w(i)−1 i {(x i − t N −1 )T q,xi + t w(i)−1 (1 − x i u)}
where ǫ(w) is the signature of the permutation w. Also note that all N operators (indexed by i = 1, . . . , N ) pairwise commute, so the order in the product does not matter. From the evident
Π −1 N T q,xi Π N = ∞ l=1 1 − x i y l 1 − tx i y l , we deduce Π −1 N A N (u; q, t)(Π N ) = 1 V (x 1 , . . . , x N )(u; t) N × w∈SN ǫ(w) N i=1 x N −w(i)−1 i {(x i − t N −1 ) ∞ l=1 1 − x i y l 1 − tx i y l + t w(i)−1 (1 − x i u)} = 1 V (x 1 , . . . , x N )(u; t) N det 1≤i,j≤N x N −i−1 j {(x j − t N −1 ) ∞ l=1 1 − x j y l 1 − tx j y l + t i−1 (1 − x j u)} (4.6)
To summarize, Theorem 3.2 has been reduced to prove that (4.6) equals the right side of (4.5), for any N ∈ N. This equality follows from Proposition 4.2 below, after sending y i to t −1 y i , using the homogeneity t λ1+λ2+... Q HL λ (t −1 y 1 , t −1 y 2 , . . . ; t −1 ) = Q HL λ (y 1 , y 2 , . . . ; t −1 ), and then replacing t by t −1 . Equalities of this sort were called refined Cauchy identities in [25]. Some discussion of their work, in connection to the papers [12,24] and ours, is given in subsection 4.5.
Proof of the refined Cauchy identity
The goal of this subsection is to prove the following proposition which, by the arguments in the previous subsection, concludes the proof of the main theorem. Observe that the parameter q that gave rise to the Macdonald inner product and to the adjoint in formula (3.9) is gone.
1 + \sum_{k=1}^{N} \frac{1}{(u; t^{−1})_k} \sum_{ℓ(λ)=k} F^{HL}_λ(x_1, . . . , x_N; t) Q^{HL}_λ(y_1, y_2, . . . ; t) = \frac{1}{V(x_1, . . . , x_N) · (u; t^{−1})_N} \det_{1≤i,j≤N} \Bigl[ x_j^{N−i−1} \Bigl( (x_j − t^{1−N}) \prod_{l=1}^{∞} \frac{1 − t x_j y_l}{1 − x_j y_l} + t^{1−i} (1 − x_j u) \Bigr) \Bigr]. (4.7)
Denote by M the N × N matrix on the right side of (4.7). For any subset I ⊂ {1, 2, . . . , N }, define the N × N matrix M I by
(M I ) i,j := x N −i−1 j (x j − t 1−N ) ∞ l=1 1−txj y l 1−xj yj , if j ∈ I, x N −i−1 j t 1−i (1 − x j u), if j / ∈ I.1 − tx i y l 1 − x i y l × i∈I x −1 i (x i − t 1−N ) × j / ∈I (x −1 j t 1−N (1 − x j u)) × det(A I ), (4.9) where the N × N matrix A I is (A I ) i,j := x N −i j , if j ∈ I, (tx j ) N −i , if j / ∈ I.
Observe that A I is the Vandermonde determinant on variables {x j : j ∈ I} ⊔ {tx j : j / ∈ I}, and the ordering of these variables (in the matrix) is inherited from 1 < 2 < · · · < N . We deduce
det(A I ) V (x 1 , . . . , x N ) = t ( N −|I| 2 ) i∈I j / ∈I x i − tx j x i − x j . (4.10)
Plugging (4.10) and (2.6) into (4.9), we obtain
det(M I ) V (x 1 , . . . , x N ) = t ( N −|I| 2 ) i∈I (1 − t 1−N x −1 i ) j / ∈I (t 1−N (x −1 j − u)) i∈I j / ∈I x i − tx j x i − x j × ℓ(λ)≤|I| P HL λ ({x i : i ∈ I}; t)Q HL λ (y 1 , y 2 , . . . ; t).1 (u; t −1 ) N I⊆{1,...,N } t ( N −|I| 2 ) i∈I (1 − t 1−N x −1 i ) j / ∈I (t 1−N (x −1 j − u)) i∈I j / ∈I x i − tx j x i − x j ℓ(λ)≤|I| P HL λ ({x i : i ∈ I}; t)Q HL λ (y 1 , y 2 , . . . ; t).
(4.12)
We must prove that (4.12) equals the left side of (4.7). Both expressions are of the form λ∈Y(N ) G λ (x 1 , . . . , x N ; t)Q HL λ (y 1 , y 2 , . . . ; t). Thus let us choose any λ ∈ Y(N ) of length 0 ≤ ℓ(λ) = k ≤ N and show that the symmetric polynomial on x 1 , . . . , x N that accompanies Q HL λ (y 1 , y 2 , . . . ; t) in (4.12) equals F HL λ (x 1 , . . . , x N ; t)/(u; t −1 ) k , which accompanies Q HL λ (y 1 , y 2 , . . . ; t)
in the left side of (4.2). After simple algebraic manipulations, the identity to prove becomes I⊆{1,...,N } |I|≥k
t − (N +|I|−1)(N −|I|) 2 i∈I (1 − t 1−N x −1 i ) j / ∈I (x −1 j − u) i∈I j / ∈I x i − tx j x i − x j P HL λ ({x i : i ∈ I}; t) ? = (u; t −1 ) N (u; t −1 ) k F HL λ (x 1 , . . . , x N ; t).
(4.13) Before proving (4.13) in full generality, let us look first at the extreme cases. Case 1. ℓ(λ) = k = N . In this case, the sum in the left side of (4.13) has only one term corresponding to I = {1, 2, . . . , N }. Such term is
N i=1 (1 − t 1−N x −1 i )P HL λ (x 1 , . . . , x N ; t).
The latter expression equals F HL λ (x 1 , . . . , x N ; t), see Remark 2.2. Therefore (4.13) holds in this case.
t ( K−|J| 2 ) i∈J (1 − z i ) · j∈A\J (z j − q) · i∈J j∈A\J tz i − z j z i − z j = (q; t) K .
(4.14)
General case. 1 ≤ ℓ(λ) = k ≤ N − 1. The idea is to use the definition of the HL polynomial for P HL λ ({x i : i ∈ I}) and the definition of the inhomogeneous HL polynomial for F HL λ (x 1 , . . . , x N ; t), then write both sides of (4.13) as big sums and match terms of these sums with the help of Lemma 4.3.
The prefactors for both P HL λ ({x i : i ∈ I}; t) and F HL λ (x 1 , . . . , x N ; t) are very similar and almost match each other, except for factor corresponding to the part 0 of the partition. For P HL λ ({x i : i ∈ I}; t), this factor is
|I|−k j=1 (1 − t)/(1 − t j ), whereas for F HL λ (x 1 , . . . , x N ; t), this factor is N −k j=1 (1 − t)/(1 − t j )
. For any I ⊂ {1, . . . , N }, let S I be the group of permutations of elements of I; it is finite of size |I|!. Also denote by i 1 < i 2 < . . . < i k the smallest elements of I. With these considerations, the equality (4.13) we wish to prove becomes Next we break each of the two sides of the equality above into N (N − 1) · · · (N − k + 1) terms, and match those on the left with those on the right. Let R := (r 1 , . . . , r k ) ∈ {1, 2, . . . , N } k be an arbitrary tuple of size k. For our argument, let us fix one such k-tuple R. We say that σ ∈ S N is R-restricted if σ(1) = r 1 , . . . , σ(k) = r k . The right side of (4.15) with σ ∈ S N replaced by only those R-restricted σ ∈ S N is called the R-restricted right side of (4.15). Next consider a pair (I ⊆ {1, . . . , N }, w ∈ S N ) such that |I| ≥ k and I = {i 1 < . . . < i k < . . .} are the elements of I in increasing order. We say that (I, w) is R-restricted if {r 1 , . . . , r k } ⊂ I and w(i 1 ) = r 1 , . . . , w(i k ) = r k . The left side of (4.15) with the double sum over (I, w) replaced by only those R-restricted pairs is called the R-restricted left side of (4.15).
I⊆{1,...,N } |I|≥k w∈SI t − (N +|I|−1)(N −|I|) 2 |I|−k j=1 1 − t 1 − t j i∈I (1 − t 1−N x −1 i ) j / ∈I (x −1 j − u) × i∈I j / ∈I x i − tx j x i − x j w x λ1 i1 · · · x λ k i k i,j∈I i<j x i − tx j x i − x j ? = (u; t −1 ) N (u; t −1 ) k N −k j=1 1 − t 1 − t j × σ∈SN σ k i=1 (1 − t 1−N x −1 i )x λ1 1 · · · x λ k k 1≤i<j≤N x i − tx j x i − x j .
Let us simplify the R-restricted sides of the equality and then show they are equal to each other. Begin with the right side. Let I 0 := {r 1 , . . . , r k } and denote by S(I 0 ) be the set of bijective mappings σ ′ : {k + 1, . . . , N } → {1, . . . , N } \ {r 1 , . . . , r k }. Then the R-restricted right side of (4.15), corresponding to R = (r 1 , . . . , r k ), equals
(u; t −1 ) N (u; t −1 ) k i∈I0 (1 − t 1−N x −1 i ) k i=1 x λi ri 1≤i<j≤k x ri − tx rj x ri − x rj i∈I0 j / ∈I0 x i − tx j x i − x j × N −k j=1 1 − t 1 − t j σ ′ ∈S(I0) σ ′ k+1≤i<j≤N x i − tx j x i − x j .
(4.16)
The second line in the display (4.16) is equal to 1 by virtue of (2.3). It follows that the Rrestricted right side of (4.15) is equal to the first line in the display (4.16). We switch to simplifying the R-restricted left side of (4.15). Again set I 0 := {r 1 , . . . , r k }. It is clear that x λi ri 1≤i<j≤k (4.16). This implies that, for a fixed R = (r 1 , . . . , r k ), the R-restricted left side and R-restricted right side of (4.15) are equal. Since this holds for any R ∈ {1, . . . , N } k , adding these equalities over all N (N − 1) · · · (N − k + 1) distinct k-tuples R, the identity (4.15) follows. It only remains to prove the key Lemma 4.3.
w x λ1 i1 · · · x λ k i k i,j∈I i<j x i − tx j x i − x j = k i=1 x λi ri 1≤i<j≤k x ri − tx rj x ri − x rj i∈I0 j∈I\I0 x i − tx j x i − x j × w i,j∈I\{i1,...,i k } i<j x i − tx j x i − x j .x ri − tx rj x ri − x rj I0⊆I⊆{1,...,N } t − (N +|I|−1)(N −|I|) 2 i∈I0 j∈I\I0 x i − tx j x i − x j i∈I (1 − t 1−N x −1 i ) j / ∈I (x −1 j − u) i∈I j / ∈I x i − tx j x i − x j × |I|−k j=1 1 − t 1 − t j w ′ ∈S({i1,...,i k },I0) w ′ i,j∈I\{i1,...,i k } i<j x i − tx j x i − x j = k i=1 x λi ri 1≤i<j≤k x ri − tx rj x ri − x rj × I0⊆I⊆{1,...,N } t − (N +|I|−1)(N −|I|) 2 i∈I0 j∈I\I0 x i − tx j x i − x j i∈I (1 − t 1−N x −1 i ) j / ∈I (x −1 j − u) i∈I j / ∈I x i − tx j x i − x j = k i=1 x λi ri 1≤i<j≤k x ri − tx rj x ri − x rj i∈I0 (1 − t 1−N x −1 i ) i∈I0 j / ∈I0 x i − tx j x i − x j × I1⊆{1,...,N }\I0 t ( N −k−|I 0 | 2 ) i∈I1 (1 − t 1−N x −1 i ) j∈({1,...,N }\I0)\I1 (t 1−N x −1 j − ut 1−N ) i∈I1 j∈({1,...,N }\I0)\I1 x i − tx j x i − x j = k i=1 x λi ri 1≤i<j≤k x ri − tx rj x ri − x rj i∈I0 (1 − t 1−N x −1 i ) i∈I0 j / ∈I0 x i − tx j x i − x j × (ut 1−N ; t) N −k ,
Proof of Lemma 4.3. For convenience, let A = {1, 2, . . . , K}, so the variables are z 1 , z 2 , . . . , z K . Let us make the change of variables z i ↔ y −1 i for i = 1, 2, . . . , K and let the sum run over subsets I := {1, . . . , K} \ J. After minor algebraic manipulations, the identity to prove (4.14) becomes
\sum_{I⊆\{1,...,K\}} t^{\binom{|I|}{2}} \prod_{i∉I} \frac{y_i − 1}{1 − y_i q} \prod_{i∈I} \prod_{j∉I} \frac{t y_i − y_j}{y_i − y_j} = (q; t)_K · \prod_{i=1}^{K} (y_i^{−1} − q)^{−1}. (4.18)
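For K = 1 (a check added here), the left side of (4.18) consists of the two terms \frac{y_1 − 1}{1 − q y_1} (for I = ∅) and 1 (for I = {1}); their sum is \frac{(1 − q) y_1}{1 − q y_1} = (q; t)_1 · (y_1^{−1} − q)^{−1}, as claimed.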
We prove (4.18) with the help of the Macdonald q-difference operators (2.7). Begin with the equality
i∈I T q,yi K i=1 (y i − 1) = j / ∈I (y j − 1) i∈I (qy i − 1) = (−1) |I| K i=1 (1 − qy i ) i / ∈I y i − 1 1 − y i q ,
which implies that the left side of (4.18) equals
K i=1 (1 − qy i ) −1 × K k=0 (−1) k I⊂{1,...,K} |I|=k t ( k 2 ) i∈I j / ∈I ty i − y j y i − y j i∈I T q,yi K i=1 (y i − 1) = K i=1 (1 − qy i ) −1 × K k=0 (−1) k H k K K i=1 (y i − 1) = K i=1 (1 − qy i ) −1 × K k=0 (−1) k H k K K i=0 (−1) K−i e i (K k=0 (−1) k H k K K i=0 (−1) K−i e i (y 1 , . . . , y K ) = (q; t) K (y 1 y 2 · · · y N ). (4.19)
The essential property of the Macdonald q-difference operators is that they diagonalize the Macdonald polynomials, in particular they diagonalize the elementary symmetric polynomials e i (y 1 , . . . , y K ) = P (1 i ) (y 1 , . . . , y K ), [8,Ch. VI (4.8)]. The eigenvalue is also known; in fact, we have H k K e i (y 1 , . . . , y K ) = e k (qt K−1 , . . . , qt K−i , t K−i−1 , . . . , 1)e i (y 1 , . . . , y K ). Thus the left side of (4.19) equals
Some corollaries
By equating the top homogeneous components of the equality in Proposition 4.2, and using that the top-degree homogeneous component of F^{HL}_λ(x_1, . . . , x_N; t) is P^{HL}_λ(x_1, . . . , x_N; t), we obtain:
Corollary 4.4. The following is an equality in Λ_{X,N,Q(t)} ⊗ Λ_{Y,Q(t)}[[u]]:
\frac{1}{V(x_1, . . . , x_N)(u; t^{−1})_N} \det_{1≤i,j≤N} \Bigl[ x_j^{N−i} \Bigl( \prod_{l=1}^{∞} \frac{1 − t x_j y_l}{1 − x_j y_l} − t^{1−i} u \Bigr) \Bigr] = 1 + \sum_{k=1}^{N} \frac{1}{(u; t^{−1})_k} \sum_{ℓ(λ)=k} P^{HL}_λ(x_1, . . . , x_N; t) Q^{HL}_λ(y_1, y_2, . . . ; t). (4.20)
An equivalent version of (4.20) was needed for the result of Nazarov-Sklyanin, [12]. Their proof is different from ours; it uses induction on N , and the Pieri rule for HL polynomials. We do not have a Pieri rule for the inhomogeneous HL polynomials, so we devised a different method.
The identity of Corollary 4.4 was also proved by Wheeler and Zinn-Justin [25] (who introduced the name refined Cauchy identity) and by Warnaar [24]. The identity proved in both of these papers is when y N +1 = y N +2 = . . . , but this turns out to be equivalent to (4.20). Comparing (4.20) with the result of [25,24], one obtains the following nontrivial equality of degree N polynomials in u:
\frac{1}{V(x_1, . . . , x_N)} \det_{1≤i,j≤N} \Bigl[ x_j^{N−i} \Bigl( \prod_{l=1}^{N} \frac{1 − q x_j y_l}{1 − x_j y_l} − q^{1−i} u \Bigr) \Bigr] = \frac{\prod_{i=1}^{N} \prod_{j=1}^{N} (1 − q x_i y_j)}{V(x_1, . . . , x_N) V(y_1, . . . , y_N)} \det_{1≤i,j≤N} \Biggl[ \frac{1 − u q^{1−N} + (u q^{1−N} − q) x_i y_j}{(1 − q x_i y_j)(1 − x_i y_j)} \Biggr].
In a different direction, we can set u = 0 in Proposition 4.2. We obtain the following inhomogeneous Cauchy identity.
Corollary 4.5. The following is an equality in Λ X,N,Q(t) ⊗Λ Y,Q(t) :
\sum_{ℓ(λ)≤N} F^{HL}_λ(x_1, . . . , x_N; t) Q^{HL}_λ(y_1, y_2, . . . ; t) = \frac{1}{V(x_1, . . . , x_N)} \det_{1≤i,j≤N} \Bigl[ x_j^{N−i−1} \Bigl( (x_j − t^{1−N}) \prod_{l=1}^{∞} \frac{1 − t x_j y_l}{1 − x_j y_l} + t^{1−i} \Bigr) \Bigr]. (4.21)
By equating the top-degree homogeneous components of both sides of identity (4.21), we obtain the usual Cauchy identity (2.6). However, the right side of (4.21) does not have the usual factorized form of a reproducing kernel.
Inhomogeneous Hall-Littlewood polynomials
In this section, we study the inhomogeneous HL polynomials F^{HL}_λ(x_1, . . . , x_N; t). The main result is Theorem 5.11, which proves that they are limits of interpolation Macdonald polynomials in the regime q → 0. The statement of this result was conjectured by Grigori Olshanski, who also suggested a proof; the elaboration of this idea is in the first two subsections below.
Expansion in the basis of Hall-Littlewood polynomials
For n ∈ N 0 , let
φ n (t) := (1 − t)(1 − t 2 ) · · · (1 − t n ), if n ≥ 1, 1, if n = 0.
The t-analogue of the factorial is [n]! := φ_n(t)/(1 − t)^n. For integers n ≥ k ≥ 0, the t-binomial coefficient \binom{n}{k} is
\binom{n}{k} := \frac{[n]!}{[k]! [n − k]!} = \frac{φ_n(t)}{φ_k(t) φ_{n−k}(t)}.
For convenience, we extend the definition to all integers k ∈ Z by setting \binom{n}{k} := 0 for all k ∈ Z with k < 0 or k > n.
Next, for any partitions λ, µ ∈ Y(N ), define
τ_{λ/µ}(t; N) := (−t^{1−N})^{|λ|−|µ|} \binom{N − µ′_1}{λ′_1 − µ′_1} \prod_{i≥1} \binom{µ′_i − µ′_{i+1}}{λ′_{i+1} − µ′_{i+1}}. (5.1)
Note that τ λ/µ (t; N ) = 0 unless µ ⊆ λ and λ/µ is a vertical strip.
Proposition 5.1. For any λ ∈ Y(N ), we have
F^{HL}_λ(x_1, . . . , x_N; t) = \sum_{µ} τ_{λ/µ}(t; N) P^{HL}_µ(x_1, . . . , x_N; t).
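For example (a check added here), take λ = (1) and N = 2: then τ_{(1)/(1)}(t; 2) = 1 and τ_{(1)/∅}(t; 2) = (−t^{−1})\binom{2}{1} = −(1 + t^{−1}), so the proposition gives F^{HL}_{(1)}(x_1, x_2; t) = (x_1 + x_2) − (1 + t^{−1}), in agreement with the direct computation from Definition 2.1.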
We need some preparatory lemmas; in the rest of this subsection, N is a fixed positive integer. First, we slightly extend the definition of Hall-Littlewood polynomial. An N -tuple l = (l 1 , . . . , l N ) ∈ N N 0 is called an almost-partition if • for each i = 1, 2, . . . , N − 1, either l i ≥ l i+1 , or l i = l i+1 − 1;
• there do not exist indices 1 ≤ i < j < k ≤ N with l i < l j < l k .
For such l ∈ N^N_0, define
m_k(l) := \#\{1 ≤ i ≤ N : l_i = k\} \quad ∀ k ≥ 0, \qquad inv(l) := \#\{1 ≤ i < j ≤ N : l_i < l_j\},
v_l(t) := \frac{\prod_{i≥0} φ_{m_i(l)}(t)}{(1 − t)^N} = \prod_{i≥0} \prod_{j=1}^{m_i(l)} \frac{1 − t^j}{1 − t}, (5.2)
P^{HL}_l(x_1, . . . , x_N; t) := v_l(t)^{−1} \sum_{w∈S_N} w\Bigl( x_1^{l_1} · · · x_N^{l_N} \prod_{1≤i<j≤N} \frac{x_i − t x_j}{x_i − x_j} \Bigr). (5.3)
Given an almost-partition l ∈ N N 0 , let λ ∈ Y(N ) be the partition given by
λ := (1 m1(l) 2 m2(l) . . . ).
We say that λ ∈ Y(N ) is the partition linked to l ∈ N N 0 . In particular, we have
m k (l) = m k (λ) ∀k ≥ 1, m 0 (l) = N − ℓ(λ), v λ (t) = v l (t),
and the N -tuple λ = (λ 1 , . . . , λ N ) is obtained after applying inv(l) transpositions to l = (l 1 , . . . , l N ).
Lemma 5.2. Let l ∈ N N 0 be an almost-partition and λ ∈ Y(N ) be the partition linked to l, as defined above. Then P HL l (x 1 , . . . , x N ; t) = t inv(l) P HL λ (x 1 , . . . , x N ; t).
Proof. This is a very special case of [9,Lemma]. See also [7]. In fact, the Lemma in [9] gives a Hall-Littlewood polynomial expansion for P HL l (x 1 , . . . , x N ; t) and any N -tuple l of nonnegative integers, being the definition (5.3) extended in the obvious way. Such expansion is more complicated in the general case, but it simplifies greatly for almost-partitions.
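For example (a check added here), let N = 2 and l = (0, 1), so that v_l(t) = 1 and the partition linked to l is λ = (1). Definition (5.3) gives P^{HL}_{(0,1)}(x_1, x_2; t) = x_2 \frac{x_1 − t x_2}{x_1 − x_2} + x_1 \frac{x_2 − t x_1}{x_2 − x_1} = t(x_1 + x_2) = t · P^{HL}_{(1)}(x_1, x_2; t), and indeed a single transposition sorts (0, 1) into (1, 0), so inv(l) = 1.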
Next let λ ∈ Y(N ) be a partition with ℓ(λ) = k ≤ N , and I ⊆ {1, 2, . . . , k}. Consider the almost-partition p ∈ N N 0 , defined from (λ, I), via
p := (λ 1 − 1 {1∈I} , . . . , λ k − 1 {k∈I} , 0, . . . , 0 N −k zeroes
).
We write p = Π(λ, I) for this dependence. We denote the partition linked to p = Π(λ, I) by π(λ, I). Some evident relations are |I| = |λ| − |π(λ, I)|, v Π(λ,I) (t) = v π(λ,I) (t). Proof. Let d := λ 1 , and for each 1 ≤ j ≤ d, let X j := {i : λ i = j}. Then |X j | = m j (λ). Also, for I ⊆ {1, . . . , k}, let I j := I ∩ X j and i j := |I j |, for each 1 ≤ j ≤ d, so that 0 ≤ i j ≤ m j (λ) and i 1 + . . . + i d = |I|. If µ = π(λ, I), then the construction of the map π implies
m d (µ) = m d (λ) − i d , m d−1 (µ) = m d−1 (λ) − i d−1 + i d , · · · m 1 (µ) = m 1 (λ) − i 1 + i 2 . From m d (λ) = λ ′ d , m i (λ) = λ ′ i − λ ′ i+1
for i < d, and the analogous relations for µ, we deduce
i d = λ ′ d − µ ′ d , i d−1 = λ ′ d−1 − µ ′ d−1 , · · · i 1 = λ ′ 1 − µ ′ 1 .
The bounds 0 ≤ i j ≤ m j (λ) = λ ′ j − λ ′ j+1 then yield the interlacing relation
λ ′ 1 ≥ µ ′ 1 ≥ λ ′ 2 ≥ · · · ≥ λ ′ d−1 ≥ µ ′ d−1 ≥ λ ′ d . (5.5)
In particular, this implies that if µ = π(λ, I) for some I ⊆ {1, . . . , k}, then µ ⊆ λ and λ/µ is a vertical strip. Conversely, if λ/µ is a vertical strip, then the interlacing relation (5.5) is satisfied. Then the previous argument shows that any
I ⊆ {1, . . . , k} with |I ∩ X j | = λ ′ j − µ ′ j
gives π(λ, I) = µ.
Lemma 5.4. Let λ ∈ Y(N) be arbitrary with ℓ(λ) = k ≤ N, and let µ ∈ Y(N) be such that λ/µ is a vertical strip. Then
\sum_{\substack{I⊆\{1,...,k\}: \\ π(λ,I)=µ}} t^{inv(Π(λ,I))} = \prod_{i≥1} \binom{λ′_i − λ′_{i+1}}{µ′_i − λ′_{i+1}}. (5.6)
Proof. We begin with the simplest case, which is λ = (a k ) = (a, . . . , a Next, the only µ ∈ Y(N ) such that λ/µ is a vertical strip are those of the form µ = (a, . . . , a, a − 1, . . . , a − 1). Let us say that µ has (k − s) entries that are a and s entries that are a − 1. Then the sets I such that π(λ, I) = µ are exactly those of size s; it follows that I:π(λ,I)=µ t inv(Π(λ,I)) = For a general partition λ ∈ Y(N ) of length k, let us use the notation of the previous lemma. Let d := λ 1 , and for each 1 ≤ j ≤ d, let X j := {i : λ i = j}. Then |X j | = m j (λ). Also, for I ⊆ {1, . . . , k}, let I j := I ∩ X j and i j := |I j |, for each 1 ≤ j ≤ d, so that 0 ≤ i j ≤ m j (λ) and
i 1 + . . . + i d = |I|.
Denote also a = (a 1 , . . . , a N ) := Π(λ,
I) = (λ 1 − 1 {1∈I} , . . . , λ k − 1 {k∈I} , 0 N −k )
. It is clear that if (i, j) is a pair such that 1 ≤ i < j ≤ N , a i < a j , then i, j ∈ X r for some r. Thus, if we let s j + 1 be the smallest element of X j , so that X j = {s j + 1, . . . , s j + m j }, and t j the number of involutions needed to transform (a sj +1 , . . . , a sj +mj ) into a partition, it follows that the number of involutions needed to transform a into a partition is the sum t 1 + . . . + t d . By definition, this is denoted as inv(Π(λ, I)) = t 1 + . . . + t d . On the other hand, it is clear that t j is also the number of involutions needed to transform (1 − 1 {sj +1∈Ij } , . . . , 1 − 1 {sj +mj∈Ij } ) into a partition. Therefore inv(Π(λ, I)) =
|I d | = λ ′ d − µ ′ d , |I d−1 | = λ ′ d−1 − µ ′ d−1 , · · · |I 1 | = λ ′ 1 − µ ′ 1 .
Thus the sum in the left side of (5.6) is over those
I ⊆ {1, . . . , k} with |I j | = λ ′ j − µ ′ j .
Combining this observation with (5.7) and the case previously considered, we obtain I⊆{1,...,k}: π(λ,I)=µ
t inv(Π(λ,I)) = d j=1 Ij ⊆{1,...,mj(λ)} |Ij |=λ ′ j −µ ′ j t inv(Π((1 m j (λ) ),Ij −sj )) = d j=1 m j (λ) λ ′ j − µ ′ j .
Since m j (λ) = λ ′ j − λ ′ j+1 , the lemma follows.
Proof of Proposition 5.1. For λ ∈ Y(N ), let v λ (t) be the factor in front of the (inhomogeneous) HL polynomials (same formula as in (5.2)). Then, from the definition (5.3), we have
F HL λ (x 1 , . . . , x N ; t) = v λ (t) −1 w∈SN w k i=1 (1 − t 1−N x −1 i ) k i=1 x λi i 1≤i<j≤N x i − tx j x i − x j = v λ (t) −1 w∈SN w I⊆{1,...,k} (−t 1−N ) |I| k i=1 x λi−1 {i∈I} i 1≤i<j≤N x i − tx j x i − x j = v λ (t) −1 I⊆{1,...,k} (−t 1−N ) |I| w∈SN w k i=1 x λi−1 {i∈I} i 1≤i<j≤N x i − tx j x i − x j = v λ (t) −1 I⊆{1,...,k} (−t 1−N ) |I| · v Π(λ,I) (t)P HL Π(λ,I) (x 1 , . . . , x N ; t).
From Lemma 5.2, we have P HL Π(λ,I) (x 1 , . . . , x N ; t) = t inv(Π(λ,I)) P HL π(λ,I) (x 1 , . . . , x N ; t), and therefore, by using also (5.4), we obtain
F HL λ (x 1 , . . . , x N ; t) = v λ (t) −1 I⊆{1,...,k} (−t 1−N ) |I| · v Π(λ,I) (t)t inv(Π(λ,I)) P HL π(λ,I) (x 1 , . . . , x N ; t) = v λ (t) −1 µ P HL µ (x 1 , . . . , x N ; t) I:π(λ,I)=µ (−t 1−N ) |I| · v Π(λ,I) (t)t inv(Π(λ,I)) = v λ (t) −1 µ P HL µ (x 1 , . . . , x N ; t)(−t 1−N ) |λ|−|µ| v µ (t)
I:π(λ,I)=µ t inv(Π(λ,I)) .
To finish the proof of the proposition, it then suffices to show
(−t 1−N ) |λ|−|µ| v λ (t) −1 v µ (t)
I:π(λ,I)=µ t inv(Π(λ,I)) = τ λ/µ (t; N ).
From Lemma 5.4, the definition of τ λ/µ (t; N ), and the definition (5.2), the latter equation is equivalent to
φ N −µ ′ 1 (t) φ N −λ ′ 1 (t) i≥1 φ µ ′ i −µ ′ i+1 (t) φ λ ′ i −λ ′ i+1 (t) i≥1 φ λ ′ i −λ ′ i+1 (t) φ µ ′ i −λ ′ i+1 (t)φ λ ′ i −µ ′ i (t) = N − µ ′ 1 λ ′ 1 − µ ′ 1 i≥1 µ ′ i − µ ′ i+1 µ ′ i − λ ′ i+1 ,(5.φ λ ′ i −µ ′ i (t) = φ λ ′ 1 −µ ′ 1 (t) i≥1 φ λ ′ i+1 −µ ′ i+1 (t),
the left side of (5.8) can be written as
φ N −µ ′ 1 (t) φ N −λ ′ 1 (t)φ λ ′ 1 −µ ′ 1 (t) i≥1 φ µ ′ i −µ ′ i+1 (t) φ µ ′ i −λ ′ i+1 (t)φ λ ′ i+1 −µ ′ i+1 (t) = N − µ ′ 1 λ ′ 1 − µ ′ 1 i≥1 µ ′ i − µ ′ i+1 µ ′ i − λ ′ i+1 .
A degeneration of interpolation Macdonald polynomials
Let us begin with the Hall-Littlewood degeneration of the interpolation Macdonald polynomials, see [20]. which is a polynomial.
2. One has the following combinatorial formula
F^{HL}_µ(x_1, . . . , x_N; t) = \sum_{T∈Tab(µ,N)} ψ_T(t) \prod_{(i,j)∈µ} \bigl( x_{T(i,j)} − δ_{j,1} t^{T(i,j)−N−i+1} \bigr),
where Tab Proof. This is proved in [20,Lem. 9.2].
Next we degenerate the well known binomial formula for interpolation Macdonald polynomials, [16], in the limit regime q → 0. We need several lemmas. Lemma 5.6. For any µ ∈ Y(N ), lim q→0 q 2n(µ ′ )+|µ| · I µ|N (q −µ1 , q −µ2 t, . . . , q −µN t N −1 ; q, t) = t n(µ) .
Proof. From (2.9) and (2.10), we have q 2n(µ ′ )+|µ| I µ|N (q −µ1 , q −µ2 t, . . . , q −µN t N −1 ; q, t) = q 2n(µ ′ )+|µ| C(µ; q, t) = t n(µ) s∈µ (1 − q a(s)+1 t l(s) ) and the result follows because lim q→0 q a(s)+1 = 0.
Lemma 5.7. For any µ ∈ Y(N), we have
\lim_{q→0} q^{−n(µ′)} I_{µ|N}(0^N; 1/q, 1/t) = (−t^{1−N})^{|µ|} \binom{N}{µ′_1} \prod_{i≥1} \binom{µ′_i}{µ′_{i+1}}.
Proof. From (2.11), with (q, t) replaced by (1/q, 1/t), we obtain
q −n(µ ′ ) I µ|N (0 N ; 1/q, 1/t) = (−1) |µ| t −2n(µ) s∈µ q −a ′ (s) t l ′ (s)−N − 1 q −a(s) t −l(s)−1 − 1 = (−1) |µ| t −2n(µ) s∈µ t 1−N +l ′ (s)+l(s) 1 − q a ′ (s) t N −l ′ (s) 1 − q a(s) t l(s)+1 = (−t 1−N ) |µ| s∈µ 1 − q a ′ (s) t N −l ′ (s) 1 − q a(s) t l(s)+1 ,
where the middle equality holds because of the equalities s∈µ a(s) = s∈µ a ′ (s), whereas the last one holds because s∈µ l(s) = s∈µ l ′ (s) = n(µ). Therefore
lim q→0 q −n(µ ′ ) I µ|N (0 N ; 1/q, 1/t) = (−t 1−N ) |µ| s∈µ 1 − t N −l ′ (s) 1 {a ′ (s)=0} 1 − t l(s)+1 1 {a(s)=0} . (5.9)
The coarm length a ′ (s) vanishes if and only if s = (i, 1), for i = 1, . . . , µ ′ 1 . Therefore
s∈µ (1 − t N −l ′ (s) 1 {a ′ (s)=0} ) = µ ′ 1 i=1 (1 − t N −i+1 ) = (1 − t) µ ′ 1 · [N ]!/[N − µ ′ 1 ]!.
On the other hand, the arm length a(s) vanishes if and only if s = (i, µ i ), for i = 1, . . . , µ ′ 1 .
Let {1, 2, . . . , µ ′ 1 } = X 1 ⊔ X 2 ⊔ . . . , where X k := {1 ≤ i ≤ µ ′ 1 : µ i = k}; observe that |X k | = m k (µ) = µ ′ k − µ ′ k+1 . Clearly s=(i,j):i∈X k (1 − t l(s)+1 1 {a(s)=0} ) = s=(i,λi):i∈X k (1 − t l(s)+1 ) = (1 − t)(1 − t 2 ) · · · (1 − t |X k | ) = (1 − t) |X k | · [|X k |]! = (1 − t) µ ′ k −µ ′ k+1 · [µ ′ k − µ ′ k+1 ]!. Then s∈µ (1 − t l(s)+1 1 {a(s)=0} ) = (1 − t) µ ′ 1 k≥1 [µ ′ k − µ ′ k+1 ]
!. Therefore, from (5.9) and the previous simplifications:
lim q→0 q −n(µ ′ ) I µ|N (0 N ; 1/q, 1/t) = (−t 1−N ) |µ| · [N ]! [N − µ ′ 1 ]! k≥1 [µ ′ k − µ ′ k+1 ]! = (−t 1−N ) |µ| N µ ′ 1 i≥1 µ ′ i µ ′ i+1 .
For the next limiting statement, Lemma 5.9, we need a preparatory lemma. Proof. By using
n(µ ′ ) = N i=1 µ i 2 , n(λ ′ ) = N i=1 λ i 2 ,
the statement of the lemma is equivalent to
µ i 2 + λ i 2 + µ i − µi j=1 max(λ T (i,j) , j − 1) ≥ 0 ∀i = 1, 2, . . . , N,(5.11)
with equality if and only if λ i ∈ {µ i , µ i + 1} and λ T (i,j) = λ i .
Since
T (i, 1) ≤ T (i, 2) ≤ · · · ≤ T (i, µ i ), we have λ T (i,1) ≥ λ T (i,2) ≥ · · · ≥ λ T (i,λi) . Thus λ T (i,j) is decreasing in j, whereas j − 1 is increasing. This implies there exists 0 ≤ a i ≤ µ i such that max(λ T (i,j) , j − 1) = λ T (i,j) , if j ≤ a i , j − 1, if j > a i .Then we deduce µi j=1 max(λ T (i,j) , j − 1) = ai j=1 λ T (i,j) + µi j=ai+1 (j − 1) ≤ a i λ i + µ i 2 − a i 2 , (5.12) because T (i, j) > T (i − 1, j) > · · · > T (1, j) implies T (i, j) ≥ i and λ T (i,j) ≤ λ i ; equality holds if and only if λ i = λ T (i,j) for all 1 ≤ j ≤ a i .
From (5.12), the left side of (5.11) multiplied by two is at least equal to
λ i (λ i − 1) + 2µ i − 2a i λ i + a i (a i − 1) = (λ i − a i ) 2 + (2µ i − λ i − a i ). (5.13)
Since µ ⊆ λ, then λ i ≥ µ i ≥ a i , which implies that (5.13) is lower bounded by
(λ i − a i ) 2 + (2µ i − λ i − a i ) ≥ (λ i − a i ) 2 − (λ i − a i ) = (λ i − a i )(λ i − a i − 1) ≥ 0,
and equality holds if and only if µ i = a i and λ i = µ i or λ i = µ i + 1. Putting everything together, the lemma is proved.
Lemma 5.9. For any µ, λ ∈ Y(N ) with µ ⊆ λ, we have
\lim_{q→0} q^{n(µ′)+n(λ′)+|µ|} · I_{µ|N}(q^{−λ_1}, q^{−λ_2} t, . . . , q^{−λ_N} t^{N−1}; q, t) = t^{n(µ)} \prod_{i≥1} \binom{λ′_i − λ′_{i+1}}{λ′_i − µ′_i}. (5.14)
In particular, this limit is zero unless λ/µ is a vertical strip.
Proof.
Step 1. The combinatorial formula for interpolation Macdonald polynomials, see [17,Thm. III], is
I µ|N (q −λ1 , . . . , q −λN t N −1 ; q, t) = T ∈Tab(µ,N ) ψ T (q, t) (i,j)∈µ (q −λ T (i,j) t T (i,j)−1 − q 1−j t N −1+i−T (i,j) ).
We want to show that for all tableaux T ∈ Tab(µ, N ), the limit lim q→0 q n(µ ′ )+n(λ ′ )+|µ|
(i,j)∈µ (q −λ T (i,j) t i−1 − q 1−j t N −1+i−T (i,j) ) (5.15)
exists and we want to determine conditions on λ ⊇ µ and T ∈ Tab(µ, N ) for which the limit is nonzero. Write the prelimit expression above as q n(µ ′ )+n(λ ′ )+|µ|− (i,j)∈µ max(λ T (i,j) ,j−1) j) ). (5.16) It is clear that the second line of (5.16) has a limit as q → 0; in fact, this limit is
× (i,j)∈µ (q max(0, j−1−λ T (i,j) ) t i−1 − q max(0, λ T (i,j) −j+1) t N −1+i−T (i,(i,j)∈µ 1 {λ T (i,j) +1≥j} t i−1 − 1 {j≥λ T (i,j) −1} t N −1+i−T (i,j) . (5.17)
From Lemma 5.8, the first line of (5.16) has a limit as q → 0, and that limit is zero unless λ/µ is a vertical strip and λ T (i,j) = λ i for all (i, j) ∈ µ. So far, we have proved that if λ/µ is not a vertical strip, then lim q→0 q n(µ ′ )+n(λ ′ )+|µ| · I µ|N (q −λ1 , q −λ2 t, . . . , q −λN t N −1 ; q, t) = 0.
Moreover, if λ/µ is a vertical strip, then
lim q→0 q n(µ ′ )+n(λ ′ )+|µ| · I µ|N (q −λ1 , q −λ2 t, . . . , q −λN t N −1 ; q, t) = T ∈Tab(µ,N ) λ T (i,j) =λi ∀(i,j)∈µ ψ T (t) (i,j)∈µ 1 {λ T (i,j) +1≥j} t T (i,j)−1 − 1 {j≥λ T (i,j) −1} t N −1+i−T (i,j) . (5.18)
It remains to simplify the last expression. Assume in the remainder of the proof that λ/µ is a vertical strip.
Step 2. If T ∈ Tab(µ, N ) satisfies λ T (i,j) = λ i for any (i, j) ∈ µ, then λ T (i,j) +1 = λ i +1 > j. Thus each factor simplifies as (
1 {λ T (i,j) +1≥j} t T (i,j)−1 − 1 {j≥λ T (i,j) −1} t N −1+i−T (i,j) ) = t T (i,j)−1 .
Therefore, the second line of (5.18) is simplified to
T ∈Tab(µ,N ) λ T (i,j) =λi ∀(i,j)∈µ ψ T (t) · (i,j)∈µ t T (i,j)−1 . (5.19)
In the remaining steps we show that (5.19) equals the right side of (5.14). More explicitly, we show in Step 3 that for any T ∈ Tab(µ, N ) with λ T (i,j) = λ i for all (i, j) ∈ µ, one has ψ T (t) = 1. Finally, in Step 4, it is shown that T (i,j)∈µ t T (i,j)−1 , the sum being over T ∈ Tab(µ, N ) with λ T (i,j) = λ i , equals the right side of (5.14).
Step 3. Let T ∈ Tab(µ, N ) be such that λ T (i,j) = λ i for all (i, j) ∈ µ. We recall the definition of ψ T (t). The tableau T is given by a sequence µ = µ (N ) ≻ µ (N −1) ≻ · · · ≻ µ (1) ≻ µ (0) = ∅, where µ (k) is the set of boxes of µ filled with numbers ≤ k. Given ν ≻ κ, θ := ν − κ is a horizontal strip; set
ψ ν/κ (t) := j∈J (1 − t mj (κ) ),(5.20)
where J := {j : θ ′ j = 0, θ ′ j+1 = 1}. Then by definition
ψ T (t) := N i=1 ψ µ (i) /µ (i−1) (t).
For T as described above, and any i ≥ 1, we will argue that ψ µ (i) /µ (i−1) (t) = 1; this will show ψ T (t) = 1. For any 1 ≤ k ≤ µ ′ 1 , let
Y k := {i : µ i = k, λ i = k + 1}, Z k := {i : µ i = k = λ i }.
Since λ/µ is a vertical strip, we deduce We claim that for any i ∈ Y k , then T (i, j) = i. In fact, {i :
|Y k | = λ ′ k+1 − µ ′ k+1 , |Z k | = µ ′ k − λ ′ k+1 .λ i = k + 1} = Y k ⊔ Z k+1 . Say Z k+1 = {z 1 < . . . < z r }, Y k = {y 1 < .
. . < y r }, so that z r < y 1 . By definition of Young tableau, T (z 1 , j) < . . . < T (z r , j) < T (y 1 , j) < . . . < T (y r , j). But by assumption T (z 1 , j), . . . , T (z r , j), T (y 1 , j), . . . , T (y r , j) ∈ Y k ⊔ Z k+1 = {z 1 < . . . < z r < y 1 < . . . < y r }. In particular, we have T (y i , j) = y i for all i, i.e., T (i, j) = i for any i ∈ Y k , as claimed.
By a similar reasoning as above, we deduce: if i ∈ Z k and j = k, then T (i, j) = i; and if i ∈ Z k and Y k−1 = ∅, then also T (i, j) = i.
From the claims above we see that, possibly, the only boxes (i, j) ∈ µ with T (i, j) = i are those with j = µ i = λ i and for which there exist k > i with λ k = µ k + 1 = j. For a fixed j, there exist µ ′ j − λ ′ j+1 such boxes (i, j), and the numbers on those boxes can be chosen from the set {λ ′ j+1 +1, λ ′ j+1 +2, . . . , λ ′ j } in such a way that they are strictly increasing from bottom to top. See Figure 1 for an illustration in the case µ = (6, 5, 5, 4, 4, 4, 2, 2, 1), λ = (7, 5, 5, 5, 5, 5, 2, 2, 1, 1, 1).
From these considerations, it is clear that ψ µ (i) /µ (i−1) (t) = 1 for any i, because each set J in the definition (5.20) is the empty set. This is what we wished to prove.
Step 4. From the previous step, T (i, j) = i, unless j = µ i = λ i and λ ′ j−1 − λ ′ j > 0. In the latter case, T (i, j) − i could be any number in the set {0, 1, . . . , λ ′ j−1 − λ ′ j − 1}; we note also that for a given j, there are µ ′ j − λ ′ j+1 boxes like these.
Since (i,j)∈µ t i−1 = t µ2+2µ3+... = t n(µ) , we deduce T ∈Tab(µ,N ) λ T (i,j) =λi ∀(i,j)∈µ (i,j)∈µ t T (i,j)−1 = t n(µ) j≥1 0≤k1≤...≤k µ ′ j −λ ′ j+1 ≤λ ′ j−1 −λ ′ j −1 t k1+...+k µ ′ j −λ ′ j+1 .
The inner sum above can be calculated:
0≤k1≤...≤k µ ′ j −λ ′ j+1 ≤λ ′ j−1 −λ ′ j −1 t k1+...+k µ ′ j −λ ′ j+1 = h µ ′ j −λ ′ j+1 (1, t, . . . , t λ ′ j −λ ′ j+1 −1 ) = λ ′ j − λ ′ j+1 µ ′ j − λ ′ j+1
as in the proof of Lemma 5.4; the proof is now finished.
Proposition 5.10. For any λ ∈ Y(N ), we have
F HL λ (x 1 , . . . , x N ; t) = µ τ λ/µ (t; N )P HL µ (x 1 , . . . , x N ; t),
where the expressions τ λ/µ (t; N ) were defined in (5.1).
Proof. From the binomial formula for interpolation Macdonald polynomials in [16] with (1/q, 1/t) instead of (q, t), and using the well-known symmetry M µ (x; q, t) = M µ (x; 1/q, 1/t), we obtain
I λ|N (x 1 , . . . , x N ; 1/q, 1/t) = µ⊆λ q −n(λ ′ ) I λ|N (0 N ; 1/q, 1/t) q −n(µ ′ ) I µ|N (0 N ; 1/q, 1/t) q n(µ ′ )+n(λ ′ )+|µ| I µ|N (q −λ1 , . . . , q −λN t N −1 ; q, t) q 2n(µ ′ )+|µ| I µ|N (q −µ1 , . . . , q −µN t N −1 ; q, t) M µ|N (x 1 , . . . , x N ; q, t) .
(5.21)
We know
lim q→0 I λ|N (x 1 , . . . , x N ; 1/q, 1/t) = F HL λ (x 1 , . . . , x N ; t), lim q→0 M µ|N (x 1 , . . . , x N ; q, t) = P HL µ (x 1 , . . . , x N ; t).
From Lemmas 5.6, 5.7 and 5.9, the binomial formula in (5.21) has a limit as q tends to zero. The limiting coefficient that accompanies P HL µ (x 1 , . . . , x N ; t) is
(−t 1−N ) |λ| N λ ′ 1 i≥1 λ ′ i λ ′ i+1 (−t 1−N ) |µ| N µ ′ 1 i≥1 µ ′ i µ ′ i+1 × t n(µ) i≥1 λ ′ i −λ ′ i+1 λ ′ i −µ ′ i t n(µ) ,
which is easily seen to be equal to τ λ/µ (t; N ).
From Propositions 5.1 and 5.10, we obtain the main result of this section:
Theorem 5.11. For any λ ∈ Y(N), we have
F^{HL}_λ(x_1, . . . , x_N; t) = \lim_{q→0} I_{λ|N}(x_1, . . . , x_N; 1/q, 1/t) = \sum_{T∈Tab(λ,N)} ψ_T(t) \prod_{(i,j)∈λ} \bigl( x_{T(i,j)} − δ_{j,1} t^{T(i,j)−N−i+1} \bigr).
The polynomials E^{HL}_k(x_1, . . . , x_N; t) are inhomogeneous analogues of the elementary symmetric polynomials e_k(x_1, . . . , x_N) = \sum_{1≤i_1<...<i_k≤N} x_{i_1} · · · x_{i_k}.
Proof. The second equality in (5.22) is in the article of Okounkov [16, (1.6)]. Recall that Okounkov's notation and ours, for interpolation polynomials, are related by I_{µ|N}(x_1, . . . , x_N; q, t) = P^*_µ(x_1, x_2/t, . . . , x_N/t^{N−1}; 1/q, 1/t). Note that I_{(1^k)|N}(x_1, . . . , x_N; q^{−1}, t^{−1}) does not depend on q. Thus Theorem 5.11 yields the first equality E^{HL}_k(x_1, . . . , x_N; t) = F^{HL}_{(1^k)}(x_1, . . . , x_N; t) = I_{(1^k)|N}(x_1, . . . , x_N; q^{−1}, t^{−1}).
Special cases: one column partition and one row partition
Corollary 5.13.
\sum_{k=0}^{N} \frac{E^{HL}_k(x_1, . . . , x_N; t)}{(u + 1)(u + t^{−1}) · · · (u + t^{1−k})} = \prod_{i=1}^{N} \frac{1 + x_i/u}{1 + t^{1−i}/u}.
Proof. This is a rewriting of [16, (2.9)]. Let us observe that we can replace N by ∞ in the upper limit of the sum and product, if we also replace E HL k (x 1 , . . . , x N ; t) by E HL k (·; t).
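For example (a consistency check added here), comparing the coefficients of 1/u on both sides as u → ∞ gives E^{HL}_1(x_1, . . . , x_N; t) = (x_1 + · · · + x_N) − (1 + t^{−1} + · · · + t^{1−N}), which is exactly F^{HL}_{(1)}(x_1, . . . , x_N; t) as computed from Definition 2.1 or from Proposition 5.1.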
Next, let us denote H^{HL}_n(x_1, . . . , x_N; t) := F^{HL}_{(n)}(x_1, . . . , x_N; t) for the one-row partition (n).
Proposition 5.14. The following is the generating series for H^{HL}_n(x_1, . . . , x_N; t):
\sum_{n=0}^{∞} H^{HL}_n(x_1, . . . , x_N; t) u^n = \frac{1 − t^{1−N} u}{1 − t} \prod_{i=1}^{N} \frac{1 − x_i t u}{1 − x_i u} − \frac{t(1 − u)}{1 − t}, (5.25)
and consequently, if we denote H^{HL}_n(·; t) := F^{HL}_{(n)}(·; t), we have
\sum_{n=0}^{∞} H^{HL}_n(·; t) u^n = \frac{1}{1 − t} \prod_{i=1}^{∞} \frac{1 − x_i t u}{1 − x_i u} − \frac{t(1 − u)}{1 − t}. (5.26)
Proof. The generating function (5.25) is a consequence of (5.24) and (5.23).
To obtain (5.26) from (5.25), informally, treat t as a real number with t > 1 and send N to infinity. This agrees with the remarks made before Proposition/Definition 2.7, regarding the construction of the maps π ∞ n as limits of the maps π N N −1 • · · · • π n+1 n . Formally, one can write the right side of (5.26) in terms of the power sums {p n : n ≥ 1} and the right side of (5.25) in terms of the set of generators {p n (x 1 , . . . , x N ) : 1 ≤ n ≤ N }. One then checks that after replacing each p n by π ∞ N p n = p n (x 1 , . . . , x N ) + t −nN /(1 − t −n ), the right side of (5.26) becomes the right side of (5.25), cf. the argument in (4.4).
One can also derive (5.26) from [16, (2.10)] and Theorem 5.11.
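As a quick check (added here), setting u = 0 in (5.25) gives H^{HL}_0 = 1, and comparing the coefficients of u gives H^{HL}_1(x_1, . . . , x_N; t) = (x_1 + · · · + x_N) − (1 + t^{−1} + · · · + t^{1−N}) = F^{HL}_{(1)}(x_1, . . . , x_N; t), consistent with the value of E^{HL}_1 noted after Corollary 5.13.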
Proof of Proposition 5.16. Begin with Proposition 4.2, after setting x i → x i /(ut N −1 ), y i → y i ut N −1 , and using the homogeneity of the dual HL polynomials:
1 + N k=1 1 (u; t −1 ) k ℓ(λ)=k (ut N −1 ) λ1+λ2+... F HL λ (
x 1 ut N −1 , . . . ,
x N ut N −1 ; t)Q HL λ (y 1 , y 2 , . . . ; t) =
u − N (N −1) 2 t − N (N −1) 2 2 V (x 1 , . . . , x N )(u; t −1 ) N det 1≤i,j≤N x N −i−1 j (x j t 1−N − 1)t N −i (−u) + (x j − u) ∞ l=1
1 − tx j y l 1 − x j y l . where M Q λ (·; q, t) stands for the dual Macdonald function, which differs from M λ (·; q, t) by a constant not depending on λ and, more importantly for us, it specializes to Q HL λ (·; t) when we set q = 0. With a calculation that is similar to that of (4.6), we can write down an expression for the conjugation of the operator (5.31) by the right side of (5.32). Then we deduce that the result of acting with (5.31) on (5.32), and then dividing by (u; t −1 ) N , is 1 − x j y l 1 − tx j y l + (x j − u) .
(5.33) We now want to set q = 0 in equation (5.33). As for the right side, only the product in the first line of that display is affected, and it becomes N j=1 ∞ l=1 1−txjy l 1−xj y l . Then we can make the factor corresponding to j multiply each term in the j-th column of the matrix in the second line of (5.33). Thus the result of setting q = 0 in (5.33) is ℓ(λ)≤N D N (u; q, t)M λ|N (x 1 , . . . , x N ; q, t) q=0 (u; t −1 ) N · Q HL λ (y 1 , y 2 , . . . ; t) = 1 V (x 1 , . . . , x N ) · (u; t −1 ) N × det 1≤i,j≤N
x N −i−1 j (x j t 1−N − 1)t N −i (−u) + (x j − u) ∞ l=1 1 − tx j y l 1 − x j y l .
(5.34) By comparing (5.30) and (5.34), we deduce (ut N −1 ) λ1+λ2+... F HL λ (
x 1 ut N −1 , . . . ,
x N ut N −1 ; t) = (u; t −1 ) ℓ(λ) (u; t −1 ) N D N (u; q, t)M λ|N (x 1 , . . . , x N ; q, t) q=0 .
(5.35) To obtain (5.29), let us replace each x i by x i ut N −1 in (5.35). From the homogeneity of Macdonald polynomials, we have M λ (x 1 ut N −1 , . . . , x N ut N −1 ; q, t) = (ut N −1 ) λ1+λ2+... M λ (x 1 , . . . , x N ; q, t), so that the factors (ut N −1 ) λ1+λ2+... on both sides of (5.35) cancel out. After the change, the operator D N (u; q, t) becomes almost the operator in the right side of (5.29), except with T q,xj instead of T xj , but one still has to set q = 0. Since clearly (T q,xj f )(x 1 , . . . , x N ) q=0 = (T xj f )(x 1 , . . . , x N ) for any function f (x 1 , . . . , x N ), we obtain the desired result.
We lastly remark that the formula in the Proposition generalizes the special cases of Remark 2.2; they also serve as a check to our formula in the cases λ = ∅ and ℓ(λ) = N .
Remark 2. 3 .
3Our symmetric polynomials F HL λ (x 1 , . . . , x N ; t) look very similar to the symmetric rational functions G λ (u 1 , . . . , u N ; Ξ, S) of Borodin-Petrov[3], e.g., compare (2.1) with[3, (4.24)]. However, our polynomials seem not to be a degeneration of the family of functions of Borodin-Petrov.Remark 2.4. Borodin defined a different kind of inhomogeneous HL polynomial as a degeneration of the homogeneous version of the symmetric rational function G λ (u 1 , . . . , u N ; Ξ, S); see [2, Sec. 8.2] for their definition. It is not immediately clear from the definition, but F HL λ (x 1 , . . . , x N ; t) is a symmetric polynomial in the variables x 1 , . . . , x N , of degree |λ|, and with coefficients in Z[t, t −1 ]. This can be proved in the same fashion as one proves the analogous properties for HL polynomials, as defined in (2.2), see [8, Ch. III, Sec. 1] for details. From (2.2) and (2.13), we deduce that if ℓ(λ) ≤ N , the top-degree homogeneous component of F HL λ (x 1 , . . . , x N ; t) is the HL polynomial P HL λ (x 1 , . . . , x N ; t). It follows that {F HL λ (x 1 , . . . , x N ; t) : λ ∈ Y(N )} is a basis of Λ N,Q(t) .
Proposition 4. 2 .
2As an identity on Λ X,N,Q(t) ⊗Λ Y,Q(t)[[u]], the following holds
out common factors from each column, the determinant of M I equals det(M I ) = ∞ l=1 i∈I
definition of the matrix M I and multilinearity of the determinant, we have det(M ) = I⊆{1,...,N } det(M I ). Therefore the expression in the right side of (4.7) equals
Case 2 .
2ℓ(λ) = 0, i.e., λ = ∅. In this case we can use P HL ∅ ({x i : i ∈ I}; t) = F HL λ (x 1 , . . . , x N ; t) = 1, so that identity (4.13) is easily deduced from the following lemma for K = N , A = {1, 2, . . . , N }, and under the identification of variables x i ↔ t 1−N z −1 i ∀i = 1, . . . , N , u ↔ t N −1 q. The proof of Lemma 4.3 is postponed till the end of this subsection.
Lemma 4.3. Let K ∈ N, let A be a set of size K, and let (z a ) a∈A be a set of variables indexed by A. Moreover q, t are two additional formal parameters. Then
J⊆A
Let S({i 1
1, . . . , i k }, I 0 ) be the set of bijective maps I \ {i 1 , . . . , i k } → I \ I 0 ; the restriction of w ∈ S N to I \ {i 1 , . . . , i k }, to be denoted w ′ , is an element of S({i 1 , . . . , i k }, I 0 ). Now we can write the R-restricted left side of (
( 4 .
417) where the first equality in display (4.17) follows from (2.3), the second equality is a simple algebraic manipulation, and the third equality is a consequence ofLemma 4.3 applied to K = N − k, A = {1, . . . , N } \ I 0 , z i ↔ t 1−N x −1 i ∀i = 1, . . . , N , and q ↔ ut 1−N . Finally, observe (ut 1−N ; t) N −k = (u; t −1 ) N /(u; t −1 ) k , so the last line of display (4.17) equals the first line of display
y 1 , . . . , y K ) , where {H k K : k = 1, . . . , K} are the Macdonald q-difference operators, H 0 K := 1, and {e i (y 1 , . . . , y K ) : i = 0, . . . , K} are the elementary symmetric polynomials. It follows that (4.18) is equivalent to
k e k (qt K−1 , . . . , qt K−i , t K−i−1 , . . . , t, 1) and because K k=0 (−1) k e k (a 1 , . . . ,a K ) = K i=1 (1 − a i ), the coefficient of e i (y 1 , . . . , y K ) vanishes if 0 ≤ i < K (because the factor i s=1 (1 − qt K−s ) K−i r=1(1 − q r−1 ) contains 1 − q 0 = 0 unless i = K). The coefficient of e K (y 1 , . . . , y K ) = (y 1 y 2 · · · y K ) is K s=1 (1 − qt K−s ) = (q; t) K .
(5.4)
Lemma 5.3. Let λ, µ ∈ Y(N ) be arbitrary, and ℓ(λ) = k ≤ N . There exists I ⊆ {1, . . . , k} such that π(λ, I) = µ if and only if µ ⊆ λ and λ/µ is a vertical strip.
)
, for some a ≥ 1. Then given any I ⊆ {1, 2, . . . , k}, say I = {i 1 < . . . < i s }, we clearly have a ′ := Π(λ, I) = (a − 1 {1∈I} , . . . , a − 1 {k∈I} , 0 N −k ), and {(i, j) : i < j, a ′ i > a ′ j } = {(i, j) : i < j, i ∈ I, j / ∈ I}. Thus inv(a ′ ) = inv(Π(λ, I)) = s r=1 (k − i r + r − s).
The latter sum equals h s (1, t, . . . , t k−s ), where h r is the r-th complete homogeneous symmetric polynomial. There is an explicit formula for this evaluation, see e.g.[8, Ch. I.3, Ex. 1], and it yields the desired result: I:π(λ,I)=µ t inv(Π(λ,I)) = k s .
mj ) is the column partition of length m j and I j − s j := {i − s j : i ∈ I j } ⊆ {1, 2, . . . , m j }. To proceed, observe that by inspecting the proof of Lemma 5.3, π(λ, I) = µ if and only if
For any µ ∈ Y(N ), there exists a limit F HL µ (x 1 , . . . , x N ; t) = lim q→0 I µ|N (x 1 , . . . , x N ; 1/q, 1/t),
(µ, N ) is the set of semistandard Young tableaux of shape µ, filled with numbers in the set {1, 2, . . . , N }, and each ψ T (t) is the weight of the tableau T that shows up in the combinatorial formula for Hall-Littlewood polynomials; see [8, Ch. III, (5.9')].
Lemma 5.8. Let µ ⊆ λ be two partitions of length ≤ N . Let T be a semistandard Young tableau of shape µ and filled with numbers in {1, 2, . . . , N }. Then
n(\mu') + n(\lambda') + |\mu| - \sum_{(i,j)\in\mu} \max(\lambda_{T(i,j)}, j-1) \ge 0, \qquad (5.10)
and equality holds if and only if λ/µ is a vertical strip, and λ T (i,j) = λ i for all (i, j) ∈ µ.
. Filled squares (with numbers or letters) belong to µ, whereas the dark-grey squares belong to λ/µ. Any T ∈ Tab(µ, 15) with λ T (i,j) = λi has T (i, j) = i for most squares (i, j) ∈ µ. The numbers have been written in those squares. For the light-grey squares (i, j) ∈ µ, T (i, j) could take one of several values. In our example, i < j take values in {2, 3, 4, 5}, whereas k takes values in {9, 10, 11}.
For any 0 ≤ k ≤ N , define E HL k (x 1 , . . . , x N ; t) := 1 if k = 0, and F HL (1 k ) (x 1 , . . . , x N ; t) if 1 ≤ k ≤ N.
Proposition 5.12. For any 0 ≤ k ≤ N , we have
E^{HL}_k(x_1, \ldots, x_N; t) = I_{(1^k)|N}(x_1, \ldots, x_N; q^{-1}, t^{-1}) = \sum_{1\le i_1<\cdots<i_k\le N}\ \prod_{s=1}^{k} (x_{i_s} - t^{s+1-k-i_s}). \qquad (5.22)
Consequently, if we denote the symmetric function E HL k (·; t) := F HL (1 k ) (·; t), then E HL k (·; t) = I (1 k )|N (·; q −1 , t −1 ).
H
HL k (x 1 , . . . , x N ; t) := F HL (k) (x 1 , . . . , x N ; t), if k ≥ 0, in particular H HL 0 (x 1 , . . . , x N ; t) = 1. The polynomials H HL k (x 1 , . . . , x N ; t) are inhomogeneous analogues of complete homogeneous symmetric polynomials h k (x 1 . . . , x N ) = 1≤i1≤...≤i k ≤N x i1 · · · x i k .From Proposition 5.1, we haveH HL 1 (x 1 , . . . , x N ; t) = P HL (1) (x 1 , . . . , x N ; t) − t 1−N (1 − t N ) 1 − t , H HL k (x 1 , . . . , x N ; t) = P HL (k) (x 1 , . . . , x N ; t) − t 1−N P HL (k−1) (x 1 , . . . , x N ; t), for k ≥ 2.
j t 1−N − 1)t N −i (−u)T q,xj + (x j − u) .
this operator to the Cauchy identity for Macdonald polynomials, [8, Ch. VI, (4.13)] ℓ(λ)≤N M λ (x 1 , . . . , x N ; q, t) · M Q λ (y 1 , y 2 , . . . ; j y l ; q) ∞ (x j y l ; q) ∞ , (5.32)
D(
N (u; q, t)M λ|N (x 1 , . . . , x N ; q, t) (u; t −1 ) N · M Q λ (y 1 , y 2 , . . . ; q, t) txjy l ;q)∞ (xj y l ;q)∞ V (x 1 , . . . , x N ) · (u; t −1 )
are a generalization of HL polynomials. To define them, introduce the Macdonald difference operators
the infinite Pochhammer symbol. For example, by the Cauchy identity for Macdonald polynomials, [8, (4.13)], we can write it as
Actually, only the interpolation Macdonald polynomials appear in the papers[6,16,17,18,22]. It seems like Olshanski was the first who considered their lifts to the ring of symmetric functions. However, Sergeev and Veselov[23], as well as Rains[21], have previously studied lifts of Jack-Laurent polynomials and BCN -symmetric polynomials, respectively
Vertex operator representation for the first operatorAs an application of the material from the previous subsection, we give a formula for the operator A 1 , other than that in equation(3.9).From (5.26), we obtainNext, to the equation (5.24), set t −1 x i instead of x i , then t −1 instead of t, and finally send N to infinity. By recalling Q HL (n) (·;From the definition (2.8) of the Macdonald inner product, we have p * n = n 1−q n 1−t n ∂ ∂pn . After taking the adjoint of the last equation and using u −1 instead of u, we then obtainFrom Theorem 3.2 for k = 1, the operator A 1 is the constant coefficient of the product of generating functions on the left sides of (5.27) and (5.28). We deduce the following vertex operator representation for A 1 . It would be interesting to obtain similar formulas for all operators A k . Proposition 5.15.Another relation to Hall-Littlewood polynomialsObserve that the left side of (5.29) does not depend on the variable u, therefore neither does the right side of that equality. In particular, by setting u = 0 we havewhich together with (2.4) gives an explicit formula for F HL λ (x 1 , . . . , x N ; t) in terms of the set of HL polynomials {P HL λ ({x i : i ∈ I}; t)} I⊆{1,...,N },|I|≥ℓ(λ) .
Macdonald operators and homological invariants of the colored Hopf link. Hidetoshi Awata, Hiroaki Kanno, Journal of Physics A: Mathematical and Theoretical. 4437Hidetoshi Awata and Hiroaki Kanno, Macdonald operators and homological invariants of the colored Hopf link, Journal of Physics A: Mathematical and Theoretical 44 (2011), no. 37, 375201-375221.
On a family of symmetric rational functions. Alexei Borodin, Advances in Mathematics. 306Alexei Borodin, On a family of symmetric rational functions, Advances in Mathematics 306 (2017), 973-1018.
Higher spin six vertex model and symmetric rational functions. Alexei Borodin, Leonid Petrov, Selecta Mathematica (N.S.Alexei Borodin and Leonid Petrov, Higher spin six vertex model and symmetric rational functions, Selecta Mathematica (N.S.) (2016), 1-124.
A commutative algebra on degenerate CP1 and Macdonald polynomials. K Hashizume, A Hoshino, J Shiraishi, S Yanagida, B Feigin, Journal of Mathematical Physics. 509Hashizume K. Hoshino A. Shiraishi J. & Yanagida S. Feigin, B., A commutative algebra on degenerate CP1 and Macdonald polynomials, Journal of Mathematical Physics 50 (2009), no. 9, 095215-095215.
Interlacing adjacent levels of β-Jacobi corners processes. Vadim Gorin, Lingfu Zhang, arXiv:1612.02321PreprintVadim Gorin and Lingfu Zhang, Interlacing adjacent levels of β-Jacobi corners processes, (2016), Preprint, arXiv:1612.02321.
Symmetric and non-symmetric quantum Capelli polynomials. Friedrich Knop, Commentarii Mathematici Helvetici. 721Friedrich Knop, Symmetric and non-symmetric quantum Capelli polynomials, Commentarii Mathematici Helvetici 72 (1997), no. 1, 84-100.
On certain symmetric functions. D E Littlewood, Proceedings of the London Mathematical Society. 31D. E. Littlewood, On certain symmetric functions, Proceedings of the London Mathemati- cal Society 3 (1961), no. 1, 485-498.
Ian G Macdonald, Symmetric functions and Hall polynomials. New YorkOxford University Press Inc2 ed.Ian G. Macdonald, Symmetric functions and Hall polynomials, 2 ed., Oxford University Press Inc., New York, 1995.
A note on the multiplication of Hall functions. A O Morris, Journal of the London Mathematical Society. 11A. O. Morris, A note on the multiplication of Hall functions, Journal of the London Math- ematical Society 1 (1964), no. 1, 481-488.
Integrable Hierarchy of the Quantum Benjamin-Ono Equation, Symmetry, Integrability and Geometry. Maxim Nazarov, Evgeni Sklyanin, Methods and Applications. 9Maxim Nazarov and Evgeni Sklyanin, Integrable Hierarchy of the Quantum Benjamin-Ono Equation, Symmetry, Integrability and Geometry. Methods and Applications 9 (2013).
Sekiguchi-Debiard operators at infinity. Communications in Mathematical Physics. 3243, Sekiguchi-Debiard operators at infinity, Communications in Mathematical Physics 324 (2013), no. 3, 831-849.
Macdonald operators at infinity. Journal of Algebraic Combinatorics. 401, Macdonald operators at infinity, Journal of Algebraic Combinatorics 40 (2014), no. 1, 23-44.
Lax operator for Macdonald symmetric functions. Letters in Mathematical Physics. 1057, Lax operator for Macdonald symmetric functions, Letters in Mathematical Physics 105 (2015), no. 7, 901-916.
IMRN-2017-186.R1Cherednik Operators and Ruijsenaars-Schneider Model at Infinity, International Mathematics Research Notices. , Cherednik Operators and Ruijsenaars-Schneider Model at Infinity, International Mathematics Research Notices (2017), IMRN-2017-186.R1.
The shuffle algebra revisited. Andrei Negut, International Mathematics Research Notices. 22Andrei Negut, The shuffle algebra revisited, International Mathematics Research Notices (2013), no. 22, 6242-6275.
Andrei Okounkov, Binomial formula for Macdonald polynomials and applications. 4Andrei Okounkov, Binomial formula for Macdonald polynomials and applications, Mathe- matical Research Letters 4 (1997), no. 4, 533-553.
Macdonald polynomials: q-integral representation and combinatorial formula. Compositio Mathematica. 1122shifted, (shifted) Macdonald polynomials: q-integral representation and combinatorial for- mula, Compositio Mathematica 112 (1998), no. 2, 147-182.
A remark on the Fourier pairing and the binomial formula for the Macdonald polynomials. Functional Analysis and Its Applications. 362, A remark on the Fourier pairing and the binomial formula for the Macdonald polynomials, Functional Analysis and Its Applications 36 (2002), no. 2, 134-139.
An Analogue of Big q-Jacobi Polynomials in the Algebra of Symmetric Functions. Grigori Olshanski, Funktsional. Anal. i Prilozhen. 513Grigori Olshanski, An Analogue of Big q-Jacobi Polynomials in the Algebra of Symmetric Functions, Funktsional. Anal. i Prilozhen. 51 (2017), no. 3, 56-76.
Interpolation Macdonald polynomials and Cauchy-type identities. arXiv:1712Preprint, Interpolation Macdonald polynomials and Cauchy-type identities, (2017), Preprint arXiv:1712.?????
BCn-symmetric polynomials. Eric M Rains, Transformation groups. 101Eric M. Rains, BCn-symmetric polynomials, Transformation groups 10 (2005), no. 1, 63- 132.
Interpolation, integrality, and a generalization of Macdonald's polynomials. Siddhartha Sahi, International Mathematics Research Notices. 10Siddhartha Sahi, Interpolation, integrality, and a generalization of Macdonald's polynomi- als, International Mathematics Research Notices 10 (1996), 457-471.
Jack-Laurent symmetric functions. A N Sergeev, A P Veselov, Proceedings of the London Mathematical Society. 1111A. N. Sergeev and A. P. Veselov, Jack-Laurent symmetric functions, Proceedings of the London Mathematical Society 111 (2015), no. 1, 63-92.
Bisymmetric functions, Macdonald polynomials and basic hypergeometric series. S , Ole Warnaar, Compositio Mathematica. 1442S. Ole Warnaar, Bisymmetric functions, Macdonald polynomials and basic hypergeometric series, Compositio Mathematica 144 (2008), no. 2, 271-303.
Refined Cauchy/Littlewood identities and six-vertex model partition functions: III. Deformed bosons. Michael Wheeler, Paul Zinn-Justin, Advances in Mathematics. 299Michael Wheeler and Paul Zinn-Justin, Refined Cauchy/Littlewood identities and six-vertex model partition functions: III. Deformed bosons, Advances in Mathematics 299 (2016), 543-600.
| []
|
[
"State-dependent Riccati equation feedback stabilization for nonlinear PDEs",
"State-dependent Riccati equation feedback stabilization for nonlinear PDEs"
]
| [
"Alessandro Alla [email protected] ",
"Dante Kalise [email protected] ",
"Valeria Simoncini [email protected] ",
"A Alla ",
"D Kalise ",
"V Simoncini ",
"A Alla ",
"D Kalise ",
"V Simoncini ",
"\nDepartment of Mathematics\nSchool of Mathematical Sciences\nPUC-Rio\n22451-900Rio de JaneiroBrazil\n",
"\nAM 2 and Dipartimento di Matematica\nUniversity of Nottingham\nUniversity Park Campus, Not-tingham NG7 2RDUnited Kingdom\n",
"\nUniversità di Bologna\nPiazza di Porta San Donato 5, I-40127 Bologna, and IMATI-CNRPaviaItaly\n",
"\nIntroduction\n\n"
]
| [
"Department of Mathematics\nSchool of Mathematical Sciences\nPUC-Rio\n22451-900Rio de JaneiroBrazil",
"AM 2 and Dipartimento di Matematica\nUniversity of Nottingham\nUniversity Park Campus, Not-tingham NG7 2RDUnited Kingdom",
"Università di Bologna\nPiazza di Porta San Donato 5, I-40127 Bologna, and IMATI-CNRPaviaItaly",
"Introduction\n"
]
| []
| The synthesis of suboptimal feedback laws for controlling nonlinear dynamics arising from semi-discretized PDEs is studied. An approach based on the State-dependent Riccati Equation (SDRE) is presented for H 2 and H ∞ control problems. Depending on the nonlinearity and the dimension of the resulting problem, offline, online, and hybrid offline-online alternatives to the SDRE synthesis are proposed. The hybrid offline-online SDRE method reduces to the sequential solution of Lyapunov equations, effectively enabling the computation of suboptimal feedback controls for two-dimensional PDEs. Numerical tests for the Sine-Gordon, degenerate Zeldovich, and viscous Burgers' PDEs are presented, providing a thorough experimental assessment of the proposed methodology. | 10.1007/s10444-022-09998-4 | [
"https://arxiv.org/pdf/2106.07163v1.pdf"
]
| 235,422,025 | 2106.07163 | 1898d636bc77e7ff355f929245ff9dc8acbafcb3 |
State-dependent Riccati equation feedback stabilization for nonlinear PDEs
14 Jun 2021
Alessandro Alla [email protected]
Dante Kalise [email protected]
Valeria Simoncini [email protected]
Department of Mathematics, PUC-Rio, 22451-900 Rio de Janeiro, Brazil
School of Mathematical Sciences, University of Nottingham, University Park Campus, Nottingham NG7 2RD, United Kingdom
AM 2 and Dipartimento di Matematica, Università di Bologna, Piazza di Porta San Donato 5, I-40127 Bologna, and IMATI-CNR, Pavia, Italy
The synthesis of suboptimal feedback laws for controlling nonlinear dynamics arising from semi-discretized PDEs is studied. An approach based on the State-dependent Riccati Equation (SDRE) is presented for H 2 and H ∞ control problems. Depending on the nonlinearity and the dimension of the resulting problem, offline, online, and hybrid offline-online alternatives to the SDRE synthesis are proposed. The hybrid offline-online SDRE method reduces to the sequential solution of Lyapunov equations, effectively enabling the computation of suboptimal feedback controls for two-dimensional PDEs. Numerical tests for the Sine-Gordon, degenerate Zeldovich, and viscous Burgers' PDEs are presented, providing a thorough experimental assessment of the proposed methodology.
Introduction
Feedback control laws are ubiquitous in modern science and engineering and can be found in autonomous vehicles, fluid flow control, and network dynamics, among many others. A distinctive feature of feedback laws is their ability to compensate external perturbations in real time. While an offline training phase is often affordable, an operational feedback law must be able to provide a control signal at a rate that is determined by the underlying time scales of the physical system to be controlled. This requirement poses a formidable computational constraint for the synthesis of feedback controls which require an online optimization procedure.
A natural approach to generate an optimal feedback law for real-time control is to follow a dynamic programming approach. Here, we solve a nonlinear Hamilton-Jacobi-Bellman (HJB) partial differential equation (PDE) for the value function of the optimal control problem under study. This is done in an offline phase, and once the value function has been computed, a feedback law is obtained as by-product. Once online, the cost of implementing an HJBbased control in real time, assuming the current state of the system is available, is reduced to the evaluation of a nonlinear feedback map. Unfortunately, the dynamic programming approach is not suitable for systems described by large-scale dynamics, as the computational complexity of approximating the associated high-dimensional HJB PDEs goes beyond the reach of traditional computational methods. Only very recently, the use of effective computational approaches such as sparse grids [19,31], tree structure algorithms [2], polynomial approximation [28,29,4] tensor decomposition methods [47,21,18,39], and representation formulas [13,12] have addressed the solution of highdimensional HJB PDEs. Recent works making use of deep learning [24,17,38,26,34,37,30] anticipate that the synthesis of optimal feedback laws for largescale dynamics can be a viable path in the near future.
An alternative to the dynamic programming approach is the use of a Nonlinear Model Predictive Control (NMPC) framework [22]. Here, an optimal open-loop control law is computed at every sampling instant of the dynamics. However, the control action is optimized over a prediction horizon, which is sufficiently large to guarantee asymptotic stability of the closed-loop, but short enough to ensure a computing time that is compatible with the rate at which the control law is called. The NMPC framework embodies the trade-off between optimality and real-time computability. It has been shown [23] that the NMPC corresponds to a relaxation of the dynamic programming approach, in the sense that the NMPC law is suboptimal with respect to the optimal stabilizing feedback law provided by the dynamic programming approach, although its suboptimality can be controlled by increasing the prediction horizon.
There exists a third synthesis alternative, which incorporates elements from both dynamic programming and NMPC, known as the State-Dependent Riccati Equation (SDRE) approach [15,16]. The SDRE method originates from the dynamic programming and the HJB PDE associated to infinite horizon optimal stabilization, however, it circumvents its solution by reformulating the feedback synthesis as the sequential solution of state-dependent Algebraic Riccati Equations (ARE), which are updated online along a trajectory. The SDRE feedback is implemented similarly as in NMPC, but the online solution of an optimization problem is replaced by a nonlinear matrix equation. Alternatives to the online formulation include the use of neural networks [48,1] in supervised learning, and reformulating the SDRE synthesis as an optimization problem [27]. However, in this work we will focus on addressing the SDRE synthesis from a numerical linear algebra perspective. The efficient solution of matrix equations arising in feedback control has been subject of extensive research over the last decades, leading to the design of solvers which can effectively be applied to large-scale dynamics such as those arising in the control of systems governed by PDEs (see, e.g., [3,8,9,10,44,46]), and agent-based models [25]. Moreover, under certain stabilizability conditions, this feedback law generates a locally asymptotically stable closed-loop and approximates the optimal feedback law from the HJB PDE.
The purpose of the present work is to study the design of SDRE feedback laws for the control of nonlinear PDEs, including nonlinear reaction and transport terms. This is a class of problems that are inherently large-scale, and where the presence of nonlinearities renders linear controllers underperformant. In this framework, the use of dynamic programming or NMPC methods is often prohibitively expensive, as for instance, in feedback control for multidimensional PDEs. Here instead, we propose and assess different alternatives for control design based on the SDRE approach which can be effectively implemented for two and three dimensional PDEs. The methodology studied in this paper is based on the parametrization of the SDRE synthesis proposed in [6,5]. In these works, different SDRE feedback laws are developed based on the representation of the nonlinear terms in the system dynamics. This is particularly relevant to PDE dynamics, as unlike the nonlinear ODE world, there exists a clear taxonomy of physically meaningful nonlinearities. In some particular cases, the SDRE approach is reduced to a series of offline computations, and a real-time controller only requires a nonlinear feedback evaluation. We explore the limitations of such a parametrization. In the more general case, the SDRE synthesis requires the sequential solution of AREs at a high rate, a task that can is demanding for large-scale dynamics. Here, we propose a variant requiring the solution of a Lyapunov equation instead, thus mitigating the computational cost associated to the online synthesis.
The paper is structured as follows. In section 2 and its subsections we review the H 2 /H ∞ optimal feedback synthesis problem. In section 3 and its subsections we present the suboptimal approximation to these feedback laws by means of the SDRE approach, including offline, online, and hybrid offlineonline implementations, illustrated with an application to the control of the Sine-Gordon equation. Section 4 discusses numerical linear algebra aspects of the solution of the Algebraic Riccati and Lyapunov equations which constitute the core building blocks of the SDRE feedback synthesis. Finally, in section 5 we perform a computational assessment of the proposed methodology on the two-dimensional degenerate Zeldovich and viscous Burgers PDEs.
Optimal feedback synthesis for nonlinear dynamics
In this section we revisit the use of dynamic programming and Hamilton-Jacobi-Bellman/Isaacs PDEs for the computation of optimal feedback controls for nonlinear dynamics. We begin by stating the problem of optimal feedback stabilization via H 2 synthesis, to then focus on robustness under perturbation in the framework of H ∞ control.
2.1 The H_2 synthesis and the Hamilton-Jacobi-Bellman PDE
We consider the following infinite horizon optimal control problem:
\min_{u(\cdot)\in\mathcal{U}} \mathcal{J}(u(\cdot); x) := \int_0^{\infty} \|y(t)\|_Q^2 + \|u(t)\|_R^2 \, dt
subject to the nonlinear dynamical constraint
\dot{y}(t) = f(y(t)) + g(y(t))u(t), \qquad y(0) = x, \qquad (1)
where y(t) = (y 1 (t), . . . , y d (t)) ⊤ ∈ R d denotes the state of the system, and the control signal u(·) belongs to U := L ∞ (R + ; R m ). The running cost is given by y 2 Q := y ⊤ Qy with Q ∈ R d×d , Q ≻ 0, and u 2 R = u ⊤ Ru with R ∈ R m×m , R ≻ 0. We assume the system dynamics f (y) : R d → R d to be such that f (0) = 0, and the control operator g(y) : R d → R d×m to be C 1 (R d ). This formulation of the H 2 synthesis corresponds to the asymptotic stabilization of nonlinear dynamics towards the origin. By the application of the Dynamic Programming Principle, it is well-known that the optimal value function
V(x) = \inf_{u(\cdot)\in\mathcal{U}} \mathcal{J}(u(\cdot); x)
characterizing the solution of this infinite horizon control problem is the unique viscosity solution of the Hamilton-Jacobi-Bellman equation
\min_{u\in\mathbb{R}^m}\big\{\nabla V(x)^\top (f(x) + g(x)u) + \|x\|_Q^2 + \|u\|_R^2\big\} = 0, \qquad V(0) = 0. \qquad (2)
The explicit minimizer u * of eq. (2) is given by
u^*(x) = -\tfrac{1}{2} R^{-1} g(x)^\top \nabla V(x). \qquad (3)
By inserting (3) into (2), we obtain the HJB equation
\nabla V(x)^\top f(x) - \tfrac{1}{4}\nabla V(x)^\top g(x) R^{-1} g(x)^\top \nabla V(x) + x^\top Q x = 0, \qquad (HJB)
to be understood in the classical sense. We recall that under the additional linearity assumptions f (x) = Ax with A ∈ R d×d and g(x) = B ∈ R d×m , the value function is known to be a quadratic form, V (x) = x ⊤ Πx, with Π ∈ R d×d positive definite, and eq. (HJB) becomes an Algebraic Riccati Equation for Π
A^\top \Pi + \Pi A - \Pi B R^{-1} B^\top \Pi + Q = 0. \qquad (ARE)
Solving for V (x) in eq. (HJB) globally in R d allows an online synthesis of the optimal feedback law (3) by evaluating the gradient ∇V (x) and g(x) at the current state x. This leads to an inherently robust control law in the sense that if the state of the system is perturbed to x ′ = x+δx, there still exists a stabilizing feedback action departing from the perturbed state, namely, u * (x ′ ). However, this control design neglects the modelling of the disturbance/uncertainties affecting the dynamics, with no general stabilization guarantees. We overcome this limitation by formulating an H ∞ synthesis, which we describe in the following.
The H ∞ problem and the Hamilton-Jacobi-Isaacs PDE
We extend the previous formulation by considering a nonlinear dynamical system of the form
\dot{y}(t) = f(y(t)) + g(y(t))u(t) + h(y(t))w(t), \qquad y(0) = x, \qquad (4)
where an additional disturbance signal w(·) ∈ W, with W = L 2 (R + ; R p ) enters the system through h(y) : R d → R d×p . We assume that y = 0 is an equilibrium of the system for u = w = 0. The H ∞ control objective is to achieve both internal stability of the closed-loop dynamics and disturbance attenuation through the design of a feedback law u = u(y) such that for a given γ > 0, and for all T ≥ 0 and w ∈ L 2 ([0, T [, R p ), the inequality
\int_0^{T} \|y(t)\|_Q^2 + \|u(t)\|_R^2 \, dt \le \gamma^2 \int_0^{T} \|w(t)\|_P^2 \, dt \qquad (5)
holds. Here, P ∈ R p×p , P ≻ 0 , and y is the solution to (4). The parameter γ plays a crucial role in H ∞ control design. We say that the control system (4) has L 2 -gain not greater than γ, if (5) holds. Finding the smallest γ * for which this inequality is verified, also known as the H ∞ -norm of the system, is a challenging problem in its own right. The simplest yet computationally expensive method to find the H ∞ -norm of a system is through a bisection algorithm, as described in [11]. Applying dynamic programming techniques leads to a characterization of the value function for this problem in terms of the solution of a Hamilton-Jacobi-Isaacs PDE
\min_{u\in\mathbb{R}^m}\max_{w\in\mathbb{R}^p}\big\{\nabla V(x)^\top (f(x) + g(x)u + h(x)w) + \|x\|_Q^2 + \|u\|_R^2 - \gamma^2\|w\|_P^2\big\} = 0, \qquad (6)
valid for all γ ≥ γ * . The unconstrained minimizer and maximizer of (6), u * γ and w * γ respectively, are explicitly computed as
u^*_\gamma(x) = -\tfrac{1}{2} R^{-1} g(x)^\top \nabla V_\gamma(x), \qquad (7) \qquad\qquad w^*_\gamma(x) = \tfrac{1}{2\gamma^2} P^{-1} h(x)^\top \nabla V_\gamma(x), \qquad (8)
where V γ (x) solves the Hamilton-Jacobi-Isaacs equation
\nabla V_\gamma(x)^\top f(x) + \tfrac{1}{4}\nabla V_\gamma(x)^\top S(x) \nabla V_\gamma(x) + x^\top Q x = 0, \qquad (HJI)
with
S(x) = \tfrac{1}{\gamma^2} h(x) P^{-1} h(x)^\top - g(x) R^{-1} g(x)^\top, \qquad (9)
and V γ (0) = 0. For an initial condition which is not a steady state we have
\int_0^{T} \|y(t)\|_Q^2 + \|u(t)\|_R^2 \, dt \le \gamma^2 \int_0^{T} \|w(t)\|_P^2 \, dt + V_\gamma(x), \qquad (10)
see e.g. [43,Theorem 16]. When there is no confusion, we denote V γ (x) by V . Note that if the disturbance attenuation is neglected by taking γ → ∞, we recover the solution of (HJB) instead. Moreover, under the linearity assump-
tions f (x) = Ax with A ∈ R d×d , g(x) = B ∈ R d×m , and h(x) = H ∈ R d×p , the value function is a quadratic form V γ (x) = x ⊤ Πx,
where Π ∈ R d×d is positive definite, and eq. (HJI) becomes the following Algebraic Riccati Equation for Π
A^\top \Pi + \Pi A - \Pi\big(B R^{-1} B^\top - \tfrac{1}{2\gamma^2} H P^{-1} H^\top\big)\Pi + Q = 0, \qquad (ARE_\infty)
solving the H ∞ problem for full state feedback. In the following section we discuss the construction of computational methods to synthesize nonlinear feedback laws inspired by the solution of HJB/HJBI PDEs.
3 Sub-optimal feedback control laws
Despite the extensive parametrization of the control problem, eqns. (HJB) and (HJI) remain first-order, stationary nonlinear PDEs whose numerical approximation is a challenging task. These nonlinear PDEs are cast over R d , where d relates to the dimension of the state space of the dynamics, which can be arbitrarily large. In particular, if the system dynamics correspond to nonlinear PDEs, the direct solution of the resulting infinite-dimensional eqns.
(HJB) and (HJI) remains an open computational problem. We explore different alternatives to circumvent the solution of high-dimensional HJB PDEs by resorting to the sequential solution of the Algebraic Riccati Equations (ARE) and (ARE ∞ ) respectively, providing a tractable alternative for feedback synthesis in large-scale nonlinear dynamics. The different techniques we propose trade the optimality associated to the HJB-based control for computability, and hence will be referred to as suboptimal feedback laws. For the sake of simplicity, we focus on the H ∞ synthesis. The H 2 feedback follows directly choosing h(x(t)) := 0 in (4). We assume a quadratic cost, i.e. ℓ(x) = x ⊤ Qx with Q ∈ R d×d symmetric positive semidefinite.
Linear-Quadratic Regulator (LQR) control law
The simplest suboptimal control law for nonlinear systems uses the optimal feedback law for the linear-quadratic control problem arising from linearization around an equilibrium, which we assume to be x = 0. For γ > γ*, we solve eq. (ARE_∞) with A_{ij} = ∂f_i(x)/∂x_j |_{x=0}, B = g(0), and H = h(0). From the solution Π, we obtain the linear feedback law
u(x) = -R^{-1} B^\top \Pi x. \qquad (11)
Such a feedback law, applied to the original nonlinear system, can only guarantee stabilization in a neighbourhood around the origin.
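As an illustration of how little online work this controller requires, a minimal sketch of the synthesis of (11) is given below. It is not the authors' Matlab implementation; it uses SciPy's dense ARE solver, and the matrices in the usage lines are illustrative placeholders for a linearized model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Gain of the feedback law (11): solve the ARE for the linearization
    (A, B) and return K = R^{-1} B' Pi, so that u(x) = -K x."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Illustrative usage on a small (hypothetical) linearized system
A = np.array([[0.0, 1.0], [2.0, -0.1]])   # Jacobian of f at the origin (example data)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[0.1]])
K = lqr_gain(A, B, Q, R)
u = lambda x: -K @ x                       # linear feedback applied to the nonlinear system
```

The Riccati solve happens once and offline; only the matrix-vector product -Kx is evaluated in real time.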
State-Dependent Riccati Equation (SDRE)
If we express the system dynamics through a state-dependent coefficient representation of the form
\dot{x} = A(x)x + B(x)u(t) + H(x)\omega(t), \qquad (12)
we can synthesize a suboptimal feedback law inherited from (HJI) by following an approach known as the State-dependent Riccati Equation (SDRE). This method has been thoroughly analyzed in the literature, see, e.g. [14,5], and despite being purely formal, it is extremely effective for stabilizing nonlinear dynamics. The SDRE approach is based on the idea that infinite horizon optimal feedback control for systems of the form (12) are linked to a state-dependent ARE:
A^\top(x)\Pi(x) + \Pi(x)A(x) - \Pi(x)S(x)\Pi(x) + Q = 0, \qquad (13)
where
S(x) = B(x) R^{-1} B^\top(x) - \tfrac{1}{2\gamma^2} H(x) P^{-1} H(x)^\top.
Solving this equation leads to a state-dependent Riccati operator Π(x), from where we obtain a nonlinear feedback law given by
u(x) := -R^{-1} B^\top(x)\Pi(x)x. \qquad (14)
It is important to observe that the operator equation (13) admits an analytical solution only in a limited number of cases. More importantly, even if this solution is computed for every state x, the closed-loop differs from the optimal feedback obtained from solving (HJI), as the SDRE approach assumes that the value function is always locally approximated as V (x) = x ⊤ Π(x)x. From an optimal control perspective the SDRE can be interpreted as a model predictive control loop where at a given instant, the dynamics (A(x), B(x), H(x)) are frozen at the current state and an LQR feedback is numerically approximated. The procedure is illustrated in Algorithm 3.1. The resulting feedback is naturally different from the optimal nonlinear feedback law, and will remain different regardless of how fast the SDRE feedback is updated along a trajectory. The latter is also observed in [27], where it is shown that a direct derivation of the SDRE approach departing from the HJB PDE would lead to additional terms in (12). Notwithstanding, it is possible to show local asymptotic stability for the SDRE feedback. We recall the following result [5] concerning asymptotic stability of the SDRE closed-loop in the H 2 case.
Proposition 1 Assume a nonlinear system
\dot{x}(t) = f(x(t)) + B(x(t))u(t), \qquad (15)
where f(·) is C^1 for ‖x‖ ≤ δ, and B(·) is continuous. If f is parametrized in the form f(x) = A(x)x, and the pair (A(x), B(x)) is stabilizable for every x in a non-empty neighbourhood of the origin, then the closed-loop dynamics generated by the feedback law (14) are locally asymptotically stable.
Algorithm 3.1 SDRE-MPC loop
Require: {t_0, t_1, . . .}, model (12), R, Q
1: for k = 0, 1, . . . do
2:   Compute Π(x(t_k)) from (13)
3:   Set K(x(t_k)) := R^{-1} B^⊤(x(t_k)) Π(x(t_k))
4:   Set u(t) := −K(x(t_k)) x(t)
5:   Integrate system dynamics to x(t_{k+1})
6: end for
Assuming the stabilizability hypothesis above, the main bottleneck in the implementation of Algorithm 3.1 is the high rate of calls to an ARE solver for (13). Moreover, these ARE calls are expected to be sufficiently fast for real-time feedback control. This is a demanding computational task for the type of large-scale dynamics arising in optimal control of PDEs. In the following, we discuss two variations of the SDRE-MPC algorithm which mitigate this limitation by resorting to offline and more efficient online computations.
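A compact sketch of this loop is given below. The callables `A_of` and `B_of` are assumed to return the frozen coefficients A(x) and B(x); a dense ARE solver and an explicit Euler step are used purely for illustration, whereas a large-scale implementation would use the projection solvers of Section 4 and an implicit integrator.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def sdre_mpc(x0, A_of, B_of, Q, R, dt, n_steps):
    """SDRE-MPC loop (in the spirit of Algorithm 3.1): at every sampling
    instant, freeze (A(x), B(x)), solve the state-dependent ARE (13),
    and apply the frozen gain over one time step."""
    x = np.array(x0, dtype=float)
    traj, controls = [x.copy()], []
    for _ in range(n_steps):
        A, B = A_of(x), B_of(x)
        P = solve_continuous_are(A, B, Q, R)   # state-dependent Riccati solve
        K = np.linalg.solve(R, B.T @ P)        # K(x_k) = R^{-1} B(x_k)' Pi(x_k)
        u = -K @ x
        x = x + dt * (A @ x + B @ u)           # one step with the gain frozen
        traj.append(x.copy())
        controls.append(u)
    return np.array(traj), np.array(controls)
```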
Offline approximation of the SDRE
A first alternative for more efficient SDRE computations was proposed in [6], inspired by a power series argument first discussed in [49]. Assuming that the state operator A(x) can be decomposed into A(x) = A_0 + f_1(x)A_1, the state-dependent Riccati operator is expanded as
\Pi(x) = \sum_{n=0}^{\infty} (f_1(x))^n L_n,
where the matrices L_n ∈ R^{d×d} solve
L_0 A_0 + A_0^\top L_0 - L_0 S L_0 + Q = 0, \qquad (16)
L_1(A_0 - S L_0) + (A_0^\top - L_0 S) L_1 + L_0 A_1 + A_1^\top L_0 = 0, \qquad (17)
L_n(A_0 - S L_0) + (A_0^\top - L_0 S) L_n + Q_n = 0, \qquad Q_n := L_{n-1} A_1 + A_1^\top L_{n-1} - \sum_{k=1}^{n-1} L_k S L_{n-k}. \qquad (18)
After solving a single ARE and N Lyapunov equations, an N-th order approximation of Π(x) yields the feedback
u_N(x) = -R^{-1} B^\top \Big(\sum_{n=0}^{N} (f_1(x))^n L_n\Big) x. \qquad (19)
Unfortunately, the reduction above is only possible for a scalar nonlinearity.
If the nonlinearity can be expressed as
A(x) = A_0 + \sum_{j=1}^{r} f_j(x) A_j, \qquad (20)
where A_j ∈ R^{d×d} and the state dependence is restricted to r scalar functions f_j(x) : R^d → R, then a first order approximation of Π(x) is given by
\Pi(x) \approx \tilde{\Pi}(x) = \Pi_0 + \sum_{j=1}^{r} \Pi_j f_j(x), \qquad (21)
where Π 0 solves (ARE ∞ ) for A 0 and the remaining Π j satisfy the Lyapunov equations
\Pi_j C_0 + C_0^\top \Pi_j + Q_j = 0, \qquad j = 1, \ldots, r, \qquad (22)
with C_0 = A_0 - S\Pi_0, and Q_j = \Pi_0 A_j + A_j \Pi_0.
The resulting feedback law is given by
u(x) = -R^{-1} B^\top \tilde{\Pi}(x) x. \qquad (23)
Overall, this approach requires the computation of the ARE associated to A 0 in addition to r Lyapunov equations whose solution is fully parallelizable. Its implementation is summarized below.
Offline: compute Π_0 from (ARE_∞) with A_0, and Π_j from (22) for j = 1, . . . , r.
Online, at every sampling instant:
7: Set u(x(t)) := −R^{-1} B^⊤ Π̃(x(t)) x(t)
8: Integrate system dynamics for x(t)
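A dense sketch of the offline construction of (21)-(23) is given below, written for the H_2 case (no disturbance channel) and using SciPy solvers; `A_list` and `f_list` are assumed to collect the matrices A_j and the scalar functions f_j, and the symmetrized right-hand side Π_0A_j + A_j^⊤Π_0 is used in (22).

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

def offline_factors(A0, A_list, B, Q, R):
    """Offline phase: Pi0 solves the ARE for A0; each Pi_j solves the
    Lyapunov equation (22) with C0 = A0 - S Pi0 and S = B R^{-1} B'."""
    Pi0 = solve_continuous_are(A0, B, Q, R)
    S = B @ np.linalg.solve(R, B.T)
    C0 = A0 - S @ Pi0
    Pis = []
    for Aj in A_list:
        Qj = Pi0 @ Aj + Aj.T @ Pi0
        # (22): Pi_j C0 + C0' Pi_j + Qj = 0  <=>  C0' X + X C0 = -Qj
        Pis.append(solve_continuous_lyapunov(C0.T, -Qj))
    return Pi0, Pis, C0

def feedback_offline(x, f_list, Pi0, Pis, B, R):
    """Evaluate u(x) = -R^{-1} B' (Pi0 + sum_j f_j(x) Pi_j) x, cf. (21)-(23)."""
    Pi_tilde = Pi0 + sum(f(x) * P for f, P in zip(f_list, Pis))
    return -np.linalg.solve(R, B.T @ (Pi_tilde @ x))
```

All r Lyapunov solves are independent, so the offline loop is trivially parallelizable.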
An offline-online SDRE approach
Although the previous approach is a valid variant to circumvent the online solution of Riccati equations at a high rate, it becomes unfeasible in cases where both d and r are large, as it requires the solution of r Lyapunov equations (22) and storage of the solution matrices with d^2 entries each. Such a large-scale setting arises naturally in feedback control of dynamics from semidiscretization of nonlinear PDEs and agent-based models. We present a variant of the offline SDRE approach which circumvents this limitation by resorting to an online phase requiring the solution of a single Lyapunov equation per step.
Let us define the quantity W(x) = \sum_{j=1}^{r} \Pi_j f_j(x). Multiplying each equation in (22) by its corresponding f_j(x), it follows that W(x) satisfies the Lyapunov equation
W(x) C_0 + C_0^\top W(x) + \sum_{j=1}^{r} Q_j f_j(x) = 0. \qquad (24)
Therefore, the feedback law can be expressed as
u(x) = -R^{-1} B^\top \Big(\Pi_0 + \sum_{j=1}^{r} \Pi_j f_j(x)\Big) x = -R^{-1} B^\top (\Pi_0 + W(x)) x. \qquad (25)
The feedback law (25) can therefore be evaluated online by solving a single Lyapunov equation (24) per sampling instant. The resulting procedure is summarized in Algorithm 3.3.
Algorithm 3.3 SDRE offline-online loop
Require: {t_0, t_1, . . .}, model (20), R, Q
1: Compute Π_0 from (ARE_∞) with A_0, and set C_0 = A_0 − SΠ_0
2: for k = 0, 1, . . . do
3:   Compute W(x(t_k)) from the Lyapunov equation (24)
4:   Set K(x(t_k)) := R^{-1} B^⊤(Π_0 + W(x(t_k)))
5:   Set u(t) := −K(x(t_k)) x(t)
6:   Integrate system dynamics to x(t_{k+1})
7: end for
Remark 1 The approximation of the state space solution x(t_{k+1}) at time t_{k+1} can be performed with either explicit or implicit time-stepping schemes. In both cases, the feedback gain remains frozen at K(x(t_n)).
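The online step amounts to assembling the state-dependent right-hand side of (24) and solving one Lyapunov equation. A dense sketch is given below; in the large-scale setting this solve is replaced by the projection methods of Section 4.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def online_gain(x, Pi0, C0, A_list, f_list, B, R):
    """One online step of the offline-online approach: solve (24) for W(x)
    and return the gain K(x) of the feedback law (25)."""
    Q_x = sum(f(x) * (Pi0 @ Aj + Aj.T @ Pi0) for Aj, f in zip(A_list, f_list))
    # (24): W C0 + C0' W + Q_x = 0   <=>   C0' W + W C0 = -Q_x
    W = solve_continuous_lyapunov(C0.T, -Q_x)
    return np.linalg.solve(R, B.T @ (Pi0 + W))   # K(x) = R^{-1} B' (Pi0 + W(x))
```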
A preliminary test: the damped Sine-Gordon equation
We present a preliminary assessment of the effectiveness of Algorithm 3.1 and Algorithm 3.3 for the H 2 case. In section 5 we will focus on large-scale, twodimensional PDEs and H ∞ control. Given a domain Ω ⊂ R, we consider the control of the damped Sine-Gordon equation (see e.g. [42]) with homogeneous Dirichlet boundary conditions over Ω × R + 0 :
\partial_{tt} X(\xi, t) = -\alpha \partial_t X(\xi, t) + \partial_{\xi\xi} X(\xi, t) - \beta \sin X(\xi, t) + \chi_{\omega_c}(\xi) u(t),
X(\xi, t) = 0, \quad \xi \in \partial\Omega,\ t > 0,
X(\xi, 0) = x_0(\xi), \quad \xi \in \Omega,
\partial_t X(\xi, 0) = x_1(\xi), \quad \xi \in \Omega, \qquad (26)
where the control variable u(t) acts through an indicator function χ ωc (ξ) supported over ω c ⊂ Ω. The cost functional to be minimized is given by:
\mathcal{J}(u(\cdot); X(\cdot, 0)) := \int_0^{\infty} \sum_{i=1}^{z} \Big(\frac{1}{|\omega_{o_i}|}\int_{\omega_{o_i}} X(\xi, t)\, d\xi\Big)^2 + R\,|u(t)|^2 \, dt \qquad (27)
where ω_o := ∪_{i=1}^{z} ω_{o_i} ⊂ Ω represents a collection of local patches where we average the state. Defining y(t) = (X(·, t), Ẋ(·, t))^⊤, we write the dynamics as a first-order abstract evolution system
\dot{y}(t) = A y(t) + f(y(t)) + B u(t), \quad\text{where}\quad A = \begin{pmatrix} 0 & I \\ \partial^2_{\xi\xi} & -\alpha I \end{pmatrix}, \quad f(y(t)) = \begin{pmatrix} 0 \\ -\beta \sin(X(\cdot, t)) \end{pmatrix}, \quad B u(t) = \begin{pmatrix} 0 \\ \chi_{\omega_c}(\xi) u(t) \end{pmatrix}. \qquad (28)
We approximate the operators above using a finite difference discretization in space. Given the domain Ω = [ξ_L, ξ_R], we construct the uniform grid
ξ_i = ξ_L + (i − 1)∆ξ with ∆ξ = (ξ_R − ξ_L)/(d − 1). We define the discrete state X_i(t) := X(ξ_i, t), and the discrete augmented state Y(t) = (X_1(t), . . . , X_d(t), Ẋ_1(t), . . . , Ẋ_d(t))^⊤. The discrete operators read
A_d = \begin{pmatrix} 0_{d\times d} & I_d \\ \Delta_d & -\alpha I_d \end{pmatrix}, \qquad B u(t) = \begin{pmatrix} 0_{d\times 1} \\ \{\chi_{\omega_c}(\xi_i)\}_{i=1}^{d} \end{pmatrix} u(t), \qquad (29)
where I_d is the d × d identity matrix, and ∆_d is the discrete Laplace operator, ∆_d := ∆ξ^{-2}\, \mathrm{tridiag}([\,1\ \ {-2}\ \ 1\,], d) ∈ R^{d×d}. The quantity \sum_{i=1}^{z}\big(\frac{1}{|\omega_{o_i}|}\int_{\omega_{o_i}} X(\xi, t)\,d\xi\big)^2 in (27) is approximated by X(t)^⊤ Q X(t), where Q = C^⊤ C ∈ R^{d×d} and
C^\top = \Delta\xi\,\Big[\frac{\{\chi_{\omega_{o_1}}(\xi_i)\}_{i=1}^{d}}{|\omega_{o_1}|}, \ \ldots, \ \frac{\{\chi_{\omega_{o_z}}(\xi_i)\}_{i=1}^{d}}{|\omega_{o_z}|}\Big] \in \mathbb{R}^{d\times z}.
To express the nonlinearity in a semilinear form consistent with (12) we define
\tilde{f}(Y(t)) = -\beta \begin{pmatrix} 0_{d\times d} & 0_{d\times d} \\ \mathrm{diag}\big(\tfrac{\sin(X_i(t))}{X_i(t)}\big)_{i=1}^{d} & 0_{d\times d} \end{pmatrix} Y(t). \qquad (30)
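A sketch of this discretization is given below. It is not the authors' code: the dense storage, the function names, and the regularized evaluation of sin(X_i)/X_i near zero are illustrative choices.

```python
import numpy as np

def sine_gordon_semilinear(d, xi_L=-10.0, xi_R=10.0, alpha=0.05, beta=2.0):
    """Finite-difference data for the damped Sine-Gordon test: discrete
    Laplacian, first-order block operator A_d of (29), actuator vector B for
    omega_c = [-1, 1], and the state-dependent matrix A(Y) = A_d + (30)."""
    xi = np.linspace(xi_L, xi_R, d)
    dxi = xi[1] - xi[0]
    lap = (np.diag(-2.0 * np.ones(d)) + np.diag(np.ones(d - 1), 1)
           + np.diag(np.ones(d - 1), -1)) / dxi**2            # Delta_d
    Ad = np.block([[np.zeros((d, d)), np.eye(d)],
                   [lap, -alpha * np.eye(d)]])                 # first-order form (29)
    B = np.concatenate([np.zeros(d),
                        (np.abs(xi) <= 1.0).astype(float)]).reshape(-1, 1)

    def A_of(Y):
        """Semilinear state matrix: -beta*sin(X_i) is written with the
        bounded coefficient sin(X_i)/X_i multiplying X_i."""
        X = Y[:d]
        coeff = -beta * np.where(np.abs(X) > 1e-12, np.sin(X) / X, 1.0)
        A = Ad.copy()
        A[d:, :d] += np.diag(coeff)
        return A

    return Ad, B, A_of
```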
In our test we consider the following values for the given parameters, Ω = [−10, 10], t ∈ [0, 10], α = 0.05, β = 2, ω c = [−1, 1], and
X(ξ, 0) = 0, ∂ t X(ξ, 0) = 8 3 sech 2 3ξ .
In the cost functional (27) Figure 1. The presence of the damping term α∂ t X(ξ, t) generates a stable trajectory for both the uncontrolled and LQR-controlled dynamics using the linearized feedback (11). However, we still observe differences in the state and control variables with respect to the SDRE controllers, SDRE-MPC and SDRE offline-online, described in Algorithms 3.1 and 3.3, respectively. The accumulated running costs in Figure 1 (top-left) indicate that the SDRE-MPC implementation achieves the best closed-loop performance, followed by SDRE offline-online, both outperforming linearized LQR and uncontrolled trajectories. However, the main difference between the SDRE-MPC and SDRE offline-online closed-loops is related to computational time. The SDRE-MPC solver requires the solution of multiple AREs in sequence, taking a total of 23 minutes of CPU time for this test, whereas the online solution of Lyapunov equations of the SDRE offline-online implementation reduces this computation time to 45 seconds. A deeper study on the methods performance in the large scale setting will be reported on in section 5, while implementation aspects associated with the solution of these algebraic equations are discussed in the next section.
Solving large-scale Algebraic Riccati/Lyapunov equations
The time steps discussed in Section 3.2 all use matrices that stem from the solution of algebraic matrix equations, and more precisely the (quadratic) Riccati and the (linear) Lyapunov equations. The past few years have seen a dramatic improvement in the effectiveness of numerical solution strategies for solving these equations in the large scale setting. For a survey in the linear case we refer the reader to the recent article [44], while for the algebraic Riccati equation we point the reader to, e.g., [7,45,10].
In our derivation we found projection methods to be able to adapt particularly well to the considered setting, with a similar reduction framework for both linear and quadratic problems; other approaches are reviewed for instance in [9,44]. We emphasize that all considered methods require that the zero order term in the matrix equation, e.g., matrix Q in the Riccati equation (13), be low rank.
The general idea consists of first determining an approximation space K k that can be naturally expanded if needed, and then seeking an approximate solution in this space, by imposing a Galerkin condition on the matrix residual for computing the projected approximate solution.
Let V k be such that K k = range(V k ), with V k having orthonormal columns. Recalling that in both the Riccati and Lyapunov equations the solution matrix is symmetric, the approximate solution can be written as V k Y V ⊤ k . Consider the Riccati equation (13) for a fixed x = x(t * ), so that we set
A(x(t * )) = A * and B(x(t * )) = B * . Let R = A ⊤ * V k Y X V ⊤ k + V k Y X V ⊤ k A * − V k Y X V ⊤ k B * V k Y X V ⊤ k + Q.
Imposing the Galerkin condition on R means that the residual matrix R be orthogonal to the approximation space, in the matrix sense, that is
V ⊤ k RV k = 0 ⇔ V ⊤ k A ⊤ * V k Y X +Y X V ⊤ k A * V k −Y X (V ⊤ k B * V k )Y X +V ⊤ k QV k = 0,(31)
where the orthogonality of the columns of V k was used. If V k has small dimensions, the reduced Riccati matrix equation on the right also has small dimensions and can be solved by a "dense" method to determine Y X ; see, e.g., [10]. The cost of solving the reduced quadratic equation with coefficient matrices of sizek is at least 63k 3 floating point operations with an invariant subspace approach [35]. Note that the large matrix V k Y X V ⊤ k is never constructed explicitly, since it would be dense even for sparse data.
Analogously, for the Lyapunov equation in (24), we can write W ≈ V k Y V ⊤ k for some Y = Y W to be determined. Let Q * = k Q k f k (x(t * )). As before,
letting R = C ⊤ 0 V k Y V ⊤ k + V k Y V ⊤ k C 0 + Q * , the Galerkin condition leads to V ⊤ k RV k = 0 ⇔ (V ⊤ k C ⊤ 0 V k )Y + Y V ⊤ k C 0 V k + V ⊤ k Q * V k = 0.
This reduced Lyapunov equation can be solved by means of a "dense" method at a cost of about 15k 3 floating point operations for coefficient matrices of sizek, if the real Schur decomposition is employed; see, e.g., [44]. Note that the computational cost is significantly lower than that of solving the reduced Riccati equation with matrices of the same size.
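Given an orthonormal basis V of the approximation space, the projected problems can be sketched as follows, with dense SciPy solvers for the small reduced equations; building V itself (extended or rational Krylov) is the task of the specialized solvers cited above and is not shown.

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

def galerkin_riccati(V, A, B, Q, R):
    """Galerkin projection of the Riccati equation onto range(V), cf. (31).
    Returns the small matrix Y so that Pi ~ V Y V' (never formed explicitly)."""
    Ak, Bk, Qk = V.T @ A @ V, V.T @ B, V.T @ Q @ V
    return solve_continuous_are(Ak, Bk, Qk, R)

def galerkin_lyapunov(V, C0, Qstar):
    """Galerkin projection of W C0 + C0' W + Q* = 0 onto range(V)."""
    C0k, Qk = V.T @ C0 @ V, V.T @ Qstar @ V
    # reduced equation: C0k' Y + Y C0k + Qk = 0
    return solve_continuous_lyapunov(C0k.T, -Qk)
```

The reduced solves cost O(k^3) with k the basis size, which is why the projection framework scales to the PDE-sized problems considered later.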
On the selection of the approximation space
Choices as approximation space explored in the literature include polynomial and rational Krylov subspaces [44]. They both enjoy the property of being nested as they enlarge, that is K k ⊆ K k+1 where k is associated with the space dimension. Rational Krylov subspaces have emerged as the key choice because they are able to deliver accurate approximate solutions with a relatively small space dimension, compared with polynomial spaces. Given a starting tall matrix V 0 and an invertible stable coefficient matrix A * , we have used two distinct rational spaces: the Extended Krylov subspace,
EK k = range([V 0 , A −1 * V 0 , A * V 0 , A −2 * V 0 , A 2 * V 0 , . . . , A k−1 * V 0 , A −k * V 0 ])
, which only involves matrix-vector products and solves with A * , and the (fully) Rational Krylov subspace,
RK k = range([V 0 , (A * − σ 2 I) −1 V 0 , . . . , k j=2 (A * − σ j I) −1 V 0 ]).
where σ j can be computed a-priori or adaptively. In both cases, the space is expanded iteratively, one block of vectors at the time, and systems with A * or with (A * − σ j I) are solved by fast sparse methods. For A * real valued and stable, the σ j 's are selected to be in C + , so that A * − σ j I is nonsingular. The actual choice of the shifts is a key step and a rich literature is available, yielding theoretically grounded effective strategies; see, e.g., the discussion in [44].
In our implementation we used the Extended Krylov subspace for solving the Lyapunov equation in Algorithm 3.3, which has several advantages, such as the computation of the sparse Cholesky factorization of A 0 once for all. On the other hand, we used the Rational Krylov subspace for the Riccati equation, which has been shown to be largely superior over the Extended Krylov on this quadratic equation, in spite of requiring the solution of a different (shifted) sparse coefficient matrix at each iteration [9,45]. Nonetheless, in Section 4.2 we report an alternative approach that makes the Extended Krylov subspace competitive again for the Riccati problem with A 0 symmetric. Except for the operations associated with the reduced problems, the computational costs per iteration of the Riccati and Lyapunov equation solvers are very similar, if the same approximation space is used.
Although we refer to the specialized literature for the algorithmic details 2 , we would like to include some important implementation details that are specific to our setting. In particular, the matrix C 0 employed in Algorithm 3.3 is given by C 0 = A 0 − BR −1 B ⊤ − 1 2γ 2 HP −1 H ⊤ Π 0 , which is not easily invertible if explicitly written, since it is dense in general. Note that A 0 is in general sparse, as it stems from the discretization of a partial differential operator. We can write
C_0 = A_0 - [B, H]\, G\, [B, H]^\top \Pi_0, \qquad\text{with}\qquad G = \begin{pmatrix} R^{-1} & 0 \\ 0 & -\tfrac{1}{2\gamma^2} P^{-1} \end{pmatrix}.
Using the classical Sherman-Morrison-Woodbury formula, the product C −1 0 V for some tall matrix V can be obtained as
W := C −1 0 V = A −1 0 V − A −1 0 [B, H]G −1 1 [B, H] ⊤ Π 0 A −1 0 V, with G 1 = I + G[B, H] ⊤ Π 0 A −1 0 [B, H]
, which is assumed to be nonsingular. Therefore, C −1 0 V can be obtained by first solving sparse linear systems with A 0 , and then using matrix-matrix products. More precisely, the following steps are performed:
- Solve A_0 W_1 = V
- Solve A_0 W_2 = [B, H]
- Compute G_1 = I + G [B, H]^\top \Pi_0 W_2
- Compute W = W_1 - W_2 \big( G_1^{-1}([B, H]^\top \Pi_0 W_1) \big)
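A direct transcription of these four steps is sketched below, assuming Π_0 is available in low-rank factored form Π_0 = ZZ^⊤ and A_0 is sparse; the helper name and the SuperLU factorization are illustrative choices, not the authors' implementation.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def make_apply_C0_inv(A0, B, H, Z, Rinv, Pinv, gamma):
    """Apply C0^{-1} to a tall matrix V via the steps listed above,
    factorizing the sparse A0 once; Pi0 = Z Z' is never formed."""
    lu = spla.splu(sp.csc_matrix(A0))            # sparse LU of A0, reused for all solves
    BH = np.hstack([B, H])                       # [B, H]
    m, p = Rinv.shape[0], Pinv.shape[0]
    G = np.block([[Rinv, np.zeros((m, p))],
                  [np.zeros((p, m)), -Pinv / (2.0 * gamma**2)]])
    W2 = lu.solve(BH)                            # A0 W2 = [B, H]
    G1 = np.eye(BH.shape[1]) + G @ (BH.T @ (Z @ (Z.T @ W2)))

    def apply(V):
        W1 = lu.solve(V)                                     # A0 W1 = V
        t = BH.T @ (Z @ (Z.T @ W1))                          # [B, H]' Pi0 W1
        return W1 - W2 @ np.linalg.solve(G1, t)              # last step of the list
    return apply
```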
We also recall that Π 0 , the solution to the initial Riccati equation, is stored in factored form, and this should be taken into account when computing matrix-matrix products with Π 0 .
While trying to employ the Rational Krylov space, we found that the structure of C 0 made the selection of optimal shifts {σ j } particularly challenging, resulting in a less effective performance of the method. Hence, our preference went for the Extended Krylov space above, with the above enhancement associated with solves with C 0 .
Feedback matrix oriented implementation
In Algorithm 3.1 the Riccati equation needs to be solved at each time step t n . However, its solution Π(x n ) is only used to compute the feedback matrix K(x n ) := −R −1 B ⊤ Π(x n ). Hence, it would be desirable to be able to immediately compute K(x n ) without first computing Π(x n ). This approach has been explored in the Riccati equation literature but also for other problems based on Krylov subspaces, see, e.g., [41,33].
In this section, in the case where the matrix A(x n ) is symmetric, we describe the implementation of one of the projection methods described above, that is able to directly compute K(x n ) without computing Π(x n ) (in factored form), and more importantly, without storing and computing the whole approximation basis. The latter feature is particularly important for large scale problems, for which dealing with the orthogonal approximation basis represents one of the major computational and memory costs. To the best of our knowledge, this variant of the Riccati solver is new, while it is currently explored in [40] for related control problems and the rational Krylov space.
Here we consider the Extended Krylov subspace. For A(x n ) symmetric, the orthonormal basis of EK k can be constructed by explicitly orthogonalizing only with respect to the previous two basis blocks. Hence, only two previous blocks of vectors need to be stored in memory, and require explicit orthogonalization when the new block of vectors is added to the basis [41]. This is also typical of polynomial Krylov subspaces constructed for symmetric matrices, giving rise to the classical Lanczos three-term recurrence [33].
With this procedure, in the reduced equation (31) the matrices
A k := V ⊤ k A * V k , B k := V ⊤ k B and Q k := V ⊤ k QV k
are computed as k grows by updating the new terms at each iteration, and the solution Y X can be obtained without the whole matrix V k being available. Note that the stopping criterion does not require the computation of the whole residual matrix, so that also in the standard solver the full matrix V k Y X V ⊤ k is never explicitly accessed. However, to be able to compute
K = -R^{-1} B^\top V_k Y_X V_k^\top = -R^{-1} B_k^\top Y_X V_k^\top,
the basis V k appearing on the right still seems to be required. As already done in the literature, this problem can be overcome by a so-called "twostep" procedure: at completion, once the final Y X is available, the basis V k is computed again one block at the time, and the corresponding terms in the product Y X V ⊤ k are updated. Since A(x n ) is already factorized and the orthogonalization coefficients are already available (they correspond to the non-zero entries of A k ), then the overall computational cost is feasible; we refer the reader to [41] and its references for additional details for the two-step procedure employed for different purposes.
Large-scale nonlinear dynamical systems
In this section we present a numerical assessment of the proposed methodology applied to the synthesis of feedback control for two-dimensional nonlinear PDEs. The first test is a nonlinear diffusion-reaction equation, known as the degenerate Zeldovich equation, where the origin is an unstable equilibrium and traditional linearization-based controllers fail. The second test studies the viscous Burgers' equation with a forcing term. We discretize the control problem in space using finite differences, similarly as in Section 3.4. Controlled trajectories are integrated in time using an implicit Euler method, which is accelerated using a Jacobian-Free Newton Krylov method (see e.g. [32]). The goal of all our tests is the optimal and robust stabilization of the dynamics to the origin, encoded in the optimization of the following cost:
\mathcal{J}(u(\cdot), w(\cdot); X(\cdot, 0)) := \int_0^{\infty} \sum_{i=1}^{z} \Big(\frac{1}{|\omega_{o_i}|}\int_{\omega_{o_i}} X(\xi, t)\, d\xi\Big)^2 + R|u(t)|^2 - \gamma^2 P|w(t)|^2 \, dt. \qquad (32)
This expression is similar to (27), but includes the H ∞ term −γ 2 P |w(t)| 2 .
The reported numerical simulations were performed on a MacBook Pro with an Intel Core i7-6 CPU, 2.6 GHz, and 16GB RAM, using Matlab [36].
Fig. 2 Locations of the inputs ω_c(ξ) = ω_d(ξ) (black) and outputs ω_o(ξ) (blue) in the region Ω for the degenerate Zeldovich equation.
Case study 1: the degenerate Zeldovich equation
We consider the control of a Zeldovich-type equation arising in combustion theory [20] over Ω × R + 0 , with Ω ⊂ R 2 and Neumann boundary conditions:
\partial_t X(\xi, t) = \epsilon \Delta X(\xi, t) + \nu X(\xi, t) + \mu(X^2(\xi, t) - X^3(\xi, t)) + \chi_{\omega_c}(\xi) u(t) + \chi_{\omega_d}(\xi) w(t),
\partial_\xi X(\xi, t) = 0, \quad \xi \in \partial\Omega,\ t > 0,
X(\xi, 0) = x_0(\xi), \quad \xi \in \Omega. \qquad (33)
The scalar control and disturbance act, respectively, through functions χ_{ω_c}(ξ) and χ_{ω_d}(ξ) with support ω_c, ω_d ⊂ Ω. The uncontrolled dynamics have three equilibrium points: X ≡ 0 and X ≡ \tfrac12\big(1 \pm \sqrt{1 + \nu/\mu}\big). Our goal is to stabilize the system to X ≡ 0, which is an unstable equilibrium point. A first step towards the application of the proposed framework is the space discretization of the system dynamics, leading to a finite-dimensional state-space representation. Following the setting presented in section 3.4, using a finite difference discretization leads to
\dot{X}(t) = \epsilon \Delta_d X(t) + \nu X(t) + \mu X(t) \bullet X(t) \bullet (\mathbf{1}_{d\times 1} - X(t)) + B u(t) + H w(t), \qquad (34)
where the discrete state X(t) = (X_1(t), . . . , X_d(t))^⊤ ∈ R^d corresponds to the approximation of X(ξ, t) at the grid points and the symbol • denotes the Hadamard or component-wise product. The matrix ∆_d ∈ R^{d×d} is the finite difference approximation of the Neumann Laplacian and the matrices B, H ∈ R^d are the discretization of the indicator functions supported over ω_c and ω_d, respectively. The discretization of (32) follows similarly as in section 3.4. Once the finite-dimensional state-space representation is obtained, we proceed to express the system in semilinear form (20) and implement the proposed algorithms. To set Algorithm 3.1 and Algorithm 3.3, from (34) we define
A(X) := ǫ∆ d + νI d + µdiag(X(t) − X(t) • X(t)),
where diag(v) denotes a diagonal matrix with the components of the vector v on the main diagonal, and decompose A(X) as
A_0 = \epsilon \Delta_d + \nu I_d, \qquad [A_j]_{k,l} = \delta_{k,j}\delta_{l,j}, \qquad f_j(X) = \mu(X_j - X_j^2), \qquad j = 1, \ldots, d,
where ∆_d is the two-dimensional discrete Laplacian and δ_{i,j} denotes the Kronecker delta.
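For this decomposition the A_j are the canonical diagonal matrices, so A(X) = A_0 + diag(f(X)) and the right-hand side of (24) reduces to Π_0 diag(f(X)) + diag(f(X)) Π_0. A sketch of the assembly is given below; the Neumann-type closure of the 5-point Laplacian and the unit square grid are simplifying assumptions for illustration only.

```python
import numpy as np
import scipy.sparse as sp

def zeldovich_semilinear(n, eps, nu, mu):
    """Semilinear data for the 2D Zeldovich test on an n-by-n grid (d = n*n):
    A0 = eps*Lap + nu*I, scalar nonlinearities f_j(X) = mu*(X_j - X_j^2)."""
    h = 1.0 / (n - 1)
    main = -2.0 * np.ones(n)
    main[0] = main[-1] = -1.0                                # Neumann-type closure (assumed)
    T = sp.diags([np.ones(n - 1), main, np.ones(n - 1)], [-1, 0, 1]) / h**2
    I = sp.identity(n)
    lap2d = sp.kron(I, T) + sp.kron(T, I)                    # 2D discrete Laplacian
    d = n * n
    A0 = eps * lap2d + nu * sp.identity(d)

    def f(X):
        return mu * (X - X**2)                               # vector (f_1(X), ..., f_d(X))

    def A_of(X):
        return A0 + sp.diags(f(X))                           # A(X) = A0 + diag(f(X))

    return A0, f, A_of
```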
Test 1.
Experiments for H 2 -control. We start by presenting results for H 2control, i.e. P ≡ 0 in (32) and H ≡ 0 in (34). In Figure 3 we show a snapshot of the controlled trajectories at t = 3, a horizon sufficiently large for the dynamics to approach a stationary regime. In the top-left panel the uncontrolled problem reaches the stable equilibrium X ≈ 1.02. In the top-right panel we show the results of the LQR control computed by linearizing equation (34) around the origin, which also fails to stabilize around the unstable equilibrium X ≡ 0. The controlled solutions with Algorithm 3.1 and Algorithm 3.3 are shown at the bottom of the same figure: both algorithms reach the desired configuration. The corresponding control input is shown in the top-right panel of Figure 4. We observe that the LQR control has a completely different behavior with respect to the control computed by Algorithm 3.1 or Algorithm 3.3. The performance results of the different controlled trajectories is presented in Figure 4 where we show the evaluation of the cumulative cost functional in the top-left panel. As expected, Algorithm 3.1 provides the best closed-loop performance among the proposed algorithms. However, in terms of efficiency Algorithm 3.3 is faster than Algorithm 3.1 when increasing the dimension of the problem as shown in the bottom-left panel of Figure 4. When the dimension d increases (x-axis in the plot), the cost functional converges to 0.6 for Algorithm 3.1 and to 1 for Algorithm 3.3. Both methods are able to stabilize the problem.
Test 2.
Experiments for H ∞ -control. We next show the results of the optimal solution under disturbances with the following configuration: P = 1, γ = 0.5, w 1 (t) = 0.1 sin(40t), w 2 (t) = 0.1 sin(2t).
We omit reporting the behavior of the LQR-based control as it fails to stabilize the dynamics. To compare the proposed approaches we compute the H 2 − and H ∞ − controls with the same disturbance using both Algorithm 3.1 and Algorithm 3.3. The results are presented in Figure 5 and Figure 6. In every test case, the SDRE-based methodologies effectively stabilize the perturbed dynamics to a small neighbourhood around X ≡ 0.
A quantitative study is proposed in Figure 7 where we show the evaluation of the cumulative H 2 cost functional for both disturbances in the left panels. As expected, Algorithm 3.1 with H ∞ −control exhibits the best performance, closely followed by Algorithm 3.3. The right panels of Figure 7 show the different control inputs, which reflect the observed differences in closed-loop performance.
Test 3. On the use of a feedback matrix oriented implementation
To conclude the first case study we provide a numerical example where we synthesize the feedback operator K(x) = −R −1 B ⊤ Π(x) directly, circumventing the computation of Π(x), as explained in Section 4.2. For this test we The performance of the different feedback synthesis methods is presented in Figure 9. Here we show the evaluation of the cumulative cost functional in the top-left panel. For completeness we provide the control inputs on the top-right panel. We compare the performance of Algorithm 3.1 using Rational Krylov (RK) and Extended Krylov (EKSM) subspaces as the problem dimension in- creases. Specifically, we recall that the Extended Krylov subspace computes directly the gain matrix. As expected, Algorithm 3.1 provides the best closedloop performance among the proposed algorithms. However, in terms of CPU time Algorithm 3.3 is faster than Algorithm 3.1 when increasing the problem dimension d as shown in the bottom-left panel of Figure 9. When the problem dimension d increases, the cost functional converges to 0.9 for Algorithm 3.1 and to 1.3 for Algorithm 3.3. The two different Krylov subspace methods in Algorithm 3.1 lead exactly to the same solution. In the plot legend we refer to Alg 3.1 RK for Rational Krylov subspaces and to Alg 3.1 EKSM for Extended Krylov subspaces.
Case study 2: the viscous Burgers equation with exponential forcing term
The second experiment deals with the control of a viscous Burgers equation with exponential forcing term over $\Omega \times \mathbb{R}^+_0$, with $\Omega \subset \mathbb{R}^2$ and Dirichlet boundary conditions:
$$\partial_t X(\xi, t) = \epsilon \Delta X(\xi, t) - X(\xi, t)\cdot\nabla X(\xi, t) + 1.5\, X(\xi, t)\, e^{-0.1 X(\xi, t)} + \chi_{\omega_c}(\xi)\, u(t) + \chi_{\omega_d}(\xi)\, w(t),$$
$$X(\xi, t) = 0,\ \ \xi \in \partial\Omega, \qquad X(\xi, 0) = x_0(\xi),\ \ \xi \in \Omega.$$
In this case, the scalar control and disturbance act, respectively, through the indicator functions $\chi_{\omega_c}(\xi)$, $\chi_{\omega_d}(\xi)$ with $\omega_c, \omega_d \subset \Omega$. A finite difference discretization in space of the system dynamics leads to a state space representation of the form
$$\dot{X}(t) = \Delta_d X(t) - X(t) \circ \big(D X(t) + 1.5\, e^{-0.1 X(t)}\big) + B u(t) + H w(t), \tag{36}$$
where the matrices $\Delta_d, D \in \mathbb{R}^{d\times d}$ and $B, H \in \mathbb{R}^{d}$ are finite-dimensional approximations of the Laplacian, gradient, control and disturbance operators, respectively, and the exponential term is understood component-wise. In particular, $D$ is obtained using a backward finite difference discretization,
$$D := -\Delta\xi^{-1}\left(B_{n_{\xi_2}} \otimes I_{n_{\xi_1}} + I_{n_{\xi_2}} \otimes B_{n_{\xi_1}}\right),$$
where $B_n := \mathrm{tridiag}([\,-1\ \ 1\ \ 0\,], n)$ and $\otimes$ denotes the Kronecker product. We proceed to express the semi-discretized dynamics in semilinear form. For this, we define
$$A(X) := \epsilon\Delta_d - D(X) + \mathrm{diag}\big(1.5\, e^{-0.1 X(t)}\big), \qquad [D(X)]_{k,l} = D_{k,l}\, X_k,$$
and then
$$A_0 = \epsilon\Delta_d, \quad [A_j]_{k,l} = D_{j,l}\,\delta_{k,l}, \quad f_j(X) = X_j, \quad j = 1, \dots, d,$$
$$[A_j]_{k,l} = \delta_{k,j-d}\,\delta_{l,j-d}, \quad f_j(X) = 1.5\, e^{-0.1 X_{j-d}}, \quad j = d+1, \dots, 2d.$$
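As a concrete illustration of this discretization, the snippet below assembles a 2D Laplacian and the backward-difference matrix D from one-dimensional tridiagonal blocks via Kronecker products. It is only a sketch under simplifying assumptions (equal grid spacing dx in both directions, interior nodes only, no special boundary treatment), not the paper's implementation.

```python
import numpy as np
import scipy.sparse as sp

def tridiag(lower, diag, upper, n):
    """tridiag([a b c], n): n x n matrix with b on the main diagonal,
    a on the lower diagonal and c on the upper diagonal."""
    return sp.diags([lower * np.ones(n - 1), diag * np.ones(n),
                     upper * np.ones(n - 1)], offsets=[-1, 0, 1], format="csr")

def burgers_operators(n1, n2, dx):
    """Finite-difference Laplacian and backward-difference matrix D
    on an n1 x n2 interior grid with spacing dx."""
    L1 = tridiag(1.0, -2.0, 1.0, n1) / dx**2    # 1D second differences
    L2 = tridiag(1.0, -2.0, 1.0, n2) / dx**2
    B1 = tridiag(-1.0, 1.0, 0.0, n1)            # 1D backward differences
    B2 = tridiag(-1.0, 1.0, 0.0, n2)
    I1, I2 = sp.identity(n1), sp.identity(n2)
    lap = sp.kron(I2, L1) + sp.kron(L2, I1)           # discrete Laplacian
    D = -(sp.kron(B2, I1) + sp.kron(I2, B1)) / dx     # gradient-type matrix D
    return lap, D
```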
For our numerical experiments we set Ω = [0, 1] × [0, 1], ǫ = 0.1, R = 0.05, and initial condition x_0(ξ) = sin(ξ_1) sin(ξ_2), on a discretized space grid of n_{ξ1} × n_{ξ2} nodes with n_{ξ1} = n_{ξ2} = 101 (d = 10201). The matrices B and C are defined as in the previous case study (see Figure 2). In the following we discuss the results for H2 and H∞ control, i.e., P = 0 and P ≠ 0, respectively, in (32).
Test 4. Experiments for H 2 −control.
The trajectories of the controlled system with P = 0 in (32) are shown in Figure 10. The uncontrolled solution tends to move towards the top-right corner of Ω. All algorithms drive the solution to zero for large times, but at different rates.
The control inputs are then shown in the top-right of Figure 11. Algorithm 3.1 has the largest control input, leading to the smallest cumulative cost functional as shown in the top-left panel of Figure 11. We also observe that the values of the cost functional are very similar for Algorithm 3.1 and Algorithm 3.3. This is also confirmed for discretizations of increasing dimension, as shown in the bottom-left panel of Figure 11. However, the time needed for Algorithm 3.3 to compute the solution is lower than for Algorithm 3.1, as depicted in the bottom-right panel of Figure 11.
Test 5. Experiments for H ∞ −control.
Finally, we discuss the results for P = 1, γ = 0.1 and disturbance w(t) = 0.1 sin(2t) in (35). The results presented in Figure 12 are in line with our first case study. In this example, Algorithm 3.1 stabilizes the solution faster. Since it is difficult to visualize differences in the controlled state variables, we provide a quantitative analysis through Figure 12. We show the evaluation of the cost functional (32) on the left panel and the control inputs on the right panel. Again, we find that Algorithm 3.1 with γ ≠ 0 has the lowest values for the cost functional.
Conclusions and future work
In this work we have discussed different alternatives for the synthesis of feedback laws for stabilizing nonlinear PDEs. In particular, we have studied the use of state-dependent Riccati equation methods, both for H 2 and H ∞ synthesis. Implementing an SDRE feedback law requires expressing the dynamics in semilinear form and the solution of algebraic Riccati equations at an arbitrarily high rate. This is a stringent limitation in PDE control, where highdimensional dynamics naturally emerge from space discretization. Hence, we study offline and offline-online synthesis alternatives which circumvent or mitigate the computational effort required in the SDRE synthesis. Most notably, we have proposed an offline-online method which replaces the sequential solution of algebraic Riccati equations by Lyapunov equations. Through extensive computational experiments, including two-dimensional nonlinear PDEs, we have assessed that the SDRE offline-online method provides a reasonable approximation of purely online SDRE synthesis, yielding similar performance results at a reduced computational cost. Moreover, the nonlinearities arising in nonlinear reaction and nonlinear advection PDE models can be easily represented within the semilinearization framework required by SDRE methods. In conclusion, SDRE-based feedback laws constitute a reasonable alternative for suboptimal feedback synthesis for large-scale, but well structured, nonlinear dynamical systems. Future research directions include the study of the SDRE methodology for high-dimensional systems arising from interacting particle systems, and the interplay with deep learning techniques to lower the computational burden associated to a real-time implementation.
where A_0 and A_1 are constant matrices and f(x) is a scalar function, B(x) = B and H(x) = H, then the Riccati operator Π(x) solving (13) is approximated by an offline component Π_0 plus an online correction W(x): Π_0 can be computed by solving an offline Riccati equation and W(x) by solving an online Lyapunov equation; see Section 4 for a discussion on the computational costs of the two approaches. The offline-online SDRE approach is summarized in Algorithm 3.3 below.
Algorithm 3.3 Offline-online SDRE
1: Offline phase:
2: Compute Π_0 from (ARE∞)
3: Online phase:
4: for k = 0, 1, . . . do
5:
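A compact way to see the offline/online splitting is the loop below. It is a minimal sketch, not the algorithm of this paper: it assumes a semilinear model A(x) = A0 + f(x)A1 with constant B, uses SciPy's Riccati and Lyapunov solvers, and leaves the state-dependent right-hand side of the online correction equation as a user-supplied placeholder `lyap_rhs`, since that equation is specified elsewhere in the paper (cf. (13)) rather than reproduced here.

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

def offline_online_sdre(A0, A1, f, B, Q, R, x0, dt, n_steps, lyap_rhs):
    """Sketch of an offline-online SDRE feedback loop.

    Offline: solve one algebraic Riccati equation for Pi0.
    Online:  at every step, solve a Lyapunov equation for a state-dependent
             correction W(x) and apply u = -R^{-1} B^T (Pi0 + W(x)) x.
    `lyap_rhs(Pi0, x)` is a placeholder for the problem-specific right-hand
    side of the correction equation; SciPy's solver uses the convention
    A X + X A^H = Q.
    """
    Rinv = np.linalg.inv(R)
    # offline phase: Riccati equation for the constant part A0
    Pi0 = solve_continuous_are(A0, B, Q, R)
    Acl = A0 - B @ Rinv @ B.T @ Pi0          # closed-loop matrix reused online

    x, traj = x0.copy(), [x0.copy()]
    for _ in range(n_steps):
        # online phase: Lyapunov correction evaluated at the current state
        W = solve_continuous_lyapunov(Acl.T, lyap_rhs(Pi0, x))
        u = -Rinv @ B.T @ (Pi0 + W) @ x      # state-dependent feedback
        # explicit Euler step of the semilinear dynamics (illustrative only)
        x = x + dt * ((A0 + f(x) * A1) @ x + B @ u)
        traj.append(x.copy())
    return np.array(traj)
```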
we set R = 1, ω_o(x) = [−2.5, −1.5] ∪ [1.5, 2.5] and z = 2. In this test we take d = 402 nodes in the finite difference discretization. Time-stepping is performed with an implicit Euler method with a time step of 0.1. The small-size Riccati and Lyapunov equations are solved using the Matlab functions icare and lyap, respectively. Controlled dynamics with different feedback controls are shown in Figure 1.
Fig. 1: Section 3.4. Damped sine-Gordon equation, controlled trajectories. Top: accumulated running cost with different H2-control (left) and corresponding control inputs (right). Middle: trajectories, uncontrolled (left) and linearized (right). Bottom: trajectories, H2-controlled solution with SDRE-MPC Algorithm 3.1 (left), and H2-controlled solution with SDRE offline-online Algorithm 3.3 (right).
Ω = [0, 1] × [0, 1], ǫ = 0.2, ν = 0.1, µ = 10, R = 0.1, and the initial condition x(ξ, 0) = sin(ξ_1) sin(ξ_2), on a discretized space grid of n_{ξ1} × n_{ξ2} nodes with n_{ξ1} = n_{ξ2} = 101 (d = 10201). For the matrices B, H and C we considered a collection of patches depicted in Figure 2.
Fig. 3: Test 1: state of the system (33) at time t = 3. Top: uncontrolled solution (left), H2-solution with LQR control (right). Bottom: H2-controlled solution with Algorithm 3.1 (left), H2-controlled solution with Algorithm 3.3 (right).
Fig. 4: Test 1. Top: cumulative cost functional (left) with H2-control and corresponding control inputs (right). Bottom: CPU time for Algorithm 3.1 and Algorithm 3.3 (left), and convergence of the cost functional with respect to the dimension of the problem d (x-axis) (right).
Fig. 5: Test 2: state of the system at time t = 3 with H2- and H∞-controls and disturbance w(t) = 0.1 sin(2t). Top: H2-control using Algorithm 3.1 (left) and Algorithm 3.3 (right). Bottom: H∞-control using Algorithm 3.1 (left) and Algorithm 3.3 (right).
Fig. 6: Test 2: solutions at time t = 3 with H2- and H∞-control and disturbance w(t) = 0.1 sin(40t). Top: H2-control using Algorithm 3.1 (left) and Algorithm 3.3 (right). Bottom: H∞-control using Algorithm 3.1 (left) and Algorithm 3.3 (right).
Fig. 7: Test 2: evaluation of the cumulative cost functional H2 (left) and control inputs (right) for w(t) = 0.1 sin(2t) (top) and w(t) = 0.1 sin(40t) (bottom).
Fig. 8: Test 3: controlled trajectories (33) at time t = 3 with zero Dirichlet boundary conditions. Top: uncontrolled solution (left), LQR control (right). Bottom: H2-controlled solution with Algorithm 3.1 with Extended Krylov subspaces (left), H2-controlled solution with Algorithm 3.3 (right).
Fig. 9: Test 3: cumulative cost functional (top-left) with H2-control and corresponding control input (top-right). Evaluation of the cost functional with respect to the dimension of the problem d (x-axis) (bottom-right) and CPU time (bottom-left) for Algorithm 3.1 using Rational Krylov subspaces and Extended Krylov subspaces and for Algorithm 3.3.
Fig. 10: Test 4: controlled dynamics (35) at time t = 3. Top: uncontrolled solution (left), H2-solution with LQR control (right). Bottom: H2-controlled solution with Algorithm 3.1 (left), H2-controlled solution with Algorithm 3.3 (right). The controllers stabilize the dynamics to X ≡ 0 at a different rate.
Fig. 11: Test 4. Top: evaluation of the cumulative cost functional with H2-control (left) and control inputs (right). Bottom: cost functional for dynamics of increasing dimension for different algorithms (left) and CPU time for both Algorithm 3.1 and Algorithm 3.3 (right).
Fig. 12: Test 5: evaluation of the cost functional H2 (left) and control input (right) for w(t) = 0.1 sin(2t).
The notation tridiag([ a b c ], d) stands for a tridiagonal d×d matrix having the constant values b ∈ R on the main diagonal, a ∈ R on the lower diagonal and c ∈ R on the upper diagonal.
See www.dm.unibo.it/~simoncin/software for some related software.
| []
|
[
"Active Velocity Estimation using Light Curtains via Self-Supervised Multi-Armed Bandits",
"Active Velocity Estimation using Light Curtains via Self-Supervised Multi-Armed Bandits"
]
| [
"Siddharth Ancha \nMassachusetts Institute of Technology\n02139CambridgeMAUSA\n",
"Gaurav Pathak \n95110Adobe, San JoseCAUSA\n",
"Ji Zhang \nCarnegie Mellon University\n15213PittsburghPAUSA\n",
"Srinivasa Narasimhan \nCarnegie Mellon University\n15213PittsburghPAUSA\n",
"David Held \nCarnegie Mellon University\n15213PittsburghPAUSA\n"
]
| [
"Massachusetts Institute of Technology\n02139CambridgeMAUSA",
"95110Adobe, San JoseCAUSA",
"Carnegie Mellon University\n15213PittsburghPAUSA",
"Carnegie Mellon University\n15213PittsburghPAUSA",
"Carnegie Mellon University\n15213PittsburghPAUSA"
]
| []
| To navigate in an environment safely and autonomously, robots must accurately estimate where obstacles are and how they move. Instead of using expensive traditional 3D sensors, we explore the use of a much cheaper, faster, and higher resolution alternative: programmable light curtains. Light curtains are a controllable depth sensor that sense only along a surface that the user selects. We adapt a probabilistic method based on particle filters and occupancy grids to explicitly estimate the position and velocity of 3D points in the scene using partial measurements made by light curtains. The central challenge is to decide where to place the light curtain to accurately perform this task. We propose multiple curtain placement strategies guided by maximizing information gain and verifying predicted object locations. Then, we combine these strategies using an online learning framework. We propose a novel self-supervised reward function that evaluates the accuracy of current velocity estimates using future light curtain placements. We use a multi-armed bandit framework to intelligently switch between placement policies in real time, outperforming fixed policies. We develop a fullstack navigation system that uses position and velocity estimates from light curtains for downstream tasks such as localization, mapping, path-planning, and obstacle avoidance. This work paves the way for controllable light curtains to accurately, efficiently, and purposefully perceive and navigate complex and dynamic environments. 1 | 10.48550/arxiv.2302.12597 | [
"https://export.arxiv.org/pdf/2302.12597v3.pdf"
]
| 257,205,743 | 2302.12597 | 05f7e773820de901f6ddd898be64db5f1e21b4db |
Active Velocity Estimation using Light Curtains via Self-Supervised Multi-Armed Bandits
Siddharth Ancha
Massachusetts Institute of Technology
02139CambridgeMAUSA
Gaurav Pathak
95110Adobe, San JoseCAUSA
Ji Zhang
Carnegie Mellon University
15213PittsburghPAUSA
Srinivasa Narasimhan
Carnegie Mellon University
15213PittsburghPAUSA
David Held
Carnegie Mellon University
15213PittsburghPAUSA
Active Velocity Estimation using Light Curtains via Self-Supervised Multi-Armed Bandits
Website: https://siddancha.github.io/projects/active-velocity-estimation 1
To navigate in an environment safely and autonomously, robots must accurately estimate where obstacles are and how they move. Instead of using expensive traditional 3D sensors, we explore the use of a much cheaper, faster, and higher resolution alternative: programmable light curtains. Light curtains are a controllable depth sensor that sense only along a surface that the user selects. We adapt a probabilistic method based on particle filters and occupancy grids to explicitly estimate the position and velocity of 3D points in the scene using partial measurements made by light curtains. The central challenge is to decide where to place the light curtain to accurately perform this task. We propose multiple curtain placement strategies guided by maximizing information gain and verifying predicted object locations. Then, we combine these strategies using an online learning framework. We propose a novel self-supervised reward function that evaluates the accuracy of current velocity estimates using future light curtain placements. We use a multi-armed bandit framework to intelligently switch between placement policies in real time, outperforming fixed policies. We develop a fullstack navigation system that uses position and velocity estimates from light curtains for downstream tasks such as localization, mapping, path-planning, and obstacle avoidance. This work paves the way for controllable light curtains to accurately, efficiently, and purposefully perceive and navigate complex and dynamic environments. 1
I. INTRODUCTION
Robots in the real world must navigate in the presence of moving objects like humans and vehicles whose motion is a priori unknown. This is a common challenge in many applications like autonomous driving, indoor and outdoor mobile robotics, and robot delivery. How should a robot sense and perceive such dynamic environments? How can it accurately estimate the motion of obstacles?
3D sensors such as LiDARs and depth cameras are conventionally used for robot navigation. However, LiDARs are typically expensive and low-resolution. Although cameras are cheaper and higher-resolution, depth estimates can be noisy and inaccurate. An alternative paradigm is active perception [7,8] where a controllable sensor is actively guided to focus on only the relevant parts of the environment. Programmable light curtains [69,9,4,58,5] are a recently invented, 1 Please see our project website for (1) the appendix, (2) an overview video, lightweight 3D sensor that detects points intersecting any user-specified 2D surface ("curtain"). Light curtains combine the best of passive cameras (low cost, high resolution, and high speed) and LiDARs (accurate depth estimation along the curtain, robustness to bright lighting and scattered media like fog/smoke [69] Previously, light curtains have been used for object detection [4], depth estimation [58], and estimating safety regions [5]. However, light curtains have not been used to explicitly estimate velocities of dynamic objects. Velocity estimation is crucial for many tasks in robotics such as trajectory forecasting, obstacle avoidance, motion planning, and dynamic object removal for SLAM [65].
The focus of this paper is to develop light curtain placement strategies that improve velocity estimates. We use dynamic occupancy grids [23] to estimate velocities and occupancies from points detected by light curtains without requiring point cloud segmentation or explicit data association across frames. First, we extend light curtain placement strategies from previous works [4,5] to integrate dynamic occupancy grids. Then, we propose a novel method to switch between multiple light curtain placement strategies using a multi-armed bandits approach. The feedback for the multi-armed bandits is obtained using a novel self-supervised reward function that evaluates the current estimates of occupancy and velocity using future light curtain placements, without requiring ground truth or additional sensors. We obtain this supervision by reusing intermediate quantities computed during recursive Bayes estimation of dynamic occupancy grids; thus the self-supervised rewards do not require extra light curtain placements or additional computations. We evaluate our approach on challenging simulated and real-world environments with complex and fast object motion. We integrate our method into a full-stack navigation pipeline and show that (a) Light curtain working principle (b) Bayes filter with self-supervised reward Figure 1: (a) Illustration of programmable light curtains adapted from [4]. An illumination plane (from the projector) and an imaging plane (of the camera) intersect to produce a light curtain. A controllable galvanometer mirror rotates synchronously with the camera's rolling shutter and images the points of intersection. See Sec. II-A for more details. (b) A Dynamic Bayes network [65] for controllable sensing. At timestep t, x t corresponds to the state of the world, u t corresponds to the action i.e. the location of light curtain placement, z t corresponds to light curtain measurements, and bel(x t ) and bel(x t ) are the inferred distributions over states before and after incorporating measurements z t , respectively. This is a slightly modified graphical model for controllable sensing where actions u t don't affect state x t but directly affect observations z t . the multi-armed bandits approach is able to outperform each individual strategy.
Our contributions include: 1) We re-derive the dynamic occupancy grid method [23] using a more rigorous mathematical analysis grounded in Bayesian filtering [65] (Sec. IV, App. B). 2) We design curtain placement strategies for dynamic occupancy grids to verify predicted object locations (Sec. V-A) and maximize information gain in hybrid discrete-continuous spaces (Sec. V-B, App. D). 3) We propose a novel self-supervised reward function that evaluates current velocity estimates using future light curtain placements without requiring additional supervision. Using the self-supervised reward, we learn to combine multiple curtain placement policies using a multi-armed bandit framework (Sec. VI). 4) We evaluate this approach in simulated and real-world environments with fast-moving obstacles and demonstrate that it outperforms individual placement strategies (Sec. VIII). 5) We develop an efficient and parallelized pipeline where light curtain sensing, grid estimation and computing curtain placement are tightly coupled and continuously interact with each other at ∼45 Hz (Sec. VII, Fig. 4, App. F). 6) We integrate our method into a full-stack navigation pipeline that uses position and velocity estimates to perform localization, mapping and obstacle avoidance in real-world dynamic environments (App. K).
II. BACKGROUND A. Light curtain working principle
Programmable light curtains [69,9,4,58,5] are a recently developed controllable depth sensor that image any user-specified vertically-ruled 2D surface in the environment. The device contains two main components: a rolling-shutter camera and a rotating light sheet laser (see illustration in Fig. 1a). The camera activates one pixel column at a time, from left to right, via the rolling shutter. We refer to the top-down projection of the imaging plane corresponding to each pixel column as a "camera ray" (shown in Fig. 1a). The shape of the light curtain is entirely specified by a 2D control point selected on each camera ray (shown as gray and green circles). The set of control points forms the input to the light curtain device. The laser is vertically aligned and synchronized with the camera's rolling shutter. A controllable galvo-mirror rotates the light sheet to point it at the control point corresponding to the currently active pixel column. Triangulated 3D scene points that both (1) intersect the laser light sheet and (2) are visible in the currently active pixel column, get detected by the device. If there exists an object in the environment at the surface of this intersection, then the point will have a large intensity in the camera reading; otherwise it will not; thus the device outputs the subset of control points (shown as green circles in Fig. 1a) that correspond to 3D object surfaces. Importantly, light curtains form a partial observation on the scene, since only control points can be detected. Please see [9,69] for further details on the mechanism behind a programmable light curtain.
B. Bayes filtering
This section provides a brief background on Bayes filtering and introduces notation used throughout the paper. A dynamic Bayes filter [65], also known as a hidden Markov model or a state space model is represented by a probabilistic graphical model shown in Fig. 1b. The state of the world at timestep t is denoted by x t (in our case, x t is the occupancy and velocity of a set of cells arranged in a 2D grid from the topdown view; more details in App. B-A). The control actions are denoted by u t (the locations where the light curtain is placed). Observations obtained from the sensor are denoted by z t . Fig. 1b is a slight modification of the standard model for the task of active perception, where actions don't affect the state of the world x t but directly affect the observations z t .
The goal is to infer at each timestep t the posterior distribution (a.k.a "belief") bel(x t ) = P (x t | u 1:t , z 1:t ) over the current state x t from the sequence of sensor observations z 1:t and the known sequence of actions u 1:t . This is computed using recursive Bayesian estimation [65] that alternates between two steps.
$$\overline{bel}(x_t) = \int bel(x_{t-1})\, P(x_t \mid x_{t-1})\, dx_{t-1} \tag{1}$$
$$bel(x_t) \propto P(z_t \mid x_t, u_t)\, \overline{bel}(x_t) \tag{2}$$
First, the motion update step computes an intermediate prior belief bel(x t ) by applying a motion model P (x t | x t−1 ) that encodes the dynamics of the environment. Then, the measurement update step computes the updated posterior belief bel(x t ) by incorporating sensor observations from the current timestep. To make this paper self-contained, we provide a detailed mathematical derivation of these steps in App. A.
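The two update equations can be written as a short recursive loop. The sketch below is a generic discretized illustration (finite state set, tabular motion and measurement models); the dynamic occupancy grids of Sec. IV implement these same two steps with a particle-based representation.

```python
import numpy as np

def bayes_filter_step(bel, transition, likelihood, z, u):
    """One recursive Bayes update over a discrete state space.

    bel        : (S,) posterior over states at time t-1
    transition : (S, S) matrix, transition[i, j] = P(x_t = j | x_{t-1} = i)
    likelihood : function (z, u) -> (S,) array of P(z | x, u) for every state
    """
    bel_bar = bel @ transition            # motion update (Eqn. 1)
    bel_new = likelihood(z, u) * bel_bar  # measurement update (Eqn. 2)
    return bel_new / bel_new.sum()        # normalize the posterior
```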
III. RELATED WORK
A. Active perception and light curtains
Active perception involves actively controlling a sensor such as camera parameters [7], moving a camera to look around occlusions [19], and next-best view planning [21] for object instance classification [76,28,26,60] and 3D reconstruction [44,47,67,24]. Programmable light curtains [69,9,16] are a controllable depth sensor that have been used for active perception tasks such as active object detection [4], active depth estimation [58], and actively estimating safety regions [5]. However, most prior light curtain work has only focused on estimating object positions. They either place curtains with fixed scan patterns [69,16] or adaptive curtains for static scenes without taking object motion into account [9,4,58]. Ancha et al. [5] track safety regions by learning to forecast future locations; this could be interpreted as implicit velocity estimation. However, we are the first to explicitly estimate obstacle velocities which can be used for other downstream tasks like trajectory forecasting, obstacle avoidance and motion planning. Furthermore, we combine multiple adaptive strategies like random curtains [9, 5], maximizing information gain [4], and verifying predicted object locations [5] using a novel multiarmed bandit framework for estimating both object positions and velocities.
B. Velocity estimation from point clouds
Prior works on estimating scene flow [68,70,49,32,63] compute correspondences between point clouds acquired at consecutive timesteps; velocities can then be extracted from these correspondences. Furthermore, self-supervised approaches [54,75,45,48,10,35] can learn to estimate scene flow without requiring ground truth annotations. However, these methods are designed to compute flow between complete scans of the environment, such as those obtained from a LiDAR sensor, where correspondences exist for most points. In contrast, a single light curtain measurement is a partial point cloud -a subset of visible points that intersect the curtain. Depending on where they are placed, consecutive light curtains may not contain any correspondences at all. Therefore, scene flow methods are not suited for point clouds acquired by light curtains.
Another approach is to first segment the point cloud into a collection of separate objects [38,27,46,64,78,41], track each object, and finally register each object's segmented point cloud across frames using either optimization-based [11,59,36,79,77,51], probabilistic [37, 39, 34, 2], or learning-based [71,72,6, 20] methods. However, errors in point cloud segmentation can lead to incorrect velocity estimates. Instead, our method uses particle-based occupancy grids and avoids the need to perform either segmentation or explicit data association across frames.
C. Self-tuning Bayes filters
Prior works have used innovation i.e. the difference between predicted and observed measurements of a Kalman filter, to "self-tune" model parameters without needing ground truth annotations. Earlier works use an autoregressive moving average innovation model (ARMA) [33,55,30,25,80]. More recent works use the normalized innovation squared (NIS) metric to optimize Kalman filter noise models using downhill simplex methods [57], Bayesian optimization [17,18], and evolutionary algorithms [56,12]. Our self-supervised metric is inspired by Kalman filter innovation, but is used to select a sensor control strategy at each timestep using multi-armed bandits rather than tuning noise models.
IV. DYNAMIC OCCUPANCY GRIDS
We now describe how we apply dynamic occupancy grids [23] for velocity estimation with light curtains. A dynamic occupancy grid is a Bayes filter that combines two conventional representations in robotics: occupancy grids and particle filters. Occupancy grids [29,65] are a standard tool for mapping the location of static objects in the environment from the 2D top-down view. Each cell in the grid contains an occupancy probability p ∈ [0, 1], denoting the probability of the cell being occupied by an object. Dynamic occupancy grids [23] are an extension of classical occupancy grids (see Fig. 2a). Each cell in the grid contains both the occupancy probability p as well as a probability distribution over 2D velocities. The velocity distribution is represented by a set of weighted particles, where each particles stores a single 2D velocity. The set of weighted particles approximates the true velocity distribution.
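A direct way to store such a grid is one occupancy probability plus a fixed number of weighted velocity particles per cell. The array layout and sizes below are illustrative choices, not the paper's implementation.

```python
import numpy as np

class DynamicOccupancyGrid:
    """Each cell stores an occupancy probability and M weighted 2D
    velocity particles approximating that cell's velocity distribution."""

    def __init__(self, height, width, num_particles=100):
        self.occ = np.full((height, width), 0.5)                 # occupancy probability
        self.vel = np.zeros((height, width, num_particles, 2))   # particle velocities (vx, vy)
        self.wgt = np.full((height, width, num_particles),
                           1.0 / num_particles)                  # particle weights (sum to 1 per cell)

    def mean_velocity(self, i, j):
        """Weighted mean velocity of cell (i, j)."""
        return self.wgt[i, j] @ self.vel[i, j]
```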
While Danescu et al. [23] showed that dynamic occupancy grids can accurately estimate occupancies and velocities, the precise role of particles and what they represent remained unclear. Particles were described as representing a cell's velocity distribution; however, the movement of particles from one cell to another is somewhat inconsistent with this interpretation. Elsewhere, particles are described as being "physical building blocks of the world", i.e. parts of objects that can move; however, under this interpretation, it is unclear what distribution a set of particles is supposed to represent, since each particle represents a different part of an object. Furthermore, the particles were not only used to represent velocities, but their count inside a cell was proportional to the occupancy probability. In this work, we re-derive dynamic occupancy grids using a more rigorous mathematical analysis found in App. B, in which we explicitly state the assumptions made and provide a precise, mathematically rigorous interpretation of particles.
Figure 2: (a) In addition to an occupancy probability, each cell of the dynamic occupancy grid also contains a set of weighted particles, where each particle stores a single 2D velocity; the set of particles together represents a probability distribution of that cell's velocity. (b) Ray-casting to light curtain detections to extract freespace information. Red cells contain detected points and are marked occupied. Blue cells are freespace; they either lie undetected on the light curtain or lie on rays cast from the sensor to the red cells. Gray denotes unknown occupancy. Purple cells are outside the light curtain's field of view. (c) Ray-marching to compute the depth probabilities of cells along a camera ray. The depth probability of the red cell is the product of the probability that the red cell is occupied and the probabilities of each blue cell being unoccupied.
Motion and measurement updates: In the motion update step, particles are resampled from each cell in the grid and moved to another cell based on their velocities and the motion model. We assume access to a depth sensor (e.g. light curtains, LiDAR, depth cameras) that measures depth but does not directly measure velocity. In the measurement update step, the sensor provides (noisy) observations of occupancy for a subset of un-occluded cells in the grid. These observations are used to update the occupancy probabilities; velocities are inferred indirectly in the motion update step that are consistent with observed occupancies. This method is able to estimate velocities from depth measurements alone without requiring explicit data association across frames.
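Using the DynamicOccupancyGrid class sketched earlier, the two steps can be illustrated as follows: particles are perturbed by motion noise, occupancy mass is moved to the cells the particles point to, and observed cells then update their occupancy with Bayes' rule under a simple inverse sensor model. The noise scale, re-binning scheme, and sensor-model probabilities below are placeholder simplifications, not the paper's exact update.

```python
import numpy as np

def motion_update(grid, dt, cell_size, vel_noise=0.05, rng=None):
    """Motion update (Eqn. 1): move each cell's occupancy mass to the cells
    its velocity particles point to, perturbing velocities with noise.
    Weights are left uniform and normalization is simplified."""
    rng = rng or np.random.default_rng()
    H, W, M, _ = grid.vel.shape
    new = DynamicOccupancyGrid(H, W, M)
    new.occ[:] = 0.0
    counts = np.zeros((H, W), dtype=int)
    for i in range(H):
        for j in range(W):
            for m in range(M):
                v = grid.vel[i, j, m] + rng.normal(0.0, vel_noise, size=2)
                di, dj = np.round(v * dt / cell_size).astype(int)
                ti = np.clip(i + di, 0, H - 1)
                tj = np.clip(j + dj, 0, W - 1)
                # each particle carries a share of the source cell's occupancy
                new.occ[ti, tj] += grid.occ[i, j] * grid.wgt[i, j, m]
                k = counts[ti, tj] % M
                new.vel[ti, tj, k] = v       # overwrite slot k with arriving particle
                counts[ti, tj] += 1
    new.occ = np.clip(new.occ, 0.0, 1.0)
    return new

def measurement_update(grid, observed_occ, p_hit=0.9, p_false=0.1):
    """Measurement update (Eqn. 2) for cells observed by the curtain.
    observed_occ: dict {(i, j): 0 or 1} from detections and ray-casting."""
    for (i, j), z in observed_occ.items():
        prior = grid.occ[i, j]
        like_occ = p_hit if z == 1 else 1.0 - p_hit        # P(z | occupied)
        like_free = p_false if z == 1 else 1.0 - p_false   # P(z | free)
        grid.occ[i, j] = (like_occ * prior /
                          (like_occ * prior + like_free * (1.0 - prior)))
```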
Raycasting to extract freespace information: As explained in Section II-A, a light curtain only returns whether there is a 3D object surface at the location of the control points where the camera rays and the laser sheets intersect; no depth information is returned for other locations in the environment. Fig. 2b shows an observation grid from a light curtain placement where cells directly measured to be occupied are shown in red and free cells are shown in blue. From this figure, we see that all voxels in between the light curtain source and a detected point must be unoccupied. Since 3D points were detected in the occupied cells, light must have traveled along these rays without obstruction; we mark cells along these rays to be free (shown in blue in Fig. 2b). To take advantage of this information, we cast rays using an efficient voxel traversal algorithm [3,42] from the sensor to occupied cells (shown in red). More details can be found in App. B-C. Thus by exploiting visibility constraints, we are able to extract more information from the light curtain.
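Free cells along each camera ray can be found by walking the grid from the sensor to every detected cell and marking the intermediate cells as observed-free. The sketch below uses a simple uniform-step traversal; a proper Amanatides-Woo voxel traversal does this exactly and more efficiently. Grid geometry is simplified (unit cells, sensor given as a grid coordinate).

```python
import numpy as np

def cells_on_ray(start, end):
    """Grid cells crossed by the segment from `start` to `end` (grid
    coordinates), excluding the endpoint's own cell."""
    start, end = np.asarray(start, float), np.asarray(end, float)
    n_steps = int(np.ceil(np.abs(end - start).max())) * 2 + 1
    ts = np.linspace(0.0, 1.0, n_steps, endpoint=False)
    pts = start + ts[:, None] * (end - start)
    cells = {tuple(c) for c in np.floor(pts).astype(int)}
    cells.discard(tuple(np.floor(end).astype(int)))
    return cells

def mark_freespace(obs, sensor_cell, detected_cells):
    """Ray-cast from the sensor to every detected (occupied) cell and mark
    intermediate cells as observed-free in the observation dict `obs`."""
    for hit in detected_cells:
        obs[hit] = 1                          # detected point -> occupied
        for c in cells_on_ray(sensor_cell, hit):
            if obs.get(c, None) != 1:         # don't overwrite detections
                obs[c] = 0                    # observed free
    return obs
```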
V. CURTAIN PLACEMENT STRATEGIES
Using dynamic occupancy grids and Bayesian filtering, we have a method to infer occupancies and velocities explicitly from light curtain measurements (details in App. B). The main challenge that we address in this paper is to compute the best curtain placement from the dynamic occupancy grid, i.e. from the current estimates of occupancy and velocity. The measurements from the placed curtain are then fed back to update the grid, closing the loop.
In order to compute the best curtain placement, we must first predict the occupancy at the time when the next light curtain will be placed. To do so, we forecast the current dynamic occupancy grid, using the currently estimated velocities, to the next timestep via the motion update step (Eqn. 1, Eqn. 4 in App. B-B). In this section, we propose various curtain placement strategies computed from the forecasted grid. In Sec. VI, we will propose a novel method to combine them and outperform each individual strategy.
A. Maximizing depth probability
Strategy 1: Maximizing depth probability: Since occupancy grids are probabilistic, this strategy places curtains at locations of highest "depth probability", which is the probability that a control point at a given cell would return a depth reading. The depth probability of a cell is the probability that the cell is occupied and all occluding cells (lying on the ray starting from the sensor and ending at the target cell) are free (see Fig. 2c). We borrow the idea of "ray marching" from the literature on volumetric rendering [66,53] to compute depth probabilities efficiently; see App. C for more details on the algorithm and computational complexity. For each camera ray, we place the curtain on the cell with the maximum depth probability. This strategy is motivated by the fact that a light curtain only senses visible object surfaces when it intersects them; it can therefore be used to verify whether objects are indeed located at the forecasted object locations.
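Per camera ray, the depth probability of each cell is its occupancy probability times the probability that all cells in front of it are free; this running product can be accumulated in a single pass (the "ray marching" mentioned above), after which the curtain is placed on the argmax cell of every ray. The snippet below is an illustrative implementation of that computation only.

```python
import numpy as np

def depth_probabilities(occ_along_ray):
    """occ_along_ray: (N,) occupancy probabilities of the cells on one camera
    ray, ordered from the sensor outwards. Returns (N,) depth probabilities:
    P(depth at cell i) = occ[i] * prod_{j < i} (1 - occ[j])."""
    occ = np.asarray(occ_along_ray, dtype=float)
    visible = np.concatenate(([1.0], np.cumprod(1.0 - occ)[:-1]))
    return occ * visible

def place_curtain_max_depth(occ_rays):
    """For each camera ray (list of per-ray occupancy arrays), return the index
    of the cell with maximum depth probability (Strategy 1)."""
    return [int(np.argmax(depth_probabilities(ray))) for ray in occ_rays]
```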
B. Maximizing information gain
Another placement strategy that was found useful in previous work on 3D object detection [4] was to place curtains at the regions of highest "uncertainty". This is based on the principle of maximizing information gain for active sensing.
Recall the dynamic Bayes network in Fig. 1b. Given a forecasted prior belief $P(x_t) = \overline{bel}(x_t)$, the information gain framework prescribes taking the action $u_t$ that maximizes the information gain $IG(x_t, z_t \mid u_t)$ between the state $x_t$ and the observations $z_t$ when using $u_t$. Information gain, which is a well-studied quantity in information theory, is the expected reduction in entropy (i.e. uncertainty) before and after sensing: $H(P(x_t)) - \mathbb{E}_{z_t \mid u_t}\, H(P(x_t \mid z_t, u_t))$.
While information gain for conventional occupancy grids is straightforward to derive [4], it is not so for the case of dynamic occupancy grids. This is because the underlying state space of dynamic occupancy grids is a 'mixture' of discrete and continuous spaces: a cell can either be unoccupied or occupied with a continuous velocity. Unfortunately, the entropy of such mixed discrete-continuous spaces is not well-defined [31]. We overcome this problem using a more general definition of information gain based on the "Radon-Nikodym" derivative [31] that does not require explicitly calculating the entropy. In App. D, we show that the formula for information gain for dynamic occupancy grids (under certain assumptions) turns out to equal the occupancy uncertainty, described next.
Strategy 2: Occupancy Uncertainty: Let $\omega^i_t$ be the occupancy probability estimated for the $i$-th cell at the $t$-th timestep. Then, the information gain is the sum of binary cross entropies
$$H_{occ}(\omega^i_t) = -\omega^i_t \log_2 \omega^i_t - (1 - \omega^i_t) \log_2 (1 - \omega^i_t)$$
of the cells that the curtain lies on. Intuitively, since measurements from a depth sensor only provide information about occupancy and not velocity, the overall information gain is equal to the total occupancy uncertainty. A similar information gain computation was used in Ancha et al. [4] for static occupancy grids; in App. D we prove that the formula for information gain is the same as total occupancy uncertainty even for the more complex case of mixed discrete-continuous distributions. Strategy 2 places a curtain that maximizes the occupancy uncertainty.
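Strategy 2 scores each candidate cell by its binary occupancy entropy and, for illustration, the per-ray argmax below picks the highest-scoring cell on each camera ray; this sketch ignores the physical feasibility constraints on curtain shapes that the full method must respect.

```python
import numpy as np

def occupancy_entropy(occ, eps=1e-12):
    """Binary entropy H_occ(w) = -w log2 w - (1 - w) log2 (1 - w)."""
    occ = np.clip(np.asarray(occ, dtype=float), eps, 1.0 - eps)
    return -occ * np.log2(occ) - (1.0 - occ) * np.log2(1.0 - occ)

def place_curtain_max_occ_uncertainty(occ_rays):
    """Per camera ray, choose the cell with maximum occupancy entropy
    (Strategy 2); the total entropy of the chosen cells is the information
    gain of the resulting curtain."""
    return [int(np.argmax(occupancy_entropy(ray))) for ray in occ_rays]
```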
Strategy 3: Velocity Uncertainty: Each cell also contains a velocity distribution
$$\mathcal{V}^i_t = \{(v^{i,m}_t, p^{i,m}_t) \mid 1 \le m \le M\}$$
represented by a set of $M$ weighted particles with velocities $v^{i,m}_t$ and weights $p^{i,m}_t$ that sum to 1. In this strategy, we maximize the sum of velocity entropies. The discrete set of particles is used to approximate what is inherently a continuous velocity distribution. Therefore, we must compute the differential entropy of the continuous velocity distribution by first estimating its probability density function. We fit a multivariate Gaussian distribution to the set of weighted particles, with mean $\mu^i_t = \sum_m p^{i,m}_t v^{i,m}_t$ and covariance $\Sigma^i_t = \sum_m p^{i,m}_t (v^{i,m}_t - \mu^i_t)(v^{i,m}_t - \mu^i_t)^\top$, and take the differential entropy of the fitted Gaussian, $H_{vel}(\mathcal{V}^i_t) = \tfrac{1}{2}\log_2\big((2\pi e)^2 \det \Sigma^i_t\big)$, as the cell's velocity uncertainty. Strategy 3 places a curtain that maximizes the sum of these velocity entropies.
Strategy 4: Occupancy-weighted Velocity Uncertainty: In this strategy, the velocity uncertainty of each cell is weighted by the occupancy probability, i.e. each cell is scored by $\omega^i_t\, H_{vel}(\mathcal{V}^i_t)$. This captures the notion that if the occupancy probability is very low, then the overall uncertainty should also be low even if the velocity uncertainty is high, because the velocity uncertainty is not relevant if the cell is unoccupied. This is a heuristic curtain placement policy that performs well in practice.
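The velocity uncertainty of a cell can be computed by fitting a Gaussian to its weighted particles and taking the differential entropy of the fit; Strategy 4's score additionally multiplies by the occupancy probability. The covariance regularization below is an implementation detail added for numerical safety, not taken from the paper.

```python
import numpy as np

def velocity_entropy(velocities, weights, reg=1e-6):
    """Differential entropy (in bits) of a Gaussian fitted to weighted 2D
    velocity particles: H = 0.5 * log2((2*pi*e)^2 * det(Sigma))."""
    v = np.asarray(velocities, dtype=float)      # (M, 2)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    mu = w @ v
    diff = v - mu
    sigma = (w[:, None] * diff).T @ diff + reg * np.eye(2)
    return 0.5 * np.log2((2 * np.pi * np.e) ** 2 * np.linalg.det(sigma))

def cell_score_strategy4(occ, velocities, weights):
    """Occupancy-weighted velocity uncertainty used by Strategy 4."""
    return occ * velocity_entropy(velocities, weights)
```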
VI. SELF-SUPERVISED MULTI-ARMED BANDITS
Can we combine the various curtain placement strategies developed in Sec. V to improve performance? In this section, we develop a multi-armed bandit method to do so enabled by a novel self-supervised reward function.
A. Multi-armed bandit framework
A multi-armed bandit [61] is an online learning framework consisting of a set of actions or "arms", where each action is associated with an unknown reward function. The agent only observes samples from the reward distribution when it takes that action. The goal is to maximize the cumulative reward over time. The agent maintains a running average of the rewards for each action, called Q-values. We use ϵ-greedy multi-armed bandits [61], which trade off exploration with exploitation. With probability ϵ, the bandit performs exploration and chooses an action at random. With probability 1−ϵ, it performs exploitation and chooses the action that has the highest Q-value. We use multi-armed bandits to intelligently switch between the four curtain placement strategies at test time.
B. Self-supervised rewards
The bandit framework requires a reward function to evaluate actions. Our eventual goal is to accurately estimate occupancy and velocity. How can we design a function that rewards improvements in occupancy and velocity estimates, but can also be computed at test-time using only light curtain placements and measurements? This is challenging because light curtains cannot directly measure velocities; they can only measure the occupancies of a small set of locations where they are placed.
Let us revisit the dynamic Bayes network from Sec. II-B, shown in Fig. 1b. Belief distributions are represented by dynamic occupancy grids. At timestep t − 1, the grid representing the belief bel(x t−1 ) was forecasted by applying the motion model to obtain the prior belief bel(x t ) at timestep t (Eqn. 1, Eqn. 4 in App. B). Then, in the measurement update step, the current light curtain measurement z t obtained by placing a curtain at locations u t is used to update the grid to bel(x t ) (Eqn. 2, Eqn. 5 in App. B). Therefore, we attribute the accuracy of bel(x t ) to action u t .
The forecasted occupancy at time t+1 is computed by using the current velocity estimates to forecast the current occupancy by an interval ∆t using the motion update step (Eqn. 1, Eqn. 4 in App. B). The forecasted occupancy will be accurate if both the current velocities and current occupancies are accurate. Therefore, the accuracy of forecasted occupancy acts as an appropriate reward function that captures both occupancy and velocity accuracies.
How do we evaluate forecasted occupancy computed using bel(x t ), without requiring ground truth, in a self-supervised way? This is possible by reusing intermediate quantities output during recursive Bayesian updates.
First, note that the forecasted occupancy of bel(x t ) is bel(x t+1 ) computed by the next motion update step. Our main insight is that before applying the next measurement update step, bel(x t+1 ) can be evaluated using the partial occupancy observed by the next light curtain measurements z t+1 . We use the F 1 -score between the forecasted occupancy grid and the partially observed occupancy grid as a self-supervised reward for the previous light curtain placement u t (See Fig. 1b). Specifically, we compute the self-supervised reward
$R_t = F_1(\overline{bel}(x_{t+1}), z_{t+1})$, where $\overline{bel}(x_{t+1})$ is computed using Eqn. 1 (more specifically, Eqn. 4 in App. B), and $z_{t+1}$ is the partial occupancy observed at time $t+1$. See App. G for details on the $F_1$-score.
An advantage of our self-supervised reward is that it does not require any extra computation. This is because (1) occupancy forecasting of bel(x t ) is performed anyway as part of the motion update step, and (2) the partial occupancy information from z t+1 is computed anyway in the next measurement update step. By reusing quantities already computed during recursive Bayes filtering, our self-supervised reward does not require any extra forecasting steps nor any extra light curtain placements.
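As an illustration of the reward computation, the sketch below evaluates an $F_1$-score between a forecasted occupancy grid and the cells observed by the next curtain. The array encoding (1 = observed occupied, 0 = observed free, -1 = unobserved) and the 0.5 occupancy threshold are assumptions of this example, not details taken from our implementation.

```python
import numpy as np

def self_supervised_reward(forecasted_occ, observed_occ):
    """F1-score between a forecasted occupancy grid and a partially observed grid.

    forecasted_occ: float array of forecasted occupancy probabilities.
    observed_occ:   int array with 1 = observed occupied, 0 = observed free,
                    -1 = not observed by the light curtain (ignored).
    """
    mask = observed_occ >= 0                           # only evaluate observed cells
    pred = (forecasted_occ[mask] >= 0.5).astype(int)   # threshold forecasted probabilities
    truth = observed_occ[mask]

    tp = np.sum((pred == 1) & (truth == 1))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))

    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```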
At each timestep, we use the ϵ-greedy strategy to select one among the four curtain placement strategies $a \in \{a_1, a_2, a_3, a_4\}$. Then we compute the curtain placement $u_t$ according to strategy $a$. When the accuracy of the forecasted occupancy $R_t$ is obtained in the next timestep, we update the Q-value of $a$ as $Q(a) := Q(a) + \alpha\,[R_t - Q(a)]$. We use the non-stationary reward formulation [61] of multi-armed bandits with smoothing parameter $\alpha$ to account for the possibility that different strategies $\{a_1, a_2, a_3, a_4\}$ may be superior at different times. See App. I for more details.

VII. PARALLELIZED PIPELINE

Fig. 4 shows our pipeline, which has three processes: (1) light curtain sensing, (2) Bayes filtering using dynamic occupancy grids, and (3) computing curtain placement. The processes are run in parallel threads with shared memory, at their own independent speeds.
1. Light curtain imaging: This thread continuously places curtains at locations determined by one of the four strategies described in Sec. V. However, when waiting for the next curtain placement to be computed, it places random curtains [5] (that are generated offline) to sense random locations in the scene. This ensures that the device is always kept busy and runs at approximately 45 Hz.
2. Bayes filtering: This thread inputs light curtain measurements and updates the dynamic occupancy grid. It alternates between motion and measurement update steps (Eqns. 1, 2; Eqns. 4, 5 in App. B). The motion update step requires two grids, each representing the current and next timesteps. Particles are sampled from the current grid, perturbed according to the motion model, and inserted into the next grid. The roles of the two grids are swapped at every successive motion update to avoid copying data. This thread runs at approximately 35 Hz.

Figure 4: Implementation of our method as a parallelized pipeline. Our method contains three components: (1) light curtain sensing, (2) Bayes estimation of dynamic occupancy grids, and (3) computing curtain placement. Each process can be run in parallel in a separate thread at its own independent speed. The three processes are tightly coupled in a closed loop using three grids as shared memory. Our implementation ensures that information flows between the threads safely and continuously.
3. Computing curtain placement: This uses the most recent dynamic occupancy grid to compute the next curtain placement (Sec. V). It first forecasts the grid, using the same motion update step (Eqn. 1, Eqn. 4 in App. B), to the next timestep when the next curtain is expected to be imaged. The forecasted occupancy is used to compute the curtain placement. In App. F, we describe how an extra grid is used to ensure thread-safety and that no thread ever needs to wait on another to finish processing. Finally, the control points of the computed curtain are sent to the light curtain device. The three inter-dependent processes are tightly coupled and continuously interact with each other.
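The skeleton below sketches how the three processes could be arranged as parallel threads sharing data; it is schematic only, and the `device`, `grid` and `policy` objects with methods such as `random_curtain`, `image`, `motion_update`, `measurement_update`, `forecast` and `compute_placement` are assumed interfaces, not our actual API.

```python
import threading
import queue

measurements = queue.Queue()          # curtain measurements shared between threads
latest_placement = {"cells": None}    # most recently computed curtain placement
lock = threading.Lock()

def sensing_thread(device):
    # Continuously image curtains; fall back to random curtains while waiting (~45 Hz).
    while True:
        with lock:
            cells = latest_placement["cells"]
        curtain = cells if cells is not None else device.random_curtain()
        measurements.put(device.image(curtain))

def bayes_filter_thread(grid):
    # Alternate motion and measurement updates on the dynamic occupancy grid (~35 Hz).
    while True:
        z = measurements.get()
        grid.motion_update()         # forecast particles to the current time
        grid.measurement_update(z)   # weight occupancies by the curtain observation

def placement_thread(grid, policy):
    # Forecast the grid to the next imaging time and compute where to place the curtain.
    while True:
        forecast = grid.forecast()
        with lock:
            latest_placement["cells"] = policy.compute_placement(forecast)
```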
VIII. EXPERIMENTS
A. Environments
Simulation environment: We use a simulated environment consisting of various blocks moving in a variety of motions (see Fig. 3b). The environment contains cylinders and cuboids, moving in (1) linear, harmonic (oscillatory) motion along different directions, (2) curved sinusoidal motion, and (3) random Brownian motion. We use an efficient light curtain simulator described in App. J.

Real-world environment: Our real-world environment consists of a mobile robot with a mounted light curtain device (Fig. 3a) navigating in the presence of two pedestrians walking in multiple directions, at different speeds and in complicated trajectories (see Fig. 3c).
B. Evaluation metrics
Since we wish to evaluate the accuracy of both occupancy and velocity estimates, we use the forecasted occupancy [50, 1] as our evaluation metric. As noted in Sec. VI-B, the forecasted occupancy will be accurate if both current velocities and current occupancies are accurate. The future occupancy at time t+∆t is computed by the motion update step (Eqn. 1, Eqn. 4 in App. B) that uses the current velocity to forecast the current occupancy by an interval ∆t. This metric is particularly relevant for obstacle avoidance where estimates of future obstacle locations must be accurately computed to plan safe, collision-free paths.
Ideally, the accuracy of forecasted occupancy can be computed by comparing it against ground truth occupancy at t + ∆t. This is possible in simulated environments where ground truth occupancy is available for all grid cells. In real-world environments, true occupancy can only be measured for a subset of cells by the light curtain; in this case, we use the "self-supervised" version of the metric described in Sec. VI-B. We follow prior works [40, 52, 62] that treat the evaluation of occupancy as a classification problem and compute several metrics: (1) classification accuracy [40, 52], (2) precision, (3) recall, (4) F1-score and (5) the IoU [62] between the predicted and ground truth occupancy masks. For more details on these metrics, please see App. G.

C. Quantitative analysis

Table I shows the performance of various light curtain placement strategies in simulated and real-world environments, evaluated using multiple forecasted occupancy metrics (see Sec. VIII-B, App. G). Since a large proportion of cells are unoccupied, the classification accuracy of all methods is very similar. Furthermore, precision and recall metrics can be deceived by mostly predicting negative and positive labels respectively. However, the F1-score and IoU metrics are discriminative and robust; they are high only when both precision and recall are high. Therefore, we focus on these two metrics (shown in blue). In both sets of experiments, multi-armed bandits that combine the four curtain placement policies using our self-supervised reward outperform all other methods. This shows that intelligently switching between multiple placement strategies is more beneficial than using any one single strategy at all times.

Between the other four strategies, maximizing occupancy uncertainty and maximizing a linear combination of occupancy and velocity uncertainty perform comparably. Maximizing velocity uncertainty tends to perform the worst. Fortunately, multi-armed bandits learn to downweight this under-performing strategy (see Table II, rightmost column in Fig. 6). We also compare against other baselines: using only random curtains (without placing any computed curtains), and with a simulated LiDAR. Unsurprisingly, using random curtains performs the worst. All non-random curtain policies except maximizing velocity uncertainty are able to outperform LiDAR. This is because light curtains are faster (∼45 Hz) and can be placed intelligently to maximize the accuracy of occupancy and velocity estimates. Table II shows an analysis specific to the multi-armed bandit method. Please see the caption for details. We find that the best performing policies in Table I have the highest Q-values and are selected most frequently. The following trend holds: the better the performance of an individual policy when used in isolation (shown in Table I), the higher is its average Q-value and its frequency of being chosen. However, a combination of all policies (MAB) is better than any single one.

Table II: Quantitative analysis of the multi-armed bandit method. The first column shows the percentage of times each action (i.e. curtain placement policy) was chosen. The second column shows the average Q-value of each action computed by the multi-armed bandit. Higher Q-value is better; the action with the highest value is selected during exploitation.
D. Qualitative analysis
Visualizing velocities and occupancies: We use the HSV colorwheel [73] shown in Fig. 5 and 6, to jointly visualize velocities and occupancies. The color 'value' (from HSV) encodes the occupancy probability; dark is low occupancy probability and bright is high occupancy probability. The 'hue' encodes the direction of velocity from the top-down view. 'Saturation' encodes the magnitude of velocity: white is stationary whereas colorful corresponds to high speed. See App. H for more details.
Examples. Fig. 5 shows an example of velocity estimation in the simulated environment and Fig. 6 shows qualitative results on the real-world environment using our multi-armed bandits (MAB) curtain placement method. Please see captions for explanation. We advise the reader to view the video examples on the project website. In Fig. 5, we see that the estimated velocities appear to be consistent with the ground truth, as shown by the corresponding colors that indicate the estimated and ground-truth velocity directions.
Full-stack navigation. We integrate our system into a full-stack navigation pipeline [15] that performs planning, control and obstacle avoidance. We mount the light curtain device on a mobile robot (see Fig. 3a). We use ORB-SLAM3 [13] for localization and mapping, which takes depth from light curtains as input. Using position and velocity estimates, the robot is able to perform dense mapping in an indoor environment and avoid static and dynamic obstacles. Please see App. K for more details.
IX. CONCLUSION
In this work, we develop a method using programmable light curtains, an actively controllable resource-efficient sensor, to estimate the positions and velocities of objects in complex, dynamic scenes. We use a probabilistic framework based on particle filters and occupancy grids to estimate velocities from partial light curtain measurements. We design curtain placement policies that verify predicted object locations and maximize information gain. Importantly, we combine the strengths of these policies using a novel multi-armed bandits framework that switches between the placement strategies to improve performance. This is enabled by our novel self-supervised reward function that evaluates current velocity estimates using future light curtain placements with only minimal computational overhead. We integrate our method into a full-stack navigation system that performs localization, mapping and obstacle avoidance using light curtains. We hope our work paves the way for combining multiple sensor control strategies using self-supervised feedback for perception and navigation in complex and dynamic environments.

APPENDIX A RECURSIVE BAYESIAN ESTIMATION

The goal is to infer at each timestep $t$ a distribution $bel(x_t) = P(x_t \mid u_{1:t}, z_{1:t})$ over the current state $x_t$ from the sequence of sensor observations $z_{1:t}$ and the known sequence of actions $u_{1:t}$. $bel(x_t)$ is computed using recursive Bayesian estimation [65]. Combining the definition of $bel(x_t)$ and the Markov property of the dynamic Bayes network, we can derive the following recursive relationship [65]:
$$
\begin{aligned}
bel(x_t) &= P(x_t \mid u_{1:t}, z_{1:t}) \\
&\propto P(x_t, z_t \mid u_{1:t}, z_{1:t-1}) \\
&= P(z_t \mid x_t, u_{1:t}, z_{1:t-1})\, P(x_t \mid u_{1:t}, z_{1:t-1}) \\
&= P(z_t \mid x_t, u_t)\, P(x_t \mid u_{1:t-1}, z_{1:t-1}) \\
&= P(z_t \mid x_t, u_t) \int_{x_{t-1}} P(x_{t-1}, x_t \mid u_{1:t-1}, z_{1:t-1})\, dx_{t-1} \\
&= P(z_t \mid x_t, u_t) \int_{x_{t-1}} P(x_{t-1} \mid u_{1:t-1}, z_{1:t-1}) \cdot P(x_t \mid x_{t-1}, u_{1:t-1}, z_{1:t-1})\, dx_{t-1} \\
&= P(z_t \mid x_t, u_t) \int_{x_{t-1}} \underbrace{P(x_{t-1} \mid u_{1:t-1}, z_{1:t-1})}_{bel(x_{t-1})} \cdot P(x_t \mid x_{t-1})\, dx_{t-1} \\
&= P(z_t \mid x_t, u_t) \int_{x_{t-1}} bel(x_{t-1})\, P(x_t \mid x_{t-1})\, dx_{t-1} \\
&= P(z_t \mid x_t, u_t)\, \overline{bel}(x_t) \qquad \text{(Measurement update)},
\end{aligned}
$$
$$
\text{where } \overline{bel}(x_t) = \int_{x_{t-1}} bel(x_{t-1})\, P(x_t \mid x_{t-1})\, dx_{t-1} \qquad \text{(Motion update)}
$$
Based on the above recursive equations, recursive Bayesian estimation alternates between the following two steps: 1) Motion update step: This step accounts for the dynamics of the environment. It first computes an intermediate quantity defined above: $\overline{bel}(x_t) = \int_{x_{t-1}} bel(x_{t-1})\, P(x_t \mid x_{t-1})\, dx_{t-1}$. This is the result of "applying" a known or assumed motion model $P(x_t \mid x_{t-1})$ to the previous belief $bel(x_{t-1})$.
In dynamic occupancy grids, this step accounts for the motion of scene points based on their current 2D velocities.
The occupancies and velocities of the next timestep are computed based on the occupancies and velocities in the previous timestep. We use a constant velocity motion model with Gaussian noise in both velocity and position. When the motion model is applied to Fig. 7a, it is updated to Fig. 7b (illustration only). This prediction step usually increases the uncertainty in occupancies and velocities. 2) Measurement update step: This step incorporates measurements from a sensor. It updates the prior belief $\overline{bel}(x_t)$ to $bel(x_t) \propto P(z_t \mid x_t, u_t)\, \overline{bel}(x_t)$ by weighting $\overline{bel}(x_t)$ by the likelihood of the observed measurements $P(z_t \mid x_t, u_t)$. This step usually reduces uncertainty in the state. In dynamic occupancy grids, occupancies are updated using the measurements from the light curtain. Since light curtains (or any depth sensor) only measure the locations of objects, this step does not update velocities. The velocity estimates are automatically refined in subsequent motion update steps. The measurement update reduced the probability of one of the positions of an object in Fig. 7b to Fig. 7c (illustration only).
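The two alternating steps can be illustrated on a toy discretized state space, as in the minimal sketch below; the transition matrix and likelihood values are made up for this example.

```python
import numpy as np

def bayes_filter_step(belief, motion_kernel, likelihood):
    """One cycle of recursive Bayesian estimation on a discretized 1D state.

    belief:        bel(x_{t-1}), a normalized histogram over states.
    motion_kernel: P(x_t | x_{t-1}) as a matrix whose column j is the
                   distribution over next states given current state j.
    likelihood:    P(z_t | x_t) evaluated for the received measurement, per state.
    """
    # Motion update (prediction): bel_bar(x_t) = sum_{x_{t-1}} P(x_t | x_{t-1}) bel(x_{t-1})
    bel_bar = motion_kernel @ belief
    # Measurement update (correction): bel(x_t) proportional to P(z_t | x_t) * bel_bar(x_t)
    bel = likelihood * bel_bar
    return bel / bel.sum()

# Tiny example: 5 discrete states, a random-walk motion model, and a
# measurement that says state 2 is likely.
T = np.array([[0.8, 0.2, 0.0, 0.0, 0.0],
              [0.1, 0.8, 0.1, 0.0, 0.0],
              [0.0, 0.1, 0.8, 0.1, 0.0],
              [0.0, 0.0, 0.1, 0.8, 0.1],
              [0.0, 0.0, 0.0, 0.2, 0.8]]).T  # transpose so column j sums to 1
belief = np.full(5, 0.2)
likelihood = np.array([0.05, 0.1, 0.7, 0.1, 0.05])
belief = bayes_filter_step(belief, T, likelihood)
```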
APPENDIX B DYNAMIC OCCUPANCY GRIDS
The dynamic occupancy grid, like conventional occupancy grids [29, 65], is an instance of Bayes filters. Occupancy grids [29, 65] are a standard tool in robotics for mapping the locations of static objects in the environment. 2D occupancy grids that map objects from the top-down view are commonly used for mapping and SLAM in robot navigation. Each cell in the grid contains an occupancy probability p ∈ [0, 1], denoting the probability of the cell being occupied by an object. Dynamic occupancy grids [23] are an extension of classical occupancy grids (see Figure 7a). Each cell in the grid contains both (1) the occupancy probability p ∈ [0, 1], as well as (2) a probability distribution over 2D velocities. The velocity distribution is represented by a set of weighted particles, where each particle stores a single 2D velocity. The set of weighted particles approximates the true velocity distribution.
A. Mathematical framework
Our method is built upon dynamic occupancy grids introduced by Danescu et al. [23]. The authors describe particles as both representing a velocity distribution (i.e. weighted velocity hypotheses), as well as being "physical building blocks of the world". The former interpretation suggests that particles together represent the probability distribution of the velocity of a single physical scene point, whereas the latter suggests that each particle corresponds to its own scene point. Furthermore, the particles not only represent velocities, but their count represents the probability of occupancy. While the method is shown to be very promising, the precise role of particles and what they represent remains unclear. In this work, we re-derive dynamic occupancy grids using a more rigorous mathematical analysis. We explicitly state the assumptions made and provide a precise interpretation of particles. Our framework can be derived from three reasonably mild assumptions:
Assumption 1: There are no collisions
Each cell can be occupied by at most one physical scene point with a single velocity. Cells are sufficiently small that multiple objects with different velocities cannot exist ("collide") within a cell.
This assumption is required for a single velocity of a cell to be well-defined. Assumption 1 paves the way for a straightforward interpretation of particles: all particles belonging to a cell represent a probability distribution over the single velocity of that cell. Assumption 1 allows us to define the state space of occupancies and velocities.
Representing the state space: Each cell is indexed by $i \in I$ from an index set $I$ of all cells in the grid. At timestep $t$, the state of the $i$-th cell is denoted by $x^i_t = (o^i_t, v^i_t)$. It contains two variables. The first is a binary occupancy variable $o^i_t \in \{0, 1\}$ which denotes whether the cell is occupied or not. The second is a 2D velocity variable $v^i_t \in \mathbb{R}^2$ representing the continuous velocity of the cell (if it is occupied) from the top-down view. The overall state of the dynamic occupancy grid $x_t$ is a concatenation of the states of all cells in the grid, i.e. $x_t = \{x^i_t = (o^i_t, v^i_t) \mid i \in I\}$. Note that the variable $v^i_t$ is "conditional": it is only defined when the cell is occupied, i.e. $o^i_t = 1$.
Assumption 2: Constant velocity motion model
Each scene point moves with a constant velocity, with added Gaussian noise.
Any motion can be approximated by a constant velocity motion model as long as the time interval is sufficiently small. Therefore, this assumption is reasonable in our setting since light curtains operate at very high speeds (45-60 Hz). Note that although we use the constant velocity motion model in this work following [23], the dynamic occupancy grid framework can still be used by swapping it with any other motion model of choice.
Let ϵ i t ∼ N (0, R ϵ ) and δ i t ∼ N (0, R δ ) be the Gaussian noise in velocity and position respectively for the i-th cell in the grid at time t. And let pos i denote the 2D location of the center of the i-th cell. The constant velocity motion model can be expressed mathematically as
$$
\left(o^j_{t+1},\, v^j_{t+1}\right) =
\begin{cases}
\left(1,\; v^i_t + \epsilon^i_t\right) & \text{if } \exists\, i \in I \text{ such that } o^i_t = 1 \text{ and } \mathrm{pos}^j \approx \mathrm{pos}^i + v^i_t\, \Delta t + \delta^i_t \\
\left(0,\; \text{undefined}\right) & \text{otherwise}
\end{cases}
\tag{3}
$$
In other words, a cell j will be occupied in the next timestep if and only if there exists another cell i in the previous timestep which moves to j under the constant velocity motion model. In that case, the velocity of cell j will be equal to that of cell i modulo the Gaussian noise. The equality is approximate taking into account the finite size of the cell.
Representing the belief distribution: A dynamic occupancy grid represent the current belief over velocities and occupancies of the environment. It is a probability distribution over states x t described above. The state space is extremely large. Dynamic occupancy grids represent the belief compactly by making the following assumption:
Assumption 3: Cells are mutually independent
The probability distributions of all cells are mutually independent. This is a standard assumption made for occupancy grids for computational tractability.
Each cell contains two distributions:
• Occupancy distribution ($\omega^i_t$): $o^i_t$ is a Bernoulli random variable over $\{0, 1\}$ with probability $\omega^i_t \in [0, 1]$.
• Velocity distribution ($V^i_t$): $v^i_t$ is a random variable over $\mathbb{R}^2$. It is represented by a set of $M$ weighted particles $V^i_t = \{(v^{i,m}_t, p^{i,m}_t) \mid 1 \le m \le M\}$.
The larger the number of particles used, the better the particle approximation to the true continuous velocity distribution. The velocity distribution acts like a "conditional" distribution. The probability that cell $i$ is occupied with velocity $v^{i,m}_t$ is the product $\omega^i_t\, p^{i,m}_t$ of the probability of being occupied ($\omega^i_t$) and the probability of having the velocity $v^{i,m}_t$ given that it is occupied ($p^{i,m}_t$). The probability of being unoccupied is simply $1 - \omega^i_t$. Since cells are assumed to be mutually independent, the probability of the entire grid is the product of probabilities of individual cells in the grid.
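A possible in-memory representation of a single cell under Assumptions 1-3 is sketched below; the field names and the example numbers are illustrative assumptions, not our actual data structures.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class GridCell:
    """One cell of a dynamic occupancy grid (Assumptions 1-3)."""
    occ_prob: float = 0.0                                                      # omega: P(cell is occupied)
    velocities: np.ndarray = field(default_factory=lambda: np.zeros((0, 2)))   # M x 2 particle velocities
    weights: np.ndarray = field(default_factory=lambda: np.zeros(0))           # M particle weights, sum to 1

    def joint_prob(self, m):
        # Probability that the cell is occupied AND moves with the velocity of particle m.
        return self.occ_prob * self.weights[m]

# Example: a cell that is 70% likely occupied, with two velocity hypotheses.
cell = GridCell(occ_prob=0.7,
                velocities=np.array([[0.5, 0.0], [0.0, 1.0]]),
                weights=np.array([0.6, 0.4]))
print(cell.joint_prob(0))  # approximately 0.42
```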
B. Motion update step
We will now derive the motion update equations of the dynamic probability grid. These equations govern how particles will move across the grid and be reweighted. Consider a simplified grid shown in Fig. 8a. Assume that there are only three cells in the grid, and only cells 1 and 2 are occupied at time $t$. The occupancy and velocity distributions are illustrated in the figure, and particles $(v^{1,m_1}_t, p^{1,m_1}_t)$ from cell 1 and $(v^{2,m_2}_t, p^{2,m_2}_t)$ from cell 2 move to cell 3 in the next timestep $t+1$ (assuming that noise has been incorporated in $v^{1,m_1}_t, v^{2,m_2}_t$). What should be the occupancy and velocity distribution of cell 3?
Let E 1 be the event that the particle from cell 1 enters cell 3. Similarly, let E 2 be the event that the particle from cell 2 enters cell 3. We have that P (E 1 ) = ω 1 t p 1,m1 t and P (E 2 ) = ω 2 t p 2,m2 t . Cell 3 will be occupied when either E 1 or E 2 happens (from Eqn. 3). Its occupancy probability after the motion update is ω 3 t+1 = P (E 1 ∪ E 2 ). Now, from assumption 1 (the no collision assumption), objects of different velocities cannot occupy the same cell. Therefore events E 1 and E 2 must be disjoint. From the law of total probability, if E 1 and E 2 are disjoint, then P (E 1 ∪ E 2 ) = P (E 1 ) + P (E 2 ). Therefore, the occupancy probability of cell 3 after the motion update
ω 3 t+1 = ω 1 t p 1,m1 t + ω 2 t p 2,m2 t .
The conditional velocity distribution of cell 3 will comprise of v 1,m1 t and v 2,m2 t , with weights proportional to P (E 1 ) and P (E 2 ) respectively. This leads us to the general motion update of dynamic occupancy grids:
Motion update step for dynamic occupancy grids:
$$
\omega^j_{t+1} = \sum_{i \in I} \omega^i_t \sum_{m=1}^{M_i} p^{i,m}_t\; \mathbb{1}\!\left[\mathrm{pos}^j \approx \mathrm{pos}^i + v^{i,m}_t\, \Delta t + \delta^i_t\right]
\tag{4}
$$
is however required for the measurement update step (see App. B-C). When adding the ω i t p i,m t terms from incoming particles to compute ω j t+1 , the sum should not exceed 1 under the no collision assumption (assumption 1). However, in practice, this may be violated. In such cases, we truncate the occupancy probability to 1 following [23]. However, this happens rarely; we needed to perform truncation only 0.35% times on average.
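A simplified sketch of the motion update on a grid of occupancy probabilities is shown below. It only tracks how the $\omega^i_t\, p^{i,m}_t$ terms accumulate into destination cells (with truncation at 1) and omits the bookkeeping of the particle sets; the noise magnitudes, array shapes and axis conventions are assumptions of this example.

```python
import numpy as np

def motion_update(occ, velocities, weights, cell_size, dt, pos_noise=0.05, vel_noise=0.05):
    """Sketch of the occupancy part of the motion update (Eqn. 4) on an H x W grid.

    occ:        (H, W) occupancy probabilities omega.
    velocities: (H, W, M, 2) particle velocities per cell.
    weights:    (H, W, M) particle weights per cell (sum to 1 in occupied cells).
    Returns the forecasted occupancy; incoming particles would additionally be
    re-weighted in proportion to omega * p (omitted here).
    """
    H, W, M, _ = velocities.shape
    occ_next = np.zeros_like(occ)
    for i in range(H):
        for j in range(W):
            if occ[i, j] <= 0:
                continue
            for m in range(M):
                v = velocities[i, j, m] + np.random.normal(0, vel_noise, 2)
                d = v * dt + np.random.normal(0, pos_noise, 2)
                di, dj = int(round(d[1] / cell_size)), int(round(d[0] / cell_size))
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W:
                    occ_next[ni, nj] += occ[i, j] * weights[i, j, m]
    # Under the no-collision assumption the sum should not exceed 1; truncate if it does.
    return np.minimum(occ_next, 1.0)
```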
C. Measurement update step
Sensors such as light curtains and LiDARs provide depth information, from which the current occupancy of cells can be inferred. We use a post-processing algorithm described in [42] to process sensor data and output a detection variable z i t ∈ {OCCUPIED, FREE, UNKNOWN} for each cell i in the grid. z i t indicates the presence, absence or lack of knowledge about objects inside cell i.
For LiDAR scans, Hu et al.
[42] marks each cell that contains LiDAR points as OCCUPIED. Then, it uses the fact that if a 3D point was detected, light must have traveled between the sensor and the detected point in a straight line without obstruction. Therefore, rays are cast starting from the sensor to the OCCUPIED cells using an efficient voxel traversal algorithm [3]. Cells lying along these rays are marked FREE. Any cells that remain unclassified are marked as UNKNOWN. This method exploits visibility constraints of light to extract the maximum possible information from a 3D scan. We use the same processing method customized for light curtains. When a light curtain is placed on a set of cells in the grid, the cells are classified as OCCUPIED or FREE based on whether points were detected inside the cell. Raycasting to occupied cells is also performed to discover additional FREE cells. Figure 2b visualizes an example of visibility classification. Cells detected as OCCUPIED are shown in red. Cells shown in blue are inferred as FREE because they either lie undetected on the curtain or lie on rays cast to red cells. UNKNOWN cells are shown in gray.
The measurement update step takes the visibility classification as input. Our observation model treats this classification as a noisy observation of true occupancy. We do not update the occupancy of UNKNOWN cells. For known cells, we assume a false positive rate α fp ∈ [0, 1] and a false negative rate α fn ∈ [0, 1]. The observation model is:
$$
\begin{aligned}
P(z^i_t = \text{OCCUPIED} \mid o^i_t = 1) &= 1 - \alpha_{fn} \\
P(z^i_t = \text{FREE} \mid o^i_t = 1) &= \alpha_{fn} \\
P(z^i_t = \text{OCCUPIED} \mid o^i_t = 0) &= \alpha_{fp} \\
P(z^i_t = \text{FREE} \mid o^i_t = 0) &= 1 - \alpha_{fp}
\end{aligned}
$$
We first use the assumption that all cells are mutually independent (Assumption 3) to write the belief of the overall grid $bel(x_t) = \prod_{i \in I} bel(x^i_t)$ as a product of belief distributions of each cell in the grid. Since the likelihood function $P(z_t \mid x_t) = \prod_{i \in I} P(z^i_t \mid o^i_t)$ is also independent for each cell, the updated posterior belief $bel(x_t \mid z_t) \propto \overline{bel}(x_t)\, P(z_t \mid x_t)$ can be computed independently for each cell. Given the prior occupancy distribution $\overline{\omega}^i_t$ and an observation $z^i_t$ for cell $i$, its occupancy distribution after the measurement update can be computed using the Bayes rule:

Measurement update step for dynamic occupancy grids:
$$
\omega^i_t = \frac{\overline{\omega}^i_t\, P(z^i_t \mid o^i_t = 1)}{\overline{\omega}^i_t\, P(z^i_t \mid o^i_t = 1) + (1 - \overline{\omega}^i_t)\, P(z^i_t \mid o^i_t = 0)}
\tag{5}
$$
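A sketch of this per-cell update, applied to every observed cell of the grid, is given below; the default false-positive and false-negative rates are placeholders, and encoding OCCUPIED/FREE/UNKNOWN as 1/0/-1 is an assumption of the example.

```python
import numpy as np

def measurement_update(occ_prior, detection, alpha_fp=0.05, alpha_fn=0.05):
    """Sketch of the per-cell measurement update (Eqn. 5).

    occ_prior: (H, W) prior occupancy probabilities (after the motion update).
    detection: (H, W) int array with 1 = OCCUPIED, 0 = FREE, -1 = UNKNOWN.
    """
    occ = occ_prior.copy()
    # Likelihoods P(z | o=1) and P(z | o=0) for the two observed outcomes.
    for z, p_z_given_occ, p_z_given_free in [(1, 1 - alpha_fn, alpha_fp),
                                             (0, alpha_fn, 1 - alpha_fp)]:
        mask = detection == z
        num = occ_prior[mask] * p_z_given_occ
        den = num + (1.0 - occ_prior[mask]) * p_z_given_free
        occ[mask] = num / den          # Bayes rule; UNKNOWN cells are left untouched
    return occ
```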
Depth sensors only provide information about the occupancy of cells; they do not directly measure object velocities. Velocities are inferred indirectly by a combination of measurement- and motion-update steps. The measurement update incorporates information about occupancy and the motion update infers velocities that are consistent with occupancy across timesteps in a principled probabilistic manner. Therefore, our method estimates velocities from depth measurements without requiring explicit data association across frames!

APPENDIX C COMPUTING DEPTH PROBABILITIES USING RAYMARCHING

The depth probability of a cell in the grid is the probability that the depth of the scene along the cell's direction is the cell's location. In other words, it is the probability that a visible surface exists in the cell, i.e. the cell is occupied and all other occluding cells are empty. Once we compute the depth probability of each cell in the grid, we can place a curtain that lies on the cells with the highest depth probability.
How do we compute the depth probability in a probabilistic occupancy grid? We borrow the idea of "ray marching" from the literature on volumetric rendering [66, 53]. In order to reconstruct the implicit depth surface from a probabilistic volume, ray marching travels along a ray originating from the sensor and computes the probability of visibility and occlusion at each point. Tulsiani et al. [66] performs this for discretized 3D grids (similar to our case) whereas NeRFs [53] perform this in a continuous space using neural radiance fields. Consider an example of raymarching in Fig. 2c. Let the sequence of cells on a ray be indexed as $1, 2, \ldots, n, \ldots, N$. Recall from App. B that $\omega^i_t$ is the occupancy probability of the $i$-th cell at timestep $t$. The depth probability of the $n$-th cell
$$P^D_t(n) = \omega^n_t \prod_{i=1}^{n-1} (1 - \omega^i_t)$$
is the product of the probability that the $n$-th cell is occupied ($\omega^n_t$) and the probabilities that each $i$-th cell on the ray before the $n$-th cell is unoccupied ($1 - \omega^i_t$), so that light can reach the $n$-th cell unoccluded. Let us define the "visibility" probability
$$P^V_t(n) = \prod_{i=1}^{n} (1 - \omega^i_t)$$
that all cells are visible up to the $n$-th cell. Then, we have the following recursive equations:
$$P^V_t(i) = P^V_t(i-1)\, (1 - \omega^i_t), \qquad P^D_t(i) = P^V_t(i-1)\, \omega^i_t$$
These recursive equations can be used to compute the depth probability of each cell along a ray efficiently in time O(N ) linear in the number of cells on that ray. This strategy is implemented as follows. For each camera ray, we perform the ray-marching procedure and compute the depth probability of each cell along that ray. Then, for each camera ray, we place the curtain at the cell with the maximum depth probability.
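The recursion can be implemented in a single pass over the cells of a ray, as in the sketch below; the example occupancy values are arbitrary.

```python
import numpy as np

def depth_probabilities(occ_along_ray):
    """Depth probability of each cell along one camera ray via ray marching.

    occ_along_ray: occupancy probabilities omega_1 ... omega_N ordered from the
    sensor outwards. Returns P_D(n) = omega_n * prod_{i<n} (1 - omega_i).
    """
    occ = np.asarray(occ_along_ray, dtype=float)
    visible = 1.0                            # P_V(0): nothing occludes the ray so far
    depth_probs = np.zeros_like(occ)
    for n in range(len(occ)):
        depth_probs[n] = visible * occ[n]    # P_D(n) = P_V(n-1) * omega_n
        visible *= (1.0 - occ[n])            # P_V(n) = P_V(n-1) * (1 - omega_n)
    return depth_probs

# Placement for this ray: the cell with the maximum depth probability.
ray = [0.1, 0.2, 0.7, 0.9]
best_cell = int(np.argmax(depth_probabilities(ray)))
```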
APPENDIX D MAXIMIZING INFORMATION GAIN
Consider the dynamic Bayes network shown in Fig. 1b. Given a forecasted prior belief P (x t ) = bel(x t ), the information gain framework prescribes that the action u t should be taken that maximizes the information gain IG(x t , z t | u t ) between the state x t and the observations z t when taking an action u t . Information gain is a well-studied quantity in information theory and is usually defined as:
$$
IG(x_t, z_t \mid u_t) = \underbrace{H\big(P(x_t)\big)}_{\text{entropy of } x_t} - \underbrace{\mathbb{E}_{z_t \mid u_t}\, H\big(P(x_t \mid z_t, u_t)\big)}_{\text{conditional entropy of } x_t \mid z_t \text{ under } u_t}
\tag{6}
$$
The information gain is the expected reduction in entropy (i.e. uncertainty) in x t before and after taking the action u t . Ancha et al. [4] showed that under certain assumptions, the information gain of conventional occupancy grids on placing light curtains is equal to the sum of binary entropies of the occupancy probabilities of the cells that the curtain lies on.
While information gain for conventional occupancy grids is straightforward to derive, it is not so for the case of dynamic occupancy grids. This is because the underlying state space of dynamic occupancy grids is a 'mixture' of discrete and continuous spaces. Consider the state x i t of the i-th cell in the grid. The space of the state x i t is:
$$
x^i_t \in \underbrace{\{\text{unoccupied}\}}_{\text{discrete space}} \;\cup\; \underbrace{\{\text{occupied with } v^i_t \mid v^i_t \in \mathbb{R}^2\}}_{\text{continuous space}}
$$
The cell can either be unoccupied, or be occupied with a continuous velocity. Unfortunately, the entropy of such mixed discrete-continuous spaces is not well-defined [31]. Therefore the "2H-estimator" in Eqn. 6 (named so because it contains two entropy terms) cannot be used to calculate information gain since the individual terms on the right hand side are not well-defined. Fortunately, information gain (unlike entropy) is well-defined for most distributions, including discrete-continuous mixtures [31]. This is possible by using a more general definition of information gain given by the "Radon-Nikodym" derivative [31]:
$$
IG(x_t, z_t \mid u_t) = \int_{x_t, z_t} \underbrace{\log \frac{dP_{x,z}}{dP_x\, dP_z}}_{\text{Radon-Nikodym derivative}}\; dP_{x,z}
\tag{7}
$$
The Radon-Nikodym derivative is well-defined for discrete-continuous mixtures [31]. When the individual entropy terms of Eqn. 6 (the 2H-estimator) are well-defined, the more general definition of Eqn. 7 reduces to Eqn. 6. In other words, the two definitions are consistent.

We will now derive the information gain of dynamic occupancy grids using the more general Radon-Nikodym definition. Here, we derive the information gain of a single $i$-th cell. Let $\omega$ be the occupancy probability of the cell. Let the continuous velocity distribution of the cell be denoted by $P(v)$. Assume that we place a curtain on this cell, and we obtain an observation $z^i_t \in \{0, 1\}$ which is a noisy measurement of the cell's occupancy. This is assuming that we are using a depth sensor that can only partially observe occupancy but cannot directly observe velocities. Let $\alpha_{fp}$ and $\alpha_{fn}$ be the false-positive and false-negative rates of the sensor respectively. Then,
$$
\begin{aligned}
IG(x^i_t \mid z^i_t) &= \int_{x,z} \log \frac{dP_{x,z}}{dP_x\, dP_z}\; dP_{x,z} \qquad \text{(Radon-Nikodym formulation)} \\
&= \underbrace{(1-\omega)(1-\alpha_{fp}) \log \frac{(1-\omega)(1-\alpha_{fp})}{(1-\omega)\, P(z^i_t = 0)}}_{\text{unoccupied and undetected}} \;+\; \underbrace{(1-\omega)\, \alpha_{fp} \log \frac{(1-\omega)\, \alpha_{fp}}{(1-\omega)\, P(z^i_t = 1)}}_{\text{unoccupied and detected}} \\
&\quad +\; \underbrace{\int_v \omega\, P(v)\, (1-\alpha_{fn}) \log \frac{\omega\, P(v)\, (1-\alpha_{fn})}{\omega\, P(v)\, P(z^i_t = 1)}\, dv}_{\text{velocity } v \text{ and detected}} \;+\; \underbrace{\int_v \omega\, P(v)\, \alpha_{fn} \log \frac{\omega\, P(v)\, \alpha_{fn}}{\omega\, P(v)\, P(z^i_t = 0)}\, dv}_{\text{velocity } v \text{ and undetected}} \\
&= (1-\omega)(1-\alpha_{fp}) \log \frac{1-\alpha_{fp}}{P(z^i_t = 0)} + (1-\omega)\, \alpha_{fp} \log \frac{\alpha_{fp}}{P(z^i_t = 1)} + \omega\, (1-\alpha_{fn}) \log \frac{1-\alpha_{fn}}{P(z^i_t = 1)} + \omega\, \alpha_{fn} \log \frac{\alpha_{fn}}{P(z^i_t = 0)} \\
&= -\big[(1-\omega)(1-\alpha_{fp}) + \omega\, \alpha_{fn}\big] \log P(z^i_t = 0) - \big[(1-\omega)\, \alpha_{fp} + \omega\,(1-\alpha_{fn})\big] \log P(z^i_t = 1) - \omega\, H(\alpha_{fn}) - (1-\omega)\, H(\alpha_{fp}) \\
&= -P(z^i_t = 0) \log P(z^i_t = 0) - P(z^i_t = 1) \log P(z^i_t = 1) - \omega\, H(\alpha_{fn}) - (1-\omega)\, H(\alpha_{fp}) \\
&= H(z) - \omega\, H(\alpha_{fn}) - (1-\omega)\, H(\alpha_{fp}) \\
&= H(\omega) \qquad \text{(assuming that } \alpha_{fp} = \alpha_{fn} = 0\text{)}
\end{aligned}
$$

Assumption 1: We assume that the sensor is accurate (i.e. the false positive rate $\alpha_{fp}$ and the false negative rate $\alpha_{fn}$ are both close to zero). Then, the information gain of a single cell due to placing a light curtain on that cell is equal to its binary occupancy entropy
$$H_{occ}(\omega) = -\omega \log_2 \omega - (1 - \omega) \log_2 (1 - \omega).$$
Assumption 2: We assume that all cells are independently distributed. Since the information gain of independently distributed random variables is the sum of the information gains of the individual variables [22], the total information gain is the sum of the binary occupancy entropies $H_{occ}(\omega^i_t)$ of the cells that the curtain lies on. This is similar to the information gain in Ancha et al. [4]. However, we have been able to prove this mathematically in the more complex case of mixed discrete-continuous distributions.
This theoretical result is also intuitive -since the depth sensor measurements only provide information about occupancy and not velocity, it is not surprising that the information gain is equal to the total occupancy uncertainty.
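Under the two assumptions above, scoring a candidate curtain by information gain reduces to summing binary occupancy entropies over the cells the curtain would lie on, as in the following sketch; the example grid and cell indices are illustrative.

```python
import numpy as np

def binary_entropy(p, eps=1e-12):
    """H(p) = -p log2 p - (1-p) log2 (1-p), clipped for numerical stability."""
    p = np.clip(p, eps, 1.0 - eps)
    return -p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)

def information_gain_of_curtain(occ_probs, curtain_cells):
    """Information gain of placing a curtain on `curtain_cells`, assuming an
    accurate sensor and independent cells: the sum of binary occupancy entropies."""
    return float(sum(binary_entropy(occ_probs[i, j]) for (i, j) in curtain_cells))

# Example: a curtain covering two uncertain cells gains more information
# than one covering two nearly-certain cells.
occ = np.array([[0.50, 0.95],
                [0.45, 0.02]])
print(information_gain_of_curtain(occ, [(0, 0), (1, 0)]))  # close to 2 bits
print(information_gain_of_curtain(occ, [(0, 1), (1, 1)]))  # much smaller
```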
APPENDIX E ADVANTAGES OF LIGHT CURTAINS OVER CONVENTIONAL DEPTH SENSORS
LiDARs have long range and high accuracy under strong ambient light. However, compared to light curtains, they have poor vertical resolution (≤128 rows), low frame rate (5-20 Hz), and are very expensive (>$20K) (see Table III). Passive RGBD sensors (stereo sensors that do not project light) have high spatial and temporal resolution and are inexpensive. However, their accuracy is poor in non-textured regions due to inaccuracies in stereo feature matching.
Active RGBD sensors (like the Kinect sensor that projects light) inherit the benefits of stereo sensors but also work in texture-less regions. However, they have virtually no range outdoors.
Light curtains combine the best of these sensors. They have a long range (nearly 35-50 m) both outdoors and indoors, high spatial resolution (1280 rows) and temporal resolution (45-60 Hz), work for textured and texture-less regions, and are inexpensive (<$1K). These advantages have been demonstrated in previous works on programmable light curtains [9, 58].
APPENDIX F USING AN EXTRA GRID FOR THREAD-SAFETY AND EFFICIENCY
Our parallelized pipeline contains three threads: (1) light curtain sensing, (2) Bayes filtering using dynamic occupancy grids, and (3) computing curtain placements. How many grids are required to run these threads in parallel, especially threads 2 and 3?
The motion update step (Eqns. 1, 4) moves particles across the grid according to the motion model. The particles cannot be moved in place inside the same grid since it may cause the same particle to be erroneously moved more than once. Therefore, the motion update step requires two grids: a "source/current" grid and a "destination/next" grid. Particles from the current grid are copied, moved and placed in the next grid. After the motion update is complete, the roles of the current and next grids are swapped. The next grid is now assigned to be the new "current" grid since it is now the most up-to-date, incorporating the latest measurements. In the next motion update step, particles move from this grid to the older current grid (now taking the role of the "next" grid).
The curtain computation thread also performs a motion update when it needs to forecast the current grid to a future timestep (when the next curtain is expected to be imaged). It uses the current grid of the Bayes filtering thread as the source, but requires a third, "forecasting" grid as a destination grid.
Although three grids are sufficient to implement parallelization, the pipeline can be made more efficient. Specifically, consider the situation where two motion updates take place simultaneously from "current" to "next" grids in the Bayes filtering thread and "current" to "forecasting" grid in the curtain computation thread. Once the motion update in the Bayes filtering thread is complete, it cannot immediately perform the next motion update step. It must wait for the curtain computation thread to finish forecasting using the "current" grid before the "current" and "next" grids can be swapped and the next motion update dirties the "current" grid. If we use an additional "extra" grid, the Bayes filtering thread can use this as the destination grid for its next motion update step without needing to wait on the curtain computation thread to finish the latter's forecasting step.
Our parallelized pipeline tightly integrates the three interdependent processes in a closed loop. We use a total of four grids to simultaneously guarantee the following two properties:
(1) grids in use are never mistakenly overwritten, and (2) no thread ever needs to wait on another to finish processing.
APPENDIX G EVALUATION METRICS
As mentioned in Sec. VIII-B, forecasted occupancy [50, 1] simultaneously captures both the accuracy of occupancy estimates as well as velocity estimates. This metric is especially pertinent for obstacle avoidance where future occupancy of obstacles is needed to plan paths that avoid collisions. We will use the notation for dynamic occupancy grids introduced in Sec. B-A. The current dynamic occupancy grid at timestep t is represented by
G t = {ω i t , V i t | i ∈ I},
where ω i t is the Bernoulli occupancy probability of the i-th cell, and
V i t = {(v i,m t , p i,m t ) | 1 ≤ m ≤ M }
is the set of M weighted particles that represents the velocity distribution of the i-th cell.
To evaluate $G_t$, we first apply the motion update step (Eqns. 1, 4) to forecast it by a time $\Delta t$ and obtain the dynamic occupancy grid $G_{t+\Delta t}$ at time $t + \Delta t$. Then, the forecasted occupancy probabilities $\{\omega^i_{t+\Delta t} \mid i \in I\}$ are evaluated against the ground truth occupancies $\{o^i_{t+\Delta t} \mid i \in I\}$. We follow prior works [40, 52, 62] that treat the evaluation of occupancy as a classification problem and compute binary occupancies $\hat{o}^i_{t+\Delta t} = \mathbb{I}(\omega^i_{t+\Delta t} \ge 0.5)$ thresholded at 0.5 probability. We use the following metrics to evaluate the quality of predicted occupancy. We ignore cells that are occluded (in the ground truth) since (1) they cannot be observed by optical sensors, and (2) they are not the closest object to the robot, making them less relevant for obstacle avoidance. Let $I_{LOS}$ be the subset of cells that are in the sensor's line-of-sight (LOS). Note that $\hat{o}^i_{t+\Delta t}, o^i_{t+\Delta t} \in \{0, 1\}$.

1) Classification accuracy: The fraction of cells whose occupancy is correctly predicted.
$$\text{Accuracy} = \frac{\sum_{i \in I_{LOS}} \mathbb{I}\{\hat{o}^i_{t+\Delta t} = o^i_{t+\Delta t}\}}{|I_{LOS}|}$$
2) Precision: The fraction of cells predicted to be occupied that were actually occupied.
$$\text{Precision} = \frac{\sum_{i \in I_{LOS}} \mathbb{I}\{\hat{o}^i_{t+\Delta t} = 1\}\, \mathbb{I}\{o^i_{t+\Delta t} = 1\}}{\sum_{i \in I_{LOS}} \mathbb{I}\{\hat{o}^i_{t+\Delta t} = 1\}}$$
3) Recall: The fraction of occupied cells that were also predicted to be occupied.
$$\text{Recall} = \frac{\sum_{i \in I_{LOS}} \mathbb{I}\{\hat{o}^i_{t+\Delta t} = 1\}\, \mathbb{I}\{o^i_{t+\Delta t} = 1\}}{\sum_{i \in I_{LOS}} \mathbb{I}\{o^i_{t+\Delta t} = 1\}}$$
4) $F_1$-Score: A combination (harmonic mean) of precision and recall that is commonly used for binary classification [74]. The $F_1$ score is robust to class imbalance; unlike precision and recall, it cannot be trivially improved by predicting mostly negative or mostly positive labels.
$$F_1\text{-Score} = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}$$
5) IoU: The intersection-over-union between cells that are occupied (in ground truth) and cells that are predicted to be occupied.
$$\text{IoU} = \frac{\sum_{i \in I_{LOS}} \mathbb{I}(\hat{o}^i_{t+\Delta t} = 1)\, \mathbb{I}(o^i_{t+\Delta t} = 1)}{\sum_{i \in I_{LOS}} \mathbb{I}(\hat{o}^i_{t+\Delta t} = 1 \text{ or } o^i_{t+\Delta t} = 1)}$$

For all metrics, a higher numerical score is better.
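The five metrics restricted to line-of-sight cells can be computed as in the sketch below; the array-based encoding is an assumption of this example.

```python
import numpy as np

def occupancy_metrics(pred, truth, line_of_sight):
    """Compute the five occupancy metrics over line-of-sight cells.

    pred, truth:    (H, W) binary arrays (forecasted vs. ground-truth occupancy).
    line_of_sight:  (H, W) boolean mask of cells visible to the sensor.
    """
    p = pred[line_of_sight].astype(bool)
    t = truth[line_of_sight].astype(bool)

    tp = np.sum(p & t)
    accuracy = np.mean(p == t)
    precision = tp / p.sum() if p.sum() > 0 else 0.0
    recall = tp / t.sum() if t.sum() > 0 else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) > 0 else 0.0
    iou = tp / np.sum(p | t) if np.sum(p | t) > 0 else 0.0
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1, "iou": iou}
```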
APPENDIX H VISUALIZING VELOCITIES AND OCCUPANCIES

Fig. 9 shows how we visualize 2D velocities and occupancies (both ground truth and estimated velocities and occupancies). The visualization of an example ground truth grid is shown in Fig. 9c. We use the three-dimensional HSV colorwheel shown in Fig. 9a to jointly visualize velocities (two-dimensional) and occupancies (one-dimensional). The 'value' encodes the occupancy probability; dark means low occupancy probability and bright means high occupancy probability. The 'hue' encodes the direction of velocity. We show a top-down view of the HSV colorwheel in Fig. 9b for clarity. For example, the bluish-purple hue of the cuboid in Fig. 9c means that the cuboid is moving upwards in 2D, i.e. away from the sensor in 3D. 'Saturation' encodes the magnitude of velocity. This means that white is stationary (e.g. the walls of the environment shown as parallel white lines) whereas colorful regions correspond to high speed.

APPENDIX I NON-STATIONARY REWARDS

Vanilla multi-armed bandits assume that the reward distribution for each action is stationary. The Q-value of an action after it has been performed $n$ times is computed as $Q_n = \frac{1}{n}\sum_{i=1}^{n} R_i$, where $R_i$ is the reward obtained in the $i$-th trial. This is equivalent to the following recursive update rule: $Q_{n+1} = Q_n + \frac{1}{n}\,[R_n - Q_n]$, where the Q-value is incremented by the error scaled by a decaying factor $\frac{1}{n}$. However, in our case, a single placement strategy may not be superior to the rest at all times and in all situations. The reward distribution for each strategy (action) may change with time. Hence, we assume that our rewards are non-stationary. For non-stationary rewards, we wish to give more weight to recent rewards than to older rewards. Therefore, the decaying parameter is replaced by a constant step-size parameter $\alpha$: $Q_{n+1} = Q_n + \alpha\,[R_n - Q_n]$. This weights newer rewards exponentially more than older rewards according to the expression $Q_n = (1-\alpha)^{n-1} R_1 + \sum_{i=2}^{n} \alpha\,(1-\alpha)^{n-i} R_i$.

APPENDIX J EFFICIENT LIGHT CURTAIN SIMULATION

Fig. 10 shows the working principle behind the illumination module of a programmable light curtain. It consists of a fixed laser source that emits a light beam, and a rotating galvo-mirror that reflects and redirects the light in any desired direction. The laser beam is collimated to a thin rectangular sheet; however, in reality it is a prismatic slab containing a small divergence. The pixel intensity of an object (shown by a green circle) imaged by the light curtain depends on the radiant intensity of the laser ray that is incident on the object (shown by the green ray →). A ray at the center of the beam has the highest intensity whereas a ray at the boundary of the beam (shown by orange rays →) has the lowest intensity. In order to simulate light curtain pixel intensities for a given object, we must compute its incident ray, i.e. compute its "laser angle".

Figure 10: Efficient light curtain simulation using a virtual laser. A light curtain consists of a fixed laser source that emits a light beam, and a rotating galvo-mirror that reflects and redirects the light in any desired direction. The laser beam is collimated to a thin rectangular sheet; however, in reality it is a prismatic slab containing a small divergence. The pixel intensity of an object imaged by the light curtain depends on the radiant intensity of the laser ray that is incident on the object. A ray at the center of the beam has the highest intensity whereas a ray at the boundary of the beam has the lowest intensity. In order to simulate light curtain pixel intensities for a given object, we must compute its incident ray, i.e. compute its "laser angle". This computation can be expensive since (1) it involves tracing rays between the source and the object through a reflection at the mirror, and (2) it must be performed for each pixel. Our insight is to construct a "virtual" laser source by reflecting the real source about the mirror plane. Due to the laws of reflection, the reflected beam is equivalent to originating from the virtual source behind the mirror. This allows the laser angle to be computed efficiently by projecting the object point in the virtual source's frame. Furthermore, the virtual source needs to be reflected only once for each mirror configuration; all pixel points in the currently active camera column can be efficiently projected into the same virtual laser source.
Computing the laser ray incident to the object is expensive because (1) it involves tracing rays between the source and the object through a reflection at the mirror, and (2) it must be performed for each pixel. Our insight is to construct a "virtual" laser source by reflecting the real source about the mirror plane. Due to the laws of reflection, the reflected beam is equivalent to originating from the virtual source behind the mirror. This allows an object point's laser angle to be computed efficiently by projecting it in the virtual source's frame. Furthermore, the virtual source needs to be reflected only once for each mirror configuration; all pixel points in the currently active camera column can be efficiently projected into the same virtual source.
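The virtual-source construction can be sketched in 2D as follows; the planar (top-down) geometry, the function names and the example coordinates are assumptions made for illustration.

```python
import numpy as np

def reflect_point_about_line(point, line_point, line_dir):
    """Reflect a 2D point about a line given by a point on it and a direction vector."""
    d = np.asarray(line_dir, dtype=float)
    d = d / np.linalg.norm(d)
    v = np.asarray(point, dtype=float) - line_point
    # The component along the line stays; the perpendicular component flips sign.
    v_par = np.dot(v, d) * d
    v_perp = v - v_par
    return line_point + v_par - v_perp

def laser_angle(obj_point, laser_origin, mirror_point, mirror_dir):
    """Angle of the laser ray hitting obj_point, computed via the virtual source."""
    virtual_source = reflect_point_about_line(laser_origin, mirror_point, mirror_dir)
    delta = np.asarray(obj_point, dtype=float) - virtual_source
    return np.arctan2(delta[1], delta[0])

# The virtual source is reflected once per mirror configuration; every pixel in the
# active camera column can then be projected into the same virtual source frame.
theta = laser_angle(obj_point=[2.0, 3.0], laser_origin=[0.0, 0.0],
                    mirror_point=[1.0, 0.0], mirror_dir=[0.0, 1.0])
```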
APPENDIX K FULL-STACK NAVIGATION

We integrate our system into a full-stack navigation pipeline based on the Autonomous Exploration Development Environment [15]. We mount the light curtain device on a mobile robot (see Fig. 3a). Our tightly integrated pipeline performs localization, mapping, planning, control and obstacle avoidance. We use ORB-SLAM3 [13] for localization and mapping that takes depth from light curtains as input, while the planning and control capabilities are provided by Cao et al. [15]. The pipeline described in Sec. VII combines light curtain placement strategies using self-supervised multi-armed bandits with recursive Bayes estimation of dynamic occupancy grids. The output of this pipeline, i.e. position and velocity estimates, is used by the autonomy stack to perform dense mapping in an indoor environment and obstacle avoidance. Furthermore, the localization from ORB-SLAM3 [13] is fed back into our pipeline for ego-motion subtraction in the motion update step (Eqns. 1, 4). We show two demonstrations of our fully integrated autonomy stack:
A. Real-time dense mapping
Light curtains sense objects that intersect their surface at a high resolution. This ability can be leveraged to perform dense mapping and reconstruction of an environment. Please see a video of dense real-time reconstruction of an indoor hallway environment using our system on the project website. Fig. 11 shows a sideways and top-down projection of the same. The robot was operated in the indoor hallway environment, and 3D points detected by light curtains were input to ORB-SLAM3 [13], an RGB-D based localization and mapping system that estimates the robot's pose. The pose estimates are fed back into our pipeline to perform ego-motion subtraction in the motion update step. The robot trajectory is shown as a white line. Fig. 11a shows that the floor, walls and other objects are reconstructed densely and accurately. Fig. 11b contains a top-down orthographic view which shows the accuracy of our system's localization: the robot's trajectory was correctly determined to be an (approximately) closed loop around the building floor. This experiment serves to demonstrate our full-stack navigation pipeline using light curtains.

Figure 11: Dense indoor reconstruction and mapping using our integrated system. The light curtain was mounted on a mobile robot (Fig. 3a) and operated in an indoor hallway environment. Detected depth points from light curtains were input to ORB-SLAM3 [13], an RGB-D based localization and mapping system that estimates the robot's pose. The pose estimates are fed back into our pipeline to perform ego-motion subtraction in the motion update step. The robot trajectory is shown as a white line.
B. Real-time obstacle avoidance
Light curtains are a fast sensor (∼45 Hz) whose speed is leveraged by our system to produce position and velocity estimates at a high frequency (∼35 Hz). We use these estimates for real-time obstacle avoidance, shown in Fig. 12. The robot is represented by a green vehicle. The yellow curves show a library of dynamically feasible paths of the robot [14]. Points detected by the light curtain are shown in blue. There are two objects in the scene: a static chair, and a moving person. Using the position estimates of obstacles, feasible paths that are expected to collide with objects are rejected, and a safe path (shown in red) is chosen by a local planner [14]. Please find the full video of real-time obstacle avoidance on the project website.
Figure 2: (a) Dynamic occupancy grid: each cell contains occupancy and velocity distributions. The 2D grid represents the top-down view. Like conventional occupancy grids [29, 65], each cell contains an occupancy probability p ∈ [0, 1]. (b) Raycasting to compute freespace. (c) Raymarching to compute depth probabilities.

Strategy 1: Depth Probability: Following Ancha et al. [5], Strategy 1 places curtains at the highest probability object

Figure 3: (a) Mobile light curtain robot platform: A light curtain device (in blue) is mounted on top of a mobile robot. We use this setup to perform real-world experiments. (b) Simulated environment: consists of differently shaped objects (cuboids and cylinders) moving in (1) linear oscillatory/harmonic motion along various directions, (2) curved sinusoidal motion, and (3) random Brownian motion. (c) Real-world environment: consists of two pedestrians walking in front of the sensor in multiple directions, at different speeds and in complex trajectories.

Figure 5: Velocity estimation in simulation. (a) The environment with moving blocks and curtain placement shown in blue. (b) Color-coding for visualizing velocity from the top-down view. (c) Ground truth occupancy and velocity. (d) Raw light curtain images; high intensities are good because it means that object surfaces were found and intersected by the light curtain. (e) Partial occupancy observations from light curtain measurements that are input to the dynamic occupancy grid. (f) Velocity and occupancy estimated by the dynamic occupancy grid. We advise the reader to view video examples on the project website.

Figure 6: Velocity estimation in the real world using multi-armed bandits (MAB). Please refer to titles and Fig. 5 for descriptions of each column. Our method only uses light curtains; RGB images are for visualization only. The rightmost column shows the Q-values of each strategy. Higher Q-value is better; the action with the highest Q-value is chosen during exploitation. Top row: shows two pedestrians walking at relaxed speeds. The directions of motion are correctly inferred for each person: the pedestrian walking to the right is shown in greenish-blue and the person walking to the left is colored in red. The current action selected maximizes occupancy uncertainty. The bottom row shows a more challenging environment where a lone pedestrian performs fast motion: running and jumping. The direction of velocity is correctly inferred as moving top-left (left and away from the sensor), i.e. reddish pink. The color saturation is high, indicating the larger magnitude of velocity. The current action selected maximizes depth probability. We advise the reader to view the video examples on the project website.

Figure 7: Dynamic occupancy grid. (a) The occupancy grid before the motion update step. (b) After the motion update step. (c) After the measurement update step. Grid structure: The 2D grid represents the top-down view and is made up of cells. Like conventional static occupancy grids [29, 65], each cell contains an occupancy probability p ∈ [0, 1]. Dark indicates low occupancy probability and bright indicates high occupancy probability. In addition, each cell also contains a set of weighted particles where each particle stores a single 2D velocity. The set of particles together represents a probability distribution of that cell's velocity. Grid updates: The grid is a Bayes filter [65] that consists of two steps: the motion update step and the measurement update step. (a) The grid before performing any update. (b) Motion update step: the occupancies and velocities of the next timestep are computed based on the grid in the previous timestep, increasing uncertainty. (c) Measurement update step: the occupancies are updated using the measurements from the light curtain, decreasing uncertainty. This will refine the velocity estimates.

Figure 9: (a) The HSV (hue-saturation-value) colorwheel [73] used to visualize 2D velocities and occupancies. Value corresponds to the occupancy probability. The hue corresponds to the direction of the velocity. Saturation corresponds to the magnitude of velocity. (b) The HSV colorwheel in the top-down view, denoting the direction of velocity. (c) Ground truth velocities and occupancies in the simulated environment. The grid shows a stationary wall to the left and two objects. The bluish-purple square is moving upwards in 2D (i.e. farther away from the sensor in 3D) whereas the other objects in white are stationary.

Figure 11: (a) Sideways perspective view: showing dense reconstruction of walls, the floor and other objects. (b) Top-down orthographic view: showing the accuracy of localization (white line) and loop-closure. Please find the full video on the project website. This experiment serves to demonstrate our full-stack navigation pipeline using light curtains.

Figure 12: Real-time obstacle avoidance using our integrated system. The robot is represented by a green vehicle. The yellow curves show a library of dynamically feasible paths of the robot [14]. Points detected by the light curtain on the static (chair) and dynamic (person) objects are shown in blue. The feasible paths that are expected to collide with objects are removed, and a safe path (shown in red) is chosen by the planner. Please find the full video on the project website.
Compared to widely used commercial LiDARs like the Ouster OS1-128 [43], a lab-built light curtain prototype is relatively inexpensive ($1,000 v.s. ∼$20,000), has higher vertical resolution (1280 rows/0.07° v.s. 128 rows/0.35°) and is faster (45-60 Hz v.s. 10-20 Hz). See App. E for benefits of light curtains over conventional depth sensors. Because programmable light curtains are an active sensor, realizing these benefits requires actively deciding where to place the curtain at each timestep; this is the principal algorithmic challenge posed by programmable light curtains.
Table I: Accuracy of occupancy and velocity estimation measured using forecasted occupancy in (a) simulated, and (b) real environments.

(a) Simulated environment

Method | Classification accuracy ↑ | Precision ↑ | Recall ↑ | F1-score ↑ | IoU ↑
Simulated LiDAR | 0.9584 | 0.2498 | 0.1073 | 0.1360 | 0.0787
Random curtains only | 0.9662 | 0.2698 | 0.0567 | 0.0850 | 0.0468
Max. depth probability | 0.9610 | 0.2448 | 0.1146 | 0.1388 | 0.0792
Max. occupancy uncertainty | 0.9609 | 0.2717 | 0.1266 | 0.1493 | 0.0857
Max. velocity uncertainty | 0.9648 | 0.2728 | 0.0581 | 0.0838 | 0.0458
Max. occupancy + velocity uncertainty | 0.9629 | 0.3026 | 0.1251 | 0.1544 | 0.0895
Multi-armed bandits (Ours) | 0.9623 | 0.2814 | 0.1402 | 0.1690 | 0.0976

(b) Real environment

Method | Classification accuracy ↑ | Precision ↑ | Recall ↑ | F1-score ↑ | IoU ↑
Simulated LiDAR | NA | NA | NA | NA | NA
Random curtains only | 0.9852 | 0.6306 | 0.2357 | 0.2405 | 0.2136
Max. depth probability | 0.9832 | 0.5943 | 0.2727 | 0.3047 | 0.2353
Max. occupancy uncertainty | 0.9811 | 0.5733 | 0.3041 | 0.3319 | 0.2515
Max. velocity uncertainty | 0.9864 | 0.6221 | 0.2615 | 0.2545 | 0.2232
Max. occupancy + velocity uncertainty | 0.9822 | 0.5899 | 0.3175 | 0.3421 | 0.2727
Multi-armed bandits (Ours) | 0.9854 | 0.6467 | 0.3647 | 0.3703 | 0.3053
Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm, editors, Computer Vision - ECCV 2020, pages 751-766, Cham, 2020. Springer International Publishing. ISBN 978-3-030-58558-7.
[5] Siddharth Ancha, Gaurav Pathak, Srinivasa Narasimhan, and David Held. Active Safety Envelopes using Light Curtains with Probabilistic Guarantees. In Proceedings of Robotics: Science and Systems, Virtual, July 2021. doi: 10.15607/rss.2021.xvii.045.
[6] Yasuhiro Aoki, Hunter Goforth, Rangaprasad Arun Srivatsan, and Simon Lucey. PointNetLK: Robust & efficient point cloud registration using PointNet. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7163-7172, 2019.
[7] Ruzena Bajcsy. Active perception. Proceedings of the IEEE, 76(8):966-1005, 1988.
[8] Ruzena Bajcsy, Yiannis Aloimonos, and John K Tsotsos. Revisiting active perception. Autonomous Robots, 42(2):177-196, 2018.
[9] Joseph R Bartels, Jian Wang, William Whittaker, Srinivasa G Narasimhan, et al. Agile depth sensing using triangulation light curtains. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7900-7908, 2019.
[10] Stefan Andreas Baur, David Josef Emmerichs, Frank Moosmann, Peter Pinggera, Björn Ommer, and Andreas Geiger. SLIM: Self-supervised LiDAR scene flow and motion segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 13126-13136, 2021.
[11] P.J. Besl and Neil D. McKay. A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2):239-256, 1992. doi: 10.1109/34.121791.
[12] Levi Cai, Burak Boyacıoglu, Sarah E Webster, Lora Van Uffelen, and Kristi Morgansen. Towards auto-tuning of Kalman filters for underwater gliders based on consistency metrics. In OCEANS 2019 MTS/IEEE Seattle, pages 1-6. IEEE, 2019.
[13] Carlos Campos, Richard Elvira, Juan J Gómez Rodríguez, José MM Montiel, and Juan D Tardós. ORB-SLAM3: An accurate open-source library for visual, visual-inertial, and multimap SLAM. IEEE Transactions on Robotics, 37(6):1874-1890, 2021.
[14] Chao Cao, Hongbiao Zhu, Howie Choset, and Ji Zhang. TARE: A hierarchical framework for efficiently exploring complex 3D environments. In Robotics: Science and Systems Conference (RSS), Virtual, 2021.
[15] Chao Cao, Hongbiao Zhu, Fan Yang, Yukun Xia, Howie Choset, Jean Oh, and Ji Zhang. Autonomous exploration development environment and the planning algorithms. In 2022 International Conference on Robotics and Automation (ICRA), pages 8921-8928. IEEE, 2022. URL https://www.cmu-exploration.com/development-environment.
[16] Dorian Chan, Srinivasa Narasimhan, and Matthew O'Toole. Holocurtains: Programming Light Curtains via Binary Holography. Computer Vision and Pattern Recognition, 2022.
[17] Zhaozhong Chen, Christoffer Heckman, Simon Julier, and Nisar Ahmed. Weak in the NEES?: Auto-tuning Kalman filters with Bayesian optimization. In 2018 21st International Conference on Information Fusion (FUSION), pages 1072-1079. IEEE, 2018.
[18] Zhaozhong Chen, Nisar Ahmed, Simon Julier, and Christoffer Heckman. Kalman filter tuning with Bayesian optimization. arXiv preprint arXiv:1912.08601, 2019.
[19] Ricson Cheng, Arpit Agarwal, and Katerina Fragkiadaki. Reinforcement learning of active vision for manipulating objects under occlusions. In Conference on Robot Learning, pages 422-431. PMLR, 2018.
[20] Christopher Choy, Wei Dong, and Vladlen Koltun. Deep global registration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2514-2523, 2020.
[21] Cl Connolly. The determination of next best views. In Proceedings. 1985 IEEE International Conference on Robotics and Automation, volume 2, pages 432-435. IEEE, 1985.
[22] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing). Wiley-Interscience, USA, 2006. ISBN 0471241954.
[23] Radu Danescu, Florin Oniga, and Sergiu Nedevschi. Modeling and tracking the driving environment with a particle-based occupancy grid. IEEE Transactions on Intelligent Transportation Systems, 12(4):1331-1342, 2011.
[24] Jonathan Daudelin and Mark Campbell. An adaptable,
Zi-Li Deng, Yuan Gao, Chun-Bo Li, and Gang Hao.
Patrick Fung and Mike Grimble. Dynamic ship positioning using a self-tuning Kalman filter. IEEE Transactions on Automatic Control, 28(3):339-350, 1983.
[31] Weihao Gao, Sreeram Kannan, Sewoong Oh, and Pramod Viswanath. Estimating mutual information for discrete-continuous mixtures. Advances in Neural Information Processing Systems, 30, 2017.
[32] Xiuye Gu, Yijie Wang, Chongruo Wu, Yong Jae Lee, and
David Held, Jesse Levinson, and Sebastian Thrun. Precision tracking with sparse 3D and dense color 2D data. In 2013 IEEE International Conference on Robotics and Automation, pages 1138-1145. IEEE, 2013.
[37] David Held, Jesse Levinson, Sebastian Thrun, and Silvio Savarese. Combining 3D Shape, Color, and Motion for Robust Anytime Tracking. In Robotics: Science and Systems, volume 1. Citeseer, 2014.
[38] David Held, Devin Guillory, Brice Rebsamen, Sebastian Thrun, and Silvio Savarese. A Probabilistic Framework for Real-time 3D Segmentation using Spatial, Temporal, and Semantic Cues. In Robotics: Science and Systems, volume 12, 2016.
[39] David Held, Jesse Levinson, Sebastian Thrun, and Silvio Savarese. Robust real-time tracking combining 3D shape, color, and motion. The International Journal of Robotics
Stefan Isler, Reza Sabzevari, Jeffrey Delmerico, and Davide Scaramuzza. An information gain formulation for active volumetric 3D reconstruction. In 2016 IEEE International Conference on Robotics and Automation (ICRA), pages 3477-3484. IEEE, 2016.
[45] Yair Kittenplon, Yonina C Eldar, and Dan Raviv. FlowStep3D: Model unrolling for self-supervised scene flow estimation. In Proceedings of the IEEE/CVF Conference
Xingyu Liu, Charles R Qi, and Leonidas J Guibas. FlowNet3D: Learning scene flow in 3D point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 529-537, 2019.
[50] Reza Mahjourian, Jinkyu Kim, Yuning Chai, Mingxing Tan, Ben Sapp, and Dragomir Anguelov. Occupancy flow fields for motion forecasting in autonomous driving.
probabilistic, next-best view algorithm for reconstruction
of unknown 3-d objects. IEEE Robotics and Automation
Letters, 2(3):1540-1547, 2017.
[25] Self-tuning decoupled information fusion Wiener state
component filters and their convergence. Automatica, 44
(3):685-695, 2008.
[26] Joachim Denzler and Christopher M Brown. Information
theoretic sensor data selection for active object recognition
and state estimation. IEEE Transactions on pattern
analysis and machine intelligence, 24(2):145-157, 2002.
[27] Bertrand Douillard, James Underwood, Noah Kuntz,
Vsevolod Vlaskine, Alastair Quadros, Peter Morton, and
Alon Frenkel. On the segmentation of 3D LIDAR point
clouds. In 2011 IEEE International Conference on
Robotics and Automation, pages 2798-2805. IEEE, 2011.
[28] Andreas Doumanoglou, Rigas Kouskouridas, Sotiris
Malassiotis, and Tae-Kyun Kim. Recovering 6D object
pose and predicting next-best-view in the crowd. In
Proceedings of the IEEE conference on computer vision
and pattern recognition, pages 3583-3592, 2016.
[29] Alberto Elfes. Using occupancy grids for mobile robot
perception and navigation. Computer, 22(6):46-57, 1989.
[30] Panqu Wang. HPLFlowNet: Hierarchical permutohedral
lattice flownet for scene flow estimation on large-scale
point clouds. In Proceedings of the IEEE/CVF conference
on computer vision and pattern recognition, pages 3254-
3263, 2019.
[33] Per Hagander and Björn Wittenmark. A self-tuning
filter for fixed-lag smoothing. IEEE Transactions on
Information Theory, 23(3):377-384, 1977.
[34] Dirk Hähnel and Wolfram Burgard. Probabilistic matching
for 3D scan registration. In Proc. of the VDI-Conference
Robotik, volume 2002. Citeseer, 2002.
[35] Pan He, Patrick Emami, Sanjay Ranka, and Anand Ran-
garajan. Self-Supervised Robust Scene Flow Estimation
via the Alignment of Probability Density Functions. Pro-
ceedings of the AAAI Conference on Artificial Intelligence,
36(1):861-869, Jun. 2022. doi: 10.1609/aaai.v36i1.19968.
[36] Research, 35(1-3):30-49, 2016.
[40] Armin Hornung, Kai M Wurm, Maren Bennewitz, Cyrill
Stachniss, and Wolfram Burgard. OctoMap: An efficient
probabilistic 3D mapping framework based on octrees.
Autonomous robots, 34(3):189-206, 2013.
[41] Peiyun Hu, David Held, and Deva Ramanan. Learning
to optimally segment point clouds. IEEE Robotics and
Automation Letters, 5(2):875-882, 2020.
[42] Peiyun Hu, Jason Ziglar, David Held, and Deva Ramanan.
What you see is what you get: Exploiting visibility for
3D object detection. In Proceedings of the IEEE/CVF
Conference on Computer Vision and Pattern Recognition,
pages 11001-11009, 2020.
[43] Ouster Inc. Ouster OS1 Hardware Specification Sheet.
2021. URL https://data.ouster.io/downloads/datasheets/
datasheet-revd-v2p0-os1.pdf.
[44] on Computer Vision and Pattern Recognition, pages 4114-
4123, 2021.
[46] Klaas Klasing, Dirk Wollherr, and Martin Buss. A
clustering method for efficient segmentation of 3D laser
data. In 2008 IEEE international conference on robotics
and automation, pages 4043-4048. IEEE, 2008.
[47] Simon Kriegel, Christian Rink, Tim Bodenmüller, and
Michael Suppa. Efficient next-best-scan planning for au-
tonomous 3D surface reconstruction of unknown objects.
Journal of Real-Time Image Processing, 10(4):611-631,
2015.
[48] Ruibo Li, Guosheng Lin, and Lihua Xie. Self-point-flow:
Self-supervised scene flow estimation from point clouds
with optimal transport and random walk. In Proceedings
of the IEEE/CVF conference on computer vision and
pattern recognition, pages 15577-15586, 2021.
[49] IEEE
Robotics and Automation Letters, 7(2):5639-5646, 2022.
[51] Ameesh Makadia, Alexander Patterson, and Kostas Dani-
ilidis. Fully automatic registration of 3D point clouds. In
2006 IEEE Computer Society Conference on Computer
Vision and Pattern Recognition (CVPR'06), volume 1,
pages 1297-1304. IEEE, 2006.
[52] Daniel Meyer-Delius, Maximilian Beinhofer, and Wolfram
Burgard. Occupancy grid models for robot mapping in
changing environments. In Twenty-Sixth AAAI Conference
on Artificial Intelligence, 2012.
[53] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik,
Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng.
NeRF: Representing scenes as neural radiance fields for
view synthesis. In European conference on computer
vision, pages 405-421. Springer, 2020.
[54] Himangi Mittal, Brian Okorn, and David Held. Just go
with the flow: Self-supervised scene flow estimation. In
Proceedings of the IEEE/CVF conference on computer
vision and pattern recognition, pages 11177-11185, 2020.
[55] T Moir and M Grimble. Optimal self-tuning filtering,
prediction, and smoothing for discrete multivariable
processes. IEEE Transactions on Automatic control, 29
(2):128-137, 1984.
[56] Yaakov Oshman and Ilan Shaviv. Optimal tuning of a
Kalman filter using genetic algorithms. In AIAA Guidance,
Navigation, and Control Conference and Exhibit, page 4558, 2000.
Active Velocity Estimation using Light Curtains
via Self-Supervised Multi-Armed Bandits
APPENDIX A
DERIVATION OF RECURSIVE BAYESIAN ESTIMATION
Table III compares LiDARs and light curtains.

|  | LiDAR | Light Curtains |
|---|---|---|
| Resolution | 128 rows | 1280 rows |
| Cost | ∼$20,000 | ∼$1,000 |
| Frame rate | 10-20 Hz | 45-60 Hz |

Table III: Comparison between a modern Ouster OS1 [43] LiDAR, and Programmable Light Curtains [9].
$\sum_{m=1}^{M} p_t^{i,m} v_t^{i,m}$ and covariance matrix $\Sigma = \sum_{m=1}^{M} p_t^{i,m} (v_t^{i,m} - \mu)(v_t^{i,m} - \mu)^{T}$. Then, we compute the differential entropy of the fitted Gaussian: $H_{\mathrm{vel}}(V_t^i) = \tfrac{1}{2}\log\det(2\pi e \Sigma)$. One could alternatively use other families of continuous distributions, such as kernel density estimators. Finally, we place a curtain that maximizes the sum of velocity entropies $H_{\mathrm{vel}}$ of the cells the curtain lies on. Strategy 4: Combined Uncertainty: In this strategy, we maximize a weighted combination of occupancy and velocity entropies: $H_{\mathrm{cmb}}(\omega_t^i, V_t^i) = H_{\mathrm{occ}}(\omega_t^i) + \omega_t^i H_{\mathrm{vel}}(V_t^i)$.
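A minimal Python sketch of the velocity-entropy and combined-uncertainty computations above; the function names and the small regularization term are our own additions, and the particle weights are assumed to sum to one.

```python
import numpy as np

def velocity_entropy(weights, velocities):
    """Differential entropy of a Gaussian fitted to weighted particle velocities.

    weights:    (M,) nonnegative, summing to 1 (the p_t^{i,m})
    velocities: (M, d) particle velocity vectors v_t^{i,m}
    Returns 0.5 * log det(2*pi*e*Sigma) for the fitted Gaussian.
    """
    w = np.asarray(weights, dtype=float)
    v = np.asarray(velocities, dtype=float)
    mu = w @ v                                # weighted mean
    diff = v - mu
    sigma = (w[:, None] * diff).T @ diff      # weighted covariance
    sigma += 1e-9 * np.eye(v.shape[1])        # regularize degenerate spreads (our addition)
    _, logdet = np.linalg.slogdet(2.0 * np.pi * np.e * sigma)
    return 0.5 * logdet

def combined_uncertainty(occupancy_entropy, occupancy_prob, weights, velocities):
    # H_cmb = H_occ + omega * H_vel, as in Strategy 4 above
    return occupancy_entropy + occupancy_prob * velocity_entropy(weights, velocities)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = np.ones(50) / 50
    v = rng.normal(size=(50, 2))
    print(velocity_entropy(w, v))
```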
Note that the above derivation does not require Assumption 3; the law of total probability applies even when the distributions of incoming cells are not independent! Therefore, the motion update is exact even if we treat ω i t and V i t as marginal distributions of potentially correlated cells.
Figure 8: Applying the motion model to update occupancies and velocities of cells in the motion update step. ω's are occupancy probabilities of cells, v's and p's are velocities and weights of the individual particles respectively.
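Read off from the labels of Figure 8, the motion update has the following schematic form (our reconstruction of the figure, not a verbatim equation from the derivation):

$$\omega^{j}_{t+1} \;\approx\; \sum_{\; i,\,m \;:\; \mathrm{pos}_j \,\approx\, \mathrm{pos}_i + v^{i,m}_t \Delta t + \delta^{i,m}_t} \omega^{i}_t\, p^{i,m}_t, \qquad V^{j}_{t+1} \;=\; v^{i,m}_t + \epsilon^{i,m}_t .$$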
ACKNOWLEDGMENTS
We thank Pulkit Grover for discussions on information-theoretic measures of mixed discrete-continuous random variables. This material is based upon work supported by the National Science Foundation under Grants No. IIS-1849154, IIS-1900821, the United States Air Force and DARPA under Contract No. FA8750-18-C-0092, and a grant from the Manufacturing Futures Institute at Carnegie Mellon University.
Waymo Occupancy and Flow Prediction Challenge. 2022. 22Waymo Occupancy and Flow Prediction Challenge. 2022. [Online; accessed 22-December-2022].
Point clouds registration with probabilistic data association. Gabriel Agamennoni, Simone Fontana, Y Roland, Domenico G Siegwart, Sorrenti, 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEEGabriel Agamennoni, Simone Fontana, Roland Y Sieg- wart, and Domenico G Sorrenti. Point clouds registration with probabilistic data association. In 2016 IEEE/RSJ In- ternational Conference on Intelligent Robots and Systems (IROS), pages 4092-4098. IEEE, 2016.
A fast voxel traversal algorithm for ray tracing. John Amanatides, Andrew Woo, Eurographics. 87John Amanatides, Andrew Woo, et al. A fast voxel traversal algorithm for ray tracing. In Eurographics, volume 87, pages 3-10, 1987.
Active Perception Using Light Curtains for Autonomous Driving. Siddharth Ancha, Yaadhav Raaj, Peiyun Hu, Srinivasa G. Narasimhan, David Held. Siddharth Ancha, Yaadhav Raaj, Peiyun Hu, Srinivasa G. Narasimhan, and David Held. Active Perception Using Light Curtains for Autonomous Driving. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm, editors, Computer Vision - ECCV 2020, pages 751-766, Cham, 2020. Springer International Publishing.
Automated tuning of an extended Kalman filter using the downhill simplex algorithm. D Thomas, Powell, https:/arc.aiaa.org/doi/10.2514/2.4983Journal of Guidance, Control, and Dynamics. 255Thomas D Powell. Automated tuning of an extended Kalman filter using the downhill simplex algorithm. Journal of Guidance, Control, and Dynamics, 25(5):901- 908, 2002.
Exploiting and Refining Depth Distributions with Triangulation Light Curtains. Yaadhav Raaj, Siddharth Ancha, Robert Tamburo, David Held, Srinivasa Narasimhan, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)2021Yaadhav Raaj, Siddharth Ancha, Robert Tamburo, David Held, and Srinivasa Narasimhan. Exploiting and Refining Depth Distributions with Triangulation Light Curtains. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
Fast point feature histograms (FPFH) for 3D registration. Nico Radu Bogdan Rusu, Michael Blodow, Beetz, IEEE international conference on robotics and automation. IEEERadu Bogdan Rusu, Nico Blodow, and Michael Beetz. Fast point feature histograms (FPFH) for 3D registration. In 2009 IEEE international conference on robotics and automation, pages 3212-3217. IEEE, 2009.
View planning for automated three-dimensional object reconstruction and inspection. R William, Gerhard Scott, Jean-François Roth, Rivest, https:/dl.acm.org/doi/abs/10.1145/641865.641868ACM Computing Surveys (CSUR). 351William R Scott, Gerhard Roth, and Jean-François Rivest. View planning for automated three-dimensional object reconstruction and inspection. ACM Computing Surveys (CSUR), 35(1):64-96, 2003.
Reinforcement learning: An introduction. S Richard, Andrew G Sutton, Barto, MIT pressRichard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 2018.
Octree generating networks: Efficient convolutional architectures for high-resolution 3D outputs. Maxim Tatarchenko, Alexey Dosovitskiy, Thomas Brox, Proceedings of the IEEE international conference on computer vision. the IEEE international conference on computer visionMaxim Tatarchenko, Alexey Dosovitskiy, and Thomas Brox. Octree generating networks: Efficient convolutional architectures for high-resolution 3D outputs. In Proceed- ings of the IEEE international conference on computer vision, pages 2088-2096, 2017.
RAFT-3D: Scene flow using rigid-motion embeddings. Zachary Teed, Jia Deng, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionZachary Teed and Jia Deng. RAFT-3D: Scene flow using rigid-motion embeddings. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8375-8384, 2021.
Towards 3D object recognition via classification of arbitrary object tracks. Alex Teichman, Jesse Levinson, Sebastian Thrun, 2011 IEEE International Conference on Robotics and Automation. IEEEAlex Teichman, Jesse Levinson, and Sebastian Thrun. Towards 3D object recognition via classification of arbitrary object tracks. In 2011 IEEE International Conference on Robotics and Automation, pages 4034- 4041. IEEE, 2011.
Probabilistic robotics. Sebastian Thrun, Communications of the ACM. 453Sebastian Thrun. Probabilistic robotics. Communications of the ACM, 45(3):52-57, 2002.
Multi-view supervision for single-view reconstruction via differentiable ray consistency. Shubham Tulsiani, Tinghui Zhou, Alexei A Efros, Jitendra Malik, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionShubham Tulsiani, Tinghui Zhou, Alexei A Efros, and Jitendra Malik. Multi-view supervision for single-view reconstruction via differentiable ray consistency. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2626-2634, 2017.
Volumetric nextbest-view planning for 3D object reconstruction with positioning error. J Irving Vasquez-Gomez, Enrique Sucar, Rafael Murrieta-Cid, Efrain Lopez-Damian, https:/journals.sagepub.com/doi/full/10.5772/58759International Journal of Advanced Robotic Systems. 1110159J Irving Vasquez-Gomez, L Enrique Sucar, Rafael Murrieta-Cid, and Efrain Lopez-Damian. Volumetric next- best-view planning for 3D object reconstruction with positioning error. International Journal of Advanced Robotic Systems, 11(10):159, 2014.
Three-dimensional scene flow. Sundar Vedula, Simon Baker, Peter Rander, Robert Collins, Takeo Kanade, Proceedings of the Seventh IEEE International Conference on Computer Vision. the Seventh IEEE International Conference on Computer VisionIEEE2Sundar Vedula, Simon Baker, Peter Rander, Robert Collins, and Takeo Kanade. Three-dimensional scene flow. In Proceedings of the Seventh IEEE International Conference on Computer Vision, volume 2, pages 722- 729. IEEE, 1999.
Programmable triangulation light curtains. Jian Wang, Joseph Bartels, William Whittaker, C Aswin, Sankaranarayanan, G Srinivasa, Narasimhan, Proceedings of the European Conference on Computer Vision (ECCV). the European Conference on Computer Vision (ECCV)Jian Wang, Joseph Bartels, William Whittaker, Aswin C Sankaranarayanan, and Srinivasa G Narasimhan. Pro- grammable triangulation light curtains. In Proceedings of the European Conference on Computer Vision (ECCV), pages 19-34, 2018.
Deep parametric continuous convolutional neural networks. Shenlong Wang, Simon Suo, Wei-Chiu Ma, Andrei Pokrovsky, Raquel Urtasun, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionShenlong Wang, Simon Suo, Wei-Chiu Ma, Andrei Pokrovsky, and Raquel Urtasun. Deep parametric con- tinuous convolutional neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2589-2597, 2018.
Deep closest point: Learning representations for point cloud registration. Yue Wang, Justin M Solomon, Proceedings of the IEEE/CVF international conference on computer vision. the IEEE/CVF international conference on computer visionYue Wang and Justin M. Solomon. Deep closest point: Learning representations for point cloud registration. In Proceedings of the IEEE/CVF international conference on computer vision, pages 3523-3532, 2019.
PRNet: Self-supervised learning for partial-to-partial registration. Yue Wang, Justin M Solomon, Advances in neural information processing systems. 32Yue Wang and Justin M. Solomon. PRNet: Self-supervised learning for partial-to-partial registration. Advances in neural information processing systems, 32, 2019.
The Free Encyclopedia, 2022. [Online; accessed 30. . Wikipedia, Hsl, Hsv -Wikipedia, Wikipedia contributors. HSL and HSV -Wikipedia, The Free Encyclopedia, 2022. [Online; accessed 30-November- 2022].
Wikipedia contributors. F-score -Wikipedia, The Free Encyclopedia. 2022Online; accessed 05-December-2022Wikipedia contributors. F-score -Wikipedia, The Free Encyclopedia, 2022. [Online; accessed 05-December- 2022].
PointPWC-Net: Cost Volume on Point Clouds for (Self-) Supervised Scene Flow Estimation. Wenxuan Wu, Zhuwen Zhi Yuan Wang, Wei Li, Li Liu, Fuxin, European Conference on Computer Vision. SpringerWenxuan Wu, Zhi Yuan Wang, Zhuwen Li, Wei Liu, and Li Fuxin. PointPWC-Net: Cost Volume on Point Clouds for (Self-) Supervised Scene Flow Estimation. In European Conference on Computer Vision, pages 88-107. Springer, 2020.
3D shapenets: A deep representation for volumetric shapes. Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, Jianxiong Xiao, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionZhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3D shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1912-1920, 2015.
TEASER: Fast and certifiable point cloud registration. Heng Yang, Jingnan Shi, Luca Carlone, IEEE Transactions on Robotics. 372Heng Yang, Jingnan Shi, and Luca Carlone. TEASER: Fast and certifiable point cloud registration. IEEE Transactions on Robotics, 37(2):314-333, 2020.
A divideand-merge point cloud clustering algorithm for LiDAR panoptic segmentation. Yiming Zhao, Xiao Zhang, Xinming Huang, 2022 International Conference on Robotics and Automation (ICRA). IEEEYiming Zhao, Xiao Zhang, and Xinming Huang. A divide- and-merge point cloud clustering algorithm for LiDAR panoptic segmentation. In 2022 International Conference on Robotics and Automation (ICRA), pages 7029-7035. IEEE, 2022.
Fast global registration. Qian-Yi Zhou, Jaesik Park, Vladlen Koltun, https:/link.springer.com/chapter/10.1007/978-3-319-46475-6_47European conference on computer vision. SpringerQian-Yi Zhou, Jaesik Park, and Vladlen Koltun. Fast global registration. In European conference on computer vision, pages 766-782. Springer, 2016.
Self-tuning information fusion Kalman predictor weighted by diagonal matrices and its convergence analysis. Deng Zi, - Li, Li Chun-Bo, Acta Automatica Sinica. 332Deng Zi-Li and Li Chun-Bo. Self-tuning information fusion Kalman predictor weighted by diagonal matrices and its convergence analysis. Acta Automatica Sinica, 33 (2):156-163, 2007.
| []
|
[
"On the automorphism groups of hyperbolic manifolds",
"On the automorphism groups of hyperbolic manifolds"
]
| [
"Ryan Budney ",
"David Gabai [email protected] ",
"\nMathematics and Statistics\nSTN CSC\nUniversity of Victoria\nPO BOX 3060V8W 3R4VictoriaCanada\n",
"\nFine Hall\nWashington Road Princeton08544-1000NJUSA\n"
]
| [
"Mathematics and Statistics\nSTN CSC\nUniversity of Victoria\nPO BOX 3060V8W 3R4VictoriaCanada",
"Fine Hall\nWashington Road Princeton08544-1000NJUSA"
]
| []
| Let Diff(N) and Homeo(N) denote the smooth and topological group of automorphisms respectively that fix the boundary of the n-manifold N , pointwise. We show that π n−4 Homeo(S 1 × D n−1 ) is not finitely-generated for n ≥ 4 and in particular π 0 Homeo(S 1 × D 3 ) is infinitely generated. We apply this to show that the smooth and topological automorphism groups of finite-volume hyperbolic n-manifolds (when n ≥ 4) do not have the homotopy-type of finite CW-complexes, results previously known for n ≥ 11 by Farrell and Jones. In particular, we show that if N is a closed hyperbolic n-manifold, and Diff 0 (N) represents the subgroup of diffeomorphisms that are homotopic to the identity, then π n−4 Diff 0 (N) is infinitely generated and hence if n = 4, then π 0 Diff 0 (N) is infinitely generated with similar results holding topologically. | null | [
"https://export.arxiv.org/pdf/2303.05010v1.pdf"
]
| 257,427,442 | 2303.05010 | 122cc1518aee1187118dda61ed3c18addd821582 |
On the automorphism groups of hyperbolic manifolds
9 Mar 2023
Ryan Budney
David Gabai [email protected]
Mathematics and Statistics
STN CSC
University of Victoria
PO BOX 3060V8W 3R4VictoriaCanada
Fine Hall
Washington Road Princeton08544-1000NJUSA
On the automorphism groups of hyperbolic manifolds
9 Mar 2023
AMS Classification numbers Primary: 57M99; Secondary: 57R52, 57R50, 57N50
Keywords: 4-manifolds, 2-knots, isotopy
Let Diff(N) and Homeo(N) denote the smooth and topological group of automorphisms respectively that fix the boundary of the n-manifold N , pointwise. We show that π n−4 Homeo(S 1 × D n−1 ) is not finitely-generated for n ≥ 4 and in particular π 0 Homeo(S 1 × D 3 ) is infinitely generated. We apply this to show that the smooth and topological automorphism groups of finite-volume hyperbolic n-manifolds (when n ≥ 4) do not have the homotopy-type of finite CW-complexes, results previously known for n ≥ 11 by Farrell and Jones. In particular, we show that if N is a closed hyperbolic n-manifold, and Diff 0 (N) represents the subgroup of diffeomorphisms that are homotopic to the identity, then π n−4 Diff 0 (N) is infinitely generated and hence if n = 4, then π 0 Diff 0 (N) is infinitely generated with similar results holding topologically.
Introduction
The main result of this paper is the following. Theorem 1.1 π n−4 Homeo(S 1 × D n−1 ) is infinitely generated and in particular π 0 Homeo(S 1 × D 3 ) is infinitely generated.
In the smooth category, this was the topic of [1], where it was shown that π n−4 Diff(S 1 × D n−1 ) is not finitely generated. Here all automorphism groups act via the identity on the boundary and hence a given automorphism is homotopic to id. To prove this theorem we elaborate a method briefly introduced in [1] using linking numbers coming from collinear and cohorizontal spaces and use it to give in Section 3 a new proof that the δ k families of [1] are linearly independent in the smooth category, provided k ≥ 4. It's new in the sense that it is a direct argument using that theory starting with the δ k families while [1] showed how to express δ k 's in terms of our G(p, q) families. Remarkably, being based on elementary intersection theory, this method also works in the topological category as detailed in Section 4. Our main result has the following applications.
Theorem 1.2 The automorphism groups of S 1 × D n−1 do not have the homotopy-type of finite-dimensional CW-complexes, provided n ≥ 4.
For dimensions n ≥ 6 this result was proven by Hatcher and Wagoner [12] more than 50 years ago, where they showed that the topological and smooth mapping class groups of S 1 × D n−1 are not finitely generated. In contrast, the smooth and topological automorphism groups of S 1 × D n−1 have the homotopy-type of ΩS 1 ≃ Z when n = 2, and when n = 3 these groups are contractible by the work of Hatcher [11].
Theorem 1.3 If N is a complete hyperbolic n-manifold, then π n−4 Diff 0 (N) and π n−4 Homeo 0 (N) are infinitely generated. In particular if n = 4, both π 0 Diff 0 (N) and π 0 Homeo 0 (N) are infinitely generated.
Here Diff 0 and Homeo 0 denote automorphisms homotopic to id. For n ≥ 11, this result was proven by Farrell and Jones [6] over 30 years ago. Our result is sharp since Diff 0 (N) is contractible when n ≤ 3 by [8] and [9]. Details are given in Section 5.
Barbell diffeomorphisms
For the purpose of this paper, an n-dimensional barbell manifold will be the boundary connectsum of two trivial disc-bundles over spheres. We index the barbell manifolds by the dimensions of the spheres, thus we define B n i,j = S i × D n−i ♮S j × D n−j as the standard (i, j)-barbell in dimension n. We will always assume i, j ≥ 1, as none of our constructions below will be of interest when i = 0 or j = 0. The spheres S i × {0} in the first summand and S j × {0} in the second summand we call core spheres. The discs { * } × D n−i in the first summand and { * } × D n−j in the second we call the cocores, where { * } is a choice of basepoint in the respective spheres. The mid-ball we denote B n−1 , this is the embedded co-dimension one disc that separates the boundary connect sum into a copy of S i × D n−i and S j × D n−j respectively.
We will use the terminology Diff(M) to denote the group of diffeomorphisms of a manifold. If M has boundary, we demand the diffeomorphisms restrict to the identity on the boundary, i.e. the restriction map Diff(M) → Diff(∂M) is a constant function.
For the sake of argument, assume i ≤ j. Consider the barbell manifold as fibering over D n−j−1 with fiber B j+1 i,j , and let Diff F (B n i,j ) denote the subgroup of diffeomorphisms preserving this fibering. These are sometimes also known as 'horizontal diffeomorphisms.' Thus we have a homotopy-equivalence Diff F (B n i,j ) ≃ Ω n−j−1 Diff(B j+1 i,j ).
Given that B j+1 i,j is a once-punctured S i × D j−i+1 , there is the restriction fibre-bundle
Diff(B j+1 i,j ) → Diff(S i × D j−i+1 ) → Emb(D j+1 , S i × D j−i+1 )
where the map to the base space is null-homotopic. The map is obtained by fixing a compact (j + 1)-ball in the interior of S i × D j−i+1 and taking the restriction map from Diff(S i × D j−i+1 ). Thus we have a fibre sequence
ΩEmb(D j+1 , S i × D j−i+1 ) → Diff(B j+1 i,j ) → Diff(S i × D j−i+1 )
such that the induced maps on homotopy groups give short exact sequences
0 → π k ΩEmb(D j+1 , S i × D j−i+1 ) → π k Diff(B j+1 i,j ) → π k Diff(S i × D j−i+1 ) → 0.
The map ΩEmb(D j+1 , S i × D j−i+1 ) → Diff(B j+1 i,j ) is obtained by applying isotopy extension to the loop in Emb(D j+1 , S i × D j−i+1 ), and restricting to B j+1 i,j , i.e. the punctured copy of S i × D j−i+1 .
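Since the map Diff(S i × D j−i+1 ) → Emb(D j+1 , S i × D j−i+1 ) is null-homotopic, it induces the zero map on all homotopy groups, so in the long exact sequence of the fibration

$$\cdots \xrightarrow{\;0\;} \pi_{k+1}\mathrm{Emb}(D^{j+1}, S^i\times D^{j-i+1}) \xrightarrow{\;\partial\;} \pi_{k}\mathrm{Diff}(B^{j+1}_{i,j}) \longrightarrow \pi_{k}\mathrm{Diff}(S^i\times D^{j-i+1}) \xrightarrow{\;0\;} \cdots$$

the connecting maps $\partial$ are injective and the restriction maps are surjective, which is exactly the short exact sequence statement displayed above.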
We now analyze three special classes which are not completely disjoint:
(1) In the case of the twice punctured 2-disc B 2 1,1 , the barbell diffeomorphism is the composite of the Dehn twists [5] about the boundary curves such that the signs form a homology, i.e. signs chosen consistent with the boundary orientation.
(2) The barbell diffeomorphism of B n n−2,n−2 is the family studied in [1]. These barbells have the feature that one can knot them in the 'handcuff' fashion, provided n ≥ 3. The diffeomorphisms themselves are defined only when i + j ≥ n, thus requires n ≥ 4.
(3) When i + j = n these barbells can be 'Hopf-linked' in S 1 × D n−1 provided i, j ≥ 3, i.e. n ≥ 6, allowing us to relate to the work of Hatcher and Wagoner [12].
We offer an alternative, more symmetric definition of the induced map π i+j−n Ω n−j S i ≡ Z → π i+j−n Diff(B n i,j ) when i + j ≥ n. Consider two vector subspaces of R n isomorphic to R i and R j . We assume the two vector subspaces meet in a single point, {0}, thus n ≥ i + j. If n > i + j we can use a small perturbation near the origin (say, using bump function) to deform the vector subspaces to disjoint submanifolds. Provided n > i + j + 1, all such small deformations are isotopic, as the normal sphere to the subspace spanned by R i and R j is S n−i−j−1 , which is connected. To ensure we are dealing with compact manifolds, consider D i ⊂ R i and D j ⊂ R j . Using bump functions supported in the interiors of these discs, gives us the following proposition.
(Figure: the linear subspaces R i and R j inside R n = R i × R j × R n−i−j , before and after a small perturbation making them disjoint.)
Proposition 2.1 Consider the spherical family S n−i−j−1 → Emb(D i ⊔ D j , D n ) defined above.
Then the connecting map for the homotopy long exact sequence for the fibration Diff(D n ) → Emb(D i ⊔ D j , D n ) gives us an element of π n−i−j−2 Diff(D n , D i ⊔ D j ). By thickening the embedded copies of D i and D j slightly, we can assume these diffeomorphisms are the identity in a neighbourhood of the embedded copies of D i and D j , thus this is an element of the homotopy group
π n−i−j−2 Diff(B n n−i−1,n−j−1 ).
Moreover, if we let i ′ = n − i − 1 and j ′ = n − j − 1, this can be rewritten as an element of π i ′ +j ′ −n Diff(B n i ′ ,j ′ ), and it is the barbell diffeomorphism, i.e. the induced map on π i ′ +j ′ −n for the map
Ω n−j ′ S i ′ → Diff(B n i ′ ,j ′ ).
String link families in Proposition 2.1 are studied systematically in Koytcheff [14].
Proposition 2.2 The barbell diffeomorphism Z ≡ π i+j−n Ω n−j S i → π i+j−n Diff(B n i,j ) is essential, i.e. there is a homomorphism π i+j−n Diff(B n i,j ) → Z that is non-trivial on this class. Considering the mid-ball B n−1 as fibered by intervals and restricting diffeomorphisms to it gives a map π i+j−n Diff(B n i,j ) → π i+j−2 Emb(I, B n i,j ). This map detects the barbell diffeomorphism. Furthermore, the homomorphism π i+j−2 Emb(I, B n i,j ) → Z is computed by counting signed pairs of points t 1 < t 2 ∈ I such that f (t 1 ) is on the first cocore, and f (t 2 ) is on the second cocore.
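Concretely, for a generic family f : S i+j−2 → Emb(I, B n i,j ) the signed count can be written as (in our notation, with f s the embedding at parameter s, the cocores { * } × D n−i and { * } × D n−j as above, and ε the local intersection sign):

$$W(f) \;=\; \sum_{\substack{(s,\,t_1,\,t_2),\;\; t_1 < t_2 \\ f_s(t_1)\,\in\,\{*\}\times D^{n-i},\;\; f_s(t_2)\,\in\,\{*\}\times D^{n-j}}} \varepsilon(s, t_1, t_2)\;\in\;\mathbb{Z}.$$

The sum is finite for generic families, since the parameters (s, t 1 , t 2 ) form an (i + j)-dimensional family while the two cocore conditions have total codimension i + j.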
The above two propositions are small variants of the arguments in [1], so we leave them to the reader. Proposition 2.1 is obtained by a direct comparison, i.e. these two diffeomorphisms are induced by the same isotopy-extension construction.
(Figure labels: B n n−2,n−2 , E 1 , E 2 , D 2n−6 ≡ D n−3 × D n−3 .)
Proposition 2.2 has an alternative way of being expressed. Given the barbell diffeomorphism family,
S i+j−n → Diff(B n i,j )
we can imagine this family fibering, i.e.
S i+j−n × D n−j−1 → Diff(B j+1 i,j )
. Now consider the mid-ball in B n i,j , this is a copy of D n−1 . The preimage the two cocores in the midball is given by the intersection of the map S i+j−n × D n−1 → B n i,j with the cocores, thus they will be two disjoint (framed) closed manifolds of dimension (i − 1) and (j − 1) respectively in S i+j−n × D n−1 . If we further use the fibering, we can imagine this as a S i+j−n × D n−j−1 -parametrized family of 0-manifolds and (j − i)-manifolds in the B j+1 i,j mid-ball, which is a copy of D j . This family can be readily visualized. The 0-manifold family could be described as a parametrized family of nullcobordisms of an embedded S 0 , and the (j − i)-manifold family is similarly a parametrized null cobordism of S j−i , i.e. this is a family of disjoint spheres: one a copy of S 0 and the other a copy of S j−i which on the boundary of this (i − 1)-dimensional family are spheres that bound disjoint discs -which are used to construct the null cobordism. But since our family is (i − 1)-dimensional this is exactly the right dimension that allows the family to link, which is exactly what is going on. Proposition 2.2 is the homotopy-theoretic analogue of the linking number of this parametrized family of high codimension links. With a slight change of perspective we could perform this analysis in the mid-ball of B n i,j using the S i+j−n parameter space. This will be a family consisting generically of two spheres: one S n−j−1 and the other S n−i−1 in D n−1 , thus a family parametrized by S i+j−n is precisely the correct dimension to allow for a linking number.
While Proposition 2.2 tells us that the inclusion Ω n−j S i → Diff(B n i,j ) is non-trivial on the first non-trivial homotopy group, the inclusion is in fact a retract, i.e. non-trivial on all homotopy and homology groups of Ω n−j S i . To show this we need to construct a map back.
Definition 2.3 Observe there are maps
Diff(B n i,j ) → Ω n−j S i , Diff(B n i,j ) → Ω n−i S j given by restricting to cocores and projecting to the cellular skeleton S i ∨ S j , then forgetting the complementary sphere wedge summand.
Proposition 2.4
The inclusion of the fiber-preserving subspace Ω n−j S i → Diff(B n i,j ) is a retract, i.e. composition with the above map Diff(B n i,j ) → Ω n−j S i is homotopic to the identity.
The composite with the other map, Diff(B n i,j ) → Ω n−i S j , is the suspension. That is, the composites
Ω n−j S i → Diff(B n i,j ) → Ω n−j S i and Ω n−j S i → Diff(B n i,j ) → Ω n−i S j
are homotopic to Id and to Σ j−i respectively.
Proof
The idea is to chase through the definition of our family, using the fibration B j+1 i,j → B n i,j → D n−j−1 , thinking of the fiber as a once-punctured S i × D j−i+1 . This gives us the inclusion Ω n−j S i → Diff(B n i,j ) as fiber-preserving diffeomorphisms. We consider the induced diffeomorphisms of the B j+1 i,j fibers. The cocore complementary to the S i core sphere is a copy of D j+1−i , while the cocore complementary to the S j core sphere is a copy of the interval, D 1 .
The fact that the composite Ω n−j S i → Diff(B n i,j ) → Ω n−j S i is the identity map (after suitable identifications) is derivable immediately from the definition, carefully keeping track of the suspension parameters.
The composite Ω n−j S i → Diff(B n i,j ) → Ω n−i S j is depicted in Figure 3. The argument is essentially identical to the previous case, but our fibrewise cocores are copies of D j+1−i , i.e. an interval with j − i additional parameters. These additional parameters supply the canonical null-homotopies of the embedded interval, which is another way of stating that the map Ω n−j S i → Ω n−i S j is the suspension Σ j−i .
(Figure 3: the fiber B j+1 i,j ≅ S i × D j+1−i ♮ S j × D 1 .)
Proposition 2.5 The composites
Ω n−j S i → Diff(B n i,j ) → Diff(S i × D n−i ) and Ω n−j S i → Diff(B n i,j ) → Diff(S j × D n−j ) are canonically null-homotopic.
Proof Recall the map Ω n−j S i → Diff(B n i,j ) was defined via a fibrewise isotopy-extension process. The first map in the statement of the proposition corresponds to forgetting the ball used to construct the isotopy extension, the second map corresponds to filling in the manifold in which the
The rationale behind constructing the above null isotopies is that we can use them to construct certain null pseudo-isotopies, once we embed the barbell manifolds in larger manifolds. This is the content of Proposition 2.6.
While the barbell diffeomorphisms themselves Ω n−j S i → Diff(B n i,j ) are not null in pseudo-isotopy, i.e. they do not lift to maps Ω n−j S i → PDiff(B n i,j ), the implanted barbell diffeomorphisms are often null in pseudoisotopy. The next proposition is a variation of Proposition 2.5.
Proposition 2.6
Given an embedded barbell B n i,j → N where N is an n-manifold, the induced map
Ω n−j S i → Diff(N)
is null in pseudo-isotopy, provided one of the two core spheres is smoothly slice, i.e. is the boundary of a smoothly-embedded D i+1 or D j+1 in N × I . Precisely, there is a lift of the barbell diffeomorphism family through the restriction map PDiff(N) → Diff(N).
Proof
The idea is to consider the manifold S i × D n−i (or S j × D n−j ) as the barbell manifold B n i,j union an i + 1 (or j + 1)-handle respectively. We embed B n i,j × I into N × I using the map f (p, t) = (g(p), t/2) where g : B n i,j → N is our barbell embedding. We embed the (i + 1) or (j + 1)-handle in N × I so that its intersection with N × [0, 1/2] exists in U × [0, 1/2] where U is a small neighbourhood of g(B n i,j ) in N . We can do this by ensuring the height function for the smooth slice disc has height > 1/2 outside of a small neighbourhood of the slice sphere. This ensures the handle, in its interior, is disjoint from the image of f . The image of f union this handle is diffeomorphic to S i × D n−i or S j × D n−j respectively, thus our family of diffeomorphisms Ω n−j S i → Diff(N) extends to a diffeomorphism of N × I , using the null-isotopy of Proposition 2.5 on the image of f union the handle, which extends to N × I via the identity map. Proposition 2.6 was inspired by a conversation with David Gay, who has alternative descriptions of such null pseudoisotopies.
Figure 4: Null-pseudoisotopy via embedded null isotopy. The embedded handle is in red; the image f (B n i,j × [0, 1]) sits in N × I .
We give a surgery description of the barbell diffeomorphisms in the case i + j = n. We start with the observation that one full Dehn twist about a curve in a punctured disc can be visualized by a technique of embedded surgeries.
Figure 5: Surgery description of a Dehn twist. In the upper-left figure we see a blue arc splitting the twice-punctured disc into two annuli. We perform a Dehn twist about the red circle, with the resulting embedded arc appearing in the bottom-left. In the top right we have two linking copies of S 0 embedded in the blue arc, representing the attaching maps for two one-handles on the left (in orange) and right (in magenta). The result of the embedded surgery appears in the bottom-right.
The barbell diffeomorphism of B n i,j for i + j = n has an analogous description. One replaces the blue arc in Figure 5 by the mid-ball (diffeomorphic to D n−1 ). And one replaces the orange and magenta 1-handle attachments with i and j-handle attachments respectively, with the i-handle being the core of the S i × D j summand, and the j-handle attachment being the core of the S j × D i summand of B n i,j . The important issue is that the boundaries of the handle attachments are linked spheres in the mid-ball S i−1 ⊔ S j−1 → D n−1 .
Proposition 2.7
The action of the barbell diffeomorphism on the mid-ball of B n i,j when n = i + j is isotopic to replacing the mid-ball by its surgered embedding, where one does surgery on a trivially framed link S i−1 ⊔ S j−1 ⊂ D n−1 (the mid-ball) where the first sphere is attaching map for the core of the S i × D n−i summand, and the S j−1 is the attaching sphere for the core of the S j × D n−j summand. The link S i−1 ⊔ S j−1 ⊂ D n−1 has unknotted components, but the components have linking number ±1.
Proof To see this, consider B n i,j fibering over D i−1 with fiber B j+1 i,j . Consider the action of the barbell diffeomorphism on the mid-balls in the fibers. Specifically, consider the intersection of the image of these mid-balls with the cocores. Generally these will consist of a disjoint union S 0 ⊔ S j−i . The S 0 comes from the S j cocore, while the S j−i comes from the S i cocore. At the centre of the D i−1 parameter space the S j−i and S 0 sit on a common D j−i+1 with one point of S 0 inside the S j−i and the other on the outside. As one moves the D i−1 parameter the S 0 is pushed out of the subspace of the S j−i , and as one approaches the boundary first the S j−i is coned-off, then the S 0 is coned-off. This is exactly the slicing perspective on the standard linked pair
S j−1 ⊔ S i−1 ⊂ D n−1 , slicing over D i−1 .
The advantage of this perspective is that it allows us to give a relatively elementary combinatorial description of the barbell diffeomorphism, in terms of handle attachments.
Proposition 2.8
The barbell diffeomorphism, as an element of π 0 Diff(B n i,j ) with i + j = n is non-trivial in pseudo-isotopy. We have two arguments. The restriction to the mid-ball
π 0 Diff(B n i,j ) → π 0 Emb(D n−1 , B n i,j ) is non-trivial in pseudo-isotopy, indeed if we let Map(D n−1 , B n i,j ) denote the space of maps of D n−1 to B n i,j
that restrict to the standard inclusion (the boundary connect-sum splitting disc) on the boundary, then the map π 0 Diff(B n i,j ) → π 0 Map(D n−1 , B n i,j ) is homotopically non-trivial. This latter space, up to a canonical homotopy-equivalence, is Ω n−1 (S i ∨ S j ).
The restriction to either cocore
π 0 Diff(B n i,j ) → π 0 Emb(D i , B n i,j ) or π 0 Diff(B n i,j ) → π 0 Emb(D j , B n i,j ) is non-trivial in pseudo-isotopy. Similarly, if we go one step further, the map π 0 Diff(B n i,j ) → π 0 Map(D i , B n i,j ) is non-trivial. Proof
The key observation is that the barbell diffeomorphism restricted to the mid-ball is obtained by surgery on a 2-component link, S i−1 ⊔ S j−1 ⊂ D n−1 corresponding to the core S i and S j respectively, i.e. the S i−1 is the attaching sphere for the i-handle, and the S j−1 is the attaching sphere for the j-handle, when building B n i,j from the midball by handle attachments. Given an
embedding D n−1 → B n i,j it induces an element of Ω n−1 B n i,j ≃ Ω n−1 (S i ∨ S j ) and the barbell diffeo- morphism induces the Whitehead product [w i , w j ] where w i : S i → S i ∨ S j is the inclusion of S i , and w j : S j → S i ∨ S j is the inclusion of S j .
A second pseudo-isotopy obstruction follows from Proposition 2.4. Specifically, the embedding of the i-dimensional cocore may be projected to the S i -core, giving an element of Ω i S i . For the barbell diffeomorphism, this is a generating element of π 0 Ω i S i ≡ π i S i ≃ Z.
Proposition 2.9
The action of the barbell diffeomorphism for i + j = n on the cocores corresponds to tubing with the complementary core sphere. The intersection of the image of the cocores with the mid-balls are Hopf-linked embedded copies of S i−1 ⊔ S j−1 , as in the Figure 6. Figure 6: Barbell applied to cocores having 'linked' tubings.
Proof
Consider B n i,j fibering over D i−1 with fiber B j+1 i,j . The map D i−1 → Diff(B j+1 i,j ) corresponds to the diffeomorphisms induced by ambient isotopy as one slides the 'j puncture' about the 'i puncture'. The cocores in the fiber B j+1 i,j correspond to embedded copies of D 1 (corresponding to the j-puncture) and D j−i+1 (the i-puncture) respectively. In the i = 1 case, the cocore pair are linked as described, by an explicit performance of the isotopy-extension. When i > 1 the cocores are no longer linked in B j+1 i,j . What we see is a fibering of the standard linked pair, fibered over D i−1 .
Notice we have several equivalent ways to distinguish the barbell diffeomorphism from its inverse. Proposition 2.8 tells us that if we consider the intersection of the mid-ball with the image of the cocores, we get a standard linked pair. If we orient the linked pair using normal bundles (i.e. the standard in oriented intersection theory) this would be a labeled and oriented 2-component link. The linking number is therefore a well-defined integer and these will be opposite for the barbell diffeomorphism and its inverse. There is an analogous result when i + j > n. In this case, the cocores intersected with the mid-ball are too low-dimensional to link, but in the family of maps S i+j−n → Ω n−j S i → Diff(B n i,j ) the family of cocores intersected with the mid-ball analogously link.
(Figure 7: the cocores tubed along the complementary core spheres, for the barbell diffeomorphism and for its inverse; the two resulting linked pairs S i−1 ⊔ S j−1 in the mid-ball have opposite linking numbers.)
Barbell diffeomorphisms are closely related to the diffeomorphisms constructed by Watanabe [19]. See [1] for details.
3 π n−4 Diff(S 1 × D n−1 ) and the δ k diffeomorphisms.
In this section we prove Theorem 3.1, which gives a new proof that the homotopy group π n−4 Diff(S 1 × D n−1 ) is not finitely generated for n ≥ 4. On the large scale, this proof has several similarities to the one presented in [1] in that we compute the same W 3 -invariant on the same implanted barbell diffeomorphisms δ k to show they are linearly independent. The principal difference between the argument given here, and the one in [1], is that the computation of the W 3 -invariant given here is directly from our definition of the invariant W 3 and diffeomorphisms δ k . In [1] we deduced relationships between the W 3 -invariants of 'nearby' implanted barbell diffeomorphisms, somewhat like bilinearity or a Skein relation. This relationship gave us a tool to reduce the computation of the W 3 -invariant of any implanted barbell diffeomorphism with linearly-embedded cuffs to that of W 3 (G(p, q)).
The elements δ k ∈ π n−4 Diff(S 1 × D n−1 ) are the implanted barbell diffeomorphisms that come from embeddings of the B n n−2,n−2 barbells using 'handcuff embeddings' as depicted in Figure 8. The element of π n−4 Diff(B n n−2,n−2 ) corresponds to the image of the first non-trivial homotopy
group (π i+j−n ), under the map Ω n−j S i → Diff(B n i,j ) when i = j = n − 2 defined in Section 2
. The inspiration for the W 3 -invariant comes from Proposition 2.2, and it can also be seen in Figure 7. Specifically, Proposition 2.2 states that barbell diffeomorphisms are detectable by considering the mid-ball B n−1 to be fibered by intervals, giving a map Diff(B n n−2,n−2 ) → Ω n−2 Emb(I, B n n−2,n−2 ). In this formulation we consider pairs of points t 1 < t 2 ∈ I such that the embedding sends t 1 to the first cocore, and t 2 to the second, as a signed intersection number for the family. Another way to state this is we are counting the linking number of the standard linking pair, of the pre-image of the cocores, intersected with the mid-ball, i.e. it is a double-point formula for the linking number of the pair depicted in Figure 7. Our preference is to state our invariant as a map of the form π n−4 Diff(B n n−2,n−2 ) → π n−4 Ω n−2 Emb(I, B n n−2,n−2 ) ≡ π 2n−6 Emb(I, B n n−2,n−2 ) → Z as this is an expression that we can generalize to Diff(S 1 × D n−1 ). Since we understand the barbell diffeomorphism when restricted to the mid-ball, we can similarly 'scan'
Ω 2 S n−2 → Diff(B n n−2,n−2 ) through {1} × D n−1 ⊂ S 1 × D n−1 , giving a map Diff(S 1 × D n−1 ) → Ω n−2 Emb(I, S 1 × D n−1 ). The W 3 -invariant of an element of π n−4 Diff(S 1 × D n−1 ) takes values in Q ⊗ π 2n−6 Emb(I, S 1 × D n−1 ). This homotopy group is detectable at the 3 rd -stage of the Taylor tower, thus we consider the induced map of 3-point configuration spaces to extract invariants of the map.
(Figure 8: the δ k barbell in S 1 × D n−1 , with labels 1, 2, 3, . . . , k − 3, k − 2, k − 1.)
So as our brown interval passes through the bar, it curls as described in Figure 10. If we think of our (2n − 6)-parameter family as D n−3 × D n−3 , the first D n−3 factor corresponds to the suspension parameter S n−2 ≡ ΣS n−3 and controls the red cylinder being swung around the red cuff. Similarly, the second copy of D n−3 corresponds to the suspension parameter of the blue cuff S n−2 ≡ ΣS n−3 and parametrizes the blue cylinder being swung around the blue cuff. Lastly, the red and blue straight lines (8 in total) depicted at the bottom of Figure 10 indicate the points of the embedding that intersect the spanning disc for the cuffs, i.e. the points on the embedding that have parameters with double-points.
Theorem 3.1
$$W_3(\delta_k) \;=\; (k-1)\Big( t_1^{-1}t_3^{1-k} + (-1)^{n}\, t_1^{1-k}t_3^{-1} - t_1^{2-k}t_3 + (-1)^{n-1}\, t_1 t_3^{2-k} \Big) \;+\; t_1 t_3^{k-1} + (-1)^{n}\, t_1^{k-1}t_3 - t_1^{1-k}t_3^{2-k} + (-1)^{n-1}\, t_1^{2-k}t_3^{1-k}.$$
The W 3 -invariant takes values in the group Q ⊗ π 2n−3 C ′ 3 [S 1 × D n−1 ]/R and the elements {W 3 (δ k ) : k ≥ 4} are linearly-independent over Q.
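For instance, with the grouping as displayed above (our reading of the formula), setting k = 4 gives

$$W_3(\delta_4) \;=\; 3\big( t_1^{-1}t_3^{-3} + (-1)^n t_1^{-3}t_3^{-1} - t_1^{-2}t_3 + (-1)^{n-1} t_1 t_3^{-2} \big) + t_1 t_3^{3} + (-1)^n t_1^{3}t_3 - t_1^{-3}t_3^{-2} + (-1)^{n-1} t_1^{-2}t_3^{-3},$$

and the monomials $t_1^{a}t_3^{b}$ that occur change with k, which is what underlies the linear-independence claim.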
The remainder of this section is devoted to explaining the above: what precisely the group Q ⊗ π 2n−3 C ′ 3 [S 1 × D n−1 ]/R is, how it can be considered a subgroup of Q ⊗ π 2n−6 Emb(I, S 1 × D n−1 ), and how we compute W 3 (δ k ) from Figure 10 using only the double-point data. That said, the claimed formula above for W 3 (δ k ) has some clear features in common with Figure 10. Notice that the bar crosses {1} × D n−1 at (k − 1) locations, marked with green dots. There is similarly a term with coefficient k − 1 in the W 3 (δ k ) formula. Roughly speaking, the remaining term in the W 3 (δ k ) computation is a correction term, since there is a different combinatorial pattern in the double-point data for the family as it crosses through the green dot labelled k − 1.
Definition 3.2
If M is a manifold, the configuration space of k points in M is the space
$$C_k(M) = \{(p_1, \cdots, p_k) \in M^k : p_i \neq p_j \ \forall\, i \neq j\}.$$
The Fulton-Macpherson compactification of C k (M) is denoted C k [M]. This is obtained by taking the closure of C k (M) under the product map
$$C_k(M) \to M^k \times (S^n)^{\binom{k}{2}} \times [0,\infty]^{\binom{k}{3}}$$
where the inclusion C k (M) → M k is the set-theoretic inclusion C k (M) ⊂ M k . The maps C k (M) → S n come from taking unit displacement vectors between pairs of points, $\frac{p_i - p_j}{|p_i - p_j|}$, i.e. we assume M ⊂ R n+1 . Lastly, the maps C k (M) → [0, ∞] come from the relative ratio maps $\frac{|p_i - p_j|}{|p_i - p_l|}$ where {i, j, l} ⊂ {1, 2, · · · , k}. Provided M is compact, C k [M]
is a compact manifold with corners. Moreover, the construction is natural with respect to embeddings and the inclusion C k (M) → C k [M] is a homotopy-equivalence.
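The coordinates entering this compactification are elementary to compute; the following Python sketch (our own illustration, with a hypothetical function name) evaluates the unit displacement vectors and relative ratios for a finite configuration.

```python
import numpy as np
from itertools import combinations

def fm_coordinates(points):
    """Coordinates used to compactify a configuration, as in Definition 3.2.

    points: (k, m) array of k distinct points in R^m (thought of as M embedded in Euclidean space).
    Returns unit displacement vectors (p_i - p_j)/|p_i - p_j| for pairs i < j,
    and the ratios |p_i - p_j| / |p_i - p_l| for representative triples i < j < l.
    """
    p = np.asarray(points, dtype=float)
    k = len(p)
    directions = {(i, j): (p[i] - p[j]) / np.linalg.norm(p[i] - p[j])
                  for i, j in combinations(range(k), 2)}
    ratios = {(i, j, l): np.linalg.norm(p[i] - p[j]) / np.linalg.norm(p[i] - p[l])
              for i, j, l in combinations(range(k), 3)}
    return directions, ratios

dirs, rats = fm_coordinates(np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 2.0]]))
print(dirs[(0, 1)], rats[(0, 1, 2)])
```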
In the context where we are considering embedding spaces Emb(I, M), the notation C ′ k [M] indicates a small variation of C k [M]. Specifically, we take the pull-back of the unit tangent bundle UTM k+2 → M k+2 under the map C k+2 [M] → M k+2 , and then restrict to the subspace where p 1 (and its vector) agree with the initial-point of the embeddings of Emb(I, M), and p k+2 (and its vector) agree with the terminal point of the embeddings defining Emb(I, M). See [18] for details. In the case of the interval, we restrict to the subspace of C ′ k [I] such that t 1 ≤ t 2 ≤ · · · ≤ t k , i.e. we choose a standard connected-component. In this case, C ′ k [I] is known to be the k-th Stasheff Polytope, or associahedron. Denote the generators of π 1 C k [S 1 × D n−1 ] ≃ Z k by {t i : i = 1, 2, · · · , k}. The class w ij ∈ π n−1 C k [S 1 × D n−1 ] has all k points stationary, with the exception of point j that orbits around point i.
π n−1 C k [S 1 × D n−1 ] is generated by the set {t_l^q · w_{ij} : i, j, l, q}, with the relations:
• w_{ii} = 0 for all i;
• w_{ij} = (−1)^n w_{ji} for all i ≠ j;
• t_l · w_{ij} = w_{ij} provided l ∉ {i, j};
• t_j · w_{ij} = t_i^{−1} · w_{ij} for all i, j.
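For example, combining the last two relations (with k = 3, say): t_3 acts trivially on w_{12} and t_2 · w_{12} = t_1^{−1} · w_{12}, so every class of the form t_1^a t_2^b t_3^c · w_{12} reduces to a power of t_1 acting on w_{12}:

$$t_1^{a}\, t_2^{b}\, t_3^{c} \cdot w_{12} \;=\; t_1^{\,a-b} \cdot w_{12}.$$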
The way one proves the above is to observe the forgetful map
C k (S 1 × D n−1 ) → C k−1 (S 1 × D n−1 )
is a locally-trivial fiber bundle. Moreover, the map has a section, so the homotopy-groups of C k (S 1 × D n−1 ) are isomorphic to the product of the homotopy groups of the fibers (iteratively), which are (individually) wedges of S 1 with various copies of S n−1 . The S 1 factors contribute the t i generators in π 1 , while the sphere factors contribute the w ij generators. By the Hilton-Milnor theorem the higher rational homotopy groups are generated by Whitehead Products. The Whitehead Product is a bilinear mapping [·, ·] : π i X × π j X → π i+j−1 X satisfying the
$$(-1)^{pr}\,[[f,g],h] \;+\; (-1)^{pq}\,[[g,h],f] \;+\; (-1)^{rq}\,[[h,f],g] \;=\; 0,$$
where f ∈ π p X, g ∈ π q X, h ∈ π r X with p, q, r ≥ 2.
Due to the form of the above relation it is sometimes called a 'graded Jacobi identity' in analogy with the Lie Bracket.
There are two elementary relations satisfied by the w ij classes via the Whitehead product:
• [w ij , w lm ] = 0 when {i, j} ∩ {l, m} = ∅.
• [w ij + w il , w lj ] = 0 for all i, j, l .
The latter relation should be viewed as a generalized 'orbital system' map S n × S n → C 3 (D n ) where there is an earth-moon-sun orbital triple. For this interpretation one views the Whitehead Bracket as the obstruction to extending a wedge of maps, i.e. S i ∨ S j → X to the product S i × S j → X . For an orbital triple such a map exists, thus the corresponding Whitehead bracket is zero.
• [w ij , w lm ] = 0 if {i, j} ∩ {l, m} = ∅,
• [w ij , w jl ] = [w jl , w li ] = [w li , w ij ],
• t l .[ f , g] = [t l . f , t l .g].
A relatively constructive way to verify much of the above is via intersection theory. Fix a unit direction ζ ∈ ∂D n−1 . Define t i Co 2 1 to consist of pairs of points (p 1 , p 2 ) ∈ C 2 (R 1 × D n−1 ) such that the displacement vector t i 2 .p 2 − p 1 is a positive multiple of ζ . We call t i Co 2 1 a cohorizontal manifold. Given an element of π n−1 C 2 (S 1 × D n−1 ), we lift the map to the universal cover of C 2 (S 1 × D n−1 ), which is naturally a subspace of C 2 (R × D n−1 ), and take its intersection with the t i Co 2 1 submanifold; this intersection is a well-defined framed 0-dimensional manifold as a cobordism class, thus an integer. This invariant detects the class t i 1 w 12 . To similarly detect homotopy classes in π 2n−3 C 3 (S 1 × D n−1 ) we have the collinear classes. This will consist of three points sitting on a 'straight line' in S 1 × D n−1 . Roughly speaking, by 'straight line' we are referring to geodesics in the standard Euclidean metric on S 1 × D n−1 . To be more precise, the manifold Col 1 α,β is the collection of points of the form (p 1 ,
p 2 , p 3 ) ∈ C 3 (R × D n−1 ) such that (p 2 , t α 1 p 1 , t β 3 p 3 )
sit on a straight line in R × D n−1 in the listed order. The manifold Col 3 α,β is similarly defined by the requirement (t α 1 p 1 , t β 3 p 3 , p 2 ) sit on a straight line in R × D n−1 in the listed order. The universal cover of C 3 (S 1 × D n−1 ) is naturally an open subspace of (R 1 × D n−1 ) 3 , thus we can consider Col 1 α,β and Col 3 α,β naturally as subspaces of the universal cover of C 3 (S 1 × D n−1 ). The manifolds Col 1 α,β and Col 3 α,β are disjoint and closed in the universal cover of C 3 (S 1 × D n−1 ). Given a map S 2n−3 → C 3 (S 1 × D n−1 ), we take its lift to the universal cover S 2n−3 →C 3 (S 1 × D n−1 ) and take the pre-image of the pair (Col 1 α,β , Col 3 α,β ). Generically, this gives us a disjoint pair of compact oriented manifolds of dimension (n − 2) in S 2n−3 , thus they have a well-defined linking number. This linking number detects the coefficient of t α 1 t β 3 [w 12 , w 23 ]. Given that linking numbers of pre-images of the pair (Col 1 α,β , Col 3 α,β ) can be difficult to visualize and compute for a lift of an arbitrary map S 2n−3 → C 3 (S 1 × D n−1 ), we describe an isotopy of the pair (Col 1 α,β , Col 3 α,β ) that converts the computation into something that is often more manageable. For ǫ ∈ R consider the diffeomorphism of R n given by
$$P_{\epsilon}(x_1, x_2, \cdots, x_n) \;=\; \Big(x_1, x_2, \cdots, x_{n-1},\; x_n + \epsilon \sum_{i=1}^{n-1} x_i^2\Big).$$
This diffeomorphism has the feature that it converts the x n = c hyperplanes into paraboloids, when ǫ ≠ 0; similarly it turns lines in the x n = c plane into parabolas, but on a line parallel to the x n -axis the diffeomorphism acts by translation. Moreover, P ǫ 1 ◦ P ǫ 2 = P ǫ 1 +ǫ 2 and P ǫ −1 = P −ǫ . If we consider the Col i α,β manifolds to be submanifolds naturally defined in C 3 (R n ) (i.e. before we pull them back to C 3 (R × D n−1 )), we can pull them back via the diffeomorphism P ǫ , and these will be manifolds of coparabolic triples (plus triples on the lines parallel to the x n -axis). As P ǫ is an orientation-preserving diffeomorphism, these manifolds when pulled-back to the universal cover of C 3 (S 1 × D n−1 ) also detect the t α 1 t β 3 [w 12 , w 13 ] classes. We denote the pull-backs of the collinear manifolds the coparabolic manifolds, i.e.
Cop i α,β,ǫ = P * ǫ (Col i α,β ).
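The algebraic properties of P_ǫ quoted above are easy to verify symbolically. The following short Python sketch is our own sanity check with sympy, not part of the paper; the case n = 3 is chosen only for concreteness. It confirms the composition rule P_{ǫ_1} • P_{ǫ_2} = P_{ǫ_1+ǫ_2}, the inverse P_ǫ^{−1} = P_{−ǫ}, and the fact that P_ǫ translates lines parallel to the x_n-axis.

```python
# Sanity check (ours) of the shearing diffeomorphism P_eps for n = 3.
import sympy as sp

x1, x2, x3, e1, e2 = sp.symbols('x1 x2 x3 eps1 eps2', real=True)

def P(eps, pt):
    # P_eps fixes the first n-1 coordinates and adds eps * sum_{i<n} x_i^2 to the last one.
    *head, last = pt
    return (*head, last + eps * sum(c**2 for c in head))

pt = (x1, x2, x3)

# Composition: P_{eps1} o P_{eps2} = P_{eps1 + eps2}.
assert all(sp.simplify(a - b) == 0 for a, b in zip(P(e1, P(e2, pt)), P(e1 + e2, pt)))

# Inverse: P_{-eps} o P_{eps} = identity.
assert all(sp.simplify(a - b) == 0 for a, b in zip(P(-e1, P(e1, pt)), pt))

# On a line parallel to the x_n-axis the map is a translation:
# the displacement of the last coordinate does not depend on x_n.
assert sp.diff(P(e1, pt)[-1] - pt[-1], x3) == 0
print("composition, inverse and vertical-translation properties verified")
```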
Given a map S^{2n−3} → C_3(S^1 × D^{n−1}), we lift to the universal cover and take the pre-images of Cop^i_{α,β,ǫ} for i = 1, 3. Given that these are disjoint closed, oriented manifolds in the codomain, their pre-images are disjoint compact, oriented manifolds in S^{2n−3}. Generically we can assume these maps contain no cohorizontal triples, since the cohorizontal triple condition is of codimension 2n − 2 > 2n − 3. Thus, if ǫ is large, and our triples of points are not approximating infinitesimal triples along the embedding, we can assume that the critical point of the parabola occurs outside the R × D^{n−1}, and this critical point separates two of the three points in the parabolic triple. Thus in the limit, the linking of the pre-image of the pair (Cop^1_{α,β,ǫ}, Cop^3_{α,β,ǫ}) is computable as the linking numbers of the pre-image of the pair (t^α Co^1_2 − t^{α−β} Co^1_3, t^{β−α} Co^3_1 − t^β Co^3_2)
, as steep segments of parabolas approximate cohorizontal lines. The reason for the signs, such as the minus sign in front of the t α−β Co 1 3 term is that when computing the signed intersection number of a parabolic triple, all the signs for the Co 1 2 and Co 1 3 are the same, with the exception for the reversal of direction of the parabola.
Lemma 3.4 is a variant of an argument the first author learned from Misha Polyak verbally in 2005, who later described his argument in more detail in his presentation [17]. While Polyak's argument occurs at the 4 th -stage of the Taylor tower, this variant works at the 3 rd .
As in [1], given a map S n−4 → Diff(S 1 × D n−1 ) we compose with the scanning map Diff(S 1 × D n−1 ) → Ω n−2 Emb(I, S 1 × D n−1 ). We further compose with the 3 rd stage of the Taylor tower, which we think of as the evaluation map Emb(I,
S 1 × D n−1 ) × C ′ 3 [I] → C ′ 3 [S 1 × D n−1 ], or its adjoint ev 3 : Emb(I, S 1 × D n−1 ) → Map(C ′ 3 [I], C ′ 3 [S 1 × D n−1 ])
where in this mapping space we demand that maps are stratum-preserving and aligned, meaning that when points collide in the domain, the corresponding points collide in the codomain, moreover, their associated tangent vectors agree. Putting these ingredients together we have an induced map
S n−4 ∧ S n−2 ≡ S 2n−6 → Map(C ′ 3 [I], C ′ 3 [S 1 × D n−1 ]).
One could have some concerns here that the lifts of these maps to the universal cover of C ′ 3 [S 1 × D n−1 ] have transversality issues on various boundary strata of C 3 [I]. Specifically, if our family in Emb(I, S 1 × D n−1 ) has any velocity vectors parallel to the 'vertical' direction (i.e. the S 1 -direction of S 1 × D n−1 ) then one has collinear triples for infinitely many α, β of the Col i α,β variety. For example, on the t 1 = t 2 = t 3 stratum, but also there are intersections of distinct codimensions on the t 1 = t 2 stratum, where the co-dimension depends on the choice of α, β. There are several ways to avoid these transversality problems. For example, if the family has no vertical tangent vectors, this issue does not arise. That said, vertical tangent vectors are not known to be avoidable (that said, we do know the tangent vector field along the embeddings to be canonically nullhomotopic). The underlying geometric problem is that the collinear manifolds contain all the vertical lines R × {p} for p ∈ D n−1 . Thus we can change our model to avoid this problem. Using the coparabolic manifolds Cop i α,β,ǫ with ǫ = 0 suffices. Our families S 2n
−6 × C ′ 3 [I] → C ′ 3 [S 1 × D n−1 ]
are generically transverse to these coparabolic manifolds, pulling them back to oriented co-dimension n − 1 submanifolds, along all strata.
The associated map on the 2 nd stage S 2n−6 → Map(C ′ 2 [I], C ′ 2 [S 1 × D n−1 ]) is torsion (see [1] for details), so we can attach a null-homotopy to an appropriate multiple of the 3 rd stage, giving a homotopy-class of map S 2n−3 → C ′ 3 [S 1 × D n−1 ]. Depending on which null-homotopy we attach, we can get a different homotopy-class of map S 2n−3 → C ′ 3 [S 1 × D n−1 ]. This is the subject of items (1)-(4) below.
As we have seen π 2n−3 C ′ 2 [S 1 × D n−1 ] is isomorphic to π 2n−3 (S 1 ∨ S n−1 ) ⊕ 2 π 2n−3 S n−1 . Modulo torsion, the generators of π 2n−3 (S 1 ∨ S n−1 ) are the Whitehead products of elements t k w 12 for k ∈ Z. This gives us the result that π 2n−3 C ′ 2 [S 1 × D n−1 ], mod torsion, is isomorphic to
Z[t_1^{±1}, t_2^{±1}] / ⟨ t_1 t_2 − 1 ⟩
as a module over the group-ring of the fundamental group. The generator of π_{2n−3} C′_2[S^1 × D^{n−1}] corresponding to a monomial t_1^α t_2^β is t_1^α t_2^β w_12.
By attaching a homotopy-
class of maps S 2n−6 × I × C 2 [I] → C ′ 2 [S 1 × D n−1 ] to a closed-off S 2n−6 × C 3 [I] → C ′ 3 [S 1 × D n−1 ]
we change the homotopy class by adding:
(1) [t_2^α w_23, t_2^β w_23]. This comes from the t_1 = 0 face. Thus the generator t_1^α w_12 is mapped to t_2^α w_23, and a Whitehead bracket [t_1^α w_12, t_1^β w_12] is mapped to [t_2^α w_23, t_2^β w_23].
(2) [t_1^α w_12, t_1^β w_12] to [t_1^α w_13 + t_2^α w_23 + a_1 w_21, t_1^β w_13 + t_2^β w_23 + a_1 w_21]
. This comes from the t 1 = t 2 face map, i.e. the inclusion
C ′ 2 [S 1 × D n−1 ] → C ′ 3 [S 1 × D n−1 ]
that doubles the first point, i.e. (p 1 , p 2 ) −→ (p 1 , ǫ + p 1 , p 2 ), where the perturbation ǫ + p 1 is in the direction of the velocity vector. The integer a 1 is the degree of this velocity vector map. This map sends w 12 to w 13 + w 23 + a 1 w 21 , t 1 to t 1 t 2 and t 2 to t 2 . The 2nd stage of the Taylor tower induces a nullhomotopy of the velocity vector map, so we can assume a 1 = 0, but it is of interest that the following computation gives the same answer for a 1
≠ 0. Thus it sends [t_1^α w_12, t_1^β w_12] to [t_1^α w_13 + t_2^α w_23 + a_1 w_21, t_1^β w_13 + t_2^β w_23 + a_1 w_21].
Expanding this bracket using bilinearity we get
= ( −t_1^{α−β} t_3^{−β} + (−1)^n t_1^{β−α} t_3^{−α} ) [w_12, w_23] + [t_1^α w_13, t_1^β w_13] + a_1 ( (−1)^{n−1} t_3^{−β} + (−1)^n t_3^{−β} + t_1^{−α} − t_1^{−α} ) [w_12, w_23]
where the latter row comes from collecting the terms involving a 1 , and clearly these terms sum to zero.
(3) [t α 1 w 12 + t α 1 w 13 + a 2 w 23 , t β 1 w 12 + t β 1 w 13 + a 2 w 23
]. This is for the t 2 = t 3 facet. This corresponds to the map
C ′ 2 [S 1 × D n−1 ] → C ′ 3 [S 1 × D n−1 ] that doubles the second point, i.e. (p 1 , p 2 ) −→ (p 1 , p 2 , ǫ + p 2 )
. This map sends w 12 to w 12 + w 13 + a 2 w 23 , t 1 to t 1 and t 2 to t 2 t 3 .
Thus [t α 1 w 12 , t β 1 w 12 ] −→ [t α 1 w 12 + t α 1 w 13 + a 2 w 23 , t β 1 w 12 + t β 1 w 13 + a 2 w 23 ].
Like the previous case, this simplifies to
= ( −t_1^α t_3^{α−β} + (−1)^n t_1^β t_3^{β−α} ) [w_12, w_23] + [t_1^α w_13, t_1^β w_13] + a_2 ( t_1^β − t_1^β + (−1)^{n−1} t_1^α + (−1)^n t_1^α ) [w_12, w_23].
Again, the terms with a 2 cancel.
(4) [t_1^α w_12, t_1^β w_12]. This is for the t_3 = 1 facet. This corresponds to the inclusion C′_2[S^1 × D^{n−1}] → C′_3[S^1 × D^{n−1}] that maps (p_1, p_2) to (p_1, p_2, (1, 0)), thus it sends w_12 −→ w_12 and t_1 −→ t_1, t_2 −→ t_2, thus it acts trivially on [t_1^α w_12, t_1^β w_12].
Thus our invariant via closure 1 m ev 3 (m f ) of π 2n−6 Emb(I, S 1 × D n−1 ) takes values in Q ⊗ π 2n−3 C ′ 3 [S 1 × D n−1 ]/R where R is the subgroup generated by the above four inclusions. Notice (1) kills the summand corresponding to the w 23 brackets, and (4) kills the summands corresponding to the w 12 brackets. Using relation (1) and (4) we can simplify (2) and (3) into relations between w 13 brackets and brackets of the form [w 12 , w 23 ], giving us the Proposition 3.5.
Proposition 3.5 (Closure Argument) Given an element of
[ f ] ∈ π 2n−6 Emb(I, S 1 × D n−1 ) such that ev 2 ( f ) : S 2n−6 → T 2 Emb(I, S 1 × D n−1 ) is null, we form the closure of the evaluation map ev 3 ( f ) : S 2n−6 → T 3 Emb(I, S 1 × D n−1 ) which is a based map of the form ev 3 ( f ) : S 2n−3 → C ′ 3 [S 1 × D n−1 ]
. The homotopy-class of this map, as a function of the homotopy-class [ f ] is well-defined modulo a subgroup we call R. R is generated by the torsion subgroup of
π_{2n−3} C′_3[S^1 × D^{n−1}] together with the elements
( t_1^{α−β} t_3^{−β} − t_1^α t_3^{α−β} + (−1)^n ( t_1^β t_3^{β−α} − t_1^{β−α} t_3^{−α} ) ) [w_12, w_23]   ∀ α, β ∈ Z,
[t_2^α w_23, t_2^β w_23]   ∀ α, β,
[t_1^α w_12, t_1^β w_12]   ∀ α, β,
[t_1^α w_13, t_1^β w_13] + ( t_1^{α−β} t_3^{−β} + (−1)^{n−1} t_1^{β−α} t_3^{−α} ) [w_12, w_23]   ∀ α, β.
Since π 2n−6 T 2 Emb(I, S 1 × D n−1 ) is torsion, there is a homomorphism, called the closure operator π 2n−6 Emb(I,
S^1 × D^{n−1}) → Q[t_1^{±1}, t_3^{±1}] / ⟨ t_1^{α−β} t_3^{−β} − t_1^α t_3^{α−β} = (−1)^{n−1} ( t_1^β t_3^{β−α} − t_1^{β−α} t_3^{−α} ) ∀ α, β ∈ Z ⟩
given by mapping f −→ 1 m ev 3 (m f ).
Proof The relations are given in the comments preceding the Proposition. Using relations (1) and (4) we can simplify relations (2) and (3) to
( t_1^{α−β} t_3^{−β} − t_1^α t_3^{α−β} + (−1)^n ( t_1^β t_3^{β−α} − t_1^{β−α} t_3^{−α} ) ) [w_12, w_23] = 0.
To compute the W 3 invariant, we first consider the homotopy-class of the map on the 2 nd stage, ev 2 (δ k ). In our family, depicted in Figure 10, there are (k − 1) green dots where double-points are available. The first (k − 2) produce identical double-point data and they are depicted in Figure 12. Figure 12: Scanning through the first (k − 2) green dots.
In Figure 12 we have depicted the pre-image of the cohorizontal manifolds for the 2 nd -stage map, expressed as a map of the form
D n−3 × D n−3 × C ′ 2 [I] → C ′ 2 [S 1 × D n−1 ]. While C ′ 2 [I]
is technically a hexagon, the cohorizontal manifold is disjoint from the boundary so for the purpose of exposition we have collapsed C ′ 2 [I] down to a triangle ∆ 2 . The cohorizontal manifolds are spheres of dimension n − 3, having a natural surgery 'product' decomposition S n−3 ≡ D n−3 × ∂I ∪ S n−4 × I .
In the figure, this surgery decomposition is represented by the solid colored arcs on the left side of the figure, depicting a copy of D n−3 , together with the pair of similarly-coloured points on the right side of the figure -for the D n−3 × ∂I portion. For the I × S n−4 portion, the spherical boundary is depicted by a pair of large gray dots on the left-side of the picture, while the black intervals on the right-side of the picture describe an interval I . The S n−4 × I factors occur during the 'end homotopy' in the construction of our family (depicted in the top-right portion of Figure 2), while the D n−3 × ∂I portion come from the double-points that persist when varying the arcs in the opposite cuff, i.e. depicted in the bottom-right portion of Figure 2.
We deduce Figure 12 from Figure 10. The key idea is that there are only cohorizontal points (i.e. double points) for small families where the scanning arc passes through the k − 1 green dots. These correspond to the centres of the copies of D^{n−3} in the D^{n−3} × D^{n−3}-family corresponding to grabbing the two strands of the embedding in the red or blue cylinder respectively, and sweeping them around the barbell and over the respective red or blue cuff. Thus the double-points occur when the two strands in the coloured cylinders over or undercross the strands running through the bar or the embedded barbell. The numbers decorating features of the lower left (and right) part of Figure 10 mark the rough parameter-times where cohorizontal points occur. We use this numbering system in Figure 12, i.e. coordinate (2, 4) has a red dot decorating it, meaning it records the cohorizontal points from the first k − 2 green dots, where the strand decorated by 2 sweeps around the point on the embedding decorated by 4.
Notice that the spheres in Figure 12 are unlinked and trivially framed, meaning the 2nd-stage map is null homotopic. Given that the spheres are essentially linearly embedded, you could think about how the spanning disc for one sphere intersects the others, and some of these intersections are nontrivial. That said, they are avoidable. Figure 12 also has some sign information recorded. This describes the orientations inherited by the spheres. For example, take one of the 'red' spheres. In its surgery decomposition, it consists of the red copy of D^{n−3} × ∂I together with a copy of S^{n−4} × I. The sign information in the figure indicates the orientation of the D^{n−3} × ∂I components. Since the D^{n−3} × ∂I components are parallel to the blue D^{n−3} coordinate axis, the sign is a reference to the orientation of the normal bundle of that portion of the manifold, given in reference to the standard orientation of ∆^2 × D^{n−3} × D^{n−3} using the coordinates (r_1, · · · , r_{n−3}, b_1, · · · , b_{n−3}, t_1, t_2) in that order, i.e. we use the orientation form dt_1 ∧ dt_2 ∧ dr_1 ∧ · · · ∧ dr_{n−3} ∧ db_1 ∧ · · · ∧ db_{n−3}.
The entries of the Figure 13 table are:
(1,5) +t_1 ∧ t_2 ∧ b_1 ∧ · · · ∧ b_{n−3}, (3,5) −t_1 ∧ t_2 ∧ b_1 ∧ · · · ∧ b_{n−3}, (1,11) −t_1 ∧ t_2 ∧ b_1 ∧ · · · ∧ b_{n−3}, (3,11) +t_1 ∧ t_2 ∧ b_1 ∧ · · · ∧ b_{n−3}, (7,11) −t_1 ∧ t_2 ∧ b_1 ∧ · · · ∧ b_{n−3}, (9,11) +t_1 ∧ t_2 ∧ b_1 ∧ · · · ∧ b_{n−3}, (5,7) (−1)^{n−1} t_1 ∧ t_2 ∧ b_1 ∧ · · · ∧ b_{n−3}, (5,9) (−1)^n t_1 ∧ t_2 ∧ b_1 ∧ · · · ∧ b_{n−3};
(2,4) (−1)^n t_1 ∧ t_2 ∧ r_1 ∧ · · · ∧ r_{n−3}, (2,6) (−1)^{n−1} t_1 ∧ t_2 ∧ r_1 ∧ · · · ∧ r_{n−3}, (2,10) (−1)^n t_1 ∧ t_2 ∧ r_1 ∧ · · · ∧ r_{n−3}, (2,12) (−1)^{n−1} t_1 ∧ t_2 ∧ r_1 ∧ · · · ∧ r_{n−3}, (8,10) (−1)^{n−1} t_1 ∧ t_2 ∧ r_1 ∧ · · · ∧ r_{n−3}, (8,12) (−1)^n t_1 ∧ t_2 ∧ r_1 ∧ · · · ∧ r_{n−3}, (4,8) +t_1 ∧ t_2 ∧ r_1 ∧ · · · ∧ r_{n−3}, (6,8) −t_1 ∧ t_2 ∧ r_1 ∧ · · · ∧ r_{n−3}.
With these conventions, the normal orientations for the first 1 ≤ l < k − 1 terms are given in Figure 13. We abbreviate the 1-forms dx simply by the symbol x. For the last green dot, we have a somewhat different 2 nd -stage diagram given in Figure 14.
In Figure 13 the points (1,5) and (3,5) are labelled in blue, with signs + and − respectively. In Figure 12 they are connected by an arc decorated with the monomial t^{1−k}. This means that they are part of the preimage of the t^{1−k} Co^2_1 manifold. The plus sign indicates the normal orientation of this manifold is +r_1 ∧ · · · ∧ r_{n−3}, i.e. agreeing with the orientation induced by the natural ordering of the coordinates listed in the order (t_1, t_2, r_1, · · · , r_{n−3}, b_1, · · · , b_{n−3}). Similarly, the disc corresponding to the red dot at (4,8) is labelled in the preimage of t^{k−2} Co^2_1 with orientation described in the Figure 13 table.
Figure 14: Scanning through the last green dot.
Figure 15: Normal orientations for the cohorizontal manifolds in the last green dot. Its entries are:
(2,6) +t_1 ∧ t_2 ∧ b_1 ∧ · · · ∧ b_{n−3}, (2,12) −t_1 ∧ t_2 ∧ b_1 ∧ · · · ∧ b_{n−3}, (4,6) −t_1 ∧ t_2 ∧ b_1 ∧ · · · ∧ b_{n−3}, (4,12) +t_1 ∧ t_2 ∧ b_1 ∧ · · · ∧ b_{n−3}, (8,12) −t_1 ∧ t_2 ∧ b_1 ∧ · · · ∧ b_{n−3}, (10,12) +t_1 ∧ t_2 ∧ b_1 ∧ · · · ∧ b_{n−3}, (6,8) (−1)^{n−1} t_1 ∧ t_2 ∧ b_1 ∧ · · · ∧ b_{n−3}, (6,10) (−1)^n t_1 ∧ t_2 ∧ b_1 ∧ · · · ∧ b_{n−3};
(3,5) (−1)^n t_1 ∧ t_2 ∧ r_1 ∧ · · · ∧ r_{n−3}, (3,7) (−1)^{n−1} t_1 ∧ t_2 ∧ r_1 ∧ · · · ∧ r_{n−3}, (3,11) (−1)^n t_1 ∧ t_2 ∧ r_1 ∧ · · · ∧ r_{n−3}, (9,11) (−1)^{n−1} t_1 ∧ t_2 ∧ r_1 ∧ · · · ∧ r_{n−3}, (1,3) +t_1 ∧ t_2 ∧ r_1 ∧ · · · ∧ r_{n−3}, (1,9) −t_1 ∧ t_2 ∧ r_1 ∧ · · · ∧ r_{n−3}, (5,9) +t_1 ∧ t_2 ∧ r_1 ∧ · · · ∧ r_{n−3}, (7,9) −t_1 ∧ t_2 ∧ r_1 ∧ · · · ∧ r_{n−3}.
We construct the 3 rd -stage map cohorizontal manifolds from the 2 nd -stage map, the idea being that whichever cohorizontal manifold one is considering, it will be constant in one of the three parameters of C ′ 3 [I], given that the cohorizontal condition is a constraint on only two of the three coordinates of C ′ 3 [I]. Attaching the null-homotopies described for the 2 nd -stage map allows us to close-off the cohorizontal manifolds, getting a collection of disjoint spheres in a neighbourhood of
D n−3 × D n−3 × C ′ 3 [I].
We emphasize 'neighbourhood' since the null-homotopy attachments are external to D n−3 × D n−3 × C ′ 3 [I]. In Figures 16 and 17 we suppress the D n−3 × D n−3 factors, since it would only be a repetition of Figure 12. In our diagrams we display the t 1 and t 3 coordinates, with t 2 being out of the page. Thus our diagram depicts collections of spheres diffeomorphic to S n−2 . We simplify our sketch of C ′ 3 [I] to be a simple tetrahedron ∆ 3 . If one looks at the diagram, one sees a collection of disjoint circles in a neighbourhood of ∆ 3 , some linking, and others not. As with Figures 12 and 14, these diagrams are in 'product form', i.e. these spheres have the form D n−3 × S 1 ∪ S n−4 × D 2 , where the circle and D 2 factors live in the neighbourhood of ∆ 3 , and the D n−3 factor comes from one of the factors in the parametrization of our family. Thus our figures only depict the {0} × S 1 portions of our spheres, but fortunately for us, this is precisely where our double-points occur.
The above kind of geometry occurs in the study of standard linking pairs. While this is an elementary geometric observation, the first author learned about this phenomenon from Haefliger [10]. For example, if we take a standard linking pair in S n−1 with i + j = n, i.e.
S n−1 ≡ ∂D n ≡ ∂(D i × D j ) = S i−1 × D j ∪ D i × S j−1
we can go one step further and think of D n × {0} as the equator in D n+1 , giving
S n ≡ ∂D n+1 ≡ S i × D j ∪ D i+1 × S j−1 or S i−1 × D j+1 ∪ D i × S j .
i.e. one can think of a (S i−1 , S j−1 ) standard linking pair in S n−1 as equatorial in a (S i , S j−1 ) standard linking pair in S n , or the reverse, in a (S i−1 , S j ) standard linking pair in S n . This is a single step in an inductive suspension process that can generate all standard linking pairs of spheres, from linking pairs of the form (S 0 , S 0 ) in S 1 .
In Figure 16 we break up the cohorizontal manifolds into their constituent parts, depending on which cuff the points are being mapped to (via colour) and the translate of the relevant cohorizontal manifold. In the top-left part of Figure 16 (lk(t^α Co^1_2, t^{β−α} Co^3_1)) there is only the one linking pair, with monomial t_1^{2−k} t_3. To compute the signs, we count the signed overcrossings of t^α Co^1_2 − t^{α−β} Co^1_3 over t^{β−α} Co^3_1 − t^β Co^3_2.
The normal orientation of the arc parallel to the t 3 -axis through (6,8,8) is −t 1 ∧ t 2 ∧ r 1 ∧ · · · ∧ r n−3 . The normal orientation to the arc parallel to the t 2 -axis through (5, 5, 7)
is (−1) n−1 t 1 ∧ t 3 ∧ b 1 ∧ · · · ∧ b n−3 .
Repeating for all six sub-diagrams of Figure 16, we get the sum of all these terms for the first k − 2 green dots as
(k − 2) ( t_1^{−1} t_3^{1−k} + (−1)^n t_1^{1−k} t_3^{−1} − t_1^{2−k} t_3 + (−1)^{n−1} t_1 t_3^{2−k} ).
If we repeat for Figure 17, the sum of the terms for the last green dot gives
(−1)^{n−1} t_1^{2−k} t_3^{1−k} − t_1^{1−k} t_3^{2−k} − t_1^{2−k} t_3 + (−1)^n t_1^{1−k} t_3^{−1} + (−1)^n t_1^{k−1} t_3 + t_1 t_3^{k−1} + t_1^{−1} t_3^{1−k} + (−1)^{n−1} t_1 t_3^{2−k}.
Putting this together with the first k − 2 green dots, we have
W_3(δ_k) = (k − 1) ( t_1^{−1} t_3^{1−k} + (−1)^n t_1^{1−k} t_3^{−1} − t_1^{2−k} t_3 + (−1)^{n−1} t_1 t_3^{2−k} ) + t_1 t_3^{k−1} + (−1)^n t_1^{k−1} t_3 − t_1^{1−k} t_3^{2−k} + (−1)^{n−1} t_1^{2−k} t_3^{1−k}
which completes the proof of Theorem 3.1.
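Before using the hexagon relation recalled below, one can also check the k = 3 evaluation mechanically. The following short Python sketch is our own sanity check, not part of the argument: it encodes the closed formula for W_3(δ_k) just obtained, and the [w_12, w_23]-relations of Proposition 3.5, as Laurent polynomials in t_1, t_3, and verifies that for k = 3 and n even the invariant is a sum of two such relations, hence vanishes in the quotient.

```python
# Sanity check (ours): W_3(delta_3) lies in the span of the Proposition 3.5 relations, n even.
from collections import Counter

def poly(terms):
    """Laurent polynomial in t_1, t_3 as {(a, b): coeff}, built from (coeff, (a, b)) terms."""
    p = Counter()
    for c, m in terms:
        p[m] += c
    return {m: c for m, c in p.items() if c != 0}

def add(p, q):
    out = Counter(p)
    for m, c in q.items():
        out[m] += c
    return {m: c for m, c in out.items() if c != 0}

def W3(k, sign):  # sign = (-1)**n; the closed formula from the proof of Theorem 3.1
    return poly([
        (k - 1, (-1, 1 - k)), ((k - 1) * sign, (1 - k, -1)),
        (-(k - 1), (2 - k, 1)), (-(k - 1) * sign, (1, 2 - k)),
        (1, (1, k - 1)), (sign, (k - 1, 1)),
        (-1, (1 - k, 2 - k)), (-sign, (2 - k, 1 - k)),
    ])

def relation(alpha, beta, sign):  # coefficient of [w_12, w_23] in the Proposition 3.5 relation
    return poly([
        (1, (alpha - beta, -beta)), (-1, (alpha, alpha - beta)),
        (sign, (beta, beta - alpha)), (-sign, (beta - alpha, -alpha)),
    ])

sign_even = 1  # (-1)^n for n even
# For k = 3 and n even, W_3(delta_3) = r_{1,2} + r_{-1,-2}, so it dies in the quotient.
assert W3(3, sign_even) == add(relation(1, 2, sign_even), relation(-1, -2, sign_even))
print("W_3(delta_3) is a sum of two Proposition 3.5 relations when n is even")
```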
Recall the hexagon relation.
t_1^α t_3^β + (−1)^n t_1^{−β} t_3^{−α}
 = t_1^{α−β} t_3^α + (−1)^n t_1^{−α} t_3^{β−α}   (1)
 = t_1^{−β} t_3^{α−β} + (−1)^n t_1^{β−α} t_3^β   (2)
 = t_1^{−α} t_3^{−β} + (−1)^n t_1^β t_3^α   (3)
 = t_1^{β−α} t_3^{−α} + (−1)^n t_1^α t_3^{α−β}   (4)
 = t_1^β t_3^{β−α} + (−1)^n t_1^{α−β} t_3^{−β}   (5)
Figure 17: Cohorizontal manifolds for the last green dot, in D^{n−3} × D^{n−3} × C′_3[I].
Plugging in k = 3 in the above formula for W 3 (δ 3 ) gives
W_3(δ_3) = 0 if n is even, and W_3(δ_3) = 4 t_1^{−2} t_3^{−1} + 4 t_1^{−1} t_3^{−2} − 2 t_1^{−1} t_3 − 2 t_1 t_3^{−1} if n is odd.

4 Homeomorphisms of S^1 × D^{n−1}
In this section we give a proof of Theorem 1.1.
Proof of Theorem 1.1: We supply an argument that a quotient of the W_3 invariant is definable out of π_{n−4} Homeo(S^1 × D^{n−1}). This will suffice to show π_{n−4} Homeo(S^1 × D^{n−1}) is not finitely generated for all n ≥ 4.
Let Emb τ (I, S 1 × D n−1 ) denote the space of topological embeddings of I = [0, 1] in S 1 × D n−1 .
We require these embeddings send 0 to (1, −*) and 1 to (1, *) where * ∈ ∂D^{n−1} is a choice of basepoint. We similarly require that the embedding send the interior of I to the interior of S^1 × D^{n−1}. This last condition does not affect the homotopy-type of the space, but it does make some technical arguments easier to read. We give this embedding space the compact-open topology.
We let ∆ k denote the standard simplex,
∆ k = {(t 1 , t 2 , · · · , t k ) ∈ R k : 0 ≤ t 1 ≤ t 2 ≤ · · · ≤ t k ≤ 1}.
A topological embedding f : I → X induces a map f * : ∆ k → (S 1 × D n−1 ) k defined by f * (t 1 , · · · , t k ) = ( f (t 1 ), · · · , f (t k )). We list its properties. Given a set A = {i 1 , · · · , i j } ⊂ {1, 2, · · · , k}, the A-diagonal of X k denotes the subspace of X k where x i 1 = x i 2 = · · · = x i j . Call the subspace of ∆ k satisfying t 1 = 0 the initial facet of ∆ k , and the subspace satisfying t k = 1 the terminal facet of ∆ k . The subset of (S 1 × D n−1 ) k satisfying p 1 = (1, − * ) we call the initial facet, and p k = (1, * ) the terminal facet of (S 1 × D n−1 ) k .
(a) The induced map f * sends A-diagonals to A-diagonals, moreover the pre-images of Adiagonals are A-diagonals.
(b) f * sends the initial facet of ∆ k to the initial facet of (S 1 × D n−1 ) k , similarly the terminal facets.
(c) If we lift f * to a map of universal covers ∆ k → (R 1 × D n−1 ) k then two covering translates of points of the image agree if and only if the covering translates are identical, i.e. p i = t.p j is possible if and only if t = 0 ∈ π 1 (S 1 × D n−1 ) k and p i = p j .
Given a manifold M define C^τ_k(M) = {(p_1, · · · , p_k) ∈ M̃^k : p_i ∉ (π_1 M \ {0}).p_j ∀ i, j} where M̃ is the universal cover of M. This definition can be interpreted as saying that any two listed points p_i, p_j ∈ M̃ either have disjoint π_1 M-orbits, or when the orbits intersect we have p_i = p_j. We call C^τ_k(M) the principal configuration space of k points in the universal cover of M.
wedge of spheres. Thus given an element of π n−4 Homeo(S 1 × D n−1 ) the induced element of the second stage, is a stratum-preserving map of the form S n−4 × D n−2 × ∆ 2 → C τ 2 (S 1 × D n−1 ).
The restriction of this map to the boundary facets of ∆ 2 give canonically null-homotopic maps, thus we can cap-off the above map to construct a map S 2n−4 → C τ 2 (S 1 × D n−1 ). Given that π 2n−4 S n−1 is torsion when n ≥ 4, our map is torsion. Like in the smooth case, some multiple of the 3 rd -stage map
S n−4 × D n−2 × ∆ 3 → C τ 3 (S 1 × D n−1 )
is null on the boundary. We attach a choice of null-homotopy, and as in the smooth case the induced element of π_{2n−3} C^τ_3(S^1 × D^{n−1})
is well-defined up to error terms coming from a subgroup R′. The subgroup R′ is the image of R under the induced map from the forgetful map π_{2n−3} C_3(S^1 × D^{n−1}) → π_{2n−3} C^τ_3(S^1 × D^{n−1}).
So we have a commutative diagram

π_{n−4} Diff(S^1 × D^{n−1}) ⊗ Q   --W_3-->   π_{2n−3} C_3(S^1 × D^{n−1}) ⊗ Q / R
            |                                           |
            v                                           v
π_{n−4} Homeo(S^1 × D^{n−1}) ⊗ Q   --W′_3-->   π_{2n−3} C^τ_3(S^1 × D^{n−1}) ⊗ Q / R′.

Due to Proposition 4.1, our elements δ_k satisfy
W′_3(δ_k) = (k − 1) ( t_1^{−1} t_3^{1−k} + (−1)^n t_1^{1−k} t_3^{−1} − t_1^{2−k} t_3 + (−1)^{n−1} t_1 t_3^{2−k} ) + t_1 t_3^{k−1} + (−1)^n t_1^{k−1} t_3 − t_1^{1−k} t_3^{2−k} + (−1)^{n−1} t_1^{2−k} t_3^{1−k}
which are non-trivial and linearly independent for k ≥ 4. The key observation is that these elements lie in the 12-element orbits of the dihedral group of the hexagon, and these orbits do not belong to the kernel of the map π 2n−3C3 (S 1 × D n−1 ) ⊗ Q → π 2n−3 C τ 3 (S 1 × D n−1 ) ⊗ Q/R ′ by Proposition 4.1, completing the proof of Theorem 1.1. Sander Kupers has informed us one can further prove that the kernel of the map π 2n−6 Emb(I, S 1 × D n−1 ) → π 2n−6 Emb τ (I, S 1 × D n−1 ) is the image of the inclusion π 2n−6 Emb(I, D n ) → π 2n−6 Emb(I, S 1 × D n−1 ). This is the subgroup given by embeddings disjoint from {−1} × D n−1 . This result (unpublished) would give an alternative proof of Theorem 1.1. One can also obtain Theorem 1.1 in dimensions different from n = 4, 5 and 7 using smoothing theory [15]. More generally, its known that the homotopy groups of B(Homeo(M)) and B(Diff(M)) are finitely generated whenever π 1 M is finite and M has even dimension, different from 4 [3].
Automorphisms of complete finite volume hyperbolic manifolds
Farrell and Jones [6] proved that provided N is a compact hyperbolic manifold of dimension greater than or equal to 11, then π 0 Diff(N) and π 0 Homeo(N) are not finitely generated. In particular, Diff(N) and Homeo(N) do not have the homotopy-type of compact manifolds. The purpose of this section is to reduce 11 to 4. By [8], it is known that the diffeomorphism group of a complete finite volume hyperbolic 3-manifold has the homotopy-type of its isometry group, i.e. it has the homotopy-type of a discrete, finite set. Thus the results of this section are optimal. While the Smale Conjecture for hyperbolic 3-manifolds is stated for closed manifolds in the introduction to [8], it follows for complete manifolds by Lemma 7.2 and Theorem 7.3 of [8].
The proof of Farrell and Jones is constrained by two important dimension restrictions: 1) we do not yet know the optimal range for pseudo-isotopy stability. Indeed, we still depend on the initial result of Igusa [13]. 2) Farrell and Jones also depend on the work of Hatcher and Wagoner [12] which begins in dimension 6.
While Farrell and Jones compute the mapping class group of N in the smooth and topological cases, we restrict to π n−4 Diff(N) and π n−4 Homeo(N). In a future paper [2] we anticipate extending these arguments to the level of mapping class groups.
Theorem 5.1 If N is a complete finite volume hyperbolic manifold of dimension n ≥ 4 then both π_{n−4} Diff(N) and π_{n−4} Homeo(N) are not finitely generated.
The proof of Theorem 5.1 can be elaborated to construct a homomorphism π n−4 Homeo(N) ⊗ Q to an infinite direct sum of copies of Q when n > 4, in particular one copy of π 2n−3 C τ 3 (S 1 × D n−1 ) ⊗ Q/R ′ for each embedded orientable geodesic in N . The n = 4 case is somewhat distinct, as the target group is a semi-direct product of the group of hyperbolic isometries of N and an infinite direct-sum of copies of Q . This follows from Mostow Rigidity and the finiteness of the isometry groups of closed hyperbolic manifolds. The idea for the homomorphism is to find an infinite set γ i of distinct embedded geodesics and consider the diffeomorphisms f i,k obtained by implanting δ k in N(γ i ). Again these diffeomorphisms generate a free abelian subgroup of π 0 (Diff 0 (N)), provided k ≥ 4. The key point is that when lifting toN γ i all the preimages of each γ j , j = i are non compact, so when extending to N γ i only the f j,k 's with j = i survive up to isotopy and these are distinguished by W ′ 3 . I.e. our invariant of π n−4 Homeo(N) ⊗ Q will be W ′ 3 ( f i,k ), in the summand corresponding to γ i .
once-punctured S i × D j−i+1 . If we let Diff F (B n i,j ) denote the fiber-preserving diffeomorphism group of B n i,j , i.e, diffeomorphisms f : B n i,j
Figure 1: Barbell diffeomorphism via resolution of double point.
Figure 2: Barbell diffeomorphism family restricted to mid-ball as map D^{2n−6} → Emb(I, B^n_{n−2,n−2}).
Figure 3: The composite Ω^{n−j} S^i → Diff(B_{i,j}) → Ω^{n−i} S^j in a fiber over a point in D^{j−1}. The image of {*} × D^{j+1−i} from the S^i × D^{j+1−i} summand is in blue, and the image of {*} × D^1 from the S^j × D^1 summand is in red.
Proof
The group PDiff(N) is the collection of all diffeomorphisms of N × I which restrict to the identity on N × {0} and (∂N) × I , often called the group of pseudo-isotopy diffeomorphisms.
Figure 7: Barbell image and pre-image of cocores.
Figure 9: Projection of δ_k barbell to D^{n−1}.
Figure 10: Scanning δ_k barbell in S^1 × D^{n−1}. Interval fibers of mid-ball {1} × D^{n−1} in brown.
Notice when scanning through δ_k, if the brown interval fiber is disjoint from the bar, it is unaffected by δ_k. But when it passes through the bar, imagine the bar's cross-section as D^{n−1} ≃ D^{n−2} × I. The D^{n−2} together with the (n − 4)-parameter family of δ_k : S^{n−4} → Diff(S^1 × D^{n−1}) gives us a (2n − 6)-parameter family of embedded intervals in S^1 × D^{n−1}, which are described in Proposition 2.2 and Figure 2. We modify Figure 2 as our barbell is embedded in S^1 × D^{n−1} in handcuff fashion.
Proposition 3.3 The rational homotopy-groups of C_k[S^1 × D^{n−1}] are generated by the Whitehead products of the elements t_l^m.w_ij. These satisfy the relations
Figure 11: Parabolic triples, ǫ = 0 left. Large ǫ middle and right.
Lemma 3.4 Given a smooth map f : S^{2n−3} → C_3(S^1 × D^{n−1}), generically we can assume it has no cohorizontal triples. Moreover, provided the map does not have any cohorizontal triples, consider the lift to the universal cover f̃ : S^{2n−3} → C̃_3(S^1 × D^{n−1}). The linking numbers in S^{2n−3} of the pre-images of the disjoint pair of manifolds in C̃_3(S^1 × D^{n−1}) (Col^1_{α,β}, Col^3_{α,β}) agrees with the linking number of the pre-image of the pair (t^α Co^1_2 − t^{α−β} Co^1_3, t^{β−α} Co^3_1 − t^β Co^3_2) of linear combinations of manifolds, for all α, β ∈ Z.
Figure 13: Normal orientations for the cohorizontal manifolds in the first (k − 2) green dots.
Figures 16 and 17 are depicting standard linking pairs of the form (S^{n−2}, S^{n−2}) where we are inducting up from the base case of disjoint linking circles in ∆^3, where for one circle we suspend up using the D^{n−3} × {0} factor, and for the other circle we suspend up using the {0} × D^{n−3} factor.
detects the barbell diffeomorphism. This homomorphism is a version of the scanning map. Specifically, let B be a mid-ball for B i,j , i.e. a smoothly-embedded copy of D n−1 that splits B n i,j into a boundary connectsum. Fiber B by parallel intervals. Scanning using B gives a map
The latter relation above can be rewritten as [w ij , w jk ] − [w jk , w ki ] = 0, giving the equality of the three cyclic permutations, [w ij , w jk ] = [w jk , w ki ] = [w ki , w ij ].
Thus item (c) above states the lift of f * : ∆ k → (S 1 × D n−1 ) k to the universal cover is a map of the form ∆ k → C τ k (S 1 × D n−1 ). Items (a) and (b) should be thought of as a relative mapping space condition.As a space, C τ k (M) is the orbit configuration space of the universal cover of M union the {i, j}diagonals, i.e. thinking of C τ k (M) as a subspace of (M) k it is the union of C k (M) with the {i, j}diagonals for all i = j. See for example Fred Cohen's work on orbit configuration spaces[4]. For our purposes we need to know π 2n−1C τ 2 (S 1 × D n−1 ) and enough of π 2n−3C3 (S 1 × D n−1 ) ⊗ Q , which is the content of the next proposition.Proposition 4.1The homotopy group π n−1C2 (S 1 × D n−1 ) is freely generated by the elements t a 2 w 12 with a ∈ Z \ {0}.The kernel of the mapProof Concerning π n−1C2 (S 1 × D n−1 ), the forgetful mapC 2 (S 1 × D n−1 ) → R × D n−1 is a locallytrivial fiber bundle with fiber diffeomorphic to the complement of the non-trivial covering translates of a point, which has the homotopy-type of a wedge of spheres, the generators in dimension (n − 1) being our t a 2 w 12 classes with a = 0. Concerning π 2n−1C2 (S 1 × D n−1 ) we consider the collinear manifolds Col 1 α,β and Col 3 α,β as beingand Col 3 α,β are disjoint and closed in C τ 3 (S 1 × D n−1 ) provided α = 0 = β and α = β.If we pull-back the pair Col 1 α,β and Col 3 α,β via the map [t a 2 w 12 , t b 2 w 23 ] we get a disjoint oriented manifold pair with linking number ±1 provided α = −a and β = −b, otherwise we get zero. Thus the set of brackets of the form [t a 2 w 12 , t b 2 w 23 ] are linearly independent in π 2n−3 C τIn the smooth category, given a diffeomorphism of S 1 × D n−1 we consider the induced scanning map of the disc {1} × D n−1 , this gave us an element of Ω n−2 Emb(I, S 1 × D n−1 ). We follow that same outline for homeomorphisms, using the map Homeo(S 1 × D n−1 ) → Ω n−2 Emb τ (I, S 1 × D n−1 ).In the smooth case, the induced map on the second stage was torsion. In the topological case, the 'second stage' we take as the map Emb τ (I, S 1 × D n−1 ) → Map(∆ 2 , C τ 2 (S 1 × D n−1 )) with the associated boundary conditions, i.e. this is a stratum-preserving mapping space, as described in conditions (a) and (b). Notice that the forgetful map C τ k (S 1 × D n−1 ) → C τ k−1 (S 1 × D n−1 ) is in general not a fibration, but in the case k = 2 it is, with the fiber having the homotopy-this has the homotopy-type of a π n−4 Diff(N) and π n−4 Homeo(N)are not finitely generated.Proof We give the proof for n = 4 and N orientable. In a neighbourhood N(γ) of an embedded closed geodesic γ, implant the barbell δ k to obtain the diffeomorphism f k ∈ Diff 0 (N). Letf k be the lift of f k to the covering spaceN γ of N corresponding to the subgroup of π 1 N generated by γ. This covering space admits a canonical compactification N γ , for example, using normal coordinates about the geodesic. Alternatively view N γ as the Z quotient of H 4 ∪ S 3 ∞ by the loxodromic element corresponding to γ. We identify N γ with S 1 × D 3 . Since f k is homotopically trivial via a compactly supported homotopy, points ofN are moved distances uniformly bounded above. It follows thatf k extends to a homeomorphism f * k such that f * k |∂N γ = id. Now the preimage of γ inN consists of a single geodesicγ that maps 1-1 to γ and infinitely many others that map ∞ to 1. 
By[1]or using Proposition 2.5, it follows that if γ i is one such lift, thenf k |N(γ i ) is isotopic to id via an isotopy that moves points uniformly bounded distance, independent of i. Here N(γ i ) is the corresponding lift of N(γ). The point here is that δ k is isotopically trivial when lifted to some finite sheeted cover. Thus by an isotopy of f * k which moves points uniformly bounded hyperbolic distance, we can assume that f * k |N γ is supported in N(γ), where we abuse notation by calling the isotoped map f * k . I.e. f * k is the standard δ k implantation in a neighborhood of the core geodesic of N γ .Finally apply our W ′ 3 -invariant to the resulting homeomorphism f * k of S 1 × D 3 to conclude that the f k 's, k ≥ 4, freely generate an infinite rank abelian subgroup of π 0 (Diff 0 (N)).
[1] R. Budney & D. Gabai, Knotted 3-balls in S^4, preprint [arXiv:1912.09029].
[2] R. Budney & D. Gabai, Scanning diffeomorphisms, in preparation.
[3] M. Bustamante, M. Krannich, A. Kupers, Finiteness properties of automorphism spaces of manifolds with finite fundamental group, Mathematische Annalen (to appear).
[4] F. Cohen, T. Kohno, M. Xicoténcatl, Orbit configuration spaces associated to discrete subgroups of PSL(2, R), J. Pure Appl. Algebra 213 (2009), no. 12.
[5] M. Dehn, Die Gruppe der Abbildungsklassen: Das arithmetische Feld auf Flächen, Acta Math. 69 (1938), 135-206.
[6] F.T. Farrell, L.E. Jones, A topological analogue of Mostow's rigidity theorem, JAMS 2 (1989), no. 2, 257-370.
[7] D. Gabai, Self-referential discs and the light bulb lemma, Comment. Math. Helv. 96 (2021), 483-513.
[8] D. Gabai, The Smale Conjecture for hyperbolic 3-manifolds: Isom(M^3) ≃ Diff(M^3), J. Diff. Geom. 58 (2001), no. 1, 113-149.
[9] A. Gramain, Le type d'homotopie du groupe des diffeomorphismes d'une surface compacte, Ann. Sci. Ecole Norm. Sup. 6 (1973), no. 4, 53-66.
[10] A. Haefliger, Differentiable links, Topology 1 (1962), 241-244.
[11] A. Hatcher, A proof of the Smale Conjecture Diff(S^3) ≃ O(4), Ann. Math. 117 (1983).
[12] A. Hatcher, The second obstruction for pseudo-isotopies, Astérisque 6 (1973), Part II.
[13] K. Igusa, The stability theorem for smooth pseudoisotopies, K-theory 2 (1988), no. 1-2, 1-355.
[14] R. Koytcheff, Graphics, homotopy groups of spheres and spaces of links and knots, preprint, arXiv:2205.00635.
[15] A. Kupers, Some finiteness results for groups of automorphisms of manifolds, Geometry and Topology (to appear).
[16] R. Palais, Extending diffeomorphisms, Proc. Amer. Math. Soc. 11 (1960), 274-277.
[17] M. Polyak, Enumerative geometry and finite type invariants, presentation (2008).
[18] D. Sinha, Manifold-theoretic compactifications of configuration spaces, Selecta Math. (N.S.) 10 (2004), no. 3, 391-428.
[19] T. Watanabe, Some exotic nontrivial elements of the rational homotopy of Diff(D^4), arXiv preprint (2018).
| []
|
[
"Multilingual and cross-lingual document classification: A meta-learning approach",
"Multilingual and cross-lingual document classification: A meta-learning approach"
]
| [
"Niels Van Der Heijden [email protected] \nILLC\nUniversity of Amsterdam\nthe Netherlands\n",
"Helen Yannakoudakis [email protected] \nDept. of Informatics\nKing's College London\nUnited Kingdom\n",
"Pushkar Mishra [email protected] \nFacebook AI\nLondonUnited Kingdom\n",
"Ekaterina Shutova [email protected] \nILLC\nUniversity of Amsterdam\nthe Netherlands\n"
]
| [
"ILLC\nUniversity of Amsterdam\nthe Netherlands",
"Dept. of Informatics\nKing's College London\nUnited Kingdom",
"Facebook AI\nLondonUnited Kingdom",
"ILLC\nUniversity of Amsterdam\nthe Netherlands"
]
| []
| The great majority of languages in the world are considered under-resourced for the successful application of deep learning methods. In this work, we propose a meta-learning approach to document classification in limitedresource setting and demonstrate its effectiveness in two different settings: few-shot, crosslingual adaptation to previously unseen languages; and multilingual joint training when limited target-language data is available during training. We conduct a systematic comparison of several meta-learning methods, investigate multiple settings in terms of data availability and show that meta-learning thrives in settings with a heterogeneous task distribution. We propose a simple, yet effective adjustment to existing meta-learning methods which allows for better and more stable learning, and set a new state of the art on several languages while performing on-par on others, using only a small amount of labeled data. | 10.18653/v1/2021.eacl-main.168 | [
"https://arxiv.org/pdf/2101.11302v1.pdf"
]
| 231,719,017 | 2101.11302 | a88dc4852d5133d1bd730ac7c1285234458bb148 |
Multilingual and cross-lingual document classification: A meta-learning approach
Niels Van Der Heijden [email protected]
ILLC
University of Amsterdam
the Netherlands
Helen Yannakoudakis [email protected]
Dept. of Informatics
King's College London
United Kingdom
Pushkar Mishra [email protected]
Facebook AI
LondonUnited Kingdom
Ekaterina Shutova [email protected]
ILLC
University of Amsterdam
the Netherlands
Multilingual and cross-lingual document classification: A meta-learning approach
The great majority of languages in the world are considered under-resourced for the successful application of deep learning methods. In this work, we propose a meta-learning approach to document classification in limitedresource setting and demonstrate its effectiveness in two different settings: few-shot, crosslingual adaptation to previously unseen languages; and multilingual joint training when limited target-language data is available during training. We conduct a systematic comparison of several meta-learning methods, investigate multiple settings in terms of data availability and show that meta-learning thrives in settings with a heterogeneous task distribution. We propose a simple, yet effective adjustment to existing meta-learning methods which allows for better and more stable learning, and set a new state of the art on several languages while performing on-par on others, using only a small amount of labeled data.
Introduction
There are more than 7000 languages around the world and, of them, around 6% account for 94% of the population. 1 Even for the 6% most spoken languages, very few of them possess adequate resources for natural language research and, when they do, resources in different domains are highly imbalanced. Additionally, human language is dynamic in nature: new words and domains emerge continuously and hence no model learned in a particular time will remain valid forever.
With the aim of extending the global reach of Natural Language Processing (NLP) technology, much recent research has focused on the development of multilingual models and methods to efficiently transfer knowledge across languages.
Among these advances are multilingual word vectors which aim to give word-translation pairs a similar encoding in some embedding space (Mikolov et al., 2013a;Lample et al., 2017). There has also been a lot of work on multilingual sentence and word encoders that either explicitly utilizes corpora of bi-texts (Artetxe and Schwenk, 2019;Lample and Conneau, 2019) or jointly trains language models for many languages in one encoder (Devlin et al., 2018;. Although great progress has been made in cross-lingual transfer learning, these methods either do not close the gap with performance in a single high-resource language (Artetxe and Schwenk, 2019;, e.g., because of cultural differences in languages which are not accounted for, or are impractically expensive (Lai et al., 2019).
Meta-learning, or learning to learn (Schmidhuber, 1987; Bengio et al., 1990; Thrun and Pratt, 1998), is a learning paradigm which focuses on the quick adaptation of a learner to new tasks. The idea is that by training a learner to adapt quickly and from a few examples on a diverse set of training tasks, the learner can also generalize to unseen tasks at test time. Meta-learning has recently emerged as a promising technique for few-shot learning for a wide array of tasks (Finn et al., 2017; Koch et al., 2015; Ravi and Larochelle, 2017) including NLP (Dou et al., 2019; Gu et al., 2018). To our best knowledge, no previous work has been done in investigating meta-learning as a framework for multilingual and cross-lingual few-shot learning. We propose such a framework and demonstrate its effectiveness in document classification tasks. The only current study on meta-learning for cross-lingual few-shot learning is the one by Nooralahzadeh et al. (2020), focusing on natural language inference and multilingual question answering. In their work, the authors focus on applying meta-learning to learn to adapt a monolingually trained classifier to new languages. In contrast to this work, we instead show that, in many cases, it is more favourable to not initialize the meta-learning process from a monolingually trained classifier, but rather reserve its respective training data for meta-learning instead.
Algorithm 1 Meta-training
while not converged do
    for each sampled task D_l = {S_l, Q_l} do
        θ_l^{(0)} = θ
        for all steps k do
            Compute: θ_l^{(k+1)} = θ_l^{(k)} − α ( ∇_{θ_l^{(k)}} L_{S_l}(f_{θ_l^{(k)}}) )
        end for
    end for
    Update θ = θ − β ( MetaUpdate(f_{θ_l^{(K)}}, Q_l) )
end while
Our contributions are as follows: 1) We propose a meta-learning approach to few-shot cross-lingual and multilingual adaptation and demonstrate its effectiveness on document classification tasks over traditional supervised learning; 2) We provide an extensive comparison of meta-learning methods on multilingual and cross-lingual few-shot learning and release our code to facilitate further research in the field; 2 3) We analyse the effectiveness of meta-learning under a number of different parameter initializations and multiple settings in terms of data availability, and show that meta-learning can effectively learn from few examples and diverse data distributions; 4) We introduce a simple yet effective modification to existing methods and empirically show that it stabilizes training and converges faster to better local optima; 5) We set a new state of the art on several languages and achieve on-par results on others using only a small amount of data.
2 Meta-learning methods Meta-learning, or learning to learn, aims to create models that can learn new skills or adapt to new tasks rapidly from few training examples. Unlike traditional machine learning, datasets for either training or testing, which are referred to as metatrain and meta-test datasets, comprise of many tasks sampled from a distribution of tasks p(D) rather than individual data points. Each task is asso-2 https://github.com/mrvoh/meta_ learning_multilingual_doc_classification ciated with a dataset D which contains both feature vectors and ground truth labels and is split into a support set and a query set, D = {S, Q}. The support set is used for fast adaptation and the query set is used to evaluate performance and compute a loss with respect to model parameter initialization. Generally, some model f θ parameterized by θ, often referred to as the base-learner, is considered. A cycle of fast-adaptation on a support-set followed by updating the parameter initialization of the baselearner based on the loss on the query-set is called an episode. In the case of classification, the optimal parameters maximize the probability of the true labels across multiple batches Q ⊂ D
θ* := argmax_θ E_{Q ⊂ D} [ ∏_{(x,y) ∈ Q} P_θ(y|x) ]   (1)
In few-shot classification/fast learning, the goal is to minimize the prediction error on data samples with unknown labels given a small support set for learning. Meta-training (Algorithm 1) consists of updating the parameters of the base-learner by performing many of the formerly described episodes, until some stop criterion is reached.
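To make the episode structure above concrete, the following framework-agnostic Python sketch (ours; all function and variable names are illustrative assumptions rather than the authors' code) spells out one meta-training step: sample tasks from p(D), split each into a support set and a query set, fast-adapt on the support set, and evaluate the adapted learner on the query set before the outer update of θ.

```python
# Illustrative sketch (ours) of one meta-training step over episodes.
import random

def sample_episode(task_datasets, n_support, n_query):
    """task_datasets: dict mapping a task id to a list of (x, y) pairs."""
    task_id = random.choice(list(task_datasets))                  # D_l ~ p(D)
    examples = random.sample(task_datasets[task_id], n_support + n_query)
    return task_id, examples[:n_support], examples[n_support:]   # S_l, Q_l

def meta_train_step(theta, task_datasets, fast_adapt, loss_fn, meta_update,
                    n_support=8, n_query=8, meta_batch_size=4):
    query_losses = []
    for _ in range(meta_batch_size):
        _, support, query = sample_episode(task_datasets, n_support, n_query)
        theta_l = fast_adapt(theta, support)            # inner loop: k steps on S_l
        query_losses.append(loss_fn(theta_l, query))    # evaluate on Q_l
    return meta_update(theta, query_losses)             # outer loop: updated theta
```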
Following this procedure, the extended definition of optimal parameters is given in Eq. 2 to include fast adaptation based on the support set. The underlined parts mark the difference between traditional supervised-learning and meta-learning. The optimal parameters θ * are obtained by solving
argmax_θ E_{l ⊂ L} [ E_{S_l ⊂ D, Q_l ⊂ D} [ ∏_{(x,y) ∈ Q_l} P_θ(y|x, S_l) ] ]   (2)
In this work, we focus on metric-and optimizationbased meta-learning algorithms. In the following sections, their respective characteristics and the update methods in Algorithm 1 are introduced.
Prototypical Networks
Prototypical Networks (Snell et al., 2017) belong to the metric-based family of meta-learning algorithms. Typically they consist of an embedding network f θ and a distance function d(x 1 , x 2 ) such as Euclidean distance. The embedding network is used to encode all samples in the support set S c and compute prototypes µ c per class c ∈ C by computing the mean of the sample encodings of that respective class
µ_c := (1 / |S_c|) ∑_{(x_i, y_i) ∈ S_c} f_θ(x_i)   (3)
Using the computed prototypes, Prototypical Networks classify a new sample as
p(y = c|x) = exp(−d(f_θ(x), µ_c)) / ∑_{c′ ∈ C} exp(−d(f_θ(x), µ_{c′}))   (4)
Wang et al. (2019) show that despite their simplicity, Prototypical Networks can perform on par or better than other state-of-the-art meta-learning methods when all sample encodings are centered around the overall mean of all classes and consecutively L2-normalized. We also adopt this strategy.
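A compact PyTorch-style sketch of this classifier (our own illustration, not the authors' released implementation) is given below. It centers and L2-normalizes the encodings as described above, computes one prototype per class from the support set (Eq. 3), and scores query points by their negative distance to each prototype (Eq. 4).

```python
# Illustrative Prototypical Network episode (ours).
import torch
import torch.nn.functional as F

def prototypical_logits(encoder, support_x, support_y, query_x, n_classes):
    z_support = encoder(support_x)                  # [n_support, d]
    z_query = encoder(query_x)                      # [n_query, d]

    # Center all encodings around the overall mean, then L2-normalize (Wang et al., 2019).
    mean = z_support.mean(dim=0, keepdim=True)
    z_support = F.normalize(z_support - mean, dim=-1)
    z_query = F.normalize(z_query - mean, dim=-1)

    # One prototype per class: mean of that class's support encodings (Eq. 3).
    prototypes = torch.stack(
        [z_support[support_y == c].mean(dim=0) for c in range(n_classes)]
    )                                               # [n_classes, d]

    # Softmax over negative distances gives p(y = c | x) (Eq. 4).
    return -torch.cdist(z_query, prototypes)        # [n_query, n_classes]
```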
MAML
Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017) is an optimization-based method that uses the following objective function
θ* := argmin_θ ∑_{D_l ∼ p(D)} L_l(f_{θ_l^{(k)}})   (5)
L_l(f_{θ_l^{(k)}}) is the loss on the query set after updating the base-learner for k steps on the support set. Hence, MAML directly optimizes the base-learner such that fast-adaptation of θ, often referred to as inner-loop optimization, results in task-specific parameters θ_l^{(k)} which generalize well on the task. Setting B as the batch size, MAML implements its MetaUpdate, which is also referred to as outer-loop optimization, as
θ = θ − β (1/B) ∑_{D_l ∼ p(D)} ∇_θ L_l(f_{θ_l^{(k)}})   (6)
Such a MetaUpdate requires computing second order derivatives and, in turn, holding θ (j) l ∀j = 1, . . . , k in memory. A first-order approximation of MAML (foMAML), which ignores second order derivatives, can be used to bypass this problem:
θ = θ − β (1/B) ∑_{D_l ∼ p(D)} ∇_{θ_l^{(k)}} L_l(f_{θ_l^{(k)}})   (7)
Following previous work (Antoniou et al., 2018), we also adopt the following improvements in our framework for all MAML-based methods:
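As an illustration of the first-order update in Eq. 7, the condensed PyTorch sketch below (ours; loss_fn, the optimizers and the task batch format are assumptions made for the example) adapts a copy of the base-learner on each support set and then applies the query-set gradients, taken at the adapted parameters, to the shared initialization θ.

```python
# Illustrative foMAML step (ours).
import copy
import torch

def fomaml_step(model, meta_optimizer, task_batch, loss_fn, inner_lr=1e-3, inner_steps=5):
    """task_batch: list of (support, query) batches for the sampled tasks D_l."""
    meta_optimizer.zero_grad()
    for support, query in task_batch:
        learner = copy.deepcopy(model)                      # theta_l^(0) = theta
        inner_opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
        for _ in range(inner_steps):                        # inner loop on S_l
            inner_opt.zero_grad()
            loss_fn(learner, support).backward()
            inner_opt.step()
        inner_opt.zero_grad()
        query_loss = loss_fn(learner, query) / len(task_batch)
        query_loss.backward()                               # gradients w.r.t. theta_l^(k)
        for p, p_adapted in zip(model.parameters(), learner.parameters()):
            if p_adapted.grad is not None:                  # first-order: reuse query grads
                p.grad = p_adapted.grad if p.grad is None else p.grad + p_adapted.grad
    meta_optimizer.step()                                   # outer update of theta (Eq. 7)
```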
Per-step Layer Normalization weights Layer normalization weights and biases are not updated in the inner-loop. Sharing one set of weights and biases across inner-loop steps implicitly assumes that the feature distribution between layers stays the same at every step of the inner optimization.
Per-layer per-step learnable inner-loop learning rate Instead of using a shared learning rate for all parameters, the authors propose to initialize a learning rate per layer and per step and jointly learn their values in the MetaUpdate steps.
Cosine annealing of outer-loop learning rate It has been shown to be crucial for model performance to anneal the learning rate using some annealing function (Loshchilov and Hutter, 2016).
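The per-layer, per-step inner-loop learning rates can simply be held as trainable parameters that the outer loop updates alongside θ. A minimal sketch (ours, illustrative; the class and its interface are not from the paper) is shown below.

```python
# Illustrative container for per-layer, per-step learnable inner-loop learning rates (ours).
import torch
import torch.nn as nn

class PerStepPerLayerLR(nn.Module):
    def __init__(self, layer_names, inner_steps, init_lr=1e-3):
        super().__init__()
        # One learnable scalar per (layer, inner step); '.' is not allowed in ParameterDict keys.
        self.lrs = nn.ParameterDict({
            name.replace('.', '_'): nn.Parameter(torch.full((inner_steps,), init_lr))
            for name in layer_names
        })

    def forward(self, layer_name, step):
        return self.lrs[layer_name.replace('.', '_')][step]

# Conceptual inner-loop use: p_new = p - lr_module(layer_name, step) * p.grad
```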
Reptile
Reptile (Nichol et al., 2018) is a first-order optimization-based meta-learning algorithm which is designed to move the weights towards a manifold of the weighted averages of task-specific parameters θ (k) l :
θ = θ + β (1/B) ∑_{D_l ∼ p(D)} (θ_l^{(k)} − θ)   (8)
Despite its simplicity, it has shown competitive or superior performance against MAML, e.g., on Natural Language Understanding (Dou et al., 2019).
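In code, the Reptile outer step only needs the task-adapted copies of the learner; a minimal PyTorch sketch of the interpolation in Eq. 8 (ours, illustrative) is:

```python
# Illustrative Reptile outer update (ours).
import torch

@torch.no_grad()
def reptile_outer_update(model, adapted_models, beta):
    """Move theta a fraction beta towards the average of the task-adapted parameters."""
    for i, p in enumerate(model.parameters()):
        adapted_avg = torch.stack(
            [list(m.parameters())[i].detach() for m in adapted_models]
        ).mean(dim=0)
        p.add_(beta * (adapted_avg - p))   # theta <- theta + beta * mean(theta_l^(k) - theta)
```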
ProtoMAML
Triantafillou et al. (2020) introduce ProtoMAML as a meta-learning method which combines the complementary strengths of Prototypical Networks and MAML by leveraging the inductive bias of the use of prototypes instead of random initialization of the final linear layer of the network. Snell et al. (2017) show that Prototypical Networks are equivalent to a linear model when Euclidean distance is used. Using the definition of prototypes µ c as per Eq. 3, the weights w c and bias b c corresponding to class c can be computed as follows
w_c := 2 µ_c,   b_c := −µ_c^T µ_c   (9)
ProtoMAML is defined as the adaptation of MAML where the final linear layer is parameterized as per Eq. 9 at the start of each episode using the support set. Due to this initialization, it allows modeling a varying number of classes per episode.
ProtoMAMLn Inspired by Wang et al. (2019), we propose a simple, yet effective adaptation to ProtoMAML by applying L_2 normalization to the prototypes themselves, referred to as ProtoMAMLn, and, again, use a first-order approximation (foProtoMAMLn). We demonstrate that doing so leads to a more stable, faster and effective learning algorithm at only constant extra computational cost (O(1)).
We hypothesize the normalization to be particularly beneficial in case of a relatively high-dimensional final feature space (in case of BERT-like models, typically 768 dimensions). Let x be a sample and x̂ = f_θ(x) be the encoding of the sample in the final feature space. Since the final activation function is the tanh activation, all entries of both x̂ and µ_c have values between -1 and 1. The pre-softmax activation for class c is computed as x̂^T µ_c. Due to the size of the vectors and the scale of their respective entries, this inner product can yield a wide range of values, which in turn results in relatively high loss values, making the inner-loop optimization unstable.
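The episode initialization of ProtoMAML and ProtoMAMLn differs from plain MAML only in how the output layer is set before the inner loop starts. The short PyTorch sketch below (ours, illustrative; it assumes the output layer is an nn.Linear with one row per class) builds the prototypes from the support set, optionally L2-normalizes them as proposed here, and writes them into the weights and bias via Eq. 9.

```python
# Illustrative ProtoMAML / ProtoMAMLn output-layer initialization (ours).
import torch
import torch.nn.functional as F

def init_output_layer_from_prototypes(encoder, linear, support_x, support_y,
                                      n_classes, normalize=True):
    """linear: nn.Linear(d, n_classes); normalize=True gives the ProtoMAMLn variant."""
    with torch.no_grad():
        z = encoder(support_x)                                      # [n_support, d]
        prototypes = torch.stack(
            [z[support_y == c].mean(dim=0) for c in range(n_classes)]
        )                                                           # [n_classes, d]
        if normalize:                                               # ProtoMAMLn
            prototypes = F.normalize(prototypes, dim=-1)
        linear.weight.copy_(2.0 * prototypes)                       # w_c = 2 mu_c   (Eq. 9)
        linear.bias.copy_(-(prototypes * prototypes).sum(dim=-1))   # b_c = -mu_c^T mu_c
```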
3 Related work
3.1 Multilingual NLP
Just as the deep learning era for monolingual NLP started with the invention of dense, lowdimensional vector representations for words (Mikolov et al., 2013b) so did cross-lingual NLP with works like those of Mikolov et al. (2013a); Faruqui et al. (2014). More recently, multilingual and/or cross-lingual NLP is approached by training one shared encoder for multiple languages at once, either by explicitly aligning representations with the use of parallel corpora (Artetxe and Schwenk, 2019;Lample and Conneau, 2019) or by jointly training on some monolingual language model objective, such as the Masked Language Model (MLM) (Devlin et al., 2018), in multiple languages (Devlin et al., 2018;.
The formerly described language models aim to create a shared embedding space for multiple languages with the hope that fine-tuning in one language does not degrade performance in others. Lai et al. (2019) argue that just aligning languages is not sufficient to generalize performance to new languages due to the phenomenon they describe as domain drift. Domain drift accounts for all differences for the same tasks in different languages which cannot be captured by a perfect translation system, such as differences in culture. They instead propose a multi-step approach which utilizes a multilingual teacher trained with Unsupervised Data Augmentation (UDA) (Xie et al., 2019) to create labels for a student model that is pretrained on large amounts of unlabeled data in the target lan-guage and domain using the MLM objective. With their method, the authors obtain state-of-the-art results on the MLDoc document classification task (Schwenk and Li, 2018) and the Amazon Sentiment Polarity Review task (Prettenhofer and Stein, 2010). A downside, however, is the high computational cost involved. For every language and domain combination: 1) a machine translation system has to be inferred on a large amount of unlabeled samples; 2) the UDA method needs to be applied to obtain a teacher model to generate pseudo-labels on the unlabeled in-domain data; 3) a language model must be finetuned, which involves forwards and backwards computation of a softmax function over a large output space (e.g., 50k tokens for mBERT and 250k tokens for XLM-RoBERTa). The final classifier is then obtained by 4) training the finetuned language model on the pseudo-labels generated by the teacher.
Meta-learning in NLP
Monolingual Bansal et al. (2019) apply meta-learning to a wide range of NLP tasks within a monolingual setting and show superior performance for parameter initialization over self-supervised pretraining and multi-task learning. Their method is an adaptation of MAML in which a text encoder, BERT (Devlin et al., 2018), is coupled with a parameter generator that learns to generate task-dependent initializations of the classification head, such that meta-learning can be performed across tasks with disjoint label spaces. Obamuyide and Vlachos (2019b) apply meta-learning to the task of relation extraction; Obamuyide and Vlachos (2019a) apply lifelong meta-learning for relation extraction; and Chen et al. (2019) apply meta-learning for few-shot learning on missing link prediction in knowledge graphs.
Multilingual Gu et al. (2018) apply meta-learning to Neural Machine Translation (NMT) and show its advantage over strong baselines such as cross-lingual transfer learning. By viewing each language pair as a task, the authors apply MAML to obtain competitive NMT systems with as little as 600 parallel sentences. To the best of our knowledge, the only application of meta-learning to cross-lingual few-shot learning is the one by Nooralahzadeh et al. (2020). The authors study the application of X-MAML, a MAML-based variant, to cross-lingual Natural Language Inference (XNLI) (Conneau et al., 2018) and Multilingual Question Answering (MLQA) (Lewis et al., 2019) in both a cross-domain and cross-language setting. X-MAML works by pretraining some model M on a high-resource task h to obtain initial model parameters θ_mono. Consecutively, a set L of one or more auxiliary languages is taken, and MAML is applied to achieve fast adaptation of θ_mono for l ∈ L. In their experiments, the authors use either one or two auxiliary languages and evaluate their method in both a zero- and a few-shot setting. It should be noted that, in the few-shot setting, the full development set (2.5k instances) is used to finetune the model, which is not in line with other work on few-shot learning, such as Bansal et al. (2019). Also, there is a discrepancy in the training set used for the baselines and their proposed method. All reported baselines are either zero-shot evaluations of θ_mono or of θ_mono finetuned on the development set of the target language, whereas their proposed method additionally uses the development set in either one or two auxiliary languages during meta-training.
Data
In this section, we give an overview of the datasets we use and the respective classification tasks.
MLDoc Schwenk and Li (2018) published an improved version of the Reuters Corpus Volume 2 (Lewis et al., 2004) with balanced class priors for all languages. MLDoc consists of news stories in 8 languages: English, German, Spanish, French, Italian, Russian, Japanese and Chinese. Each news story is manually classified into one of four groups: Corporate/Industrial, Economics, Government/Social and Markets. The train datasets contain 10k samples, whereas the test sets contain 4k samples.
Amazon Sentiment Polarity Another widely used dataset for cross-lingual text classification is the Amazon Sentiment Analysis dataset (Prettenhofer and Stein, 2010). The dataset is a collection of product reviews in English, French, German and Japanese in three categories: books, DVDs and music. Each sample consists of the original review accompanied by meta-data such as the rating of the reviewed product, expressed as an integer on a scale from one to five. In this work, we consider the sentiment polarity task, where we distinguish between positive (rating > 3) and negative (rating < 3) reviews. When all product categories are concatenated, the dataset consists of 6k samples per language per split (train, test). We extend this with Chinese product reviews in the cosmetics domain from JD.com (Zhang et al., 2015), a large e-commerce website in China. The train and test sets contain 2k and 20k samples respectively.
Experiments
We use XLM-RoBERTa, a strong multilingual model, as the base-learner in all models. We quantify the strengths and weaknesses of meta-learning as opposed to traditional supervised learning in both a cross-lingual and a multilingual joint-training setting with limited resources.
Cross-lingual adaptation Here, the available data is split into multiple subsets: the auxiliary languages l aux which are used in meta-training, the validation language l dev which is used to monitor performance, and the target languages l tgt which are kept unseen until meta-testing. Two scenarios in terms of amounts of available data are considered. A small sample of the available training data of l aux is taken to create a limited-resource setting, whereas all available training data of l aux is used in a high-resource setting. The chosen training data per language is split evenly and stratified over two disjoint sets from which the meta-training support and query samples are sampled, respectively. For meta-testing, one batch (16 samples) is taken from the training data of each target language as support set, while we test on the whole test set per target language (i.e., the query set).
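As an illustration of how episodes are formed under this protocol, the sketch below builds one meta-training episode from the two disjoint, stratified per-language pools described above. Names such as `support_pool` and `query_pool`, as well as the batch sizes, are placeholders for this sketch rather than identifiers from the released code.

```python
import random

def sample_episode(support_pool, query_pool, languages, n_support=16, n_query=16):
    """Sample one episode: a support and a query batch from the same language.

    support_pool / query_pool: dicts mapping a language code to a list of
    (text, label) pairs, taken from the two disjoint stratified splits.
    """
    lang = random.choice(languages)
    support = random.sample(support_pool[lang], n_support)
    query = random.sample(query_pool[lang], n_query)
    return lang, support, query

# At meta-test time, the support set is a single batch (16 samples) from the
# target language's training data, and the query set is its full test set.
```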
Multilingual joint training We also investigate meta-learning as an approach to multilingual joint training in the same limited-resource setting as previously described for the cross-lingual experiments. The difference is that instead of learning to generalize from few examples to target languages that were unseen during meta-training (l_tgt ≠ l_aux), here l_tgt = l_aux. If we can show that one can learn many similar tasks across languages from few examples per language, using a total number of examples in the same order of magnitude as in "traditional" supervised learning for training a monolingual classifier, this might be an incentive to change data collection processes in practice.
For both experimental settings above, we examine the influence of additionally using all training data from a high-resource source language l_src (English) during meta-training.
Table 1: Search range per hyper-parameter for the MetaUpdate methods. We consider the number of update steps in the inner loop (Num inner-loop steps), the (initial) learning rate of the inner loop (Inner-loop lr), the factor by which the learning rate of the classification head is multiplied (Class-head lr multiplier) and, if applicable, the learning rate with which the inner-loop optimizer is updated (Inner-optimizer lr). The selected values are reported in the Training setup section.

MetaUpdate method | Num inner-loop steps | Inner-loop lr | Class-head lr multiplier | Inner-optimizer lr
Reptile | 2, 3, 5 | 1e-5, 5e-5, 1e-4 | 1, 10 | -
foMAML | 2, 3, 5 | 1e-5, 1e-4, 1e-3 | 1, 10 | 3e-5, 6e-5, 1e-4
foProtoMAMLn | 2, 3, 5 | 1e-5, 1e-4, 1e-3 | 1, 10 | 3e-5, 6e-5, 1e-4
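To make the role of these hyper-parameters concrete, the sketch below shows a first-order (fo) inner-loop adaptation in PyTorch. It is an illustrative simplification rather than the paper's released code: `model`, `loss_fn`, the support batch, and the parameter-name prefix used to identify the classification head are all assumptions of this sketch.

```python
import copy
import torch

def inner_loop_adapt(model, loss_fn, support_x, support_y,
                     num_steps=5, inner_lr=1e-5, head_lr_multiplier=10.0):
    """First-order inner-loop adaptation on one episode's support set.

    Returns an adapted copy of the model; because the copy is detached from
    the meta-parameters, this corresponds to the first-order approximation.
    """
    adapted = copy.deepcopy(model)  # episode-specific fast weights
    for _ in range(num_steps):
        loss = loss_fn(adapted(support_x), support_y)
        grads = torch.autograd.grad(loss, list(adapted.parameters()))
        with torch.no_grad():
            for (name, p), g in zip(adapted.named_parameters(), grads):
                # the classification head is updated with a larger step size
                lr = inner_lr * (head_lr_multiplier if name.startswith("classifier") else 1.0)
                p -= lr * g
    return adapted
```

In the meta-update, the query-set loss of the adapted copy is then used to update the original meta-parameters (for foMAML, by applying the query-set gradients of the adapted weights to the corresponding meta-weights).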
Specifics per dataset
MLDoc As MLDoc has sufficient languages, we set l_src = English and l_dev = Spanish. The remaining languages are split in two groups: l_aux = {German, Italian, Japanese} and l_tgt = {French, Russian, Chinese}. In the limited-resource setting, we randomly sample 64 samples per language in l_aux for training. Apart from comparing low- and high-resource settings, we also quantify the influence of augmenting the training set l_aux with a high-resource source language l_src, English.
Amazon Sentiment Polarity The fact that the Amazon dataset (augmented with Chinese) comprises only five languages has some implications for our experimental design. In the cross-lingual experiments, where l_aux, l_dev and l_tgt should be disjoint, only three languages, including English, remain for meta-training. As we consider two languages too little data for meta-training, we do not experiment with leaving out the English data.
Hence, for meta-training, the data consists of l_src = English, as well as two languages in l_aux. We always keep one language unseen until meta-testing, and alter l_aux such that we can meta-test on every language. We set l_dev = French in all cases except when French is used as the target language; then, l_dev = Chinese. In the limited-resource setting, a total of 128 samples per language in l_aux is used. For the multilingual joint-training experiments, there are enough languages available to quantify the influence of English during meta-training. When English is excluded, it is used for meta-validation. When included, we average results over two sets of experiments: one where l_dev = French and one where l_dev = Chinese.
Baselines
We introduce baselines trained in a standard supervised, non-episodic fashion. Again, we use XLM-RoBERTa-base as the base-learner in all models.
Zero-shot This baseline assumes sufficient training data for the task to be available in one language, l_src (English). The base-learner is trained in a non-episodic manner using mini-batch gradient descent with cross-entropy loss. Performance is monitored during training on a held-out validation set in l_src, the model with the lowest loss is selected, and then evaluated on the same task in the target languages.
Non-episodic The second baseline aims to quantify the exact impact of learning a model through the meta-learning paradigm versus standard supervised learning. The model learns from exactly the same data as the meta-learning algorithms, but in a non-episodic manner, i.e., merging support and query sets in l_aux (and l_src when included) and training using mini-batch gradient descent with cross-entropy loss. During testing, the trained model is independently finetuned for 5 steps on the support set (one mini-batch) of each target language l_tgt, as sketched below.
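A minimal sketch of this test-time procedure, assuming a trained PyTorch `model`, a `loss_fn`, and a single support mini-batch per target language; the optimizer choice and learning rate below are illustrative assumptions, not values taken from the paper.

```python
import copy
import torch

def finetune_on_support(model, loss_fn, support_x, support_y, steps=5, lr=3e-5):
    """Independently finetune a copy of the trained baseline for a few steps
    on one support mini-batch of the target language, then return the copy."""
    finetuned = copy.deepcopy(model)  # keep the original baseline untouched
    optimizer = torch.optim.Adam(finetuned.parameters(), lr=lr)
    finetuned.train()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = loss_fn(finetuned(support_x), support_y)
        loss.backward()
        optimizer.step()
    return finetuned
```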
Training setup and hyper-parameters
We use the Ranger optimizer, an adapted version of Adam (Kingma and Ba, 2014) with improved stability at the beginning of training, by accounting for the variance in adaptive learning rates (Liu et al., 2019), and improved robustness and convergence speed (Zhang et al., 2019; Yong et al., 2020). We use a batch size of 16 and a learning rate of 3e-5, to which we apply cosine annealing. For meta-training, we perform 100 epochs of 100 episodes and perform evaluation with 5 different seeds on the meta-validation set after each epoch. One epoch consists of 100 update steps, where each update step consists of a batch of 4 episodes. Early stopping with a patience of 3 epochs is performed to avoid overfitting. For the non-episodic baselines, we train for 10 epochs on the auxiliary languages while validating after each epoch. All models are created using the PyTorch library (Paszke et al., 2017) and trained on a single 24Gb NVIDIA Titan RTX GPU. We perform grid search on MLDoc in order to determine optimal hyper-parameters for the MetaUpdate methods. The hyper-parameters resulting in the lowest loss on l_dev = Spanish are used in all experiments. The number of update steps in the inner loop is 5; the (initial) learning rate of the inner loop is 1e-5 for MAML and ProtoMAML and 5e-5 for Reptile; the factor by which the learning rate of the classification head is multiplied is 10 for MAML and ProtoMAML and 1 for Reptile; when applicable, the learning rate with which the inner-loop optimizer is updated is 6e-5. See Table 1 for the considered grid.
Results
Cross-lingual adaptation
Tables 2 and 3 show the accuracy scores on the target languages for MLDoc and Amazon, respectively. We start by noting the strong multilingual capabilities of XLM-RoBERTa as our base-learner: adding the full training datasets in three extra languages (i.e., comparing the zero-shot with the non-episodic baseline in the high-resource, 'Included' setting) results in a mere 1.2% points increase in accuracy on average for MLDoc and 0.6% points for Amazon. Although the zero-shot and non-episodic baselines are strong, in the majority of cases a meta-learning approach improves performance. This holds especially for our version of ProtoMAML (ProtoMAMLn), which achieves the highest average accuracy in all considered settings.
The substantial improvements for Russian on MLDoc and Chinese on Amazon indicate that meta-learning is most advantageous when the considered task distribution is somewhat heterogeneous or, in other words, when domain drift (Lai et al., 2019) is present. For the Chinese data used for the sentiment polarity task, the presence of domain drift is obvious, as the data is collected from a different website and concerns different products than the other languages. For Russian in the MLDoc dataset, it holds that the non-episodic baseline has the smallest gain in performance when adding English data (l_src) in the limited-resource setting (0.2% absolute gain, as opposed to 5.7% on average for the remaining languages) and even shows a decrease of 2.4% points when adding English data in the high-resource setting.
Especially for these languages with domain drift, our version of ProtoMAML (foProtoMAMLn) outperforms the non-episodic baselines with a relatively large margin. For instance, in Table 2, in the high-resource setting with English included during training, foProtoMAMLn improves over the non-episodic baseline by 9.1% points on Russian, whereas the average gain over the remaining languages is 0.9% points. A similar trend can be seen in Table 3, where, in the limited-resource setting, foProtoMAMLn outperforms the non-episodic baseline by 1.9% points on Chinese, with comparatively smaller gains on average for the remaining languages.
Joint training In this setting, we achieve a new state of the art on MLDoc for German, Italian, Japanese and Russian using our method, foProtoMAMLn (Table 4). The previous state of the art for German and Russian is held by Lai et al. (2019) (95.73% and 84.65% respectively). For Japanese and Italian, it is held by Eisenschlos et al. (2019) (80.55% and 80.12% respectively). The state of the art for French and Chinese is also held by Lai et al. (2019) (96.05% and 93.32% respectively). On the Amazon dataset, foProtoMAMLn also outperforms all other methods on average. The previous state of the art (2019) stands at 93.3%, 94.2% and 90.6% for French, German and Chinese respectively and, although we do not outperform it, the differences are rather small, between 0.2% points (Chinese) and 3.4% points (German), even when the grid search is based on MLDoc, while we use a much less computationally expensive approach. Again, we use Russian in MLDoc to exemplify the difference between meta-learning and standard supervised learning. When comparing the difference in performance between excluding and including English meta-training episodes (l_src), opposite trends are noticeable: for standard supervised, non-episodic learning, performance drops slightly by 0.3%, whereas all meta-learning algorithms gain between 2.2% and 6.7% in absolute accuracy. This confirms our earlier finding that meta-learning benefits from, and usefully exploits, heterogeneity in data distributions; in contrast, such heterogeneity harms performance in the standard supervised-learning case.
Ablations
foProtoMAMLn Figure 1 shows the development of the validation accuracy during training for 25 epochs for the original foProtoMAML and our model, foProtoMAMLn. By applying L2 normalization to the prototypes, we obtain a more stable version of foProtoMAML which empirically converges faster. We furthermore re-run the high-resource experiments with English for both MLDoc and Amazon using the original foProtoMAML (Table 5) and find it performs 4.3% and 1.7% accuracy points worse on average, respectively, further demonstrating the effectiveness of our approach.
Initializing from a monolingual classifier In our experiments, we often assume the presence of a source language (English). We now investigate (in the l_src = en 'Excluded' setting) whether it is beneficial to pre-train the base-learner in a standard supervised way on this source language and use the obtained checkpoint θ_mono as an initialization for meta-training (Table 6), rather than initializing from the transformer checkpoint. We observe that only ProtoNet consistently improves performance, whereas foProtoMAMLn suffers the most, with a decrease of 3.1% and 3.96% in accuracy in the low- and high-resource setting respectively. We surmise this difference is attributable to two factors. Intuitively, the monolingual classifier aims to learn a transformation from the input space to the final feature space, from which the prototypes for ProtoNet and ProtoMAML are created, in which the learned classes are encoded in their own disjoint sub-spaces such that a linear combination of these features can be used to correctly classify instances. ProtoNet aims to learn a similar transformation, but uses a Nearest Neighbours approach to classify instances instead. ProtoMAML, on the other hand, benefits the most from prototypes which can be used to classify instances after the inner-loop updates have been performed. This, in combination with the fact that the first-order approximation of ProtoMAML cannot differentiate through the creation of the prototypes, could explain the difference in performance gain with respect to ProtoNet.
Conclusion
We proposed a meta-learning framework for few-shot cross- and multilingual joint-learning for document classification tasks in different domains. We demonstrated that it leads to consistent gains over traditional supervised learning on a wide array of data availability and diversity settings, and showed that it thrives in settings with a heterogeneous task distribution. We presented an effective adaptation to ProtoMAML and, among other results, obtained a new state of the art on German, Italian, Japanese and Russian in the few-shot setting on MLDoc.
Figure 1: Validation accuracy for 3 seeds for the original foProtoMAML and our new method, foProtoMAMLn.
Table 2: Average accuracy of 5 different seeds on the unseen target languages for MLDoc. ∆ corresponds to the average accuracy across test languages. Columns within each setting are de / fr / it / ja / ru / zh / ∆.

l_src = en | Method | Limited-resource setting | High-resource setting
Excluded | Non-episodic | 82.0 / 86.7 / 68.3 / 71.9 / 70.9 / 81.0 / 76.8 | 95.3 / 90.9 / 80.9 / 82.9 / 74.5 / 89.6 / 85.7
Excluded | ProtoNet | 90.5 / 85.0 / 76.6 / 75.0 / 69.6 / 82.0 / 79.8 | 95.5 / 91.7 / 82.0 / 82.2 / 76.6 / 87.4 / 85.9
Excluded | foMAML | 89.7 / 85.5 / 74.1 / 74.1 / 74.0 / 83.2 / 80.1 | 95.0 / 91.4 / 81.4 / 82.7 / 76.9 / 87.8 / 86.1
Excluded | foProtoMAMLn | 90.6 / 86.2 / 77.8 / 75.6 / 73.6 / 83.8 / 80.7 | 95.6 / 92.1 / 82.6 / 83.1 / 77.9 / 88.9 / 86.7
Excluded | Reptile | 87.9 / 81.8 / 72.7 / 74.4 / 73.9 / 80.9 / 78.6 | 95.0 / 90.1 / 81.1 / 82.7 / 72.5 / 88.7 / 85.0
Included | Zero-shot | 92.4 / 92.1 / 80.3 / 81.0 / 71.7 / 89.1 / 84.4 | 92.4 / 92.1 / 80.3 / 81.0 / 71.7 / 89.1 / 84.4
Included | Non-episodic | 93.7 / 91.3 / 81.5 / 80.6 / 71.1 / 88.4 / 84.4 | 93.7 / 92.9 / 82.4 / 82.3 / 72.1 / 90.1 / 85.6
Included | ProtoNet | 93.4 / 91.9 / 79.1 / 81.3 / 72.2 / 87.8 / 84.5 | 95.0 / 91.7 / 81.1 / 82.7 / 72.0 / 88.0 / 85.9
Included | foMAML | 95.1 / 91.2 / 79.5 / 79.6 / 73.3 / 89.7 / 84.6 | 94.8 / 93.2 / 79.9 / 82.4 / 75.7 / 90.6 / 86.1
Included | foProtoMAMLn | 94.9 / 91.7 / 81.5 / 81.4 / 75.2 / 89.9 / 85.5 | 95.8 / 94.1 / 82.7 / 83.0 / 81.2 / 90.4 / 87.9
Included | Reptile | 92.3 / 91.4 / 79.7 / 79.5 / 71.8 / 88.1 / 83.8 | 94.8 / 91.0 / 80.2 / 82.0 / 72.7 / 89.9 / 85.1
Table 3: Average accuracy of 5 different seeds on the unseen target languages for Amazon. ∆ corresponds to the average accuracy across test languages.
Table 4: Average accuracy of 5 different seeds on the target languages in the joint-training setting for MLDoc and Amazon. ∆ corresponds to the average accuracy across test languages.
The zero-shot baselines in Table 4 are the same as in Tables 2 and 3.
Table 5: Average accuracy of 5 different seeds on unseen target languages using the original/unnormalized foProtoMAML model. Diff is the difference in average accuracy ∆ across languages against foProtoMAMLn.
Table 6: Average accuracy of 5 different seeds on unseen target languages for Amazon when initializing from a monolingual classifier in l_src. Diff: difference in average accuracy ∆ across languages compared to initializing from the XLM-RoBERTa language model. Columns within each setting are de / fr / ja / zh / Diff.

Method | Limited-resource setting | High-resource setting
ProtoNet | 91.1 / 90.9 / 87.1 / 85.5 / +0.75 | 91.3 / 91.1 / 87.4 / 88.7 / +1.44
foMAML | 90.8 / 87.4 / 87.3 / 85.2 / -0.75 | 91.7 / 91.2 / 87.2 / 88.1 / -1.13
foProtoMAMLn | 87.7 / 87.8 / 83.9 / 84.4 / -3.1 | 90.8 / 89.8 / 86.2 / 82.3 / -3.96
Reptile | 89.3 / 90.2 / 86.7 / 85.5 / +0.35 | 90.0 / 89.3 / 87.1 / 85.7 / -1.04
https://www.ethnologue.com/statistics
The zero-shot baseline is only applicable in the 'Included' setting, as the English data is not available under 'Excluded'.
Acknowledgements
This work was supported by Deloitte Risk Advisory B.V., the Netherlands.
References
Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. Transactions of the Association for Computational Linguistics, 7:597-610.
Antreas Antoniou, Harrison Edwards, and Amos Storkey. 2018. How to train your MAML. arXiv preprint arXiv:1810.09502.
Trapit Bansal, Rishikesh Jha, and Andrew McCallum. 2019. Learning to few-shot learn across diverse natural language classification tasks. arXiv preprint arXiv:1911.03863.
Yoshua Bengio, Samy Bengio, and Jocelyn Cloutier. 1990. Learning a synaptic learning rule. Citeseer.
Mingyang Chen, Wen Zhang, Wei Zhang, Qiang Chen, and Huajun Chen. 2019. Meta relational learning for few-shot link prediction in knowledge graphs. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4217-4226, Hong Kong, China. Association for Computational Linguistics.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.
Alexis Conneau, Guillaume Lample, Ruty Rinott, Adina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representations. arXiv preprint arXiv:1809.05053.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Zi-Yi Dou, Keyi Yu, and Antonios Anastasopoulos. 2019. Investigating meta-learning algorithms for low-resource natural language understanding tasks. arXiv preprint arXiv:1908.10423.
Julian Eisenschlos, Sebastian Ruder, Piotr Czapla, Marcin Kardas, Sylvain Gugger, and Jeremy Howard. 2019. MultiFiT: Efficient multi-lingual language model fine-tuning. arXiv preprint arXiv:1909.04761.
Manaal Faruqui, Jesse Dodge, Sujay K. Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2014. Retrofitting word vectors to semantic lexicons. arXiv preprint arXiv:1411.4166.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pages 1126-1135. JMLR.org.
Jiatao Gu, Yong Wang, Yun Chen, Kyunghyun Cho, and Victor O.K. Li. 2018. Meta-learning for low-resource neural machine translation. arXiv preprint arXiv:1808.08437.
Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Gregory Koch, Richard Zemel, and Ruslan Salakhutdinov. 2015. Siamese neural networks for one-shot image recognition. In ICML Deep Learning Workshop, volume 2. Lille.
Guokun Lai, Barlas Oguz, and Veselin Stoyanov. 2019. Bridging the domain gap in cross-lingual document classification. arXiv preprint arXiv:1909.07009.
Guillaume Lample and Alexis Conneau. 2019. Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2017. Unsupervised machine translation using monolingual corpora only. arXiv preprint arXiv:1711.00043.
David D. Lewis, Yiming Yang, Tony G. Rose, and Fan Li. 2004. RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5(Apr):361-397.
Patrick Lewis, Barlas Oguz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. 2019. MLQA: Evaluating cross-lingual extractive question answering. arXiv preprint arXiv:1910.07475.
Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. 2019. On the variance of the adaptive learning rate and beyond. arXiv preprint arXiv:1908.03265.
Ilya Loshchilov and Frank Hutter. 2016. SGDR: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983.
Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111-3119.
Alex Nichol, Joshua Achiam, and John Schulman. 2018. On first-order meta-learning algorithms. arXiv preprint arXiv:1803.02999.
Farhad Nooralahzadeh, Giannis Bekoulis, Johannes Bjerva, and Isabelle Augenstein. 2020. Zero-shot cross-lingual transfer with meta learning. arXiv preprint arXiv:2003.02739.
Abiola Obamuyide and Andreas Vlachos. 2019a. Meta-learning improves lifelong relation extraction. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 224-229, Florence, Italy. Association for Computational Linguistics.
Abiola Obamuyide and Andreas Vlachos. 2019b. Model-agnostic meta-learning for relation classification with limited supervision. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5873-5879, Florence, Italy. Association for Computational Linguistics.
Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. In NIPS 2017 Workshop Autodiff Submission.
Peter Prettenhofer and Benno Stein. 2010. Cross-language text classification using structural correspondence learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1118-1127, Uppsala, Sweden. Association for Computational Linguistics.
Sachin Ravi and Hugo Larochelle. 2017. Optimization as a model for few-shot learning. In International Conference on Learning Representations.
Jurgen Schmidhuber. 1987. Evolutionary principles in self-referential learning. On learning how to learn: The meta-meta-... hook. Diploma thesis, Institut f. Informatik, Tech. Univ. Munich, 1(2).
Holger Schwenk and Xian Li. 2018. A corpus for multilingual document classification in eight languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Paris, France. European Language Resources Association (ELRA).
Jake Snell, Kevin Swersky, and Richard Zemel. 2017. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pages 4077-4087.
Sebastian Thrun and Lorien Pratt. 1998. Learning to learn: Introduction and overview. In Learning to Learn, pages 3-17. Springer.
Eleni Triantafillou, Tyler Zhu, Vincent Dumoulin, Pascal Lamblin, Utku Evci, Kelvin Xu, Ross Goroshin, Carles Gelada, Kevin Swersky, Pierre-Antoine Manzagol, and Hugo Larochelle. 2020. Meta-Dataset: A dataset of datasets for learning to learn from few examples. In International Conference on Learning Representations.
Yan Wang, Wei-Lun Chao, Kilian Q. Weinberger, and Laurens van der Maaten. 2019. SimpleShot: Revisiting nearest-neighbor classification for few-shot learning. arXiv preprint arXiv:1911.04623.
Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V. Le. 2019. Unsupervised data augmentation for consistency training. arXiv preprint arXiv:1904.12848.
Hongwei Yong, Jianqiang Huang, Xiansheng Hua, and Lei Zhang. 2020. Gradient centralization: A new optimization technique for deep neural networks. arXiv preprint arXiv:2004.01461.
Michael Zhang, James Lucas, Jimmy Ba, and Geoffrey E. Hinton. 2019. Lookahead optimizer: k steps forward, 1 step back. In Advances in Neural Information Processing Systems, pages 9597-9608.
Yongfeng Zhang, Min Zhang, Yi Zhang, Guokun Lai, Yiqun Liu, Honghui Zhang, and Shaoping Ma. 2015. Daily-aware personalized recommendation based on feature-level time series analysis. In Proceedings of the 24th International Conference on World Wide Web, pages 1373-1383.
| [
"https://github.com/mrvoh/meta_"
]
|
[
"pi-GAN: Periodic Implicit Generative Adversarial Networks for 3D-Aware Image Synthesis",
"pi-GAN: Periodic Implicit Generative Adversarial Networks for 3D-Aware Image Synthesis"
]
| [
"Eric R Chan [email protected] \nStanford University\nStanford University\nStanford University\nStanford University\nStanford University\n\n",
"Marco Monteiro [email protected] \nStanford University\nStanford University\nStanford University\nStanford University\nStanford University\n\n",
"Petr Kellnhofer \nStanford University\nStanford University\nStanford University\nStanford University\nStanford University\n\n",
"Jiajun Wu [email protected] \nStanford University\nStanford University\nStanford University\nStanford University\nStanford University\n\n",
"Gordon Wetzstein [email protected] \nStanford University\nStanford University\nStanford University\nStanford University\nStanford University\n\n"
]
| [
"Stanford University\nStanford University\nStanford University\nStanford University\nStanford University\n",
"Stanford University\nStanford University\nStanford University\nStanford University\nStanford University\n",
"Stanford University\nStanford University\nStanford University\nStanford University\nStanford University\n",
"Stanford University\nStanford University\nStanford University\nStanford University\nStanford University\n",
"Stanford University\nStanford University\nStanford University\nStanford University\nStanford University\n"
]
| []
| We have witnessed rapid progress on 3D-aware image synthesis, leveraging recent advances in generative visual models and neural rendering. Existing approaches however fall short in two ways: first, they may lack an underlying 3D representation or rely on view-inconsistent rendering, hence synthesizing images that are not multi-view consistent; second, they often depend upon representation network architectures that are not expressive enough, and their results thus lack in image quality. We propose a novel generative model, named Periodic Implicit Generative Adversarial Networks (π-GAN or pi-GAN), for high-quality 3D-aware image synthesis. π-GAN leverages neural representations with periodic activation functions and volumetric rendering to represent scenes as view-consistent 3D representations with fine detail. The proposed approach obtains state-of-the-art results for 3D-aware image synthesis with multiple real and synthetic datasets. | 10.1109/cvpr46437.2021.00574 | [
"https://arxiv.org/pdf/2012.00926v1.pdf"
]
| 227,247,980 | 2012.00926 | 8d17d62952f141fe5c4948eeafb8be5a8db9d054 |
pi-GAN: Periodic Implicit Generative Adversarial Networks for 3D-Aware Image Synthesis
Eric R Chan [email protected]
Stanford University
Stanford University
Stanford University
Stanford University
Stanford University
Marco Monteiro [email protected]
Stanford University
Stanford University
Stanford University
Stanford University
Stanford University
Petr Kellnhofer
Stanford University
Stanford University
Stanford University
Stanford University
Stanford University
Jiajun Wu [email protected]
Stanford University
Stanford University
Stanford University
Stanford University
Stanford University
Gordon Wetzstein [email protected]
Stanford University
Stanford University
Stanford University
Stanford University
Stanford University
pi-GAN: Periodic Implicit Generative Adversarial Networks for 3D-Aware Image Synthesis
We have witnessed rapid progress on 3D-aware image synthesis, leveraging recent advances in generative visual models and neural rendering. Existing approaches however fall short in two ways: first, they may lack an underlying 3D representation or rely on view-inconsistent rendering, hence synthesizing images that are not multi-view consistent; second, they often depend upon representation network architectures that are not expressive enough, and their results thus lack in image quality. We propose a novel generative model, named Periodic Implicit Generative Adversarial Networks (π-GAN or pi-GAN), for high-quality 3D-aware image synthesis. π-GAN leverages neural representations with periodic activation functions and volumetric rendering to represent scenes as view-consistent 3D representations with fine detail. The proposed approach obtains state-of-the-art results for 3D-aware image synthesis with multiple real and synthetic datasets.
Introduction
Generative Adversarial Networks (GANs) are capable of generating high-resolution, photorealistic images [23,24,25]. However, these GANs are often confined to two dimensions because of a lack of photorealistic 3D training data; therefore, they cannot support tasks such as synthesizing multiple views of a single object. 3D-aware image synthesis offers to learn 3D representations unsupervised from 2D images. The learned 3D representations can be used to render view-consistent images from new camera poses [40,52,17].
Current solutions have achieved impressive results in decoupling identity from structure, allowing for the rendering of a single instance from multiple poses. Nevertheless, these approaches either lack multi-view consistency or fine detail. Voxel-based approaches [17] generate interpretable, true 3D representations, but are limited by computational complexity to low resolutions and coarse detail. Convolutional approaches with deep-voxel representations [40,41] take advantage of recent progress in convolutional GANs and can create finely detailed images. However, because of their reliance on learned black-box rendering, these approaches fail to guarantee multi-view consistency and cannot easily generalize beyond the training distribution of camera poses at inference. Recent approaches that leverage neural implicit representations [52] incorporate interpretable 3D representations that ensure multi-view consistency and explicit camera control. Nonetheless, the implicit representations used by these approaches have so far been unable to effectively express fine details, leading to compromised image quality.
* These authors contributed equally to this work.
Figure 1: Selected examples synthesized by π-GAN with the CelebA [31] and Cats [63] datasets.
π-GAN improves upon the image quality and viewconsistency of previous approaches to 3D-aware image synthesis, as shown in Figure 1. The proposed method utilizes a view-independent, SIREN-based 3D representation to encourage multi-view consistency, allowing rendering from a wide range of camera poses and providing an interpretable 3D structure. The SIREN implicit scene representation, which makes use of periodic activation functions, is more capable than ReLU implicit representations at representing fine details and enables π-GAN to render sharper images than previous works.
Beyond introducing π-GAN, we make two additional technical contributions. First, we observe that while existing work has conditioned ReLU-based radiance fields through concatenation of the input noise to one or more layers, conditioning-by-concatenation is sub-optimal for implicit neural representations with period activations (SIRENs). We instead propose to use a mapping network to condition layers in the SIREN through feature-wise linear modulation (FiLM) [47,8]. This contribution can more generally be applied to SIREN architectures beyond GANs. Second, we introduce a progressive growing strategy, inspired by previous successes in 2D convolutional GANs [23], to accelerate training and offset the increased computational complexity of 3D GANs.
We obtain state-of-the-art results on real-world and synthetic datasets, demonstrate that our method generalizes to new viewpoints, possesses an interpretable 3D representation, and has applications to novel-view-synthesis.
In summary, our contributions in this paper include the following:
• We introduce SIREN-based implicit GANs as a viable alternative to convolutional GAN architectures.
• We propose a mapping network with FiLM conditioning and a progressive growing discriminator as key components to achieve high-quality results with our novel SIREN-based implicit GAN.
• We demonstrate strong view consistency, explicit camera control, and an interpretable 3D structure as advantages of approaches that rely on an underlying 3D representation and classical rendering.
• We achieve state-of-the-art results on 3D-aware image synthesis from unsupervised 2D data on the CelebA [31], Cats [63], and CARLA [7,52] datasets.
Related Work
Implicit neural representations and rendering. Emerging neural implicit scene representations promise 3Dstructure-aware, continuous, memory-efficient representations for shape parts [11,10], objects [45,37,1,14,62,6,4], or scenes [9,55,19,46,54]. These representations can be supervised with 3D data, such as point clouds, and optimized as either signed distance functions [45,37,1,14,55,19,46,53] or occupancy networks [35,5]. Using neural rendering [57], implicit neural representations can also be trained using multiview 2D images [50,55,44,43,38,62,28,20,30]. Temporally aware extensions [42] and multimodal variants that add part-level semantic segmentation [26] have also been proposed. Among these approaches, sinusoidal representation networks (SIREN) [54] and neural radiance fields (NeRF) [38] are most closely related to our work. Specifically, we use SIREN as the representation network architecture of our framework combined with a neural rendering technique inspired by NeRF. Both SIREN and NeRF, however, have only been explored in the context of overfitting to individual objects or scenes, whereas we study the combination of aspects of these seminal works for applications in 3D GANs. Exploring the unique challenges of training a 3D neural implicit GAN supervised by natural 2D data is one of the core contributions of our work.
Generative 3D-aware image synthesis. Generative Adversarial Nets (GANs) [13], or more generally the paradigm of adversarial learning, have led to significant progress in various image synthesis tasks, including image generation [48,23,24,25], image-to-image translation [64], and interactive image editing [60]. These methods operate on the 2D space of pixels, ignoring the 3D nature of our physical world. This has limited the application of these generative models in tasks such as view synthesis.
Visual Object Networks [65] learn to synthesize 2D images by first generating a voxelized 3D shape using a 3D-GAN [61], projecting it into 2D, and applying textures using a CycleGAN [64]. HoloGAN [40] and BlockGAN [41] have extended the system by incorporating a volumetric but implicit 3D representation. While these methods attempt to model the 3D structure of the object in the synthesized image, the use of an explicit volume representation has constrained their resolution [33]. Liao et al. [27] instead proposed to model 3D shape as a collection of primitives for image synthesis; however, primitives lack the expressiveness needed to synthesize high-fidelity pictures. The work most similar to ours is GRAF [52], which learns a generative model for implicit radiance fields for 3D-aware image synthesis, leveraging recent advances in implicit neural representations. Our π-GAN differs from GRAF in three ways. First, we use SIREN rather than a positionally encoded ReLU MLP as a choice of neural implicit representation. Second, GRAF conditioned its MLP generator on both a shape noise code and an appearance noise code by concatenation; in contrast, we leverage a StyleGAN-inspired mapping network, which conditions the entire MLP on a single input noise vector through FiLM conditioning. Third, we utilize a progressive growing strategy during training. Experiments demonstrate that all innovations are critical to high-quality image synthesis results.
Figure 2: The π-GAN generator architecture.
Beyond unconditional 3D-aware image generation, there is an orthogonal line of work on conditional reconstruction of 3D shape and texture from partial observations. These reconstructions can later be used for novel view synthesis. Various 3D representations have been considered for the task, including voxels [17,59], meshes [22,5,12,16,39], point clouds [56], and implicit functions [49,58]. Many of these methods are also grounded in adversarial training. While these methods focus on 3D reconstruction, π-GAN aims to learn an unconditional generative model of images and its underlying 3D structure.
Methods
π-GAN is a generative approach to learning 3D representations from unlabeled 2D images, with the goal of synthesizing high-quality, view-consistent images. Traditional 2D GANs, such as StyleGAN [24], take in a latent vector z ∼ p_z and directly produce a 2D image. Instead of directly generating a 2D image from the input noise z, our generator G_θG(z, ξ) produces an implicit 3D radiance field conditioned on z. This radiance field is rendered using volume rendering to produce a 2D image from some camera pose ξ.
At training time, the generated images are directed to a traditional convolutional discriminator for adversarial training. At test time, the radiance field can be rendered from arbitrary camera poses to produce view-consistent images.
SIREN-Based Implicit Radiance Field
We represent 3D objects implicitly with a neural radiance field, which is parameterized as a multilayer perceptron (MLP) that takes as input a 3D coordinate in space x = (x, y, z) and the viewing direction d. The neural radiance field outputs both the spatially varying density σ(x) : R^3 → R and the view-dependent color (r, g, b) = c(x, d) : R^5 → R^3. Moreover, we leverage a StyleGAN-inspired mapping network to condition the SIREN on a noise vector z through FiLM conditioning [47,8].
As shown in Figure 2a, we formalize the FiLM-ed SIREN backbone of our representation as

\Phi(\mathbf{x}) = \phi_{n-1} \circ \phi_{n-2} \circ \dots \circ \phi_0(\mathbf{x}), \quad (1)

\phi_i(\mathbf{x}_i) = \sin\!\left( \gamma_i \cdot \left( \mathbf{W}_i \mathbf{x}_i + \mathbf{b}_i \right) + \beta_i \right), \quad (2)

where φ_i : R^{M_i} → R^{N_i} is the i-th layer of an MLP. It consists of an affine transform defined by the weight matrix W_i ∈ R^{N_i × M_i} and the biases b_i ∈ R^{N_i} applied on the input x_i ∈ R^{M_i}, followed by the sine nonlinearity applied to each component of the resulting vector (Figure 2b). Our mapping network is a simple ReLU MLP, which takes as input a noise vector z and outputs the frequencies γ_i and phase shifts β_i, which condition each layer of the SIREN. We found this mapping network to be more expressive than concatenation-based conditioning. It yielded image-quality improvements, both for conditioning ReLU-based and SIREN-based neural implicit representations. The ablation studies shown in Sec. 4.3 give further insight into these conditioning methods.
Figure 3: A visualization of our neural volume rendering procedure. Given a conditioned radiance field, we cast rays from the camera origin o, sample density σ and color c values along each ray, and calculate pixel color C using Eq. 5.
Both density and color of our implicit volume are then defined as
\sigma(\mathbf{x}) = \mathbf{W}_\sigma \Phi(\mathbf{x}) + \mathbf{b}_\sigma, \quad (3)

\mathbf{c}(\mathbf{x}, \mathbf{d}) = \mathbf{W}_c \, \phi_c\!\left( \left[ \Phi(\mathbf{x}), \mathbf{d} \right]^T \right) + \mathbf{b}_c, \quad (4)
where W σ/c and b σ/c are additional weight and bias parameters.
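A minimal PyTorch sketch of one FiLM-ed SIREN layer and the density/color heads of Eqs. (1)-(4) is given below. Layer sizes, the shapes of the frequencies and phase shifts, the reuse of the last FiLM parameters for the color branch, and the absence of SIREN-specific initialization are all simplifications of this sketch, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class FiLMSirenLayer(nn.Module):
    """One layer phi_i(x) = sin(gamma_i * (W_i x + b_i) + beta_i), Eq. (2)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, gamma, beta):
        return torch.sin(gamma * self.linear(x) + beta)

class RadianceField(nn.Module):
    """Conditioned sigma(x) and c(x, d) heads on top of the SIREN backbone."""
    def __init__(self, hidden=256, n_layers=8):
        super().__init__()
        dims = [3] + [hidden] * n_layers
        self.layers = nn.ModuleList(FiLMSirenLayer(i, o) for i, o in zip(dims[:-1], dims[1:]))
        self.sigma_head = nn.Linear(hidden, 1)                # Eq. (3)
        self.color_film = FiLMSirenLayer(hidden + 3, hidden)  # phi_c over [Phi(x), d]
        self.color_head = nn.Linear(hidden, 3)                # Eq. (4)

    def forward(self, x, d, gammas, betas):
        # gammas/betas: per-layer frequencies and phase shifts produced by the
        # mapping network from the noise vector z
        h = x
        for layer, g, b in zip(self.layers, gammas, betas):
            h = layer(h, g, b)
        sigma = self.sigma_head(h)
        # the color branch reuses the last FiLM parameters here purely for brevity
        hc = self.color_film(torch.cat([h, d], dim=-1), gammas[-1], betas[-1])
        rgb = self.color_head(hc)
        return sigma, rgb
```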
Neural Rendering
We render a neural radiance field from arbitrary camera poses ξ using neural volume rendering. For this purpose, we cast rays from the camera origin o and compute the integrals along each ray through the volume. At every sample, our generator predicts the volume density σ and color c. The pixel color C for a camera ray r(t) = o + td with near and far bounds t n and t f is then calculated using the volume rendering equation [34]:
C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\, \sigma(\mathbf{r}(t))\, \mathbf{c}(\mathbf{r}(t), \mathbf{d})\, dt, \quad \text{where } T(t) = \exp\!\left( -\int_{t_n}^{t} \sigma(\mathbf{r}(s))\, ds \right). \quad (5)
Our approach implements a discretized form of this equation using the stratified and hierarchical sampling approach introduced by NeRF [38] (see Fig. 3).
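The per-pixel quadrature can be sketched as below, using stratified depth samples and standard alpha compositing; `radiance_field` is assumed to be a callable already conditioned on one latent code that returns per-sample densities and colors, the sigmoid on the color output is an assumption of this sketch, and the hierarchical second sampling pass is omitted for brevity.

```python
import torch

def render_rays(radiance_field, rays_o, rays_d, near, far, n_samples=64):
    """Estimate pixel colors C(r) for a batch of rays via a discretized Eq. (5).

    rays_o, rays_d: [N, 3] ray origins and directions. Returns [N, 3] RGB.
    """
    # Stratified sampling of depths t in [near, far]
    bins = torch.linspace(near, far, n_samples + 1, device=rays_o.device)
    lower, upper = bins[:-1], bins[1:]
    t = lower + (upper - lower) * torch.rand(rays_o.shape[0], n_samples, device=rays_o.device)

    pts = rays_o[:, None, :] + t[..., None] * rays_d[:, None, :]      # [N, S, 3]
    dirs = rays_d[:, None, :].expand_as(pts)
    sigma, rgb = radiance_field(pts, dirs)                            # [N, S, 1], [N, S, 3]

    deltas = torch.cat([t[:, 1:] - t[:, :-1],
                        torch.full_like(t[:, :1], 1e10)], dim=-1)     # [N, S]
    alpha = 1.0 - torch.exp(-torch.relu(sigma.squeeze(-1)) * deltas)  # per-sample opacity
    # Transmittance T_i = prod_{j<i} (1 - alpha_j)
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:, :1]),
                                     1.0 - alpha + 1e-10], dim=-1), dim=-1)[:, :-1]
    weights = alpha * trans
    color = (weights[..., None] * torch.sigmoid(rgb)).sum(dim=1)      # [N, 3]
    return color
```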
This neural rendering approach, which is also adopted by GRAF [52], has several advantages over previous 3D-to-2D projections. Neural rendering allows for explicit control over camera pose, focal length, aspect ratio, and other parameters, while simple projections, such as those used by HoloGAN [40], are restricted to representing poses in the training dataset.
With neural implicit representations and volume rendering, our approach is agnostic to resolution. We can train at modest resolutions governed by available compute power, and sample at higher resolutions at test time to produce the most visually appealing results. In practice, we train using progressive growing up to a resolution of 128 × 128, but sample the final results at 512 × 512 pixels.
Discriminator
Following ProgressiveGAN [23], we use a convolutional discriminator D θ D with parameters θ D that grows progressively. We begin training at low resolutions and high batch sizes, during which the generator can focus on producing coarse shapes. As training progresses, we increase the image resolution and add new layers to the discriminator to handle the higher resolutions and discriminate fine details.
For most experiments, we begin training at 32×32 and double the resolution twice during training, up to 128 × 128. In practice, we found this progressive growing strategy to allow for larger batch sizes at the beginning of training, which helped to stabilize and speed training (see Sec. 4.3).
Unlike ProgressiveGAN [23], our generator architecture does not grow; instead, we increase the resolution of the generator by sampling rays more densely from the same implicit representation.
Training Details
At training time, we randomly sample camera poses ξ from a distribution p ξ . For our experiments, we constrained camera positions to the surface of a unit sphere and directed the camera to point towards the origin. At training time, pitch and yaw along the sphere were sampled from a distribution that was tuned according to the dataset. Real images I are sampled from the training set with distribution p I . We use the non-saturating GAN loss with R1 regularization [36]:
\mathcal{L}(\theta, \phi) = \mathbb{E}_{\mathbf{z} \sim p_z,\, \xi \sim p_\xi}\!\left[ f\!\left( D_{\theta_D}\!\left( G_{\theta_G}(\mathbf{z}, \xi) \right) \right) \right] + \mathbb{E}_{I \sim p_D}\!\left[ f\!\left( -D_{\theta_D}(I) \right) + \lambda \left| \nabla D_{\theta_D}(I) \right|^2 \right], \quad \text{where } f(u) = -\log\!\left( 1 + \exp(-u) \right). \quad (6)
We use the Adam optimizer with β 1 = 0, β 2 = 0.9. We initialize learning rates to 5 × 10 −5 for the generator and 4 × 10 −4 for the discriminator, decayed over training to 1 × 10 −5 and 1 × 10 −4 respectively. Further training and implementation details can be found in the supplemental materials.
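A sketch of one discriminator and one generator update corresponding to Eq. (6), written with softplus (the usual numerically stable form of the logistic loss) and an R1 gradient penalty on real images. The `r1_lambda` value, the pose-sampling and progressive-growing logic, and the exact sign conventions of the bookkeeping are assumptions of this sketch, not the authors' released training code.

```python
import torch
import torch.nn.functional as F

def discriminator_step(D, G, real_images, z, poses, d_opt, r1_lambda=10.0):
    d_opt.zero_grad()
    fake_images = G(z, poses).detach()
    loss_fake = F.softplus(D(fake_images)).mean()         # push D(G(z, xi)) down
    real_images.requires_grad_(True)
    real_scores = D(real_images)
    loss_real = F.softplus(-real_scores).mean()           # push D(I) up
    grad_real, = torch.autograd.grad(real_scores.sum(), real_images, create_graph=True)
    r1 = grad_real.pow(2).reshape(grad_real.size(0), -1).sum(1).mean()  # |grad D(I)|^2
    (loss_fake + loss_real + r1_lambda * r1).backward()
    d_opt.step()

def generator_step(D, G, z, poses, g_opt):
    g_opt.zero_grad()
    loss = F.softplus(-D(G(z, poses))).mean()             # non-saturating generator loss
    loss.backward()
    g_opt.step()
```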
Experiments and Analysis
In this section, we first evaluate the quality of images generated by π-GAN. We then demonstrate that it learns 3D representations that enable synthesizing images at unseen poses. We also include ablation studies to justify our use of sinusoidal activations and mapping network conditioning.
Evaluating Image Quality
Datasets. We evaluate π-GAN on the real-world CelebA [31] and Cats [63] datasets, as well as the synthetic CARLA [7,52] dataset. CelebA contains 200,000 high-resolution face images of 10,000 different celebrities. We crop the images from the top of the hair to the bottom of the chin. The Cats dataset contains 6,444 128 × 128 images of cat heads. The CARLA dataset contains 10k images of 16 car models with random texture and color properties, rendered with the Carla Driving simulator. We train and evaluate at 128 × 128 resolution for all datasets and models. We evaluate all models using a moving average of parameters.
Baselines. We compare against two previous approaches to 3D-aware image synthesis: HoloGAN [40] and Generative Radiance Fields (GRAF) [52].
Qualitative results. Figure 4 compares images generated by π-GAN, HoloGAN, and GRAF on three datasets.
Qualitatively, HoloGAN achieves good image quality but suffers from multi-view inconsistency. Although it generally produces sharp images, identity shift is visible across rotations, particularly at the edges of the training distribution. HoloGAN struggled on the synthetic CARLA dataset, which featured much larger variations in viewpoint than CelebA or Cats. Previous papers were also unable to obtain consistent HoloGAN baselines on this dataset [52]. GRAF, which allows for explicit camera control, is more capable than HoloGAN at recovering wide viewing angles. Because it utilizes a 3D representation, it renders different views of the same scene with less identity shift than Holo-GAN. However, GRAF is less capable than HoloGAN at rendering fine details such as hair and teeth, and generally produces images that are more cartoon-ish and less lifelike than HoloGAN.
Our method combines high detail with the ability to represent a wide range of camera angles. Compared to HoloGAN and GRAF, π-GAN better recreates fine details such as individual teeth (CelebA) and whiskers (Cats). Because we represent each instance with a radiance field, a true 3D representation, π-GAN generates images that are inherently view consistent, have minimal identity shift, and that recover a wide range of angles.
Quantitative results. We evaluate image quality using Fréchet Inception Distance (FID) [18], Kernel Inception Distance (KID) [2], and Inception Score [51]. Tables 1a, 1b, and 1c show a quantitative comparison on CelebA, Cats, and CARLA, respectively. We show significant improvements in image quality metrics compared with baselines, particularly on real-world datasets with fine details.
Figure 6 (columns: center, 1 std. dev., 2 std. dev.): π-GAN is capable of rendering views from steep angles, producing reasonable results even beyond two standard deviations of camera yaw on CelebA. Face yaw on CelebA is approximately zero-centered Gaussian, with a standard deviation of 17° from the centerline.
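These metrics can be computed with off-the-shelf libraries; the sketch below uses torchmetrics (assuming a recent version and uint8 image batches). The KID subset size and FID feature dimensionality below are illustrative defaults, not values from the paper.

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image.kid import KernelInceptionDistance
from torchmetrics.image.inception import InceptionScore

def image_quality_metrics(real_uint8, fake_uint8):
    """real_uint8, fake_uint8: (N, 3, H, W) uint8 tensors with values in [0, 255]."""
    fid = FrechetInceptionDistance(feature=2048)
    kid = KernelInceptionDistance(subset_size=100)  # subset_size must not exceed N
    inception = InceptionScore()

    fid.update(real_uint8, real=True)
    fid.update(fake_uint8, real=False)
    kid.update(real_uint8, real=True)
    kid.update(fake_uint8, real=False)
    inception.update(fake_uint8)

    kid_mean, _ = kid.compute()
    is_mean, _ = inception.compute()
    return {"FID": fid.compute().item(), "KIDx100": 100 * kid_mean.item(), "IS": is_mean.item()}
```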
Generating True 3D Representations
A key advantage of our approach over previous CNN attempts at 3D representation learning is that by generating an explicit three-dimensional radiance field, our model learns an underlying 3D-structure-aware representation. This representation allows for explicit camera control, naturally lends itself to rendering poses that were uncommon or unseen at training time, and is interpretable.
Extrapolation to rare or unseen camera poses. π-GAN relies on an underlying 3D structural representation and offers explicit camera control. Consequently, it more readily renders views and poses outside of the training dataset distribution than previous methods that rely on black-box representations or projections (e.g., [40]). Figure 6 shows that although the CelebA dataset has a relatively constrained camera yaw distribution, the explicit camera control and 3D structural representation naturally generalizes to rendering views even from steep angles, albeit at reduced quality. Figure 7 illustrates that, despite only training on tightly cropped images, the radiance field extrapolates when we zoom out the camera. Because the radiance field may be rendered from any of a wide variety of angles at training time, the generator is encouraged to produce a radiance field that represents the entire scene, even if only a small portion will be visible in any single image.
Interpreting the 3D representation. Although the color output of the implicit representation depends on ray direction to allow for view-dependent effects, such as specularities, the density output σ is completely view independent, resulting in a view-consistent 3D structure. The 3D representation can be extracted and visualized using the marching cubes algorithm [32] on the density output of the conditioned radiance field to produce a surface mesh. In practice, we found that the quality of extracted meshes was sensitive to the threshold level of the marching cubes algorithm; depth-image projection required no tuning and tended to produce more consistent results. Figure 8 shows 3D models extracted from the 3D representation.
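As a sketch, extracting such a mesh amounts to sampling the view-independent density σ on a regular grid and running marching cubes on the resulting volume, for example via scikit-image. Here `sample_density` is a hypothetical wrapper around the conditioned radiance field, and the grid bound and iso-level are placeholder values that, as noted above, typically require tuning.

```python
import torch
from skimage import measure

@torch.no_grad()
def extract_mesh(sample_density, resolution=128, bound=0.3, level=10.0):
    # Build a regular grid of 3D query points inside [-bound, bound]^3.
    coords = torch.linspace(-bound, bound, resolution)
    grid = torch.stack(torch.meshgrid(coords, coords, coords, indexing="ij"), dim=-1)
    points = grid.reshape(-1, 3)

    # Query the view-independent density sigma at every grid point.
    sigma = sample_density(points).reshape(resolution, resolution, resolution)

    # Run marching cubes at the chosen iso-level to obtain a triangle mesh.
    verts, faces, normals, _ = measure.marching_cubes(sigma.cpu().numpy(), level=level)
    return verts, faces, normals
```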
Ablations
We ablate sinusoidal activations and mapping network conditioning to better understand their individual contributions. We compare radiance fields with sinusoidal activations against radiance fields with ReLU activations and positional encodings (P.E.) [38]. Moreover, we evaluate radiance fields conditioned with a mapping network and FiLM conditioning against radiance fields conditioned via concatenation [52]. Table 2 summarizes the results of these experiments. Sinusoidal activations and mapping network conditioning each yielded improvements against their respective baselines. However, the combined model, with both sinusoidal activations and a mapping network, was more effective than the sum of its parts.
Figure 9: Ablation study for training π-GAN with and without the progressive growing discriminator on CelebA at 128 × 128.
Figure 9 compares a model trained with progressive growing against a model initialized to the full 128 × 128 image resolution. Because computational complexity grows quadratically with image size, progressive growing, which begins at low resolutions, allows for the use of much larger batch sizes at the start of training. The large batch sizes are helpful in stabilizing training, while also allowing for a higher throughput in images per iteration. As others have found before us [23], progressive growing, and the larger batch sizes it enables, helped ensure quality and diversity for generated images.
Discussion
Applications to novel view synthesis. Figure 10 demonstrates that it is possible to use a trained generator, without modifications, to perform single-view reconstruction using the procedure described by Karras et al. [25]. For this purpose, we freeze the parameters of our implicit representation and seek the frequencies γ i and phase shifts β i for each MLP layer i which produce a radiance field that, when rendered, best matches the target image. Additional details are found in the supplement.
Failure modes, limitations, and future work. While π-GAN has demonstrated considerable improvements to image quality for 3D-aware image synthesis, there remain a plethora of avenues for future work.
Figure 10 (panels: input image, synthesized views): Using a trained π-GAN generator, we can optimize a radiance field to fit an input image and synthesize novel views from arbitrary camera poses.
Figure 11: In a failure case reminiscent of the hollow-face illusion, our model sometimes generates objects with inverted sections.
Although the unsupervised learning of 3D shapes was not the focus of this work, π-GAN nevertheless produces interpretable and view-consistent 3D representations that capture the 3D structures of objects. Future work could focus on refining the quality of extracted meshes, with π-GAN as a viable solution to learning shapes from unposed images.
In certain cases, π-GAN can generate a radiance field that creates viable images when rendered from each direction but nonetheless fails to conform to the 3D shape that we would expect. As Figure 11 demonstrates, a concave face is a valid geometric solution, given the constrained range of poses the discriminator sees at training. Further investigation may reveal insights that could resolve such ambiguities.
While π-GAN has made strides in improving image quality for 3D-aware image synthesis, much work remains before implicit GANs can match the image quality of state-of-the-art 2D-convolutional GANs [25,3,23]. Future work may produce solutions to remaining visual artifacts and further improve image quality. π-GAN is computationally expensive compared to traditional 2D GANs because the complexity of training the generator scales not only with image size but also with depth along each ray. More efficient rendering techniques could lower the computational barrier and allow for larger, sharper images.
Ethical considerations. While our inverse rendering results only reconstruct static images, the method could be extended to generate fake photos or videos of real-world people (DeepFakes). DeepFakes pose a real-world societal threat, and we do not condone using our work to generate fake images or videos of any person with the intent of spreading misinformation or tarnishing their reputation. We also recognize a lack of diversity in our face results, stemming from the implicit bias in the CelebA dataset. We are willing to work with the community in developing more inclusive datasets.
Conclusion. Photorealistic 3D-aware image synthesis has many exciting applications in vision and graphics. With our work, we take a significant step towards this goal.
A. Novel View Synthesis Details
We demonstrate a potential application of π-GAN: we can use a trained generator, without modifications, to perform single-view reconstruction. We base our method on the inverse projection procedure outlined by Karras et al. [25].
We freeze the parameters of our implicit representation and seek the frequencies γ_i and phase shifts β_i for each MLP layer i which produce a radiance field that, when rendered, best matches the target image. We initialize γ_i and β_i to γ̄_i and β̄_i, the centers of mass of the frequencies and phase shifts for each layer. We calculate γ̄_i and β̄_i simply by averaging the frequencies and phase shifts of ten thousand random noise vector inputs. We then run gradient descent to minimize the mean-squared-error image reconstruction loss. We additionally introduce an L2 penalty with a weight of 0.1 during the optimization process to prevent γ_i and β_i from straying too far from γ̄_i and β̄_i. We optimize the frequencies and phase shifts with the Adam optimizer over 700 iterations. We initialize the learning rate to 0.01, decaying by a factor of 0.5 every 200 iterations.
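A minimal sketch of this fitting loop is shown below. `render` is a placeholder for rendering the frozen π-GAN radiance field under the given FiLM conditioning, and `freq_mean`/`phase_mean` stand for the averaged conditioning tensors; only the hyperparameters stated above (700 iterations, learning rate 0.01 halved every 200 iterations, L2 weight 0.1) come from the text, while the rest is an assumption.

```python
import torch
import torch.nn.functional as F

def fit_single_image(render, freq_mean, phase_mean, target_image, camera_pose,
                     iters=700, lr=0.01, l2_weight=0.1):
    """render(freqs, phases, pose) -> image; freq_mean/phase_mean are the averaged
    frequencies/phase shifts (hypothetical stacked per-layer tensors, no grad)."""
    freqs = freq_mean.clone().requires_grad_(True)
    phases = phase_mean.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([freqs, phases], lr=lr)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=200, gamma=0.5)

    for _ in range(iters):
        optimizer.zero_grad()
        loss = F.mse_loss(render(freqs, phases, camera_pose), target_image)
        # Keep the optimized conditioning close to the average conditioning.
        loss = loss + l2_weight * ((freqs - freq_mean).pow(2).mean()
                                   + (phases - phase_mean).pow(2).mean())
        loss.backward()
        optimizer.step()
        scheduler.step()
    return freqs.detach(), phases.detach()
```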
B. Model Details
Mapping Network. The mapping network is parameterized as an MLP with three hidden layers of 256 units each. The mapping network uses leaky-ReLU activations with a negative slope of 0.2.
SIREN-based Implicit Radiance Field. The FiLMed-SIREN [54] backbone of the generator is parameterized as an MLP with eight FiLMed-SIREN hidden layers of 256 units each.
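A compact sketch of these two components is given below, with the hidden widths and depths stated above. How the mapping network's output is split into per-layer frequencies and phase shifts, and the SIREN-specific weight initialization and output heads, are omitted or assumed details.

```python
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    """Maps a latent z to per-layer FiLM parameters (frequencies gamma, phase shifts beta)."""
    def __init__(self, z_dim=256, hidden=256, num_siren_layers=8, siren_width=256):
        super().__init__()
        layers, dims = [], [z_dim, hidden, hidden, hidden]
        for i in range(3):  # three hidden layers with leaky-ReLU(0.2)
            layers += [nn.Linear(dims[i], dims[i + 1]), nn.LeakyReLU(0.2)]
        layers += [nn.Linear(hidden, 2 * num_siren_layers * siren_width)]
        self.net = nn.Sequential(*layers)
        self.num_layers, self.width = num_siren_layers, siren_width

    def forward(self, z):
        out = self.net(z).view(-1, self.num_layers, 2, self.width)
        gamma, beta = out[:, :, 0], out[:, :, 1]  # each (B, num_layers, width)
        return gamma, beta

class FiLMSIRENLayer(nn.Module):
    """One FiLM-conditioned SIREN layer: sin(gamma * (Wx + b) + beta)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, gamma, beta):
        # x: (B, num_points, in_dim); gamma, beta: (B, out_dim) for this layer.
        return torch.sin(gamma.unsqueeze(1) * self.linear(x) + beta.unsqueeze(1))
```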
Discriminator. Table 3 shows the architecture of the progressive discriminator. We begin training at low resolutions and progressively add discriminator stages while upsampling image size. In order to smooth transitions between upsamples, we fade in the contributions of new layers over ten-thousand iterations. We utilized CoordConv layers [29] and residual connections [15] throughout the discriminator.
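For reference, a CoordConv layer simply concatenates normalized pixel-coordinate channels to its input before a standard convolution; a minimal sketch (not the authors' implementation):

```python
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    """Conv2d applied to the input concatenated with normalized (x, y) coordinate channels."""
    def __init__(self, in_channels, out_channels, kernel_size, **kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_channels + 2, out_channels, kernel_size, **kwargs)

    def forward(self, x):
        b, _, h, w = x.shape
        ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, xs, ys], dim=1))
```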
C. Additional Training Details
We train the majority of our models across two RTX 6000 GPUs. We begin training at a resolution of 32 × 32, with an initial batch size of 120. At each upsample, we drop the batch size by a factor of four to keep the models and generated images in memory. At higher resolutions, we aggregate across mini-batches to keep an effective batch size at or above 12, given our GPU constraints. To further reduce memory usage, we used PyTorch's Automatic Mixed Precision (AMP). Certain rendering and camera parameters were tuned according to the dataset. We sample camera poses for CelebA from a normal distribution, with a vertical standard deviation of 0.15 radians and a horizontal standard deviation of 0.3 radians. We sample camera poses for Cats from a uniform distribution, with horizontal range (−0.75, 0.75) and vertical range (−0.4, 0.4). We sample poses for CARLA uniformly from the upper hemisphere. We tune the number of samples along each ray to balance memory consumption and depth resolution. We use 24 samples per ray for CelebA and Cats and 64 samples per ray for CARLA.
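The pose sampling above can be sketched as follows (angles in radians). How pitch and yaw map onto the camera's position on the unit sphere, and the area-uniform hemisphere parameterization used for CARLA, are our assumptions rather than details from the paper.

```python
import math
import torch

def sample_poses(batch_size, dataset):
    if dataset == "celeba":
        # Normal distribution: vertical std 0.15 rad, horizontal std 0.3 rad.
        pitch = torch.randn(batch_size) * 0.15
        yaw = torch.randn(batch_size) * 0.3
    elif dataset == "cats":
        # Uniform: horizontal range (-0.75, 0.75), vertical range (-0.4, 0.4).
        pitch = torch.empty(batch_size).uniform_(-0.4, 0.4)
        yaw = torch.empty(batch_size).uniform_(-0.75, 0.75)
    elif dataset == "carla":
        # Uniform over the upper hemisphere: yaw uniform, cos(polar angle) uniform.
        yaw = torch.empty(batch_size).uniform_(0.0, 2.0 * math.pi)
        pitch = torch.acos(torch.empty(batch_size).uniform_(0.0, 1.0))
    else:
        raise ValueError(f"unknown dataset: {dataset}")
    return pitch, yaw
```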
D. π-GAN Results @ 64 × 64
Table 4 includes additional quantitative results, evaluated at 64 × 64, in order to allow for comparisons of π-GAN against models evaluated at lower resolutions.
E. Additional Visual Results
We include additional visual results to show the image quality and view consistency of π-GAN. Figures 13 and 14 demonstrate the wide range of camera poses supported by π-GAN for generated faces and cats. Figure 12 shows the fine detail that π-GAN renders on larger images. Figure 15 shows additional cars with varying elevation and rotation. We include several videos of faces and cats with the camera following an elliptical trajectory in our supplementary video.
Figure 4: Qualitative comparison on CelebA, Cats, and CARLA.
Figure 5: Uncurated generated faces, corresponding to the first 50 random seeds.
Figure 7: Explicit camera control at inference enables rendering views completely absent from the training distribution of camera poses. Although π-GAN was trained only on close-up images, it extrapolates to zoomed-out poses.
Figure 8: We can extract the 3D representation as a mesh, either by projecting a depth-map (CelebA, Cats), or through marching cubes (CARLA).
Figure 12: Curated examples from our model trained with CelebA [31].
Figure 13: Curated examples from our model trained with CelebA, displayed from multiple viewing angles.
Figure 14: Curated examples from our model trained with Cats [63], displayed from multiple viewing angles.
Figure 15: Curated examples from our model trained with CARLA [7], displayed from multiple viewing angles.
Table 1: FID, KID mean×100, and IS for CelebA, Cats, and CARLA datasets.

(a) CelebA @ 128 × 128
          FID ↓   KID ↓   IS ↑
HoloGAN   39.7    2.91    1.89
GRAF      41.1    2.29    2.34
π-GAN     14.7    0.39    2.62

(b) Cats @ 128 × 128
          FID ↓   KID ↓   IS ↑
HoloGAN   40.4    3.30    2.03
GRAF      28.9    1.43    1.66
π-GAN     16.8    0.92    2.06

(c) CARLA @ 128 × 128
          FID ↓   KID ↓   IS ↑
HoloGAN   67.5    3.95    3.52
GRAF      41.7    2.43    3.70
π-GAN     35.5    2.11    4.38
Table 2: FID scores on CelebA @ 64 × 64, when comparing network architectures with different activation functions and conditioning methods.
Table 3: Discriminator architecture, showing progressive growing stages.

Layer                  Activation        Output Shape
Input Image            -                 3x128x128
Adapter Block (1x1)    LeakyReLU (0.2)   64x128x128
Coord Conv 1 (3x3)     LeakyReLU (0.2)   128x128x128
Coord Conv 2 (3x3)     LeakyReLU (0.2)   128x128x128
Avg Pool Downsample    -                 128x64x64
Coord Conv 1 (3x3)     LeakyReLU (0.2)   256x64x64
Coord Conv 2 (3x3)     LeakyReLU (0.2)   256x64x64
Avg Pool Downsample    -                 256x32x32
Coord Conv 1 (3x3)     LeakyReLU (0.2)   400x32x32
Coord Conv 2 (3x3)     LeakyReLU (0.2)   400x32x32
Avg Pool Downsample    -                 400x16x16
Coord Conv 1 (3x3)     LeakyReLU (0.2)   400x16x16
Coord Conv 2 (3x3)     LeakyReLU (0.2)   400x16x16
Avg Pool Downsample    -                 400x8x8
Coord Conv 1 (3x3)     LeakyReLU (0.2)   400x4x4
Coord Conv 2 (3x3)     LeakyReLU (0.2)   400x4x4
Avg Pool Downsample    -                 400x2x2
Conv 2d (2x2)          -                 1x1x1
Table 4: FID, KID mean × 100, and IS for π-GAN on CelebA, Cats, and CARLA datasets.

                   FID ↓   KID ↓   IS ↑
CelebA @ 64 × 64   5.15    0.09    2.28
Cats @ 64 × 64     7.36    0.23    2.07
CARLA @ 64 × 64    15.1    0.87    4.13
pi-GAN: Periodic Implicit Generative Adversarial Networks for 3D-Aware Image Synthesis
Eric R. Chan* (Stanford University, [email protected]), Marco Monteiro* (Stanford University, [email protected]), Petr Kellnhofer (Stanford University, [email protected]), Jiajun Wu
Acknowledgements
We would like to offer special thanks to Matthew Chan for fruitful discussions and assistance in completing this work. We'd like to thank Stanford HAI for the AWS Cloud Credits. Gordon Wetzstein was supported by an NSF CAREER Award (IIS 1553333), a Sloan Fellowship, and a PECASE from the ARO.
References
[1] Matan Atzmon and Yaron Lipman. SAL: Sign agnostic learning of shapes from raw data. In Proc. CVPR, 2020.
[2] Mikołaj Bińkowski, Dougal J. Sutherland, Michael Arbel, and Arthur Gretton. Demystifying MMD GANs. 2018.
[3] Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096, 2018.
[4] Rohan Chabra, Jan Eric Lenssen, Eddy Ilg, Tanner Schmidt, Julian Straub, Steven Lovegrove, and Richard Newcombe. Deep local shapes: Learning local SDF priors for detailed 3D reconstruction. arXiv preprint arXiv:2003.10983, 2020.
[5] Zhiqin Chen and Hao Zhang. Learning implicit fields for generative shape modeling. In Proc. CVPR, pages 5939-5948, 2019.
[6] Thomas Davies, Derek Nowrouzezahrai, and Alec Jacobson. Overfit neural networks as a compact shape representation. 2020.
[7] Alexey Dosovitskiy, Germán Ros, Felipe Codevilla, Antonio M. López, and Vladlen Koltun. CARLA: An open urban driving simulator. CoRR, abs/1711.03938, 2017.
[8] Vincent Dumoulin, Ethan Perez, Nathan Schucher, Florian Strub, Harm de Vries, Aaron Courville, and Yoshua Bengio. Feature-wise transformations. Distill, 2018. https://distill.pub/2018/feature-wise-transformations.
[9] SM Ali Eslami, Danilo Jimenez Rezende, Frederic Besse, Fabio Viola, Ari S. Morcos, Marta Garnelo, Avraham Ruderman, Andrei A. Rusu, Ivo Danihelka, Karol Gregor, et al. Neural scene representation and rendering. Science, 360(6394):1204-1210, 2018.
[10] Kyle Genova, Forrester Cole, Avneesh Sud, Aaron Sarna, and Thomas Funkhouser. Local deep implicit functions for 3D shape. In Proc. CVPR, 2020.
[11] Kyle Genova, Forrester Cole, Daniel Vlasic, Aaron Sarna, William T. Freeman, and Thomas Funkhouser. Learning shape templates with structured implicit functions. In Proc. ICCV, pages 7154-7164, 2019.
[12] Shubham Goel, Angjoo Kanazawa, and Jitendra Malik. Shape and viewpoint without keypoints. In Proc. ECCV, 2020.
[13] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Proc. NeurIPS, 2014.
[14] Amos Gropp, Lior Yariv, Niv Haim, Matan Atzmon, and Yaron Lipman. Implicit geometric regularization for learning shapes. In Proc. ICML, 2020.
[15] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proc. CVPR, pages 770-778, 2016.
[16] Paul Henderson, Vagia Tsiminaki, and Christoph H. Lampert. Leveraging 2D data to learn textured 3D mesh generation. In Proc. CVPR, 2020.
[17] Philipp Henzler, Niloy J. Mitra, and Tobias Ritschel. Escaping Plato's cave: 3D shape from adversarial rendering. In Proc. ICCV, 2019.
[18] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, Günter Klambauer, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a Nash equilibrium. CoRR, abs/1706.08500, 2017.
[19] Chiyu Jiang, Avneesh Sud, Ameesh Makadia, Jingwei Huang, Matthias Nießner, and Thomas Funkhouser. Local implicit grid representations for 3D scenes. In Proc. CVPR, pages 6001-6010, 2020.
[20] Yue Jiang, Dantong Ji, Zhizhong Han, and Matthias Zwicker. SDFDiff: Differentiable rendering of signed distance fields for 3D shape optimization. In Proc. CVPR, 2020.
[21] James T. Kajiya and Brian P. Von Herzen. Ray tracing volume densities. ACM SIGGRAPH Computer Graphics, 18(3):165-174, 1984.
[22] Angjoo Kanazawa, Shubham Tulsiani, Alexei A. Efros, and Jitendra Malik. Learning category-specific mesh reconstruction from image collections. In Proc. ECCV, 2018.
[23] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. In Proc. ICLR, 2018.
[24] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Proc. CVPR, 2019.
[25] Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of StyleGAN. In Proc. CVPR, 2020.
[26] Amit Kohli, Vincent Sitzmann, and Gordon Wetzstein. Semantic implicit neural scene representations with semi-supervised training. In Proc. 3DV, 2020.
[27] Yiyi Liao, Katja Schwarz, Lars Mescheder, and Andreas Geiger. Towards unsupervised learning of generative models for 3D controllable image synthesis. In Proc. CVPR, 2020.
[28] Lingjie Liu, Jiatao Gu, Kyaw Zaw Lin, Tat-Seng Chua, and Christian Theobalt. Neural sparse voxel fields. In Proc. NeurIPS, 2020.
[29] Rosanne Liu, Joel Lehman, Piero Molino, Felipe Petroski Such, Eric Frank, Alex Sergeev, and Jason Yosinski. An intriguing failing of convolutional neural networks and the CoordConv solution. In Proc. NeurIPS, pages 9605-9616, 2018.
[30] Shaohui Liu, Yinda Zhang, Songyou Peng, Boxin Shi, Marc Pollefeys, and Zhaopeng Cui. DIST: Rendering deep implicit signed distance function with differentiable sphere tracing. In Proc. CVPR, 2020.
[31] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proc. ICCV, December 2015.
[32] William E. Lorensen and Harvey E. Cline. Marching cubes: A high resolution 3D surface construction algorithm. ACM SIGGRAPH Computer Graphics, 21(4):163-169, 1987.
[33] Sebastian Lunz, Yingzhen Li, Andrew Fitzgibbon, and Nate Kushman. Inverse graphics GAN: Learning to generate 3D shapes from unstructured 2D data. arXiv preprint arXiv:2002.12674, 2020.
[34] N. Max. Optical models for direct volume rendering. IEEE TVCG, 1(2):99-108, 1995.
[35] Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3D reconstruction in function space. In Proc. CVPR, 2019.
[36] Lars M. Mescheder. On the convergence properties of GAN training. CoRR, abs/1801.04406, 2018.
[37] Mateusz Michalkiewicz, Jhony K. Pontes, Dominic Jack, Mahsa Baktashmotlagh, and Anders Eriksson. Implicit surface representations as layers in neural networks. In Proc. ICCV, pages 4743-4752, 2019.
[38] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. In Proc. ECCV, 2020.
[39] Siva Karthik Mustikovela, Varun Jampani, Shalini De Mello, Sifei Liu, Umar Iqbal, Carsten Rother, and Jan Kautz. Self-supervised viewpoint learning from image collections. In Proc. CVPR, 2020.
[40] Thu Nguyen-Phuoc, Chuan Li, Lucas Theis, Christian Richardt, and Yong-Liang Yang. HoloGAN: Unsupervised learning of 3D representations from natural images. In Proc. ICCV, 2019.
[41] Thu Nguyen-Phuoc, Christian Richardt, Long Mai, Yong-Liang Yang, and Niloy Mitra. BlockGAN: Learning 3D object-aware scene representations from unlabelled images. In Proc. NeurIPS, 2020.
[42] Michael Niemeyer, Lars Mescheder, Michael Oechsle, and Andreas Geiger. Occupancy flow: 4D reconstruction by learning particle dynamics. In Proc. ICCV, 2019.
[43] Michael Niemeyer, Lars Mescheder, Michael Oechsle, and Andreas Geiger. Differentiable volumetric rendering: Learning implicit 3D representations without 3D supervision. In Proc. CVPR, 2020.
[44] Michael Oechsle, Lars Mescheder, Michael Niemeyer, Thilo Strauss, and Andreas Geiger. Texture fields: Learning texture representations in function space. In Proc. ICCV, 2019.
[45] Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. DeepSDF: Learning continuous signed distance functions for shape representation. In Proc. CVPR, 2019.
[46] Songyou Peng, Michael Niemeyer, Lars Mescheder, Marc Pollefeys, and Andreas Geiger. Convolutional occupancy networks. In Proc. ECCV, 2020.
[47] Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, and Aaron Courville. FiLM: Visual reasoning with a general conditioning layer. arXiv preprint arXiv:1709.07871, 2017.
[48] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In Proc. ICLR, 2016.
[49] Sai Rajeswar, Fahim Mannan, Florian Golemo, Jérôme Parent-Lévesque, David Vazquez, Derek Nowrouzezahrai, and Aaron Courville. Pix2Shape: Towards unsupervised learning of 3D scenes from images using a view-based representation. International Journal of Computer Vision, pages 1-16, 2020.
[50] Shunsuke Saito, Zeng Huang, Ryota Natsume, Shigeo Morishima, Angjoo Kanazawa, and Hao Li. PIFu: Pixel-aligned implicit function for high-resolution clothed human digitization. In Proc. ICCV, pages 2304-2314, 2019.
[51] Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. CoRR, abs/1606.03498, 2016.
[52] Katja Schwarz, Yiyi Liao, Michael Niemeyer, and Andreas Geiger. GRAF: Generative radiance fields for 3D-aware image synthesis. In Proc. NeurIPS, 2020.
[53] Vincent Sitzmann, Eric R. Chan, Richard Tucker, Noah Snavely, and Gordon Wetzstein. MetaSDF: Meta-learning signed distance functions. In Proc. NeurIPS, 2020.
[54] Vincent Sitzmann, Julien N.P. Martel, Alexander W. Bergman, David B. Lindell, and Gordon Wetzstein. Implicit neural representations with periodic activation functions. In Proc. NeurIPS, 2020.
[55] Vincent Sitzmann, Michael Zollhöfer, and Gordon Wetzstein. Scene representation networks: Continuous 3D-structure-aware neural scene representations. In Proc. NeurIPS, 2019.
[56] Maxim Tatarchenko, Alexey Dosovitskiy, and Thomas Brox. Multi-view 3D models from single images with a convolutional network. In Proc. ECCV, 2016.
[57] Ayush Tewari, Ohad Fried, Justus Thies, Vincent Sitzmann, Stephen Lombardi, Kalyan Sunkavalli, Ricardo Martin-Brualla, Tomas Simon, Jason Saragih, Matthias Nießner, et al. State of the art on neural rendering. In Proc. Eurographics, 2020.
[58] Shubham Tulsiani, Nilesh Kulkarni, and Abhinav Gupta. Implicit mesh reconstruction from unannotated image collections. arXiv preprint arXiv:2007.08504, 2020.
[59] Shubham Tulsiani, Hao Su, Leonidas J. Guibas, Alexei A. Efros, and Jitendra Malik. Learning shape abstractions by assembling volumetric primitives. In Proc. CVPR, 2017.
[60] Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. High-resolution image synthesis and semantic manipulation with conditional GANs. In Proc. CVPR, 2018.
[61] Jiajun Wu, Chengkai Zhang, Tianfan Xue, William T. Freeman, and Joshua B. Tenenbaum. Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling. In Proc. NeurIPS, 2016.
[62] Lior Yariv, Yoni Kasten, Dror Moran, Meirav Galun, Matan Atzmon, Ronen Basri, and Yaron Lipman. Multiview neural surface reconstruction by disentangling geometry and appearance. In Proc. NeurIPS, 2020.
[63] Weiwei Zhang, Jian Sun, and Xiaoou Tang. Cat head detection - how to effectively exploit shape and texture features. In Proc. ECCV, 2008.
[64] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proc. ICCV, 2017.
[65] Jun-Yan Zhu, Zhoutong Zhang, Chengkai Zhang, Jiajun Wu, Antonio Torralba, Joshua B. Tenenbaum, and William T. Freeman. Visual object networks: Image generation with disentangled 3D representations. In Proc. NeurIPS, 2018.
| []
|
[
"Published as a conference paper at ICLR 2023 CANARY IN A COALMINE: BETTER MEMBERSHIP IN- FERENCE WITH ENSEMBLED ADVERSARIAL QUERIES",
"Published as a conference paper at ICLR 2023 CANARY IN A COALMINE: BETTER MEMBERSHIP IN- FERENCE WITH ENSEMBLED ADVERSARIAL QUERIES"
]
| [
"Yuxin Wen [email protected] \nUniversity of Maryland\nUniversity of Maryland\nUniversity of Maryland\nUniversity of Chicago\nNew York University\nUniversity of Maryland\nUniversity of Maryland\n\n",
"Arpit Bansal \nUniversity of Maryland\nUniversity of Maryland\nUniversity of Maryland\nUniversity of Chicago\nNew York University\nUniversity of Maryland\nUniversity of Maryland\n\n",
"Hamid Kazemi \nUniversity of Maryland\nUniversity of Maryland\nUniversity of Maryland\nUniversity of Chicago\nNew York University\nUniversity of Maryland\nUniversity of Maryland\n\n",
"Eitan Borgnia \nUniversity of Maryland\nUniversity of Maryland\nUniversity of Maryland\nUniversity of Chicago\nNew York University\nUniversity of Maryland\nUniversity of Maryland\n\n",
"Micah Goldblum \nUniversity of Maryland\nUniversity of Maryland\nUniversity of Maryland\nUniversity of Chicago\nNew York University\nUniversity of Maryland\nUniversity of Maryland\n\n",
"Jonas Geiping \nUniversity of Maryland\nUniversity of Maryland\nUniversity of Maryland\nUniversity of Chicago\nNew York University\nUniversity of Maryland\nUniversity of Maryland\n\n",
"Tom Goldstein \nUniversity of Maryland\nUniversity of Maryland\nUniversity of Maryland\nUniversity of Chicago\nNew York University\nUniversity of Maryland\nUniversity of Maryland\n\n"
]
| [
"University of Maryland\nUniversity of Maryland\nUniversity of Maryland\nUniversity of Chicago\nNew York University\nUniversity of Maryland\nUniversity of Maryland\n",
"University of Maryland\nUniversity of Maryland\nUniversity of Maryland\nUniversity of Chicago\nNew York University\nUniversity of Maryland\nUniversity of Maryland\n",
"University of Maryland\nUniversity of Maryland\nUniversity of Maryland\nUniversity of Chicago\nNew York University\nUniversity of Maryland\nUniversity of Maryland\n",
"University of Maryland\nUniversity of Maryland\nUniversity of Maryland\nUniversity of Chicago\nNew York University\nUniversity of Maryland\nUniversity of Maryland\n",
"University of Maryland\nUniversity of Maryland\nUniversity of Maryland\nUniversity of Chicago\nNew York University\nUniversity of Maryland\nUniversity of Maryland\n",
"University of Maryland\nUniversity of Maryland\nUniversity of Maryland\nUniversity of Chicago\nNew York University\nUniversity of Maryland\nUniversity of Maryland\n",
"University of Maryland\nUniversity of Maryland\nUniversity of Maryland\nUniversity of Chicago\nNew York University\nUniversity of Maryland\nUniversity of Maryland\n"
]
| []
| As industrial applications are increasingly automated by machine learning models, enforcing personal data ownership and intellectual property rights requires tracing training data back to their rightful owners. Membership inference algorithms approach this problem by using statistical techniques to discern whether a target sample was included in a model's training set. However, existing methods only utilize the unaltered target sample or simple augmentations of the target to compute statistics. Such a sparse sampling of the model's behavior carries little information, leading to poor inference capabilities. In this work, we use adversarial tools to directly optimize for queries that are discriminative and diverse. Our improvements achieve significantly more accurate membership inference than existing methods, especially in offline scenarios and in the low false-positive regime which is critical in legal settings. Code is available at https://github.com/YuxinWenRick/canary-in-a-coalmine. | 10.48550/arxiv.2210.10750 | [
"https://export.arxiv.org/pdf/2210.10750v2.pdf"
]
| 252,992,499 | 2210.10750 | e35989611e21d635b5ab8abccfb97992a3f0f07e |
Published as a conference paper at ICLR 2023
CANARY IN A COALMINE: BETTER MEMBERSHIP INFERENCE WITH ENSEMBLED ADVERSARIAL QUERIES
Yuxin Wen ([email protected]), Arpit Bansal, Hamid Kazemi, Eitan Borgnia, Micah Goldblum, Jonas Geiping, Tom Goldstein
University of Maryland; University of Chicago; New York University
As industrial applications are increasingly automated by machine learning models, enforcing personal data ownership and intellectual property rights requires tracing training data back to their rightful owners. Membership inference algorithms approach this problem by using statistical techniques to discern whether a target sample was included in a model's training set. However, existing methods only utilize the unaltered target sample or simple augmentations of the target to compute statistics. Such a sparse sampling of the model's behavior carries little information, leading to poor inference capabilities. In this work, we use adversarial tools to directly optimize for queries that are discriminative and diverse. Our improvements achieve significantly more accurate membership inference than existing methods, especially in offline scenarios and in the low false-positive regime which is critical in legal settings. Code is available at https://github.com/YuxinWenRick/canary-in-a-coalmine.
INTRODUCTION
Membership inference algorithms are designed to determine whether a target data point was present in the training set of a model. Membership inference is often studied in the context of ML privacy, as there are situations where belonging to a dataset is itself sensitive information (e.g. a model trained on a group of people with a rare disease). However, it is also relevant in other social and regulatory contexts as legislators have begun developing a slew of regulations with the intention of protecting data ownership. The right to be forgotten, which was written into the European Union's strict GDPR law, has important implications for the operation of ML-as-a-service (MLaaS) providers (Wilka et al., 2017;Truong et al., 2021). As one example, Veale et al. (2018) discuss that machine learning models could legally (in terms of the GDPR) fall into the category of "personal data", which equips all parties represented in the data with rights to restrict processing and to object to their inclusion. However, such rights are vacuous if enforcement agencies are unable to detect when they are violated. Membership inference could potentially be used as a legal tool against a noncompliant or malicious MLaaS provider.
Because membership inference is a difficult task, the typical setting for existing work is generous to the attacker and assumes full white-box access to model weights. In the aforementioned legal scenario this is not a realistic assumption. Organizations have an understandable interest in keeping their proprietary model weights secret and, short of a legal search warrant, often only provide black-box querying to their clients (OpenAI, 2020). Moreover, even if a regulatory agency obtained white-box access via an audit, for example, a malicious provider could adversarially spoof the reported weights to cover up any violations.
In this paper, we achieve state-of-the-art performance for membership inference in the black-box setting by using a new adversarial approach. We observe that previous work (Shokri et al., 2017; Yeom et al., 2018; Salem et al., 2018) improves membership inference attacks through a variety of creative strategies, but these methods query the targeted model using only the original target data point or its augmentations. We instead learn query vectors that are maximally discriminative; they separate all models trained with the target data point from all models trained without it. We show that this strategy reliably results in more precise predictions than the baseline method for three different datasets, four different model architectures, and even models trained with differential privacy guarantees.
2 BACKGROUND AND RELATED WORK
Homer et al. (2008) originated the idea of membership inference attacks (MIAs) by using aggregated information about SNPs to isolate a specific genome present in the underlying dataset with high probability. Such attacks on genomics data are facilitated by small sample sizes and the richness of information present in each DNA sequence, which for humans can be up to three billion base pairs. Similarly, the overparametrized regime of deep learning makes neural models vulnerable to MIAs. Yeom et al. (2018) designed the first attacks on deep neural networks by leveraging overfitting to the training data: members exhibit statistically lower loss values than non-members.
Since their inception, improved MIAs have been developed across different problem settings and threat models with varying levels of adversarial knowledge. Broadly speaking, MIAs can be categorized into metric-based approaches and binary classifier approaches (Hu et al., 2021). The former utilizes a variety of calculated statistics to ascertain membership, while the latter involves training shadow models and using a neural network to learn when model outputs are indicative of training on the target image (Shokri et al., 2017; Truong et al., 2021; Salem et al., 2018).
More specifically, existing metric-based approaches infer the presence of a sample by monitoring model behavior in terms of correctness (Yeom et al., 2018;Choquette-Choo et al., 2021;Bentley et al., 2020;Irolla & Châtel, 2019;Sablayrolles et al., 2019), loss (Yeom et al., 2018;Sablayrolles et al., 2019), confidence (Salem et al., 2018), and entropy (Song & Mittal, 2021;Salem et al., 2018). The ability to query such metrics at various points during training has been shown to further improve membership inference. Liu et al. (2022) devise a model distillation approach to simulate the loss trajectories during training, and Jagielski et al. (2022b) leverage continual updates to model parameters to get multiple trajectory points.
Despite the vast literature on MIAs, all existing methods in both categories rely solely on the data point x whose membership status is in question -metric-based approaches compute statistics based on x * or augmentations of x * and binary classifiers take x * as an input and output membership status directly. Our work hinges on the observation that an optimized canary image x mal can be a more effective litmus test for determining the membership of x * . Note that this terminology is separate from the use in Zanella-Béguelin et al. (2020) and Carlini et al. (2019), where a canary refers to a sequence that serves as a proxy to test memorization of sensitive data in language models. It also differs from the canary-based gradient attack in Pasquini et al. (2021), where a malicious federated learning server sends adversarial weights to users to infer properties about individual user data even with secure aggregation.
The metric used for assessing the efficacy of an MIA has been the subject of some debate. A commonly used approach is balanced attack accuracy, which is an empirically determined probability of correctly ascertaining membership. However, it has been pointed out that this metric is inadequate because it implicitly assigns equal weight to both classes of mistakes (i.e. false positives and false negatives) and it is an average-case metric. The latter characteristic is especially troubling because meaningful privacy should protect minorities and not be measured solely on effectiveness for the majority. A good alternative to address these shortcomings is to provide the receiver operating characteristic (ROC) curve. This metric reports the true positive rate (TPR) at each false positive rate (FPR) by varying the detection threshold. One way to distill the information present in the ROC curve is by computing the area under the curve (AUC): more area means a higher TPR across all FPRs on average. However, more meaningful violations of privacy occur at a low FPR. Methods that optimize solely for AUC can overstress the importance of high TPR at high FPR, a regime inherently protected by plausible deniability. In our work, we report both AUC and numerical results at the low FPR thresholds deemed acceptable in prior work, for ease of comparison.
There have been efforts to characterize the types of models and data most vulnerable to the MIAs described above. Empirical work has shown the increased privacy risk for more overparametrized models (Yeom et al., 2018;Leino & Fredrikson, 2020), which was made rigorous by Tan et al. (2022b) for linear regression models with Gaussian data. Tan et al. (2022a) show the overparameterization/privacy tradeoff can be improved by using wider networks and early stopping to prevent overfitting. From the data point of view, Carlini et al. (2022c) show that data at the tails of the training distribution are more vulnerable, and efforts to side-step the privacy leakage by removing tail data points just creates a new set of vulnerable data. Jagielski et al. (2022a) show data points encountered early in training are "forgotten" and thus more protected from MIAs than data encountered late in training.
HATCHING A CANARY
In this section, we expound upon the threat model for the type of membership inference we perform. We then provide additional background on metric-based MIA through likelihood ratio tests, before describing how to optimize the canary query data point.
THREAT MODELS
Membership inference is a useful tool in many real-world scenarios. For example, suppose an MLaaS company trains an image classifier by scraping large amounts of online images and using data from users/clients to maximize model performance. A client requests that their data be unlearned from the company's model, via their right to be forgotten, and wants to test compliance by performing membership inference on their own private image. We assume the client also has the ability to scrape online data points, which may or may not be in the training data of the target classifier. However, the target model can only be accessed through an API that returns predictions and confidence scores, hiding weights and intermediate activations.
We formulate two threat models, where the trainer is the company and the attacker is the client as described above:
Online Threat Model. As in Carlini et al. (2022b), we assume there exists a public training algorithm T (including the model architecture) and a universal dataset D. The trainer trains a target model θ_t on a random subset D_t ⊆ D through T. Given a sensitive point (x*, y*) ∈ D, the attacker aims to determine whether (x*, y*) ∈ D_t or (x*, y*) ∉ D_t. The target model parameters are protected, and the attacker has limited query access to the target model and its confidence f_{θ_t}(x)_y for any (x, y).
We use the term online to indicate that the attacker can modify their membership inference strategy as a function of (x * , y * ). A more conservative threat model is the offline variant, where the attacker must a priori decide on a fixed strategy to utilize across all sensitive data points. This is more realistic when the strategy involves training many shadow models. These shadow models will allow the attacker to directly inspect the properties of models, trained similarly to the target model, when they are trained with or without the target image, but the training process is computationally expensive.
Offline Threat Model. As above, the trainer trains a target model on D_t ⊆ D with T. However, now we assume the attacker only has access to an auxiliary dataset D_aux ⊆ D to prepare their attack. The set of sensitive data points D_test ⊆ D is defined to have the properties D_aux ∩ D_test = ∅ but D_t ∩ D_test ≠ ∅. Again, the attacker has limited query access to the target model and its confidence f_{θ_t}(x)_y for any (x, y).
LIKELIHOOD RATIO ATTACKS
As a baseline, we start out with the metric-based Likelihood Ratio Attack (LiRA) introduced by . In the online threat model, a LiRA attacker first trains N shadow models S = {θ 1 , ..., θ N } on randomized even splits of the dataset D. For any data point (x, y) ∈ D, it follows that there are on average N/2 OUT shadow models trained without (x, y) and N/2 IN shadow models trained with (x, y). This allows the attacker to run membership inference using a joint pool of shadow models, without having to retrain models for every new trial data point. Given a target point x * and its label y * , an attacker calculates confidence scores of IN models
S_in = {θ_1^in, ..., θ_n^in} and OUT models S_out = {θ_1^out, ..., θ_m^out}. Confidence scores are scaled via
$$\phi\big(f_\theta(x^*)_{y^*}\big) = \log \frac{f_\theta(x^*)_{y^*}}{1 - f_\theta(x^*)_{y^*}}, \tag{1}$$
where $f_\theta(x)_y$ denotes the confidence score from the model θ on the point (x, y). This scaling approximately standardizes the confidence distribution, as the distribution of the unnormalized confidence scores is often non-Gaussian. After retrieving the scaled scores for IN and OUT models, the attacker fits them to two separate Gaussian distributions denoted $\mathcal{N}(\mu_{in}, \sigma^2_{in})$ and $\mathcal{N}(\mu_{out}, \sigma^2_{out})$ respectively. Then, the attacker queries the target model with (x*, y*) and computes the scaled confidence score of the target model $\mathrm{conf}_t = \phi(f_{\theta_t}(x^*)_{y^*})$. Finally, the probability of (x*, y*) being in the training data of θ_t is calculated as
$$\frac{p\big(\mathrm{conf}_t \mid \mathcal{N}(\mu_{in}, \sigma^2_{in})\big)}{p\big(\mathrm{conf}_t \mid \mathcal{N}(\mu_{out}, \sigma^2_{out})\big)}, \tag{2}$$
where $p(\mathrm{conf} \mid \mathcal{N}(\mu, \sigma^2))$ calculates the probability of conf under $\mathcal{N}(\mu, \sigma^2)$.
For the offline threat model, the attacker exclusively produces OUT shadow models by training on a set of randomized datasets fully disjoint from the possible sensitive data. For the sensitive data point (x * , y * ), the final score is now calculated as:
$$1 - p\big(\mathrm{conf}_t \mid \mathcal{N}(\mu_{out}, \sigma^2_{out})\big).$$
Though assessing membership this way is more challenging, the offline model allows the attacker to avoid having to train any new models at inference time in response to a new (x * , y * ) pair -a more realistic setting if the attacker is a regulatory agency responding to malpractice claims by many users, for example.
In practice, modern machine learning models are trained with data augmentations. Both the online and offline methods above can be improved if the attacker generates k augmented target data points {x 1 , ..., x k }, performs the above probability test on each of the k augmented samples, and averages the resulting scores.
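A small sketch of the scoring computations in Eqs. (1)-(2), including the offline variant and averaging over augmentations, is given below (NumPy/SciPy, with hypothetical helper names). The inputs are assumed to already be softmax confidences of the true class.

```python
import numpy as np
from scipy.stats import norm

def scale(conf, eps=1e-6):
    # Eq. (1): logit-scale the model's confidence on the true label.
    conf = np.clip(np.asarray(conf, dtype=np.float64), eps, 1 - eps)
    return np.log(conf / (1 - conf))

def lira_online(conf_target, confs_in, confs_out):
    # Eq. (2): likelihood ratio under Gaussians fit to IN/OUT shadow confidences.
    s_t, s_in, s_out = scale(conf_target), scale(confs_in), scale(confs_out)
    p_in = norm.pdf(s_t, loc=s_in.mean(), scale=s_in.std() + 1e-12)
    p_out = norm.pdf(s_t, loc=s_out.mean(), scale=s_out.std() + 1e-12)
    return p_in / p_out

def lira_offline(conf_target, confs_out):
    # Offline score as written in the text; other formulations use a one-sided tail probability.
    s_t, s_out = scale(conf_target), scale(confs_out)
    return 1 - norm.pdf(s_t, loc=s_out.mean(), scale=s_out.std() + 1e-12)

def lira_with_augmentations(aug_conf_target, aug_confs_in, aug_confs_out):
    # Average the per-augmentation scores over k augmented copies of the target point.
    scores = [lira_online(c_t, c_in, c_out)
              for c_t, c_in, c_out in zip(aug_conf_target, aug_confs_in, aug_confs_out)]
    return float(np.mean(scores))
```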
MOTIVATION
Despite achieving previous state-of-the-art results, a LiRA attacker exclusively queries the target model with the target data (x * , y * ) and/or its augmentations. Even in the online setting, if (x * , y * ) is not an outlier that strongly influences the final IN model (Carlini et al., 2022c;Ilyas et al., 2022), then its impact on the model and thus the information gained from its confidence score is quite limited. Moreover, the longer a model trains, the more it becomes invariant to its data augmentations, so the ensemble of augmented target samples might still lack sufficient information to ascertain membership.
Since the threat model does not bar the attacker from querying arbitrary data, we ask whether more information about the target model can be obtained by synthesizing more powerful queries. Intuitively, we want the final synthesized query to always give statistically separable signals for models trained with/without the original sensitive sample. Existing work has shown two models trained with the same data point, often share similar properties at the decision boundary near that point (Somepalli et al., 2022). Using the shadow models from LiRA, the attacker can therefore adversarially optimize a query (near the original x * ) such that the distribution of confidence scores for IN models is as different as possible from the distribution of confidence scores for OUT models. We call the synthesized query a canary because it is an indicator for the membership status of x * .
OPTIMIZING FOR CANARY SUCCESS
We now formally present our strategy to generate adversarial queries. For a target data point (x * , y * ), its IN shadow models S in = {θ in 1 , ..., θ in n }, and its OUT shadow models S out = {θ out 1 , ..., θ out m }, the attacker's goal is to find a data point x mal such that IN models and OUT models have different behaviors (logits/confidence scores/losses). In the simplest case, the attacker can optimize x mal so that IN shadow models have low losses on x mal and OUT models have high losses on x mal . This can be achieved simply by minimizing the following objective:
\operatorname*{arg\,min}_{x_{mal} \in I} \; \frac{1}{n}\sum_{i=1}^{n} L(x_{mal}, y^*, \theta^{in}_i) + \frac{1}{m}\sum_{i=1}^{m} L_{out}(x_{mal}, y^*, \theta^{out}_i),    (3)
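A minimal PyTorch sketch of this objective, assuming L is the standard cross-entropy and L out = −log(1 − f θ (x) y ) as defined below (function and variable names are ours, not the paper's):

```python
import torch
import torch.nn.functional as F

def canary_objective(x_mal, y_star, in_models, out_models):
    """Equation (3): task loss on IN models plus
    L_out(x, y, theta) = -log(1 - f_theta(x)_y) on OUT models."""
    y = torch.tensor([y_star], device=x_mal.device)
    loss = 0.0
    for theta in in_models:       # drive IN models toward low loss on x_mal
        loss = loss + F.cross_entropy(theta(x_mal), y) / max(len(in_models), 1)
    for theta in out_models:      # drive OUT models away from the label y*
        p_y = F.softmax(theta(x_mal), dim=1)[0, y_star].clamp(max=1 - 1e-6)
        loss = loss - torch.log(1 - p_y) / max(len(out_models), 1)
    return loss
```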
where I is the feasible data point domain, L is the main task loss, and L out is − log(1 − f θ (x) y ). We further evaluate the optimal choice of objective functions in Section 4.4.

Algorithm 1 Canary
1: Input: IN shadow models S in = {θ in 1 , ..., θ in n }, OUT shadow models S out = {θ out 1 , ..., θ out m }, target data point (x * , y * ), batch size b, optimization steps T , perturbation bound ϵ, input domain I
2: ∆ mal = 0
3: for 1, ..., T do
4:     Shuffle the index for S in and S out
5:     Calculate loss on OUT models:
6:     g ∆mal = ∇ ∆mal (1/b) Σ_{i=1}^{b} L out (x * + ∆ mal , y * , θ out i )
7:     Calculate loss on IN models (removed when offline):
8:     g ∆mal += ∇ ∆mal (1/b) Σ_{i=1}^{b} L(x * + ∆ mal , y * , θ in i )
9:     Update ∆ mal based on g ∆mal
10:    Project ∆ mal onto ||∆ mal || ∞ ≤ ϵ and (x * + ∆ mal ) ∈ I
11: x mal = x * + ∆ mal
12: return x mal

Though in principle an attacker can construct a canary query as described above, in practice the optimization problem is intractable. Accumulating the loss on all shadow models requires a significant amount of computational resources, especially for a large number of shadow models or models with many parameters. Another way to conceptualize the problem at hand is to think of x mal as the model parameters and the shadow models as training data points in traditional machine learning. When framed this way, the number of parameters in our model x mal is much greater than the number of data points |S in | + |S out |. For CIFAR-10 the number of parameters in x mal is 3 × 32 × 32 = 3072, but the largest number of shadow models used in the original LiRA paper is merely 256. Therefore, if we follow the loss in Equation (3), x mal will overfit to the shadow models and not be able to generalize to the target model.
To alleviate the computational burden and the overfitting problem, we make some modifications to the canary generation process. During optimization, we stochastically sample b IN shadow models from S in and b OUT shadow models from S out for each iteration, where b < min(n, m). This is equivalent to stochastic mini-batch training for batch size b, which might be able to help the query generalize better (Geiping et al., 2021). We find that such a mini-batching strategy does reduce the required computation, but it does not completely solve the overfitting problem. An attacker can easily find an x mal with a very low loss on Equation (3), and perfect separation of confidence scores from IN models and OUT models. However, querying with such a canary x mal results in random confidence for the holdout shadow models, which indicates that the canary is also not generalizable to the unseen target model.
To solve this, instead of searching for x mal on the whole feasible data domain, we initialize the adversarial query with the target image or the target image with a small noise. Meanwhile, we add an ϵ bound to the perturbation between x mal and x * . Intuitively, the hope is that x mal and x * now share the same loss basin, which prevents x mal from falling into a random, suboptimal local minimum of Equation (3). We summarize our complete algorithm Canary in Algorithm 1. In the offline case, we remove line 8 and only use OUT models during the optimization. We also illustrate our reasoning in Figure 1, where we visualize how the adversarially trained queries might alter the original queries' decision boundaries and provide more confident and diverse predictions.
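Putting the pieces together, a hedged sketch of the full canary optimization loop is given below, using the hyperparameters from Section 4.1; `canary_objective` is the sketch above, the ϵ bound is interpreted here on a [0, 1] pixel scale (an assumption on our part), and all names are illustrative:

```python
import random
import torch

def generate_canary(x_star, y_star, in_models, out_models,
                    eps=2 / 255, steps=40, lr=0.05, batch=2, offline=False):
    """Sketch of Algorithm 1: optimize a perturbation delta of the target
    image so that sub-sampled IN and OUT shadow models disagree on it."""
    delta = torch.zeros_like(x_star, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        ins = [] if offline else random.sample(in_models, batch)
        outs = random.sample(out_models, batch)
        opt.zero_grad()
        loss = canary_objective(x_star + delta, y_star, ins, outs)
        loss.backward()
        opt.step()
        with torch.no_grad():  # project onto the eps-ball and the input domain
            delta.clamp_(-eps, eps)
            delta.copy_((x_star + delta).clamp(0, 1) - x_star)
    return (x_star + delta).detach()
```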
Once a suitable canary has been generated, we follow the same metric-based evaluation strategy described in Section 3.2 but replace (x * , y * ) with (x mal , y * ).
EXPERIMENTS
In this section, we first show that the Canary attack can reliably improve LiRA results under different datasets and different models for both online and offline settings. Further, we investigate the algorithm thoroughly through a series of ablation studies.
EXPERIMENTAL SETTING
We follow the setting of Carlini et al. (2022a) for our main experiment on CIFAR-10 and CIFAR-100 for full comparability. We first train 65 wide ResNets (WRN28-10) (Zagoruyko & Komodakis, 2016) with random even splits of 50000 images to reach 92% and 71% test accuracy for CIFAR-10 and CIFAR-100 respectively. For MNIST, we train 65 8-layer ResNets (He et al., 2016) with random even splits to reach 97% test accuracy. During the experiments, we report the average metrics over 5 runs with different random seeds. For each run, we randomly select a model as the target model and the remaining 64 models as shadow models, and test on 5000 random samples with 10 queries.
For the hyperparameters in the Canary attack, we empirically choose ϵ = 2 for CIFAR-10 & CIFAR-100 and ϵ = 6 for MNIST, which we will ablate in Section 4.4. We sample b = 2 shadow models for each iteration and optimize each query for 40 optimization steps using Adam (Kingma & Ba, 2014) with a learning rate of 0.05. For L and L out , we choose to directly minimize/maximize the logits before a softmax on the target label. All experiments in this paper are conducted on a single NVIDIA RTX A4000 with 16GB of GPU memory, which allows us to load all shadow models and optimize 10 adversarial queries at the same time, but the experiments could be done with a smaller GPU by optimizing one query at a time or reloading the subsample of models for each iteration.
EVALUATION METRICS
In this paper, we mainly report two metrics: AUC (area under the curve) score of the ROC (receiver operating characteristic) curve and TPR@1%FPR (true positive rate when false positive rate is 1%).
One can construct the full ROC by shifting the probability threshold of the attack to show the TPR under each FPR. The AUC measures the average power of the attack. As mentioned in Section 2, an attacker might be more interested in the TPR at low FPR, so we also specifically report TPR@1%FPR.
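Both metrics can be computed directly from the per-example attack scores; a small sketch (assuming scikit-learn is available; function names are ours):

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def attack_metrics(scores, is_member, target_fpr=0.01):
    """AUC of the attack ROC and TPR at a fixed low FPR (here 1%)."""
    fpr, tpr, _ = roc_curve(is_member, scores)
    tpr_at_fpr = float(np.interp(target_fpr, fpr, tpr))  # TPR@1%FPR
    return auc(fpr, tpr), tpr_at_fpr
```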
CANARY ATTACKS HELP MEMBERSHIP INFERENCE
We show our main results in Table 1 for three datasets. Canary attacks are effective in both online and offline scenarios. The improvement of TPR@1%FPR is significant for all datasets. The difference is especially notable for online CIFAR-10, where we achieve a 4.14% boost over the baseline LiRA (a relative improvement in TPR of 23%). In the case of online CIFAR-100, where the baseline already achieves a very high AUC, Canary attacks only provide an extra 0.19% over the baseline. On average, Canary attacks are most powerful in the more realistic offline scenario. We gain over 3% boost on AUC scores on all datasets and over 2.75% TPR@1%FPR boost for CIFAR-10 and CIFAR-100.
Overall, the improvement on MNIST is relatively small. We believe this can be attributed to the lack of diversity for MNIST, which is known to make membership inference more challenging. In this setting, the difference between the decision boundaries of IN models and OUT models is less pronounced, so it is more difficult to make diverse and reliable queries. Despite these challenges, we still see improvement over LiRA in the offline case -the AUC score is close to random (50.82%) for LiRA here and Canary attacks can improve this to 54.61%.
In addition to WRN28-10, we further verify the ability of Canary attacks for three other model architectures on CIFAR-10: ResNet-18 (He et al., 2016), VGG-16 (Simonyan & Zisserman, 2014), and ConvMixer (Trockman & Kolter, 2022). In Table 2, Canary attacks consistently provide an improvement over different models. The performance of Canary attacks should be related to the reproducibility of the model architecture. If the model decision boundary is highly reproducible, the shadow models should share similar decision boundaries with the target model, and the adversarial query trained on the shadow models will be more transferable to the target model. The order of the model architectures in Table 2 is sorted in descending order of the decision boundary reproducibility according to Somepalli et al. (2022). Indeed, we see from Table 2 that models with higher reproducibility do correlate with more improvement for the online scenario.
ABLATION EXPERIMENTS
In this section, we provide ablation experiments on several crucial hyperparameters of the discussed Canary attacks.
Number of shadow models. As described before, the number of shadow models is comparable to the number of data points in traditional machine learning. We test Canary attacks with 5 different numbers of shadow models: 4, 8, 16, 32, and 64. We see from Figure 2(a), that using more shadow models yields a higher true positive rate when the false positive rate is low. Interestingly, as the number of shadow models initially decreases, the overall performance drops slightly, but such an effect diminishes after the number of shadow models is greater than 16.
Number of queries.
Because of the stochasticity of optimization, different queries can fall into different minima of Equation (3), returning different sets of confidence scores and thus more ways to probe the target model. Therefore, it is essential to investigate how the number of queries affects the membership inference results. We plot the results in Figure 2(b). The ensemble of more adversarial queries consistently enhances both metrics, which means different queries indeed give different signals about the target model.
ϵ bound. The choice of ϵ is important and is closely tied to transferability. As shown in Figure 2(c), the performance of Canary drops very fast after ϵ = 2. When ϵ = 1 the TPR@1%FPR is slightly lower than when ϵ = 2, which indicates that a perturbation within ϵ = 1 might be too small to be effective.
Batch size. In Figure 2(d), we test Canary with different batch sizes. The mini-batch strategy does improve the performance of Canary attacks. Especially for TPR@1%FPR, the difference is around 2% between a batch size of 2^1 and 2^5. Optimizing with a smaller batch size prevents the adversarial query from overfitting to the shadow models. Meanwhile, it massively reduces the GPU memory required for the gradient graph, which is a win-win situation for the attacker. Choice of Objectives for L and L out . The choice of the target objectives L and L out is also crucial to the generalization of Canary attacks. We test six different objectives to create adversarial queries: 1) CE/reverse CE. 2) CE/CE on a random label other than the true label. 3) CW (Carlini & Wagner, 2017)/reverse CW. 4) CW/CW on a random label. 5) Directly minimize the scaled log score/maximize the scaled log score. 6) Directly minimize the pre-softmax logits of the true label/maximize the pre-softmax logits of the true label. We show the results in Table 3.
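For illustration, the best-performing "Logits" objective could be written roughly as follows (a sketch following the sign convention described in Section 4.1; only the separation between the IN and OUT confidence distributions matters for the online test, and all names are ours):

```python
import torch

def logit_objective(x_mal, y_star, in_models, out_models):
    """'Logits' objective from Table 3: minimize the pre-softmax logit of
    the true label on IN models and maximize it on OUT models."""
    obj = 0.0
    for theta in in_models:
        obj = obj + theta(x_mal)[0, y_star] / max(len(in_models), 1)
    for theta in out_models:
        obj = obj - theta(x_mal)[0, y_star] / max(len(out_models), 1)
    return obj  # minimized with respect to x_mal
```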
During the experiment, for all objectives above, we can easily get very low losses at the end of the optimization, and create Canary queries that perfectly separate the training shadow models. Surprisingly, minimizing/maximizing the pre-softmax logits gives us the biggest improvement, even though it does not explicitly suppress the logits for other labels like other objectives do. Overall, any other choices can also improve the baseline in the online scenario. However, in the offline scenario, only CW/CW and pre-softmax logits provide improvements to TPR@1%FPR.
DIFFERENTIAL PRIVACY
We now challenge Canary attacks with differential privacy guarantees (Abadi et al., 2016). Differential privacy is designed to prevent the leakage of information about the training data. We evaluate Canary attacks in two settings. The first setting only uses norm bounding, where the norm bounding C = 5 and ϵ = ∞, and in another setting, C = 5 and ϵ = 100. In order to follow the convention of practical differential privacy, we replace Batch Normalization with Group Normalization with G = 16 for ResNet-18.
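For reference, a simplified and intentionally naive per-example clipping step in the spirit of DP-SGD is sketched below; the norm bound C = 5 matches the setting above, while the noise multiplier for the ϵ = 100 run would in practice be chosen by a privacy accountant, which we do not model here (function names are illustrative):

```python
import torch

def dp_sgd_step(model, loss_fn, xs, ys, optimizer, clip_c=5.0, noise_mult=0.0):
    """Illustrative DP-SGD step: clip each per-example gradient to norm C,
    sum, add Gaussian noise, then average. noise_mult = 0 corresponds to the
    'norm bounding only' (epsilon = infinity) setting."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(xs, ys):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_c / (norm + 1e-12)).clamp(max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)
    for p, s in zip(params, summed):
        noise = torch.randn_like(s) * noise_mult * clip_c
        p.grad = (s + noise) / len(xs)
    optimizer.step()
```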
We see in Table 4 that Canary attacks can provide some limited improvement. Both LiRA and Canary attacks are notably less effective when a small amount of noise ϵ = 100 is added during training, which is a very loose bound in practice. However, training with such a powerful defense makes the test accuracy of the target model decrease from 88% to 44%. Differential privacy is still a very effective defense for membership inference attacks, but Canary attacks reliably increase the success chance of membership inference over LiRA.
CONCLUSION
We explore a novel way to enhance membership inference techniques by creating ensembles of adversarial queries. These adversarial queries are optimized to provide maximally different outcomes for the model trained with/without the target data sample. We also investigate and discuss strategies to make the queries trained on the shadow models transferable to the target model. Through a series of experiments, we show that Canary attacks reliably enhance both online and offline membership inference algorithms under three different datasets, four different models, and differential privacy.
Although Canary attacks perform very well in the above experiments, there are several relevant limitations. The optimization process for constructing the ensemble of canaries is markedly more computationally expensive than using data augmentations of the target data point as in Carlini et al. (2022b). Furthermore, effective optimization routines for queries could be challenging to design, especially when considering future applications of this approach to discrete data, like text or tabular data. In principle, we believe it should be possible to devise a strategy for making adversarial queries transferable that does not require ϵ-bounds, but so far we have found the method detailed in Canary to be the most successful approach.
ETHICS STATEMENT
Although our key goal is to develop a better membership inference algorithm to help protect data ownership, this technique might be used by a malicious party as a tool for breaching the privacy of the model trainer. On one hand, we find this acceptable, due to the inherent power imbalance between agents that train models and agents that own data. On the other hand, we believe that our results do not represent a fundamental shift in the capabilities of membership inference attacks.

We find that with increasing width of the WideResNet, attack success increases reliably. This is potentially related to an increase in repeatability of the decision boundaries of these models. In Appendix A.2, we provide more results with more strict privacy budgets. When the ε of differential privacy is 10, the AUC of the attack is very close to 50%, which is the random guess, and the TPR@1%FPR is almost zero. Differential privacy is still a very strong defense against membership inference. When ε is too large, the attack overfits and does not generalize to the unseen test model that is attacked during membership inference. As shown in Appendix A.3, if we increase ε, the Canary objective monotonically decreases, evaluated on the "training" shadow models, but attack success peaks and then decreases.
A.4 CANARY V.S. RANDOM NOISE
We test adding random noise within the same ε ball on the original target image, as shown in Table 5. The attacker doesn't benefit from adding random perturbations.
A.5 COMPUTATIONAL COST

Table 6 shows the average attack time in seconds on a single NVIDIA RTX A4000 over 5000 target images with a total of 64 shadow models.
Figure 1: Query Decision Boundary in Model Parameter Space. We illustrate our idea by plotting the decision boundaries of two queries x 1 and x 2 in model parameter space. In this case, the target image is indeed in the training data of the (green) target model. We sketch three cases. (a) Both augmented queries are unable to separate the two distributions, and the membership inference fails. (b) Two optimal adversarial queries are generated that are both diverse and, on average, separate both distributions, and the attack succeeds. (c) Without the constraints, adversarial queries can overfit and again lead to attack failure.
Figure 2: Hyperparameter Ablation Experiments. We provide ablation experiments on several crucial hyperparameters: number of shadow models, number of adversarial queries, ϵ bound, and batch size.
Figure 3: Results with Different Widths of WRN28.
Figure 4: More Results under Differential Privacy. In all cases except No DP, the norm clipping is 5. Privacy Budget here refers to (ε, 1e−5)-DP, except for the last entry, which sets no budget and does not clip.

A APPENDIX

A.1 ABLATION STUDY: MODEL WIDTH

Appendix A.1 shows the results on different model widths for WRN28. As claimed in Somepalli et al. (2022), wider networks have higher reproducibility. The performance of Canary correlates with the reproducibility (width) of the model.

A.2 ABLATION STUDY: PRIVACY BUDGET
Figure 5: Canary Loss (i.e. the objective evaluated on the shadow models used to optimize the canary attack) plotted against Performance (evaluated on the unseen test model) for various values of ε.
Table 1: Main Results on Different Datasets. For three datasets, Canary attacks are effective in both online and offline scenarios.

Online        CIFAR-10            CIFAR-100           MNIST
              AUC    TPR@1%FPR    AUC    TPR@1%FPR    AUC    TPR@1%FPR
LiRA          74.36  17.84        94.70  53.92        56.28  3.95
Canary        76.25  21.98        94.89  56.83        58.12  5.23
∆             +1.89  +4.14        +0.19  +2.91        +1.84  +1.28

Offline       AUC    TPR@1%FPR    AUC    TPR@1%FPR    AUC    TPR@1%FPR
LiRA          55.40  9.85         79.59  42.02        50.82  2.66
Canary        61.54  12.60        82.59  44.78        54.61  3.06
∆             +6.14  +2.75        +3.00  +2.76        +3.79  +0.40
Table 2: Results on Different Model Architectures. Canary attacks are able to consistently outperform LiRA over different models. The order of the model architectures is sorted in descending order of the decision boundary reproducibility according to Somepalli et al. (2022). T@1%F stands for TPR@1%FPR.

Online        WRN28-10        ResNet-18       VGG             ConvMixer
              AUC    T@1%F    AUC    T@1%F    AUC    T@1%F    AUC    T@1%F
LiRA          74.36  17.84    76.29  17.05    75.94  20.48    75.97  16.58
Canary        76.25  21.98    76.93  19.34    77.63  20.87    76.39  17.05
∆             +1.89  +4.14    +0.64  +2.29    +1.69  +0.39    +0.42  +0.47

Offline       AUC    T@1%F    AUC    T@1%F    AUC    T@1%F    AUC    T@1%F
LiRA          55.40  9.85     55.15  6.97     49.96  9.77     54.42  7.96
Canary        61.54  12.60    64.09  11.58    65.55  15.16    62.22  9.93
∆             +6.14  +2.75    +8.94  +4.61    +15.59 +5.39    +7.80  +1.97
Table 3: Results with Different Objectives. We evaluate Canary attacks on different objectives. Directly minimizing/maximizing the pre-softmax logits gives the biggest improvement in both the online and offline settings.

              Online               Offline
              AUC    TPR@1%FPR     AUC    TPR@1%FPR
LiRA          74.36  17.84         55.40  9.85
CE/r. CE      75.55  19.85         56.83  9.22
CE/CE         75.55  19.89         59.23  9.77
CW/r. CW      75.37  19.97         56.57  9.26
CW/CW         75.67  20.99         59.27  11.30
Log. Logits   75.82  20.01         59.16  8.04
Logits        76.25  21.98         61.54  12.60
Table 4: Results under Differential Privacy. In both cases, the norm clipping parameter is 5. Even when the target model is trained with differential privacy, Canary attacks reliably increase the success of membership inference.

                        Online               Offline
                        AUC    TPR@1%FPR     AUC    TPR@1%FPR
ϵ = ∞      LiRA         66.25  9.41          56.12  3.27
           Canary       67.17  9.93          59.73  4.41
           ∆            +0.92  +0.52         +3.61  +1.14
ϵ = 100    LiRA         52.17  1.18          49.93  1.18
           Canary       53.18  1.81          51.38  1.14
           ∆            +1.01  +0.63         +1.45  -0.04
Table 5: Comparison with Random Noise Perturbations. A random noise perturbation in the same ε-ball as the canary does not increase membership success. In this sense, the optimized behavior of the canary attack is crucial.

              AUC    TPR@1%FPR
LiRA          74.36  17.84
Random Noise  74.30  17.87
Canary        76.25  21.98

A.3 CANARY LOSS V.S. PERFORMANCE
Table 6: Computational Cost in Seconds, for generation of the attack and query into the model. Not included for both methods is the computational cost to create the array of shadow models.

              1 Query   2 Queries   5 Queries   10 Queries
LiRA          0.26      0.27        0.36        0.63
Canary        1.03      1.04        1.44        2.44
ACKNOWLEDGEMENTS

This work was supported by the Office of Naval Research (#N000142112557), the AFOSR MURI program, DARPA GARD (HR00112020007), and the National Science Foundation (IIS-2212182 and DMS-1912866).
Deep learning with differential privacy. Martin Abadi, Andy Chu, Ian Goodfellow, Ilya H Brendan Mcmahan, Kunal Mironov, Li Talwar, Zhang, Proceedings of the 2016 ACM SIGSAC conference on computer and communications security. the 2016 ACM SIGSAC conference on computer and communications securityMartin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC conference on computer and communications security, pp. 308-318, 2016.
Quantifying membership inference vulnerability via generalization gap and other model metrics. W Jason, Daniel Bentley, Gary Gibney, Sumit Kumar Hoppenworth, Jha, arXiv:2009.05669arXiv preprintJason W Bentley, Daniel Gibney, Gary Hoppenworth, and Sumit Kumar Jha. Quantifying mem- bership inference vulnerability via generalization gap and other model metrics. arXiv preprint arXiv:2009.05669, 2020.
Towards evaluating the robustness of neural networks. Nicholas Carlini, David Wagner, 2017 ieee symposium on security and privacy (sp). IeeeNicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 ieee symposium on security and privacy (sp), pp. 39-57. Ieee, 2017.
The secret sharer: Evaluating and testing unintended memorization in neural networks. Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, Dawn Song, 28th USENIX Security Symposium (USENIX Security 19). Nicholas Carlini, Chang Liu,Úlfar Erlingsson, Jernej Kos, and Dawn Song. The secret sharer: Evaluating and testing unintended memorization in neural networks. In 28th USENIX Security Symposium (USENIX Security 19), pp. 267-284, 2019.
Membership inference attacks from first principles. Nicholas Carlini, Steve Chien, Milad Nasr, 2022 IEEE Symposium on Security and Privacy (SP). IEEEShuang Song, Andreas Terzis, and Florian TramerNicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, and Florian Tramer. Mem- bership inference attacks from first principles. In 2022 IEEE Symposium on Security and Privacy (SP), pp. 1897-1914. IEEE, 2022a.
Nicholas Carlini, Steve Chien, Milad Nasr, 10.48550/arXiv.2112.03570Shuang Song, Andreas Terzis, and Florian Tramer. Membership Inference Attacks From First Principles. Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, and Florian Tramer. Membership Inference Attacks From First Principles. arxiv:2112.03570[cs], April 2022b. doi: 10.48550/arXiv.2112.03570. URL http://arxiv.org/abs/2112.03570.
The privacy onion effect: Memorization is relative. Nicholas Carlini, Matthew Jagielski, Nicolas Papernot, Andreas Terzis, Florian Tramer, Chiyuan Zhang, arXiv:2206.10469arXiv preprintNicholas Carlini, Matthew Jagielski, Nicolas Papernot, Andreas Terzis, Florian Tramer, and Chiyuan Zhang. The privacy onion effect: Memorization is relative. arXiv preprint arXiv:2206.10469, 2022c.
Label-only membership inference attacks. Florian Christopher A Choquette-Choo, Nicholas Tramer, Nicolas Carlini, Papernot, International conference on machine learning. PMLRChristopher A Choquette-Choo, Florian Tramer, Nicholas Carlini, and Nicolas Papernot. Label-only membership inference attacks. In International conference on machine learning, pp. 1964-1974. PMLR, 2021.
Stochastic training is not necessary for generalization. Jonas Geiping, Micah Goldblum, E Phillip, Michael Pope, Tom Moeller, Goldstein, arXiv:2109.14119arXiv preprintJonas Geiping, Micah Goldblum, Phillip E Pope, Michael Moeller, and Tom Goldstein. Stochastic training is not necessary for generalization. arXiv preprint arXiv:2109.14119, 2021.
Deep residual learning for image recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog- nition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
Resolving individuals contributing trace amounts of dna to highly complex mixtures using high-density snp genotyping microarrays. Nils Homer, Szabolcs Szelinger, Margot Redman, David Duggan, Waibhav Tembe, Jill Muehling, V John, Dietrich A Pearson, Stephan, F Stanley, David W Nelson, Craig, PLoS genetics. 481000167Nils Homer, Szabolcs Szelinger, Margot Redman, David Duggan, Waibhav Tembe, Jill Muehling, John V Pearson, Dietrich A Stephan, Stanley F Nelson, and David W Craig. Resolving individuals contributing trace amounts of dna to highly complex mixtures using high-density snp genotyping microarrays. PLoS genetics, 4(8):e1000167, 2008.
Membership inference attacks on machine learning: A survey. Hongsheng Hu, Zoran Salcic, Lichao Sun, Gillian Dobbie, S Philip, Xuyun Yu, Zhang, ACM Computing Surveys. 2021Hongsheng Hu, Zoran Salcic, Lichao Sun, Gillian Dobbie, Philip S Yu, and Xuyun Zhang. Member- ship inference attacks on machine learning: A survey. ACM Computing Surveys (CSUR), 2021.
Datamodels: Predicting Predictions from Training Data. Andrew Ilyas, Sung Min Park, Logan Engstrom, Guillaume Leclerc, Aleksander Madry, 10.48550/arXiv.2202.00622cs, statAndrew Ilyas, Sung Min Park, Logan Engstrom, Guillaume Leclerc, and Aleksander Madry. Data- models: Predicting Predictions from Training Data. arxiv:2202.00622[cs, stat], February 2022. doi: 10.48550/arXiv.2202.00622. URL http://arxiv.org/abs/2202.00622.
Demystifying the membership inference attack. Paul Irolla, Grégory Châtel, 12th CMI Conference on Cybersecurity and Privacy (CMI). IEEEPaul Irolla and Grégory Châtel. Demystifying the membership inference attack. In 2019 12th CMI Conference on Cybersecurity and Privacy (CMI), pp. 1-7. IEEE, 2019.
Matthew Jagielski, Om Thakkar, Florian Tramèr, Daphne Ippolito, Katherine Lee, Nicholas Carlini, Eric Wallace, arXiv:2207.00099Shuang Song, Abhradeep Thakurta, Nicolas Papernot, et al. Measuring forgetting of memorized training examples. arXiv preprintMatthew Jagielski, Om Thakkar, Florian Tramèr, Daphne Ippolito, Katherine Lee, Nicholas Carlini, Eric Wallace, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, et al. Measuring forgetting of memorized training examples. arXiv preprint arXiv:2207.00099, 2022a.
How to combine membership-inference attacks on multiple updated models. Matthew Jagielski, Stanley Wu, Alina Oprea, Jonathan Ullman, Roxana Geambasu, arXiv:2205.06369arXiv preprintMatthew Jagielski, Stanley Wu, Alina Oprea, Jonathan Ullman, and Roxana Geambasu. How to combine membership-inference attacks on multiple updated models. arXiv preprint arXiv:2205.06369, 2022b.
Adam: A method for stochastic optimization. P Diederik, Jimmy Kingma, Ba, arXiv:1412.6980arXiv preprintDiederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Stolen memories: Leveraging model memorization for calibrated {White-Box} membership inference. Klas Leino, Matt Fredrikson, 29th USENIX security symposium (USENIX Security 20). Klas Leino and Matt Fredrikson. Stolen memories: Leveraging model memorization for calibrated {White-Box} membership inference. In 29th USENIX security symposium (USENIX Security 20), pp. 1605-1622, 2020.
Yiyong Liu, Zhengyu Zhao, Michael Backes, Yang Zhang, arXiv:2208.14933Membership inference attacks by exploiting loss trajectory. arXiv preprintYiyong Liu, Zhengyu Zhao, Michael Backes, and Yang Zhang. Membership inference attacks by exploiting loss trajectory. arXiv preprint arXiv:2208.14933, 2022.
. Openai, Api Openai, OpenAI. OpenAI API, June 2020. URL https://openai.com/blog/openai-api/.
Eluding secure aggregation in federated learning via model inconsistency. Dario Pasquini, Danilo Francati, Giuseppe Ateniese, arXiv:2111.07380arXiv preprintDario Pasquini, Danilo Francati, and Giuseppe Ateniese. Eluding secure aggregation in federated learning via model inconsistency. arXiv preprint arXiv:2111.07380, 2021.
Whitebox vs black-box: Bayes optimal strategies for membership inference. Alexandre Sablayrolles, Matthijs Douze, Cordelia Schmid, Yann Ollivier, Hervé Jégou, International Conference on Machine Learning. PMLRAlexandre Sablayrolles, Matthijs Douze, Cordelia Schmid, Yann Ollivier, and Hervé Jégou. White- box vs black-box: Bayes optimal strategies for membership inference. In International Confer- ence on Machine Learning, pp. 5558-5567. PMLR, 2019.
Ml-leaks: Model and data independent membership inference attacks and defenses on machine learning models. Ahmed Salem, Yang Zhang, Mathias Humbert, Pascal Berrang, Mario Fritz, Michael Backes, arXiv:1806.01246arXiv preprintAhmed Salem, Yang Zhang, Mathias Humbert, Pascal Berrang, Mario Fritz, and Michael Backes. Ml-leaks: Model and data independent membership inference attacks and defenses on machine learning models. arXiv preprint arXiv:1806.01246, 2018.
Membership inference attacks against machine learning models. Reza Shokri, Marco Stronati, Congzheng Song, Vitaly Shmatikov, 2017 IEEE symposium on security and privacy (SP). IEEEReza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. Membership inference at- tacks against machine learning models. In 2017 IEEE symposium on security and privacy (SP), pp. 3-18. IEEE, 2017.
Very deep convolutional networks for large-scale image recognition. Karen Simonyan, Andrew Zisserman, arXiv:1409.1556arXiv preprintKaren Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. Gowthami Somepalli, Liam Fowl, Arpit Bansal, Ping Yeh-Chiang, Yehuda Dar, Richard Baraniuk, Micah Goldblum, Tom Goldstein, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)Gowthami Somepalli, Liam Fowl, Arpit Bansal, Ping Yeh-Chiang, Yehuda Dar, Richard Baraniuk, Micah Goldblum, and Tom Goldstein. Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 13699-13708, June 2022.
Systematic evaluation of privacy risks of machine learning models. Liwei Song, Prateek Mittal, 30th USENIX Security Symposium (USENIX Security 21). Liwei Song and Prateek Mittal. Systematic evaluation of privacy risks of machine learning models. In 30th USENIX Security Symposium (USENIX Security 21), pp. 2615-2632, 2021.
Jasper Tan, Daniel Lejeune, Blake Mason, Hamid Javadi, Richard G Baraniuk, arXiv:2205.14055Benign overparameterization in membership inference with early stopping. arXiv preprintJasper Tan, Daniel LeJeune, Blake Mason, Hamid Javadi, and Richard G Baraniuk. Benign overpa- rameterization in membership inference with early stopping. arXiv preprint arXiv:2205.14055, 2022a.
Jasper Tan, Blake Mason, Hamid Javadi, Richard G Baraniuk, arXiv:2202.01243Parameters or privacy: A provable tradeoff between overparameterization and membership inference. arXiv preprintJasper Tan, Blake Mason, Hamid Javadi, and Richard G Baraniuk. Parameters or privacy: A provable tradeoff between overparameterization and membership inference. arXiv preprint arXiv:2202.01243, 2022b.
. Asher Trockman, Kolter, arXiv:2201.09792Patches are all you need? arXiv preprintAsher Trockman and J Zico Kolter. Patches are all you need? arXiv preprint arXiv:2201.09792, 2022.
Privacy preservation in federated learning: An insightful survey from the GDPR perspective. Nguyen Truong, Kai Sun, Siyao Wang, Florian Guitton, Yike Guo, 10.1016/j.cose.2021.102402Computers & Security. 110102402Nguyen Truong, Kai Sun, Siyao Wang, Florian Guitton, and YiKe Guo. Privacy preservation in federated learning: An insightful survey from the GDPR perspective. Computers & Security, 110: 102402, November 2021. ISSN 0167-4048. doi: 10.1016/j.cose.2021.102402. URL https: //www.sciencedirect.com/science/article/pii/S0167404821002261.
Algorithms that remember: Model inversion attacks and data protection law. Michael Veale, Reuben Binns, Lilian Edwards, https:/royalsocietypublishing.org/doi/full/10.1098/rsta.2018.0083Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 37620180083Michael Veale, Reuben Binns, and Lilian Edwards. Algorithms that remember: Model inversion attacks and data protection law. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133):20180083, November 2018. doi: 10.1098/rsta. 2018.0083. URL https://royalsocietypublishing.org/doi/full/10.1098/ rsta.2018.0083.
How Machines Learn: Where Do Companies Get Data for Machine Learning and What Licenses Do They Need. Rachel Wilka, Rachel Landy, Scott A Mckinney, Washington Journal of Law. 133Technology & ArtsRachel Wilka, Rachel Landy, and Scott A. McKinney. How Machines Learn: Where Do Companies Get Data for Machine Learning and What Licenses Do They Need. Washington Journal of Law, Technology & Arts, 13(3):217-244, 2017. URL https://heinonline.org/HOL/P?h= hein.journals/washjolta13&i=226.
Privacy risk in machine learning: Analyzing the connection to overfitting. Samuel Yeom, Irene Giacomelli, Matt Fredrikson, Somesh Jha, IEEE 31st computer security foundations symposium (CSF). IEEESamuel Yeom, Irene Giacomelli, Matt Fredrikson, and Somesh Jha. Privacy risk in machine learn- ing: Analyzing the connection to overfitting. In 2018 IEEE 31st computer security foundations symposium (CSF), pp. 268-282. IEEE, 2018.
. Sergey Zagoruyko, Nikos Komodakis, arXiv:1605.07146Wide residual networks. arXiv preprintSergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
Analyzing information leakage of updates to natural language models. Santiago Zanella-Béguelin, Lukas Wutschitz, Shruti Tople, Victor Rühle, Andrew Paverd, Olga Ohrimenko, Boris Köpf, Marc Brockschmidt, Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security. the 2020 ACM SIGSAC Conference on Computer and Communications SecuritySantiago Zanella-Béguelin, Lukas Wutschitz, Shruti Tople, Victor Rühle, Andrew Paverd, Olga Ohrimenko, Boris Köpf, and Marc Brockschmidt. Analyzing information leakage of updates to natural language models. In Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, pp. 363-375, 2020.
| [
"https://github.com/YuxinWenRick/canary-in-a-coalmine."
]
|
[
"LOCAL DERIVATIONS ON THE LIE ALGEBRA W (2, 2)",
"LOCAL DERIVATIONS ON THE LIE ALGEBRA W (2, 2)"
]
| [
"Qingyan Wu ",
"Shoulan Gao ",
"Dong Liu "
]
| []
| []
| The present paper is devoted to studying local derivations on the Lie algebra W (2, 2) which has some outer derivations. Using some linear algebra methods in [6] and a key construction for W (2, 2) we prove that every local derivation on W (2, 2) is a derivation. As an application, we determine all local derivations on the deformed bms 3 algebra. | 10.1080/03081087.2022.2160426 | [
"https://export.arxiv.org/pdf/2210.14626v1.pdf"
]
| 253,116,984 | 2210.14626 | faa8970fb37c42eaa70fd15c4fc9af6a8067a893 |
LOCAL DERIVATIONS ON THE LIE ALGEBRA W (2, 2)
26 Oct 2022
Qingyan Wu
Shoulan Gao
Dong Liu
LOCAL DERIVATIONS ON THE LIE ALGEBRA W (2, 2)
26 Oct 2022

Keywords: Virasoro algebra, local derivation, W (2, 2), deformed bms 3 algebra

Mathematics Subject Classification: 15A06, 17A36, 17B40
The present paper is devoted to studying local derivations on the Lie algebra W (2, 2) which has some outer derivations. Using some linear algebra methods in [6] and a key construction for W (2, 2) we prove that every local derivation on W (2, 2) is a derivation. As an application, we determine all local derivations on the deformed bms 3 algebra.
Introduction
The notion of local derivation was originally introduced by Larson and Sourour, arising from the study of the reflexivity of the space of linear maps from an algebra to itself (see [14,15]). Let L be an algebra and M an L-bimodule. A linear mapping ∆ : L → M is said to be a local derivation if for every x in L there exists a derivation D x : L → M, depending on x, satisfying ∆(x) = D x (x). When M is taken to be L, such a local derivation is called a local derivation on L. Local derivations describe a local property of an algebra and turn out to be very interesting (see [7,2,3,18,1,13,16], etc.). Recently, several papers have been devoted to studying local derivations of Lie (super)algebras. For example, it is proved in [1] that every local derivation on a finite-dimensional semisimple Lie algebra over an algebraically closed field of characteristic 0 is automatically a derivation, and in [6] that every local derivation on the Witt algebra is a derivation. However, there is no uniform method to determine all local derivations on Lie algebras, so it remains an open question to determine all local derivations for some Lie (super)algebras related to the Virasoro algebra.
The infinite dimensional Lie algebra W (2, 2) was first introduced by [22] to classify some simple vertex operator algebras and played an important role in many areas of mathematics and physics. Its structure and representation theories were studied in many papers (see [5,8,9,10,12,17,20,19], etc.). Note that W (2, 2) can be realized as the truncated loop algebra of the Virasoro algebra Vir. More specifically, W (2, 2) is isomorphic to Vir ⊗ C[t, t −1 ]/(t 2 ) ( [11]). In [11], the general truncated loop algebra Vir ⊗ C[t, t −1 ]/(t n ), n ≥ 1 was introduced and its quasi-finite weight modules were classified.
We know that W (2, 2) has some outer derivations, which makes it more difficult to determine its local derivations. Using some linear algebra methods from [6] and a key construction (see Lemma 4.1 below), we prove in this paper that every local derivation on W (2, 2) is a derivation. With this approach we can also determine all local derivations on some related Lie (super)algebras. As an application, we prove that every local derivation on the deformed bms 3 algebra, which corresponds to an infinite-dimensional lift of the Maxwell algebra ( [4]), is also a derivation. Such results can certainly be extended to the general truncated loop algebra.
The present paper is arranged as follows. In Section 2, we recall some known results and establish some related properties concerning the Lie algebra W (2, 2). In Section 3, we determine all local derivations on the Virasoro subalgebra of W (2, 2). In Section 4, with a new construction we determine the actions of local derivations on I m , and then prove that every local derivation on W (2, 2) is a derivation. As an application, we prove that every local derivation on the deformed bms 3 algebra is also a derivation in Section 5.
Throughout this paper, we denote by Z, C, Z * , C * the sets of all integers, complex numbers, nonzero integers, nonzero complex numbers, respectively. All algebras are defined over C.
Preliminaries
In this section we recall definitions, symbols and establish some auxiliary results for later use in this paper.
A derivation on a Lie algebra L is a linear map D : L → L which satisfies the Leibniz law, that is,
D([v, w]) = [D(v), w] + [v, D(w)]
for all v, w ∈ L. The set of all derivations of L, denoted by Der(L), is a Lie algebra with respect to the commutation operation. For u ∈ L, the map
ad u : L → L, ad u(v) = [u, v], ∀v ∈ L,

is a derivation, and a derivation of this form is called an inner derivation. The set of all inner derivations of L, denoted by Inn(L), is an ideal of Der(L). Recall that a map ∆ : L → L is called a local derivation if for every v ∈ L, there exists a derivation D v : L → L (depending on v) such that ∆(v) = D v (v).

By definition, W (2, 2) is an infinite-dimensional Lie algebra with C-basis {L m , I m , C, C 1 | m ∈ Z} and relations

[L m , L n ] = (m − n)L m+n + δ m+n,0 (1/12)(m 3 − m)C,
[L m , I n ] = (m − n)I m+n + δ m+n,0 (1/12)(m 3 − m)C 1 ,
[I m , I n ] = 0, [x, C] = [x, C 1 ] = 0, ∀m, n ∈ Z, x ∈ W (2, 2).
Clearly the Virasoro algebra Vir := span{L m , C | m ∈ Z} is the subalgebra of W (2, 2). It is well known that Vir is the universal central extension of the Witt algebra W , which is the derivation of the Laurent polynomial algebra C[t, t −1 ]. Moreover, W (2, 2) can be realized as the truncated loop algebra of the Virasoro algebra. In fact, W (2, 2) is isomorphic to Vir ⊗ C[t, t −1 ]/(t 2 ) (see [11]).
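For concreteness, one convenient way to see this last identification (a standard computation, stated here as a sketch rather than as the precise construction of [11]) is:

```latex
% Identify W(2,2) inside Vir \otimes \mathbb{C}[t,t^{-1}]/(t^{2}) via
\[
  L_m \mapsto L_m \otimes 1, \quad I_m \mapsto L_m \otimes t, \quad
  C \mapsto C \otimes 1, \quad C_1 \mapsto C \otimes t .
\]
% Then, working modulo (t^{2}),
\[
  [L_m \otimes 1,\ L_n \otimes t] = (m-n)\,L_{m+n}\otimes t
     + \delta_{m+n,0}\tfrac{1}{12}(m^{3}-m)\,C\otimes t, \qquad
  [L_m \otimes t,\ L_n \otimes t] = (m-n)\,L_{m+n}\otimes t^{2} = 0 ,
\]
% which reproduces the defining relations of W(2,2).
```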
All local derivations on the Witt algebra were determined in [6].

Theorem 2.1. [6] Every local derivation on the Witt algebra is a derivation.

Corollary 2.2. Every local derivation on the Virasoro algebra Vir is a derivation.

Proof. Let ∆ be a local derivation of Vir. Since every derivation of the Virasoro algebra is inner ( [23]), we have ∆(C) = 0. Now by the definition of local derivation and Theorem 2.1 we can suppose that ∆(L 0 ) = 0 (also see the proof of Theorem 3.4 below) and ∆(L m ) = a m C for some a m ∈ C. By a m C = ∆(L m ) = [u, L m ] for some u ∈ Vir, we can get a m = 0.
Lemma 2.3. [9]
The derivation algebra of W (2, 2) is
Der(W (2, 2)) = Inn(W (2, 2)) ⊕ Cδ,
where δ is an outer derivation defined by δ(L m ) = δ(C) = 0, δ(I m ) = I m and δ(C 1 ) = C 1 for any m ∈ Z.
Local derivations on the Witt subalgebra
Set W := W (2, 2)/(CC ⊕ CC 1 ), the quotient of W (2, 2) by its center. We first determine all local derivations on W, and then extend the result to the Lie algebra W (2, 2). In this section, we consider local derivations on the Witt subalgebra of W with the same methods as in [6].
For a local derivation ∆ : W → W and x ∈ W, we always use the symbol D x for the derivation of W satisfying ∆(x) = D x (x) and D x given by Lemma 2.3 in the following sections.
For a given m ∈ Z * , recall that Z m = Z/mZ is the modulo m residual ring of Z. Then for
any i ∈ Z we have ī ∈ Z m , where ī = {i + km | k ∈ Z}. Let ∆ be a local derivation on W with ∆(L 0 ) = 0. For L m with m ≠ 0, set

∆(L m ) = Σ_{n∈Z} (a n L n + b n I n ),    (3.1)

where a n , b n ∈ C for any n ∈ Z. Note that Z = 0̄ ∪ 1̄ ∪ · · · ∪ \overline{m − 1}. Therefore, (3.1) can be written as follows:

∆(L m ) = Σ_{ī∈F} Σ_{k=s_i}^{t_i} a i+km L i+km + Σ_{ī∈E} Σ_{k=p_i}^{q_i} b i+km I i+km ,    (3.2)

where s i ≤ t i ∈ Z, p i ≤ q i ∈ Z, and E, F ⊂ Z m . For L m + xL 0 , where x ∈ C * , since ∆ is a local derivation, there exists Σ_{n∈Z} (a ′ n L n + b ′ n I n ) ∈ W, where a ′ n , b ′ n ∈ C for any n ∈ Z, such that

∆(L m ) = ∆(L m + xL 0 ) = [ Σ_{n∈Z} (a ′ n L n + b ′ n I n ), L m + xL 0 ]
        = Σ_{ī∈F} Σ_{k=s′_i}^{t′_i+1} ((i + (k − 2)m)a ′ i+(k−1)m + x(i + km)a ′ i+km ) L i+km
        + Σ_{ī∈E} Σ_{k=p′_i}^{q′_i+1} ((i + (k − 2)m)b ′ i+(k−1)m + x(i + km)b ′ i+km ) I i+km .    (3.3)
Note that the subsets E, F in (3.3) are the same as those in (3.2).
Lemma 3.1. Let ∆ be a local derivation on W such that ∆(L 0 ) = 0. Then F = {0} and E = {0} in (3.2) and (3.3).
Proof. It is essentially same as that of Lemma 3.2 in [6].
Assume that m ≠ 0, a i+s i m ≠ 0, a i+t i m ≠ 0 and a ′ i+s ′ i m ≠ 0, a ′ i+t ′ i m ≠ 0 for some ī ≠ 0̄.
Comparing the right hand sides of (3.2) and (3.3) we see that
s i = s ′ i and t i ≤ t ′ i + 1. If t i < t ′ i + 1, from (3.2) and (3.3), we deduce that (i + (t ′ i − 1)m)a ′ i+t ′ i m = 0. Then i + (t ′ i − 1)m = 0, i.e., ī = 0̄, a contradiction. Thus t i = t ′ i + 1, and s i < t i . Comparing (3.2) and (3.3), we deduce that

a i+s i m = x(i + s i m)a ′ i+s i m ;
a i+(s i +1)m = (i + (s i − 1)m)a ′ i+s i m + x(i + (s i + 1)m)a ′ i+(s i +1)m ;
. . .
a i+(t i −1)m = (i + (t i − 3)m)a ′ i+(t i −2)m + x(i + (t i − 1)m)a ′ i+(t i −1)m ;
a i+t i m = (i + (t i − 2)m)a ′ i+(t i −1)m .

Since i + km ≠ 0 for k ∈ Z, eliminating a ′ i+s i m , · · · , a ′ i+(t i −1)m in this order by substitution we see that

a i+t i m + * x −1 + · · · + * x −t i +s i = 0,    (3.4)
where * are independent of x. We always find some x ∈ C * not satisfying (3.4), and then get a contradiction. Therefore, F = {0}. Similarly, E = {0}. The lemma follows.
Motivated by Lemma 3.4 in [6], we have the following lemma.
Lemma 3.2. Let ∆ be a local derivation on W such that ∆(L 0 ) = 0. Then for any m ∈ Z * , we have
∆(L m ) = c m L m + d m I m for some c m , d m ∈ C.
Proof. The proof is essentially same as that of Lemmas 3.3, 3.4 in [6].
By Lemma 3.1, (3.2) and (3.3), we have

Σ_{k=s}^{t} a km L km = Σ_{k=s′}^{t′+1} ((k − 2)m a ′ (k−1)m + xkm a ′ km ) L km ,    (3.5)

where a ′ (s ′ −1)m = a ′ (t ′ +1)m = 0.
We may assume that a sm , a tm , a ′ s ′ m , a ′ t ′ m ≠ 0. Clearly, s ′ ≤ s ≤ t ≤ t ′ + 1, and s ′ = s if s ′ ≠ 0. Our goal is to prove that s = t = 1, which is essentially the same as in Lemmas 3.3, 3.4 of [6].
Assume that s ′ < 0. Then s ′ = s. If further t ′ ≥ −1, from (3.5) we get a set of equations
a sm = xsma ′ sm ; a (s+1)m = (s − 1)ma ′ sm + x(s + 1)ma ′ (s+1)m ; . . . a −m = −3ma ′ −2m − xma ′ −m ; a 0 = −2ma ′ −m . (3.6)
If a 0 ≠ 0, using the same arguments as for (3.4), the equations in (3.6) make a contradiction. So we consider the case that a 0 = 0. From (3.6), we see that a ′ −m = 0. We continue upwards in (3.6) in this manner for some steps. Then there exists a non-positive integer l such that
a 0 = a −m = · · · = a lm = 0, a (l−1)m ≠ 0, a ′ −m = · · · = a ′ (l−1)m = 0.
If s + 1 < l (≤ 0), then (3.6) becomes
a sm = xsma ′ sm ; a (s+1)m = (s − 1)ma ′ sm + x(s + 1)ma ′ (s+1)m ; . . . a (l−1)m = (l − 3)ma ′ (l−2)m . (3.7)
Using the same arguments as for (3.4), the equations in (3.7) make a contradiction. We need only to consider the case that s + 1 = l, i.e., l − 1 = s. In this case we have that 0 = a ′ (l−1)m = a ′ s ′ m = 0, again a contradiction. Therefore, s ′ < 0, s ′ = s, t ′ < −1. If t < t ′ + 1, from (3.5) we see that
(t ′ − 1)ma t ′ m = 0.
Then (t ′ − 1)m = 0, i.e., t ′ = 1, a contradiction. So t = t ′ + 1 and s < t. We get a set of equations from (3.5)
a sm = xsma ′ sm ; a (s+1)m = (s − 1)ma ′ sm + x(s + 1)ma ′ (s+1)m ; . . . a tm = (t − 2)ma ′ (t−1)m . (3.8)
Using the same arguments as for (3.4), the equations in (3.8) make a contradiction. Hence s ′ ≥ 0. If s ′ ≥ 1, then s = s ′ ≥ 1. If s ′ = 0, then a 0 = 0 by (3.5). So s ≥ 1. Now we have t ≥ s ≥ 1. The left is to prove that t = 1 in (3.5). Otherwise, we assume that t > 1, and then t ′ > 0.
Case 1: t ′ > 1.
In this case we can show that t = t ′ + 1 as in the above arguments. If s ′ ≥ 1, we see that s = s ′ and s < t. From (3.5) we obtain a set of (at least two) equations
a sm = xsma ′ sm ; a (s+1)m = (s − 1)ma ′ sm + x(s + 1)ma ′ (s+1)m ; . . . a tm = (t − 2)ma ′ (t−1)m . (3.9)
Using the same arguments as for (3.4), the equations in (3.9) make a contradiction. So s ′ = 0. Now we have
s ′ = 0, s ≥ 1, t = t ′ + 1 > 2.
From (3.5) we obtain a set of (at least two) equations
a 2m = 2xma ′ 2m ; a 3m = ma ′ 2m + 3xma ′ 3m ; . . . a tm = (t − 2)ma ′ (t−1)m . (3.10)
Using the same arguments again, the equations in (3.10) make a contradiction. So this case is not held.
I km = q ′ +1 k=p ′ ((k − 2)mb ′ (k−1)m + xkmb ′ km )I km , where b ′ i+(p ′ i −1)m = b ′ i+(q ′
i +1)m = 0. By the same considerations as above we can get p = q = 1. The lemma follows. Proof. Let ∆ be a local derivation on W. There exists y ∈ W such that ∆(L 0 ) = [y, L 0 ]. Set ∆ 1 = ∆ − ad(y). Then ∆ 1 is a local derivation such that ∆ 1 (L 0 ) = 0. By Lemma 3.2, there are c 1 , d 1 ∈ C such that
′ i L i + j∈J b ′ j I j ∈ W, where a ′ i , b ′ j ∈ C∆ 1 (L 1 ) = c 1 L 1 + d 1 I 1 .
Set ∆ 2 = ∆ 1 + c 1 ad(L 0 ) + d 1 ad(I 0 ). Then ∆ 2 is a local derivation such that This, together with (4.1), yields ∆(I m ) ∈ CI m . Case 2: m = 0. Set
∆ 2 (L 0 ) = 0, ∆ 2 (L 1 ) = 0.∆(I 0 ) = t i=s a i I i ,
where s ≤ t, a i ∈ C for any i and a s , a t ≠ 0. Choose p < s, p < 0, q >> t such that p + q > t. For
I 0 + L p + L q , there exist a x = i∈I a ′ i L i + j∈J b ′ j I j ∈ W, a ′ ∈ C, where a ′ i , b ′ j ∈ C * for any i ∈ I, j ∈ J, such that t i=s a i I i = ∆(I 0 ) = ∆(I 0 + L p + L q ) = [ i∈I a ′ i L i + j∈J b ′ j I j , I 0 + L p + L q ] + a ′ δ(I 0 ). (4.3)
So i∈I a ′ i L i = b ′ (L p + L q ) for some b ′ ∈ C, and max {j | b ′ j = 0} ≤ q and min {j | b ′ j = 0} ≥ p.
In this case we claim that J ⊂ {p, q, 0}. In fact, if l = max{j ∈ J | b ′ j = 0, j = q} ≥ 0, then there exists a nonzero term b ′ l (l − q)I q+l , where q + l > t in the right hand side of (4.3). It is a contradiction. If l = min{j ∈ J | b j = 0, j = p} < 0, then there exists a nonzero term b ′ l (l − p)I p+l , where p + l < s in the right hand side of (4.3). It is also a contradiction. So the claim holds. Now we can suppose that
a x = b ′ (L p + L q ) + b ′ p I p + b ′ q I q + b ′ 0 I 0 .
Comparing with the coefficients of I i , s ≤ i ≤ t, we get a i = 0 for any i = 0. The proof is completed. Now we are in position to get the main result of this paper. Proof. Let ∆ be a local derivation on W. There exists u ∈ W such that ∆(L 0 ) = [u, L 0 ]. Set ∆ 1 = ∆ − ad(u). Then ∆ 1 is a local derivation such that ∆ 1 (L 0 ) = 0. By Lemma 3.2, there exist c 1 , d 1 ∈ C such that
∆ 1 (L 1 ) = c 1 L 1 + d 1 I 1 .
Set ∆ 2 = ∆ 1 + c 1 ad(L 0 ) + d 1 ad(I 0 ). Then ∆ 2 is a local derivation such that Now we can suppose that ∆ 3 (I 0 ) = cI 0 for some c ∈ C. So there exist b x = i∈K a ′ k L k ∈ W, d ∈ C, where a ′ k ∈ C for any k ∈ K, such that ∆(I 0 ) = ∆(I 0 + I 1 + I 2 ) = [ k∈K a ′ k L k , I 0 + I 1 + I 2 ] + dδ(I 0 + I 1 + I 2 ) = cI 0 .
∆ 2 (L 0 ) = 0, ∆ 2 (L 1 ) = 0.
Local derivations on the deformed bms 3 algebra
The deformed bms 3 algebra corresponds to an infinite-dimensional lift of the (2+1)dimensional Maxwell algebra in the very same way as the W (2, 2) and bms 3 are infinitedimensional lifts of the AdS and the Poincaŕe algebras in 2 + 1 dimensions respectively ( [4]). for any m, n ∈ Z, and the others are zero's.
Clearly the subalgebra span C {L m , I m , C, C 2 | m ∈ Z} is isomorphic to W (2, 2). Moreover, B is also a truncated loop algebra of Vir. In fact, B is isomorphic to the truncated loop algebra Vir ⊗ C[t, t −1 ]/(t 3 ). Proof. Case 1: m = 0. Using the same considerations as Lemma 3.2, we can get that
∆(L ′′ m ) = a m L ′′ m + b m J ′ m + c m I m . So ∆(L m ) + √ 2m∆(J m ) + m 2 ∆(I m ) = a m (L m + √ 2mJ m + m 2 I m ) + b m (J m + √ 2mI m ) + c m I m .
Therefore we have ∆(J m ) ∈ CJ m + CI m . Case 2: m = 0. It is essentially same as that of Lemma 4.2 in Section 4.
Lemma 5.6. Let ∆ be a local derivation on B such that ∆(L m ) = ∆(I m ) = 0 for any m ∈ Z.
Then ∆(J m ) = 0 for any m ∈ Z.
Proof. For any m ∈ Z, ∆(J m ) = a m J m + b m I m for some a m , b m ∈ C by Lemma 5.5.
For J m + L 1 + L 2 , there exist i∈I a ′ i L i + j∈J b ′ j J j + k∈K c ′ k I k ∈ B, b ′ ∈ C, where a ′ i , b ′ j , c ′ k ∈ C for any i ∈ I, j ∈ J, k ∈ K, such that a m J m + b m I m = ∆(J m ) = ∆(J m + L 1 + L 2 ) = [ i∈I a ′ i L i + j∈J b ′ j J j + k∈K c ′ k I k , J m + L 1 + L 2 ] + b ′ δ(J m + L 1 + L 2 ).
It is clear that i∈I a ′ i L i = a ′ (L 1 + L 2 ), and then J ⊂ {m} and K = ∅. By easily calculation we get b m = 0.
For Then the theorem follows from Lemma 5.6.
J m + I 2m , there exist i∈I ′ f ′ i L i + j∈J ′ d ′ j J j ∈ B, c ′ ∈ C, where f ′ i , d ′ j ∈ C for any i ∈ I ′ , j ∈ J ′ , such that a m J m = ∆(J m ) = ∆(J m + I 2m ) = [ i∈I ′ f ′ i L i + j∈J ′ d ′ j J j , J m + I 2m ] + c ′ δ(J m + I 2m ). It can easily get I ′ ⊂ {0, m}, J ′ ⊂ {m, 2m}. So a m J m = ∆(J m + I 2m ) = [f ′ 0 L 0 + f ′ m L m + d ′ m J m + d ′ 2m J 2m , J m + I 2m ] + c ′ δ(J m + I 2m ) = −mf ′ 0 J m − 2mf ′ 0 I 2m − mf ′ m I 3m + md ′ 2m I 3m + 1 2 c ′ J m + c ′ I 2m = 1 2 (c ′ − 2mf ′ 0 )J m + (c ′ − 2mf ′ 0 )I 2m + m(d ′ 2m − f ′ m )I 3m .
Corollary 5.8. Every local derivation on the deformed bms 3 algebra B is a derivation.
Proof. It is essentially same as that of Corollary 4.4.
Remark 5.9. Based on the above researches we can get such results for the general truncated Virasoro algebra Vir⊗C[t, t −1 ]/(t n ) for any n ≥ 1 by induction. Moreover, we can determine local derivations on some related algebras, such as the super Virasoro algebra and the super W (2, 2) algebra (see [21]).
Case 2 :
2t ′ = 1. In the case t = 2 and we still have the last equation in (3.10), which implies that a 2m = 0. It is a contradiction.Combining with Case 1 and Case 2, we get that s = t
Lemma 3. 3 .
3Let ∆ be a local derivation on W such that ∆(L 0 ) = ∆(L 1 ) = 0. Then ∆(L m ) = 0 for any m ∈ Z. Proof. If m ≥ 2, by Lemma 3.2, there exist c m , d m ∈ C and i∈I a
for any i ∈ I, j ∈ J, and I, J are finite subsets of Z, such thatc m L m + d m I m = ∆(L m ) = ∆(L m + L 1 ) = [ i∈I a ′ i L i + j∈J b ′ j I j , L m + L 1 ].Clearly I, J ⊂ {0, 1, m}. By easy calculations we have c m = d m = 0. Similarly, if m < 0, we can also get c m = d m = 0. The proof is completed.
Theorem 3. 4 .
4Let ∆ be a local derivation on W. Then there exists D ∈ Der W such that ∆(L m ) = D(L m ) for any m ∈ Z.
By
Lemma 3.3, we have ∆ 2 (L m ) = 0 for any m ∈ Z. The theorem holds. 4. Local derivations on W (2, 2) Now we use a key construction to determine ∆(I m ) and then to determine all local derivations on W (2, 2). Set L ′ m = L m + mI m , we can easily see that [L ′ m , L ′ n ] = (m − n)L ′ m+n and [L ′ m , I n ] = (m − n)I m+n . So we can get a new construction of W, which plays a key role in our research.
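As a quick sanity check on this construction (a worked computation under the bracket relations of the centerless algebra W, not spelled out in the text):

```latex
\[
[L'_m, L'_n] = [L_m, L_n] + n[L_m, I_n] + m[I_m, L_n]
             = (m-n)L_{m+n} + (m+n)(m-n)I_{m+n}
             = (m-n)\bigl(L_{m+n} + (m+n)I_{m+n}\bigr) = (m-n)L'_{m+n},
\]
\[
[L'_m, I_n] = [L_m, I_n] + m[I_m, I_n] = (m-n)I_{m+n}.
\]
```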
Lemma 4 . 1 .
41The subalgebra span C {L ′ m , I m | m ∈ Z} of W is isomorphic to W. Lemma 4.2. Let ∆ be a local derivation on W such that ∆(L m ) = 0 for any m ∈ Z. Then ∆(I m ) ∈ CI m .Proof. By the definition of local derivation and Lemma 2.3, we have ∆(I m ) ∈ ⊕ k∈Z CI k , ∀m ∈ Z.
of Lemmas 3.1, 3.2 yield that ∆(L ′ m ) = c m L ′ m + d m I m for some c m , d m ∈ C. Therefore, we see that ∆(L ′ m ) = ∆(L m + mI m ) = ∆(L m ) + m∆(I m ) = c m (L m + mI m ) + d m I m . (4.2)
Theorem 4 . 3 .
43Every local derivation on W is a derivation.
(L m ) = 0, ∀m ∈ Z. By Lemma 4.1 and Theorem 3.4, there exists D ∈ Der W such that ∆ 2 (L ′ m ) = D(L ′ m ) for any m ∈ Z. So m∆ 2 (I m ) = D(L m ) + mD(I m ). By Lemma 4.2, we have ∆ 2 (I m ) ∈ CI m . So D = aad I 0 + bδ for some a, b ∈ C by Lemma 2.3. In this case ∆ 2 (I m ) = (b − a)I m for any m ∈ Z * . Set ∆ 3 = ∆ 2 − (b − a)δ we get ∆ 3 (L m ) = 0, ∆ 3 (I n ) = 0, ∀m ∈ Z, n ∈ Z * .
K ⊂ {0, 1, 2}. By easy calculations in (4.4) we can get c = 0. The proof is completed.
Corollary 4. 4 .
4Every local derivation on W (2, 2) is a derivation. Proof. From Corollary 2.2, we just need to consider C 1 in the proofs of Theorem 4.3. Let ∆ be a local derivation of W (2, 2). By definition we have ∆(C 1 ) = 0. Now from Theorem 4.3 we can suppose that ∆(L 0 ) = 0 and ∆(L m ) = a m C 1 and ∆(I m ) = b m C 1 for some a m , b m ∈ C. By a m C 1 = ∆(L m ) = [u, L m ] for some u ∈ W (2, 2), we can get a m = 0. Similarly, by b m C 1 = ∆(I m ) = [w, I m ] + bδ(I m ) for some w ∈ W (2, 2), b ∈ C, we can get b m = 0.
Definition 5. 1 . [ 4 ]
14The deformed bms 3 algebra B is an infinite-dimensional Lie algebra with C-basis {L m , J m , I m , C, C 1 , C 2 |m ∈ Z} and relations [L m , L n ] = (m − n)L m+n + δ m+n,0 1 12 (m 3 − m)C, [L m , J n ] = (m − n)J m+n + δ m+n,0 1 12 (m 3 − m)C 1 , [L m , I n ] = (m − n)I m+n + δ m+n,0 1 12 (m 3 − m)C 2 , [J m , J n ] = (m − n)I m+n + δ m+n,0 1 12 (m 3 − m)C 2 ,
Lemma 5 . 2 .
52The derivation algebra of B isDer( B) = Inn( B) Cδ,where δ is an outer derivation defined by δ(L m ) = δ(C) = 0, δ(J m ) = 1 2 J m , δ(I m ) = I m and δ(C i ) = C i , i = 1, 2, for any m ∈ Z.Proof. It follows by Lemma 2.3 and some easy calculations. Denoted B by the centerless B, i.e. B := B/(CC + CC 1 + CC 2 ). The following result can be obtained by the essentially same considerations as Sections 3, 4. Proposition 5.3. Let ∆ be a local derivation on B. Then there exists D ∈ Der B such that ∆(L m ) = D(L m ) and ∆(I m ) = D(I m ) for any m ∈ Z. Now we use a key construction to determine ∆(J m )(∈ span{I k , J k | k ∈ Z}) and then to determine all local derivations on B as in Section 4. Set L ′′ m = L m + √ 2mJ m + m 2 I m and J ′ m = J m + √ 2mI m , we can easily see that [L ′′ m , L ′′ n ] = (m − n)L ′′ m+n and [L ′′ m , J ′ n ] = (m − n)J ′ m+n . So we have the following result. Lemma 5.4. The subalgebra span C {L ′′ m , J ′ m , I m | m ∈ Z} of B is isomorphic to B. Lemma 5.5. Let ∆ be a local derivation on B such that ∆(L m ) = ∆(I m ) = 0 for any m ∈ Z. Then ∆(J m ) ∈ CJ m + CI m .
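The stated brackets for this construction can be checked directly (a worked computation under the relations of the centerless algebra B; the cross term produced by the two J's is what forces the factor √2):

```latex
\[
[L''_m, L''_n] = (m-n)L_{m+n} + \sqrt{2}\,(m+n)(m-n)J_{m+n}
   + (m^{2}+2mn+n^{2})(m-n)I_{m+n} = (m-n)L''_{m+n},
\]
\[
[L''_m, J'_n] = (m-n)J_{m+n} + \sqrt{2}\,(m+n)(m-n)I_{m+n} = (m-n)J'_{m+n}.
\]
```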
So we have c ′ − 2mf ′ 0 = 0 and then a m = 0. The lemma follows. Now we are in position to get the main result of this section. Theorem 5.7. Every local derivation on B is a derivation. Proof. Let ∆ be a local derivation on B. By Proposition 5.3, there exists D ∈ Der B such that ∆(L m ) = D(L m ) and ∆(I m ) = D(I m ) for any m ∈ Z. Replaced ∆ by ∆ − D, it follows ∆(L m ) = ∆(I m ) = 0, ∀m ∈ Z.
Acknowledgments. This work is partially supported by the NNSF (Nos. 12071405, 11971315, 11871249).
Ayupov Sh. A, Kudaybergenov K. K. Local derivations on finite-dimensional Lie algebras, Linear Algebra Appl. 2016; 493: 381-398.
Brešar M. Characterizations of derivations on some normed algebras with involution, J. Algebra. 1992; 152: 454-462.
Brešar M, Šemrl P. Mappings which preserve idempotents, local automorphisms, and local derivations, Canad. J. Math. 1993; 45: 483-497.
Caroca R, Concha P, Rodríguez E, Salgado-Rebolledo P. Generalizing the bms 3 and 2D-conformal algebras by expanding the Virasoro algebra, European Physical Journal C. 2018; 78(3): 262-276.
Chen H, Li J. Left-symmetric algebra structures on the W -algebra W (2, 2), Linear Algebra Appl. 2012; 437: 1821-1834.
Chen Y, Zhao K, Zhao Y. Local derivations on the Witt algebra, Linear and Multilinear Algebra, 2022; 70(6): 1159-1172.
Crist R. L. Local derivations on operator algebra, Journal Functional Analysis, 1996; 135: 76-92.
Dilxat D, Gao S, Liu D. 2-Local superderivations on the super Virasoro algebra and the super W (2, 2) algebra, Commun. Alg. 2021; 49(12): 5423-5434.
Gao S, Jiang C, Pei Y. Derivations, central extensions and automorphisms of a Lie algebra, Acta Math. Sin. 2009; 52: 281-288.
Gao S, Jiang C, Pei Y. Low-dimensional cohomology groups of the Lie algebras W (a, b), Commun. Alg. 2011; 39(2): 397-423.
Guo X, Lv R, Zhao K. Simple Harish-Chandra modules, intermediate series modules, and Verma modules over the loop-Virasoro algebra, Forum Math. 2011; 23: 1029-1052.
Jiang Q, Zhang W. Verma modules over the W (2, 2) algebras, J. Geom. Phys. 2015; 98: 118-127.
Kadison R. V. Local derivations, J. Algebra. 1990; 130: 494-509.
Larson D. R. Reflexivity, algebraic reflexivity, and linear interpolation, Amer. J. Math. 1998; 110: 283-299.
Larson D. R, Sourour A. R. Local derivations and local automorphisms of B(X), Proc. Sympos. Pure Math., 51, Part 2, Providence, Rhode Island, 1990: 187-194.
Liu D, Zhang J. Local Lie derivations of factor von Neumann algebras, Linear Algebra Appl. 2017; 519: 208-218.
Liu D. Classification of Harish-Chandra modules over some Lie algebras related to the Virasoro algebra, J. Algebra. 2016; 447: 548-559.
Power S. Non-self-adjoint operator algebras and inverse systems of simplicial complexes, J. Reine Angew. Math. 1991; 421: 43-61.
Radobolja G. Subsingular vectors in Verma modules, and tensor product of weight modules over the twisted Heisenberg-Virasoro algebra and W (2, 2) algebra, J. Math. Phys. 2013; 54: 071701.
Tang X. 2-Local derivations on the W -algebra W (2, 2), J. Algebra Appl. 2021; 20(12): 2150237 (13 pages).
Wu Q, Gao S, Liu D. Local derivations on the super Virasoro algebra and the N = 1 super-BMS 3 algebra, preprint.
Zhang W, Dong C. W -algebra W (2, 2) and the vertex operator algebra L(1/2, 0) ⊗ L(1/2, 0), Commun. Math. Phys. 2009; 285: 991-1004.
Zhu L, Meng D. Some infinite-dimensional complete Lie algebras, Chinese Ann. Math. Ser. A. 2000; 21(3): 311-316.
| []
|
[]
| [
"I Mirebeau :[email protected] \nLaboratoire Léon Brillouin\nUMR12, Centre d'Etudes de Saclay\nCEA/CNRS\n91191Gif sur YvetteFrance\n",
"A Apetrei \nLaboratoire Léon Brillouin\nUMR12, Centre d'Etudes de Saclay\nCEA/CNRS\n91191Gif sur YvetteFrance\n",
"I N Goncharenko \nLaboratoire Léon Brillouin\nUMR12, Centre d'Etudes de Saclay\nCEA/CNRS\n91191Gif sur YvetteFrance\n",
"R Moessner \nLaboratoire de Physique théorique de l'Ecole Normale Supérieure\nUMR8549\nCNRS\n75005ParisFrance\n"
]
| [
"Laboratoire Léon Brillouin\nUMR12, Centre d'Etudes de Saclay\nCEA/CNRS\n91191Gif sur YvetteFrance",
"Laboratoire Léon Brillouin\nUMR12, Centre d'Etudes de Saclay\nCEA/CNRS\n91191Gif sur YvetteFrance",
"Laboratoire Léon Brillouin\nUMR12, Centre d'Etudes de Saclay\nCEA/CNRS\n91191Gif sur YvetteFrance",
"Laboratoire de Physique théorique de l'Ecole Normale Supérieure\nUMR8549\nCNRS\n75005ParisFrance"
]
| []
| In the pyrochlore compounds, Tb2Ti2O7 and Tb2Sn2O7, only the Tb 3+ ions are magnetic. They exhibit quite abnormal -and, in view of their chemical similarity, strikingly different -magnetic behaviour, as probed by neutron diffraction at ambient and applied pressure. Tb2Ti2O7 is a cooperative paramagnet ('spin liquid'), without long range order at ambient pressure; however, it does become ordered under pressure. By contrast, Tb2Sn2O7 enters an "ordered spin ice" state already at ambient pressure. We analyse a simple model which already clearly exhibits some of the qualitative features observed experimentally. Overall, comparing these two compounds emphasizes the power of small perturbations in selecting low-temperature states in geometrically frustrated systems. | 10.1016/j.physb.2006.05.026 | [
"https://export.arxiv.org/pdf/cond-mat/0602384v1.pdf"
]
| 119,085,702 | cond-mat/0602384 | 0f575d1fd933de4847c210cae1bfdd346288e97f |
16 Feb 2006
I Mirebeau :[email protected]
Laboratoire Léon Brillouin
UMR12, Centre d'Etudes de Saclay
CEA/CNRS
91191Gif sur YvetteFrance
A Apetrei
Laboratoire Léon Brillouin
UMR12, Centre d'Etudes de Saclay
CEA/CNRS
91191Gif sur YvetteFrance
I N Goncharenko
Laboratoire Léon Brillouin
UMR12, Centre d'Etudes de Saclay
CEA/CNRS
91191Gif sur YvetteFrance
R Moessner
Laboratoire de Physique théorique de l'Ecole Normale Supérieure
UMR8549
CNRS
75005ParisFrance
16 Feb 2006. Preprint submitted to Elsevier Science 23 March 2022.
Two geometrically frustrated magnets studied by neutron diffraction.
Keywords: spin liquid; spin ice; neutron diffraction. * Corresponding Author
In the pyrochlore compounds, Tb2Ti2O7 and Tb2Sn2O7, only the Tb 3+ ions are magnetic. They exhibit quite abnormal -and, in view of their chemical similarity, strikingly different -magnetic behaviour, as probed by neutron diffraction at ambient and applied pressure. Tb2Ti2O7 is a cooperative paramagnet ('spin liquid'), without long range order at ambient pressure; however, it does become ordered under pressure. By contrast, Tb2Sn2O7 enters an "ordered spin ice" state already at ambient pressure. We analyse a simple model which already clearly exhibits some of the qualitative features observed experimentally. Overall, comparing these two compounds emphasizes the power of small perturbations in selecting low-temperature states in geometrically frustrated systems.
Introduction
Geometrical frustration (GF) [1] is now widely studied in solid state physics, as it seems to play a key role in original phenomena recently observed in new materials. Examples include the large anomalous Hall effect in ferromagnetic pyrochlores or spinels [2], the unconventional superconductivity observed in water substituted NaxCoO2 with triangular Co sheets [3], or the interaction between electric and magnetic properties of multiferroics materials [4].
What is geometrical frustration? Most simply, it occurs when the specific geometry of the lattice prevents magnetic interactions from being satisfied simultaneously. In insulating systems such as the rare earth pyrochlores, the impossibility of a simple Néel ground state due to GF offers the possibility of finding a large variety of alternative, magnetic and non-magnetic, short-or long-ranged ordered states. In the most ex-treme case, paramagnetic behaviour persists down to the lowest temperatures, leading to an extended cooperative paramagnetic, or spin liquid, regime, in which only short-range correlations result [5].
Ferromagnetic interactions on the pyrochlore lattice may also be frustrated, namely when the exchange is dominated by a strong anisotropy which forces the spins in a tetrahedron to point along their local, noncollinear easy axes [6]. This leads to the spin ice state, whose degeneracy can be mapped onto that of real ice [6], leading to approximately the same entropy in the ground state [7].
In real compounds, the eventual choice of the stable magnetic state depends on a subtle energy balance between the frustrated first neighbour exchange energy term and perturbation terms of various origins (longer range interactions, anisotropies, quantum fluctuations, ...). It is of course also determined by thermodynamic parameters, such as temperature, pressure or magnetic field. Counterintuitively, thermal fluctuations can even induce order when ordered states permit softer fluctuations than generic disordered ones. This effect is known as order by disorder [8] and is commonly encountered in frustrated magnetism. It has been well studied by Monte-Carlo simulations, and also received some experimental confirmation [9]. Pressure can change the nature of, and balance between different terms in the Hamiltonian, as they can depend on interatomic distances in different ways. An applied field adds Zeeman energy, and can, for example, stabilize a subset of the original ground states, at times resulting in magnetization plateaus.
In this paper, we study a well known pyrochlore Tb2Ti2O7, which we investigated by neutron diffraction under extreme conditions of temperature (down to 0.1K) and applied pressure (up to 8.7 GPa). We review one of its most fascinating properties, namely its ability to "crystallize" or order magnetically under pressure and we propose a new theoretical approach which accounts for some important peculiarities of this effect. We also compare Tb2Ti2O7 to its sibling compound Tb2Sn2O7, very recently studied, which behaves as an "ordered spin ice". Both compounds have a fully chemically ordered structure, the pyrochlore structure of cubic Fd3m space group, where the Tb 3+ magnetic ions occupy a GF network of corner sharing tetrahedra. Although they differ only by the nature of the nonmagnetic ion (Ti/Sn), they show very different magnetic ground states. The comparison sheds some light on how to select the ground state through very small perturbations, one of the most prominent characteristics of geometrical frustration.
Tb2Ti2O7 : a spin liquid orders under applied pressure
Tb2Ti2O7 is a famous example of a spin liquid, investigated by numerous groups, where short range correlated Tb spins fluctuate down to 70 mK at least [10], that is more than 300 times below the typical energy scale of the magnetic interactions (the Curie-Weiss constant θCW of -19 K, where the minus sign corresponds to AF interactions). The persistence of these fluctuations was checked by muon relaxation [10], at the time scale of the muon probe of about 10 −6 s. At shorter time scales, inelastic neutron scattering showed a quasi elastic signal, whose energy linewidth strongly decreases below about 1 K, indicating a stronger slowing down in this temperature range [11]. Coexisting with the spin liquid phase, spin glass like irreversibilities and anomalies of the specific heat were recently observed in the range 0.1 K-0.8 K [12]. Using high pressure powder neutron diffraction [13], we observed two interesting phenomena induced by pressure [14] i) the onset of antiferromagnetic long range order below a Néel temperature TN of about 2 K: ii) the enhancement of the magnetic correlations in the spin liquid phase above TN. Just below TN, the ordered phase coexists with the spin liquid in a mixed solid-liquid phase, whose relative contributions vary with pressure and temperature. The magnetic Bragg peaks of the simple cubic lattice can be indexed from the crystal structure of Fd3m symmetry, taking a propagation vector k=(1,0,0). It means that in the cubic unit cell with four Tb tetrahedra, two tetrahedra are identical and two have reversed moment directions. A longer wavelength modulation of this structure, involving a much larger unit cell, was also observed in the powder data.
What is the pressure induced ground state? More fundamentally what is the role of pressure? To answer these questions, we performed single crystal neutron diffraction down to very low temperatures (0.14 K), combining hydrostatic pressure with anisotropic stress [15]. We showed that both components play a role in inducing the long range order, and that the ordered moment and Néel temperature can be tuned by the direction of the stress. A stress along a [110] axis, namely along the direction of the first neighbour distances between Tb 3+ ions, is the most efficient in inducing magnetic order (Fig. 1).
FullProf refinements of the single crystal data allowed a determination of the magnetic structure with better precision, especially the local spin structure within a Tb tetrahedron. The structure corresponding to the best refinement (RF =14% is given in table 1. The bond 1-4 along the axis of the stress, which should be reinforced, has AF collinear spins. This corresponds to a natural expectation for AF first neighbor exchange. The orientation of the spin 2 (orthogonal to the 3 others) is more surprising since with 3 collinear spins, the exchange field on the fourth one should be also collinear. Since no collinear structure gave a good fit to the data, it suggests that the real spin structure may be even more complex than the proposed one. In any case, both powder and single crystal data yield an important conclusion: we found that inside a tetrahedron, the magnetization is not compensated, namely the vectorial sum of the four spins is non zero, (although it is of course compensated within the cubic Table 1 Orientation of the magnetic moments in one tetrahedron in the pressure induced state of cell, since magnetisations of the four tetrahedra cancel two by two). This means that in the pressure induced ground state, the local order does not correspond to any configuration which mimimizes the energy in the spin liquid phase. In other words, pressure does not select any energy state among those belonging to the ground state degeneracy of the spin liquid (the ground state expected if one considers Heisenberg spins coupled via first neighbour AF exchange interactions only). The anisotropic component (stress) relieves the frustration in a more drastic way, by creating uncompensated bonds, associated with a very small distortion of the pyrochlore lattice. In addition the isotropic component shortens all distances in the same way, increasing the frustrated exchange interaction. This effect could also contribute to the increase of TN.
The prominent role of stress in inducing magnetic order raises a subsequent question. Could it be stabilized spontaneously by internal stresses? To answer this question, we have now checked the magnetic order at ambient pressure by neutron diffraction in two Tb2Ti2O7 samples with different heat treatments, down to about 0.1 K. In an "as cast" powder sample, we observe at 0.07 K broad magnetic peaks close to the positions expected for the pressure induced magnetic order (Fig. 2). The Lorentzian lineshape corresponds to a finite correlation length of about 25Å (2-3 cubic cells). The peaks disappear around 0.3 K. We also studied a single crystal, which was annealed at 1150 o C for 25 hours to relieve internal stresses. In the second case, the mesoscopic magnetic order is absent and only the liquid-like correlations are observed, down to the minimum temperature of 0.15 K. Since both samples are chemically ordered and stoichiometric within the accuracy of neutron diffraction, it means that the mesoscopic order is induced by internal stresses. The onset of this mesoscopic order may strongly influence the spin glass irreversibilities and anomalies of the specific heat observed in the same temperature range [12], which seem to depend on the heat treatment.
Tb2Sn2O7: an ordered spin ice state
In contrast to Tb2Ti2O7, Tb2Sn2O7 undergoes a transition to an ordered state already at ambient pressure. The magnetic structure, very recently determined by powder neutron diffraction experiments [16], was called an "ordered spin ice". The local order within one tetrahedron is close to the "two in-two out" configuration of spin ice, taking into account a small deviation of 13° of the magnetic moments with respect to the local <111> easy anisotropy axes. In the canonical spin ice state, individual tetrahedra keep the mutual orientational disorder allowed by the "ice rules", leading to short range order and ground state entropy [17]. Here the four tetrahedra of the unit cell are identical, leading to an ordered structure with k=0 propagation vector (Fig. 3). The resulting magnetic structure is non-collinear, but exhibits a ferromagnetic component, which represents about 37% of the Tb 3+ ordered moment. This explains the ferromagnetic character of the transition, previously observed by magnetization [18].
Fig. 3. Tb2Sn2O7: an ordered spin ice structure: the local spin structure in a tetrahedron is close to the "2 in-2 out" structure of a spin ice, but individual tetrahedra are identical, leading to an ordered structure with k=0 propagation vector and ferromagnetic character.
Together with the non-collinear magnetic structure, the original effects of the frustration persist in the ordered phase of Tb2Sn2O7. The magnetic order is stabilized in two steps (1.3 K and 0.87 K) corresponding to anomalies of the specific heat, and not in a classical second order transition. The correlation length increases throughout the transition region, and remains limited to 180Å even at very low temperature. The ordered state coexists with slow collective fluctuations, in the time scale of 10 −4 -10 −5 s. They were probed by comparing the Tb 3+ moment value of 5.9(1) µB deduced from neutron diffraction to the much lower value of 3.3(3) µB deduced from the specific heat.
The magnetic order in Tb2Sn2O7 may be compared to that found by Champion et al. [19], who considered the competition between first neighbor exchange and uniaxial anisotropy in a pyrochlore ferromagnet. The model involves two parameters, the strength of the ferromagnetic interaction J and that of the uniaxial anisotropy Da along <111> axes. Ferromagnetic and spin ice states correspond to the cases Da /J = 0 and Da/ J = ∞, respectively. For finite Da/J values, the magnetic order shows many similarities with the observed one. Namely: i) the ground state is ordered in a k=0 four sublattice structure. ii) the local order within one tetrahedron may also be deduced from the spin ice structure. iii) the magnetic transition is of first order, changing to second order with decreasing Da/J. However, the deviations from the local spin ice structure are different in the model and in the real system. In the model, spins are uniformly canted towards the ferromagnetic direction. The ground state magnetization relative to the local moment increases from 0.578=1/ √ 3 (the average magnetization of a tetrahedron in the spin ice case) to 1 (the ferromagnetic case) with decreasing Da/J. By contrast, in Tb2Sn2O7, the deviations of the magnetic moments from the local <111> axes actually reduce the magnetization (to about 0.37 in relative units). So the deviations of the magnetic moments from the local spin ice structure act in an opposite way to that predicted by the finite anisotropy ferromagnetic model.
Finally, in Tb2Sn2O7, the neutron and magnetic data together with the comparison with theory, suggest that here the effective first neighbor interaction becomes ferromagnetic, although the physics of the system cannot be simply reduced to the energy scheme assumed in ref. [19].
A simple model for the influence of stress
An analysis of the effect of pressure applied to an individual tetrahedron already manifests qualitatively two basic experimental results observed in Tb2Ti2O7: the much stronger influence of an anisotropic stress along a [110] axis (as opposed to hydrostatic pressure or to a stress along the [100] direction), and the presence of an uncompensated magnetisation.
In the isotropic problem, all six bonds of the tetrahedron are equivalent. Application of stress in the [110] direction lowers this symmetry, as shown in Fig. 4. In symmetry terms, the bonds form a six-dimensional representation of the tetrahedral group T d , which decomposes into three irreducible representations. A singlet, A amounts to a uniform change of all bonds together. Furthermore, there is a doublet, E, the components of which correspond (a) to strengthening two opposite bonds, and weakening the four others (or vice versa) or (b) to weakening an opposite pair of those four bonds, and strengthening the other pair. Finally, each component of the triplet, T, implies a strengthening/weakening (by an equal amount) of an opposite pair of bonds [20].
The crucial point is that the uniaxial [110] stress can couple to all three representations. In its presence, there are three (instead of only one) symmetryinequivalent bond strengths (see Fig. 4). In other words, the Hamiltonian including the uniaxial [110] stress has a lower symmetry than the isotropic one. The case of [1 0 0] pressure is intermediate: here only two representations are present, the triplet being absent, as illustrated in the left panel of Fig. 4.
Whereas the initial degeneracy of the isotropic system is a signature of the different possible compromises between which bonds to frustrate and which to satisfy, some of these choices have become forbidden as it is not possible to trade off inequivalent bonds against one another.
For simplicity, let us consider a classical, isotropic Heisenberg antiferromagnet at T=0 under stress. We have considered the spin configurations which minimize the energy for different orientations of the stress. We find that these configurations have a compensated magnetization for a stress along [100]. By contrast, a non-compensated magnetic moment can arise for a stress along a [110] axis. A summary of the calculation is given below.
The results for an ice-type model (i.e. a ferromagnet in the presence of anisotropy Da) can be obtained along similar lines. It needs to be borne in mind, however, that (a) the strict ice model (Da/J=∞) does not permit small deviations of the spins from their preferred axes, and that (b) a ferromagnet will generically exhibit a non-compensated moment even in the absence of stress. By contrast, an anisotropic (but strainfree) antiferromagnet has a momentless ground state, which, however, is the simple FeF3-type 'all-in' or 'allout' ground state.
This classical isotropic Hamiltonian has a continuous two-parameter family of degenerate ground states in the isotropic case [1]. This degeneracy is reduced by the strain. For example, in the presence of an E-distortion weakening the average strength of the top/bottom pair of bonds with respect to the other two pairs, a collinear state will be selected. In this state, each spin is parallel to its partner at the other end of the coloured bond, and antiparallel to the other pair of spins. The total spin of the tetrahedron thus remains compensated at zero. For an E distortion of the opposite sign, the top (and bottom) pair of spins will be antialigned; for the full pyrochlore lattice, this generates decoupled chain states. In contrast to this situation, the presence of a T distortion does not change the energy of the isotropic ground states relative to one another to first order. This happens because it couples to a difference in the expectation value of opposing bonds, a difference which vanishes in the unperturbed ground states.
However, in higher order, a difference in relative bond strength can induce a difference between the expectation values of the scalar product Si · Sj across the bottom and top bonds. Such a difference is equivalent to an uncompensated total moment of the tetrahedron (as a tetrahedron with zero moment necessarily has equal expectation values of Si · Sj on opposite bonds). The ground states in the presence of stress are thus close to -but not a subset -of those of the isotropic system.
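To make the distinction concrete, the following short numerical sketch (ours, not from the paper; the bond-strength modulations, optimizer settings and number of restarts are illustrative assumptions) minimizes the classical Heisenberg energy E = Σ J_ij S_i·S_j of a single tetrahedron of unit spins for an isotropic, a [100]-type and a [110]-type pattern of bond strengths, and prints the magnitude of the resulting net moment; the [100]-type pattern keeps the moment compensated, while the [110]-type one does not:

import numpy as np
from scipy.optimize import minimize

# Bonds of a tetrahedron: (0,1) and (2,3) are the opposite "bottom"/"top" pair
# singled out by the stress; the other four are the remaining bonds.
BONDS = [(0, 1), (2, 3), (0, 2), (0, 3), (1, 2), (1, 3)]

def spins(angles):
    # Parametrize the four classical unit spins by polar angles (theta, phi).
    th, ph = angles[:4], angles[4:]
    return np.stack([np.sin(th) * np.cos(ph),
                     np.sin(th) * np.sin(ph),
                     np.cos(th)], axis=1)

def energy(angles, J):
    S = spins(angles)
    return sum(J[b] * S[i] @ S[j] for b, (i, j) in enumerate(BONDS))

def net_moment(J, trials=60, seed=0):
    # Minimize from many random starting points and keep the lowest energy.
    rng = np.random.default_rng(seed)
    best = min((minimize(energy, rng.uniform(0, np.pi, 8), args=(J,), method="BFGS")
                for _ in range(trials)), key=lambda r: r.fun)
    return best.fun, np.linalg.norm(spins(best.x).sum(axis=0))

J_iso = np.ones(6)                                               # isotropic AF exchange
J_100 = J_iso + 0.1 * np.array([1, 1, -1, -1, -1, -1])           # [100]: E channel only
J_110 = J_iso + np.array([0.15, -0.05, 0.05, 0.05, 0.05, 0.05])  # [110]: A, E and T channels

for name, J in [("isotropic", J_iso), ("[100]-type", J_100), ("[110]-type", J_110)]:
    e, m = net_moment(J)
    print(f"{name:10s}  E_min = {e:+.4f}   |sum of spins| = {m:.3f}")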
Discussion
In this section we briefly comment about the relevance of the above model to Tb2Ti2O7, and then focus on the origin of the differences between the two compounds.
At ambient pressure, the fact that Tb2Ti2O7 does not order but remains 'liquid' down to 70 mK, is still a challenge to theory. Sophisticated calculations taking into account the crystal field energy [21] together with dipolar interactions, predict an Ising like behavior for the Tb ion moment in the the ground state (with the moment reduced with respect to the free ion value) and an effective AF first neighbor interaction [22]. These calculations predict at ambient pressure a transition to an AF order similar to that found in FeF3 (a k=0 structure with an "all in-all out" local configuration) below 1-2K, which is however not what is observed experimentally.
The simple model discussed above, already proposed on a more empirical basis in Ref. [16], reproduces the main characteristics of the pressure-induced state in Tb2Ti2O7, namely the stronger effect of degeneracy lifting of the [110] over the [100] stress, and the appearance of an uncompensated magnetisation in the former case. This is presumably the case because it incorporates the most fundamental property of the stress, namely the explicit symmetry breaking it induces. This effect should occur in qualitatively the same way in a much larger class of models.
These results are therefore rather robust but, by the same token, they are also only qualitative: the model in its current form yields little information on the detailed Hamiltonian of the system, nor the origin of the effective nearest-neighbour interaction J and of its sensitivity to pressure. In particular, we have not been able to reproduce the detailed finite-temperature spin structure.
We now turn to Tb2Sn2O7, which behaves as an ordered spin ice. Neutron data as compared with theory strongly suggest that the effective first neighbour interaction has now become ferromagnetic. What is the reason for this change? We can propose the following explanation. In the "true" spin ices (Ho2Ti2O7 or Dy2Ti2O7 with stronger uniaxial anisotropy), it was shown that the effective ferromagnetic interaction results from the influence of the dipolar coupling which overcomes the weak AF superexchange [17]. Taking the same conventional notations, the effective first neighbour interaction J ef f is expressed as J ef f = Jnn+Dnn, where Jnn=J/3 and Dnn=5D/3 are the superexchange and dipolar energy scales, respectively. In Tb2Ti2O7 (Jnn=-0.88 K, Dnn=0.8 K from ref. [22]), this effective interaction remains AF. In Tb2Sn2O7, Sn substitution enlarges the unit cell (from a= 10.149 to 10.426Å in Ti and Sn compounds respectively). This expansion ∆a/a ∼ 2.7%, equivalent to a negative chemical pressure of about 12-15 GPa, should strongly decrease the AF superexchange interaction J. Assuming roughly a decrease of Jnn in the ratio of the Curie-Weiss constants (-19 K and -12 K in Ti and Sn compounds respectively) without big changes in the dipolar constant, we get J ef f = 0.18 K > 0 for Tb2Sn2O7. Therefore the expansion in the unit cell induced by Sn substitution might be enough to switch the compound to the spin ice region of the phase diagram [23].
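The numerical estimate in this paragraph is easy to reproduce; the short sketch below is our own back-of-the-envelope check (the 1/a^3 rescaling of the dipolar term is an extra assumption, consistent with the statement that the dipolar constant changes little):

# Rough check of J_eff = Jnn + Dnn for Tb2Sn2O7 (all values in kelvin).
Jnn_Ti, Dnn_Ti = -0.88, 0.8         # Tb2Ti2O7 values quoted from Ref. [22]
theta_Ti, theta_Sn = -19.0, -12.0   # Curie-Weiss constants
a_Ti, a_Sn = 10.149, 10.426         # cubic lattice constants in Angstrom

Jnn_Sn = Jnn_Ti * theta_Sn / theta_Ti       # scale the exchange by the theta ratio
Dnn_Sn = Dnn_Ti * (a_Ti / a_Sn) ** 3        # dipolar coupling ~ 1/r^3 (assumption)

print(f"Jnn(Sn) ~ {Jnn_Sn:+.2f} K, Dnn(Sn) ~ {Dnn_Sn:+.2f} K, "
      f"J_eff ~ {Jnn_Sn + Dnn_Sn:+.2f} K (text: +0.18 K)")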
To go further, microscopic models should take into account the exact nature of the anisotropy, which is not simply uniaxial in the Tb compounds [21]. This involves a reinvestigation of the crystal field levels, currently in progress. It could exhibit more subtle differences between the two compounds than the simple effect of a chemical pressure discussed above.
In conclusion, the two compounds studied here clearly show the rich variety of behaviour exhibited by geometrically frustrated magnets. Comparing them allows one to understand better the key role played by small perturbations in selecting one peculiar state among the many potential magnetic states.
We thank A. Gukasov and O. Isnard for their help in the neutron measurements at LLB and ILL respectively. We also thank G. Dhalenne, A. Revcolevschi, A. Forget and D. Colson, who provided the single crystal and powdered samples. R. M. thanks S. Sondhi and O. Tchernyshyov for collaboration on related work. He was supported in part by the Ministère de la Recherche with an ACI grant.
Table 1. Orientation of the magnetic moments in one tetrahedron in the pressure induced state of Fig. 1, deduced from the refinement of the magnetic structure. The stress component is along [01 1]. The atomic coordinates x, y, z are expressed in the cubic unit cell containing 4 tetrahedra. Two tetrahedra are identical and two have reversed spin orientations.
Fig. 1. Tb2Ti2O7: an antiferromagnetic ordered state with k=(1,0,0) propagation vector is induced under pressure. Here an isotropic pressure Pi = 2.0 GPa is combined with uniaxial pressure Pu = 0.3 GPa along the [011] axis. The variation of the peak intensity of the magnetic Bragg peak (120) shows the Néel temperature. The local spin structure in a tetrahedron has non compensated magnetization.
Fig. 2. Tb2Ti2O7: a mesoscopic order is induced by spontaneous strains at very low temperature. Magnetic neutron diffraction spectra at 0.07 K, showing broad peaks close to the positions of the (100) magnetic peak and the secondary magnetic phase (S) of the pressure-induced state [14]. The spectrum of the spin liquid regime at 1.2 K has been subtracted. The incident neutron wavelength is 2.52 Å. The broad magnetic peaks disappear at about 0.3 K.
Fig. 4. Tetrahedron under uniaxial stress, denoted by the arrows. Left (right) panel: stress applied in the [100] ([110]) direction. This splits the six bonds into the following symmetry-inequivalent groups: the bond along the [110] direction (bottom), the one perpendicular to it (top) - which remain equivalent for [100] but not for [110] stress - and the four remaining ones.
); reviews of exact diagonalizations and experiments, respectively, are. C Lhuillier, P Sindzingre, J.-B Fouet, ; ) , P Schiffer, A P Ramirez, Comments Cond. Mat. Phys. 791525Can. J. Phys.For an introduction to frustrated magnets, see R. Moessner, Can. J. Phys. 79, 1283, (2001); reviews of exact diagonalizations and experiments, respectively, are C. Lhuillier, P. Sindzingre and J.-B. Fouet, Can. J. Phys. 79, 1525, (2001) and P. Schiffer and A. P. Ramirez, Comments Cond. Mat. Phys. 18, 21, (1996).
. Y Taguchi, Y Oohara, H Yoshisawa, N Nagaosa, Y , Tokura Science. 2912573Y. Taguchi, Y. Oohara, H. Yoshisawa, N. Nagaosa, Y. Tokura Science 291, 2573 (2001).
. K Takada, Nature. 422K. Takada et al. Nature 422, 53, (2003).
. G R Blake, Phys. Rev. B. 71214402G. R. Blake et al., Phys. Rev. B. 71, 214402, (2005).
. J. Villain Z. Phys. B. 3331J. Villain Z. Phys. B 33, 31; (1979).
. M J Harris, Phys. Rev. Lett. 792554M. J. Harris et al., Phys. Rev. Lett. 79, 2554 (1997).
. A P Ramirez, R J Hayashi, R Cava, B S Siddhartan, Shastry, Nature. 399A. P. Ramirez, A Hayashi, R. J. Cava, R. Siddhartan, B.S. Shastry Nature, 399, 333, (1999).
. J Villain, R Bidaux, J P Carton, R Coute, J. Phys. 1263J. Villain, R. Bidaux, J. P. Carton, R. Coute J. Phys. (Paris) 41, 1263, (1980);
. E F , Shender Sov. Phys. JETP. 56E. F. Shender Sov. Phys. JETP 56, 178, (1982).
. A G Gukasov, Europhys. Lett. 792554A. G. Gukasov et al., Europhys. Lett. 79, 2554 (1997);
. J D M Champion, Phys. Rev. B. 6820401J. D. M. Champion et al., Phys. Rev. B 68, 020401 R (2003).
. J S Gardner, Phys. Rev. Lett. 821012J. S. Gardner et al. Phys. Rev. Lett. 82, 1012, (1999).
. Y Yasui, J. Phys. Soc. Jpn. 71Y. Yasui et al., J. Phys. Soc. Jpn. 71, 599, (2002).
. N Hamaguchi, T Matsushita, N Wada, Y Yasui, S Masatoshi, Phys. Rev. B. 69132413N. Hamaguchi, T. Matsushita, N. Wada, Y. Yasui and S. Masatoshi, Phys. Rev. B 69, 132413, (2004).
. I N Goncharenko, High Pressure Res. 24I. N. Goncharenko, High Pressure Res. 24,193,(2004).
. I Mirebeau, I N Goncharenko, P Cadavez-Peres, S T Bramwell, M J P Gingras, J S Gardner, Nature. 420I. Mirebeau, I.N. Goncharenko, P.Cadavez-Peres, S. T. Bramwell, M.J.P. Gingras and J. S. Gardner Nature 420, 54, (2002).
. I Mirebeau, I N Goncharenko, G Dhalenne, A Revcolevschi, Phys. Rev. Lett. 93187204I. Mirebeau, I. N. Goncharenko, G. Dhalenne, A. Revcolevschi, Phys. Rev. Lett. 93, 187204, (2004);
. I Mirebeau, I Goncharenko, J. Phys. Cond. Mat. 17I. Mirebeau and I. Goncharenko J. Phys. Cond. Mat. 17, S771, (2005).
. I Mirebeau, Phys. Rev. Lett. 94246402I. Mirebeau et al. Phys. Rev. Lett. 94, 246402, (2005).
. S T Bramwell, M J P Gingras, Science. 294S. T. Bramwell and M. J. P. Gingras Science 294, 14, (2001).
. K Matsuhira, J. Phys. Soc. Jpn. 711576K. Matsuhira et al. J. Phys. Soc. Jpn. 71,1576,(2002).
. J D M Champion, S T Bramwell, P C W Holdsworth, M J Harris, Europhys. Lett. 5793J. D. M. Champion, S. T. Bramwell, P. C. W. Holdsworth and M. J. Harris, Europhys. Lett. 57, 93 (2002).
. Y Yamashita, K Ueda, Phys. Rev. Lett. 854960Y. Yamashita and K. Ueda, Phys. Rev. Lett. 85, 4960 (2000);
. O Tchernyshyov, R Moessner, S L Sondhi, Phys. Rev. Lett. 8867203O. Tchernyshyov, R. Moessner, S. L. Sondhi, Phys. Rev. Lett. 88, 067203 (2002).
. M J P Gingras, Phys. Rev. B. 626496M. J. P. Gingras et al. Phys. Rev. B 62, 6496, (2000).
. Y Kao, M Enjalran, A Maestro, H Molavian, M J P Gingras, Phys. Rev. B. 68172407Y. Kao, M. Enjalran, A. Del Maestro, H. Molavian and M. J. P. Gingras Phys. Rev. B 68, 172407, (2002);
. M Enjalran, M J P Gingras, Phys. Rev. B. 70174426M. Enjalran and M. J. P. Gingras Phys. Rev. B 70, 174426, (2004).
. B C Hertog, M J P Gingras, Phys. Rev. Lett. 843430B. C. Den Hertog and M. J. P. Gingras Phys. Rev. Lett. 84, 3430, (2000).
| []
|
[
"Text2Chart: A Multi-Staged Chart Generator from Natural Language Text",
"Text2Chart: A Multi-Staged Chart Generator from Natural Language Text"
]
| [
"Md Mahinur Rashid \nDepartment of Computer Science and Enginnering\nUnited International University\n\n",
"Hasin Kawsar Jahan \nDepartment of Computer Science and Enginnering\nUnited International University\n\n",
"RiyasaatAnnysha Huzzat \nDepartment of Computer Science and Enginnering\nUnited International University\n\n",
"Ahmed Rahul \nDepartment of Computer Science and Enginnering\nUnited International University\n\n",
"Tamim Bin Zakir \nDepartment of Computer Science and Enginnering\nUnited International University\n\n",
"Md. SaddamFarhana Meem \nDepartment of Computer Science and Enginnering\nUnited International University\n\n",
"Hossain Mukta \nDepartment of Computer Science and Enginnering\nUnited International University\n\n",
"Swakkhar Shatabda [email protected] \nDepartment of Computer Science and Enginnering\nUnited International University\n\n"
]
| [
"Department of Computer Science and Enginnering\nUnited International University\n",
"Department of Computer Science and Enginnering\nUnited International University\n",
"Department of Computer Science and Enginnering\nUnited International University\n",
"Department of Computer Science and Enginnering\nUnited International University\n",
"Department of Computer Science and Enginnering\nUnited International University\n",
"Department of Computer Science and Enginnering\nUnited International University\n",
"Department of Computer Science and Enginnering\nUnited International University\n",
"Department of Computer Science and Enginnering\nUnited International University\n"
]
| []
| Generation of scientific visualization from analytical natural language text is a challenging task. In this paper, we propose Text2Chart, a multi-staged chart generator method.Text2Chart takes natural language text as input and produce visualization as two-dimensional charts. Text2Chart approaches the problem in three stages. Firstly, it identifies the axis elements of a chart from the given text known as x and y entities. Then it finds a mapping of x-entities with its corresponding y-entities. Next, it generates a chart type suitable for the given text: bar, line or pie. Combination of these three stages is capable of generating visualization from the given analytical text. We have also constructed a dataset for this problem. Experiments show that Text2Chart achieves best performances with BERT based encodings with LSTM models in the first stage to label x and y entities, RandomForest classifier for the mapping stage and fastText embedding with LSTM for the chart type prediction. In our experiments, all the stages show satisfactory results and effectiveness considering formation of charts from analytical text, achieving a commendable overall performance. | 10.1007/978-3-031-05936-0_1 | [
"https://arxiv.org/pdf/2104.04584v1.pdf"
]
| 233,210,334 | 2104.04584 | 14fe90984fb5a046ce7cfc51f334cd4a084b106f |
Text2Chart: A Multi-Staged Chart Generator from Natural Language Text
Md Mahinur Rashid
Department of Computer Science and Enginnering
United International University
Hasin Kawsar Jahan
Department of Computer Science and Enginnering
United International University
RiyasaatAnnysha Huzzat
Department of Computer Science and Enginnering
United International University
Ahmed Rahul
Department of Computer Science and Enginnering
United International University
Tamim Bin Zakir
Department of Computer Science and Enginnering
United International University
Md. SaddamFarhana Meem
Department of Computer Science and Enginnering
United International University
Hossain Mukta
Department of Computer Science and Enginnering
United International University
Swakkhar Shatabda [email protected]
Department of Computer Science and Enginnering
United International University
Text2Chart: A Multi-Staged Chart Generator from Natural Language Text
arXiv:2104.04584v1 [cs.CL] 9 Apr 2021.
Keywords: Chart Generation, Natural Language Processing, Information Retrieval, Neural Network, Automated Visualization.
* {mrashid171045, hjahan171054, ahuzzat171034, rrahul171089, tzakir171032
Generation of scientific visualization from analytical natural language text is a challenging task. In this paper, we propose Text2Chart, a multi-staged chart generator method.Text2Chart takes natural language text as input and produce visualization as two-dimensional charts. Text2Chart approaches the problem in three stages. Firstly, it identifies the axis elements of a chart from the given text known as x and y entities. Then it finds a mapping of x-entities with its corresponding y-entities. Next, it generates a chart type suitable for the given text: bar, line or pie. Combination of these three stages is capable of generating visualization from the given analytical text. We have also constructed a dataset for this problem. Experiments show that Text2Chart achieves best performances with BERT based encodings with LSTM models in the first stage to label x and y entities, RandomForest classifier for the mapping stage and fastText embedding with LSTM for the chart type prediction. In our experiments, all the stages show satisfactory results and effectiveness considering formation of charts from analytical text, achieving a commendable overall performance.
Introduction
In recent years, advances in Natural Language Processing (NLP) have made huge progress to extract information from natural language texts. Among them a few example tasks are: document summarization [1], title or caption generation from texts, generating textual description of charts [2], named entity recognition [3], etc. There has been several attempts to generate graphs or structural elements from natural language texts or free texts [4,5,6,7]. Scientific charts (bar, line, pie, etc.) are visualizations that are often used in communication. However, automated generation of charts from natural language text always has been a challenging task.
There are very few works in the literature addressing the exact problem of scientific chart generation from natural language text [8,9]. In [8], the authors have presented a infographics generation technique from natural language statements. However, their method is limited to single entity generation only. Text2Chart extends it to multiple entity generation and thus can generate more complex charts. Nevertheless, Generative Pre-trained Transformer 3 (GPT-3) [9] has been a recent popular phenomenon in the field of deep learning. OpenAI has designed this third-generation language model that is trained using neural networks. To the best of our knowledge, there has been an attempt to make a simple chart building tool using GPT-3. As its implementation is not accessible yet, the field of information extraction regarding chart creation can be still considered unexplored to some extent. Moreover, the datasets used in GPT-3 is a very large one and the training is too expensive.
In this paper, we introduce Text2Chart, a multi-staged technique that generates charts from analytical natural language text. Text2Chart works in a combination of three stages. In the first stage, it recognizes x-axis and y-axis entities from the input text. In the second stage, it maps x-axis entities with their corresponding y-axis entities, and in the third stage, it predicts the best suited chart type for the particular text input. Text2Chart is limited to three types of charts: bar chart, line chart and pie chart. Tasks in each stage are formulated as supervised learning problems. We have created our own dataset required for the problem. Our dataset is labelled for all three stages of Text2Chart. The dataset is divided into train, validation and test sets. We have used a wide range of evaluation metrics for all the three stages. The experimental results show that the best results in the first stage are obtained using BERT embedding and Bidirectional LSTM, achieving an F1-score of 0.83 for x-entity recognition and 0.97 for y-entity recognition in the test set. In the mapping stage, Random Forest achieves best results of 0.917 Area under Receiver Operating Characteristic Curve (auROC) in the test set. In the third stage, the model fastText with LSTM layers performs the best to predict the suitable chart type. We have observed that bar charts are suitable for all the texts in our dataset; thus the problem is a multilabel classification problem, and the second label prediction task is a binary classification task to distinguish between line chart and pie chart.
Here, Text2Chart achieves best results of auROC 0.64 for pie charts and auROC 0.91 for line charts. The experimental analysis of each stage, and of the stages in combination, shows the overall effective performance of Text2Chart for generating charts from given natural language text. The rest of the paper is organized as follows: Section 2 presents a brief literature review of the field and related work; Section 3 presents the details of the methodology of Text2Chart; Section 4 presents the experimental analysis and discussion on the results; and the paper concludes with brief remarks on the limitations and future work in Section 5.
Related Work
Recent developments in the field of NLP is advancing information extraction in general. One of the first and foremost steps in NLP is the proper vectorization of the input corpora. One of the breakthrough in this area is word2vec proposed in [10]. Word2Vec maps words with similar meaning to adjacent points in a vector space. The embedding is learnt using a neural network on continuous bag of words or skip-gram model. A character-level word embedding is proposed in [11]. Recently, Bidirectional Encoder Representations from Transformers (BERT) is proposed in [12]. BERT is trained on a large corpora and enables pre-trained models to be applicable to transfer learning to a vast area of research. BERT has been successfully applied to solve problems like Named Entity Recognition (NER) [3], text summarization [1], etc.
Text based information processing has been a long quest in the field [13]. Kobayashi et el. [13] have presented a NLP based modelling for line charts. A Hidden Markov Model based chart (bar, line, etc) recognition method is proposed in [14]. Graph neural networks have been employed in [5] to generate logical forms with entities from free text using BERT. In a very recent work [6], Obeid et al. have used transformer based models for text generation from charts. For this work, they have also constructed a large dataset extracting charts from Statista. However, their work focuses on chart summarization and hence called 'Chart-to-Text'. In an earlier work [15], authors have proposed a method for generating ground truth for chart images. Both of the works are limited to bar charts and line charts only. A Generative Adversarial Network, AttnGAN is proposed in [16] that can generate images from text descriptions. Balaji et al. [2] has proposed an automatic chart description generator. CycleGT has been proposed recently that works on both directions: text to graphs and graphs to text [7]. Kim et al. [17] has proposed a pipeline to generate automatic question answering system based on charts.
Automated visualization has always been a very fascinating area. A survey of Machine
Learning based visualization methods has been presented in [18]. Deep Eye is proposed in [19] to identify best visualizations from pie chart, bar chart, line chart and scatter chart for a given data pattern. A system for automated E-R diagram generation by detecting different entities from natural language text is presented in [4]. 'Text-to-Viz' is proposed in [8] that generates excellent infographics from given text. However, their method is limited to single entity only. GPT-3 [9] has been a recent phenomenon in the field which has been reported to generate charts from natural language texts. However, GPT-3 implementation is not open yet.
Moreover, it is trained on an extremely large corpus, and such an extremely large transformer based model requires huge resources. In light of the review of the existing methods, we believe there is a significant research gap to be addressed in this area.
Our Method
Text2Chart consists of three stages as shown in Fig. 1. It takes a free text as input containing the analytical information. Then it produces x and y axis entities followed by a mapping generation among these elements. In stage 3, the chart type is predicted. A combination of these three are then passed on to the chart generation module. This section presents the detailed procedure of these stages.
Stage 1: x-Axis and y-Axis Label Entity Recognition
In the first stage of our technique, we identify the potential candidate words for both x-axis and y-axis entities of a two dimensional chart. We have formulated the problem as a supervised machine learning task. Here, input to the problem is a paragraph or natural language text and output is a list of words labelled as x-entity and y-entity. The rest of the words or tokens in the text are ignored.
To identify x-entity and y-entity, we build a neural network with different word embeddings and sequence representations. We have employed and experimented with two different strategies -i) detecting both types of entities at once and ii) using a separate models for recognizing x and y entities. Detecting both x and y entities at once shows a drawback as there lies a possibility that a certain type of entity may outperform the loss function of the other types as observed in the experiments (Section 4.3).
Since the two types of entity require a different level of skill set, we have observed that the task of recognizing x entity is far more difficult than recognizing y entity and recognizing x entity requires understanding the samples more deeply than it is required for y entity recognition.
Moreover, the sample space of x entities is much larger than that of y entities. Therefore, we use separate models for recognizing x and y entities. This latter approach outperforms the former one, as we can see in the result section (Section 4.3).
We have experimented both of the strategies using word embedding like Word2Vec [10], fastText [11] and the sequence output of the pre-trained model provided by BERT [12]. For each sample text in the dataset, we take the generated embedding and use it as an input to our model. Then we use layers of Bi-directional LSTM networks. On top of that, we use the time-distribution layer and dense layer to classify each word index that falls into a category of a respected entity or not. The proposed architecture for the first stage of Text2Chart is shown in Fig. 2. Note that the last layer of softmax labels each token either as x-entity, y-entity or none.
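As a concrete (and necessarily approximate) illustration of this architecture, the sketch below builds such a tagger in tf.keras. The sequence length, embedding dimension and hidden sizes are our own illustrative choices, not the authors' settings, and the pre-computed Word2Vec/fastText/BERT vectors are assumed to be supplied as a fixed-length sequence of embedding vectors:

import tensorflow as tf
from tensorflow.keras import layers

MAX_LEN, EMB_DIM = 250, 300   # illustrative values only

def build_tagger(n_labels=3):
    # One softmax label per token: x-entity, y-entity, or none.
    inputs = tf.keras.Input(shape=(MAX_LEN, EMB_DIM))      # pre-computed word embeddings
    h = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(inputs)
    h = layers.TimeDistributed(layers.Dense(64, activation="relu"))(h)
    outputs = layers.TimeDistributed(layers.Dense(n_labels, activation="softmax"))(h)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

tagger = build_tagger()
tagger.summary()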
Stage 2: Mapping of x and y Label Entities
After identifying the x and y entities in Stage 1, we map each of the identified x entity with its corresponding y entity. While inspecting the data samples we build our first intuition that
x entities and y entities may not appear in a text sample in the same sequential order in which they are mapped to each other, irrespective of their entity type.
For example, if we have an x entity set for a text as {x 1 , x 2 , · · · , x M } and y entity set of that text is {y 1 , y 2 , · · · , y N } and their mapping is as follows
{(x 1 , φ(x 1 )), (x 2 , φ(x 2 )), · · · , (x M , φ(x M ))}.
Please note, here x i , y j denote their positions in the sequence. Here the mapping function φ(x i ) maps an entity x i to another entity y k . However, it is often found that the entity set lengths are not the same (M ≠ N) and often the sequential order is not maintained. For two x entities
x i , x j if they maps to y k , y l , then a sequential mapping φ guarantees, i ≤ j, k ≤ l whereas the non-sequential mapping will not guarantee that. However, in our observation, non-sequential mapping is not that frequent. In order to address these issues, we propose that the mapping is dependent on the distances between the corresponding entities. We call it our baseline model for this task. From the training dataset, we learn the probability distribution for positive and negative likelihood for distances between x and y entities which are P (d(
x i , y k )|φ(x i ) = y k )
and P (d(x i , y k ) | φ(x i ) ≠ y k ) respectively. For the missing values in the range, nearest neighbor smoothing is used to estimate the likelihood values, which are then normalized to convert them to a probability distribution. The baseline model defines the mapping as in the following equation:
φ(x i ) = argmax k [ P (d(x i , y k ) | φ(x i ) = y k ) / ( P (d(x i , y k ) | φ(x i ) = y k ) + P (d(x i , y k ) | φ(x i ) ≠ y k ) ) ]    (1)
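A minimal sketch of how the baseline mapping of Eq. (1) can be applied is given below; it assumes that the positive and negative likelihoods have already been estimated (and smoothed) as dictionaries keyed by token distance, and the variable names and distance convention are our own, not the authors':

import numpy as np

def map_x_to_y(x_positions, y_positions, p_pos, p_neg):
    """For each x entity position, pick the y entity maximising Eq. (1).

    p_pos / p_neg: dicts mapping a token distance d(x, y) to the estimated
    likelihoods P(d | mapped) and P(d | not mapped).
    """
    mapping = {}
    for xi in x_positions:
        scores = []
        for yk in y_positions:
            d = abs(xi - yk)
            pos, neg = p_pos.get(d, 1e-6), p_neg.get(d, 1e-6)
            scores.append(pos / (pos + neg))
        mapping[xi] = y_positions[int(np.argmax(scores))]
    return mapping

# toy example: x entities at token positions 3 and 12, y entities at 5 and 15
print(map_x_to_y([3, 12], [5, 15],
                 p_pos={2: 0.4, 3: 0.3, 12: 0.05},
                 p_neg={2: 0.05, 3: 0.1, 12: 0.3}))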
With the initial encouraging results from this simple baseline model (results are shown in Section 4.4), we further extend this and formulate the problem as a supervised learning problem.
Now each pair of entities, (x i , y k ) are converted to a feature vector suitable for supervised learning setting to find that if that pair is mapped or not.
For a particular entity x i and a particular y k entity, we take the two other entities , one immediate before (x i−1 , y k−1 ) and the next one (x i+1 , y k+1 ) to create the feature vector. For 6 such entity positions, we generate 15 possible pairs and take pairwise distances among them.
Note that, for two similar type entities we take unsigned distance and for different entities signed distances are taken to encode their relative positions into the feature vector. With this feature vector, we train two models: SVM and Random Forests, where the latter works slightly better.
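The feature construction described here can be sketched as follows; the padding value for missing neighbours and the exact sign convention are our own assumptions, and the classifier settings are illustrative rather than the authors' configuration:

from itertools import combinations
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pair_features(x_pos, y_pos, i, k, pad=-1):
    """15 pairwise distances among the six positions around a candidate pair.

    Positions used: x_{i-1}, x_i, x_{i+1}, y_{k-1}, y_k, y_{k+1} (token indices);
    missing neighbours are padded with `pad`. Unsigned distances are used for
    same-type pairs and signed ones for mixed (x, y) pairs.
    """
    def get(seq, j):
        return seq[j] if 0 <= j < len(seq) else pad
    pts = [(get(x_pos, i - 1), "x"), (get(x_pos, i), "x"), (get(x_pos, i + 1), "x"),
           (get(y_pos, k - 1), "y"), (get(y_pos, k), "y"), (get(y_pos, k + 1), "y")]
    feats = [abs(a - b) if ta == tb else a - b
             for (a, ta), (b, tb) in combinations(pts, 2)]   # C(6, 2) = 15 values
    return np.array(feats)

# toy usage: x entities at tokens [2, 9, 17], y entities at [4, 11, 20]
print(pair_features([2, 9, 17], [4, 11, 20], i=1, k=1))

# each candidate (x_i, y_k) pair becomes one feature row; label 1 if the pair is mapped
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)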
As this is an argmax based calculation, the probability distribution of Random Forest classifier was more consistent than that of SVM. The reason of the inconsistency of the distribution with the scores in SVC is that, the 'argmax' of the scores may not be the argmax of the probabilities.
Therefore we take the auROC as the primary evaluation metric for this stage. We take the harmonic mean of the auROC of both training and validation so that the measure is balanced and neither outperforms the other.
Stage 3: Chart Type Prediction
While generating a chart, we should be aware that type of a chart depends on the information that is conveyed, and the way it is conveyed. Therefore, this sub-task is defined to predict the seemingly appropriate chart type from a text among the most common ones: bar chart, pie chart, and line chart.
Generally, a bar chart is the most commonly accepted chart type for any statistical data.
However, for better visualization and understanding, pie charts and line charts are also used.
Dataset Construction
When we started this work, no datasets were available for this particular task of automatic chart generation from text. In the first stage, the number of tokens is higher than the number of labels, since a particular x or y entity/label may span more than one token.
Performance Evaluation
As Text2Chart is multi-staged and the tasks and related datasets used in the stages are different in nature, they require several different evaluation metrics suitable for particular stage/task in order to evaluate the performance properly. All the methods are trained using the training set and the performance are validated using the validation set. Only after the final model is selected, the model is tested on the test set.
Axis Label Recognition Task
The first stage of our work is x-axis and y-axis label entity recognition. Here we predict whether a given word from the text input can be an x-axis or y-axis entity. We have experimented with our neural architecture model of bidirectional LSTM combining several embeddings, such as fastText, Word2Vec and BERT in order to recognize these entities. For each of the embedding, we have used two different approaches. In the first approach, x-entity and y entity prediction is considered as separate prediction tasks. Here we have the two models, one for each of the tasks.
In the second approach, they are considered together as a combined prediction task. The results are reported in Table 2. Note that we have reported precision, recall and F1-score for x and y entity predictions. Also the harmonic mean of the F1-scores is reported. Note that the individual approach achieves F1-scores for x and y entities of 0.66 and 0.85 respectively in the validation set, which is improved in the combined approach, being 0.66 and 0.89. It is clear that the prediction or recognition of x axis entities is a much more difficult task compared to y axis entity recognition. Here, we can conclude that both models perform almost similarly, which is also reflected in the harmonic means of the F1-scores, respectively 0.74 and 0.76. From Table 2, we can see that here the combined approach gives F1-scores for the x and y entity recognition tasks of 0.68 and 0.78 respectively, which is almost similar to the performance of the individual approach (0.67 and 0.78 respectively). The performance only differs in the x entity recognition task, which is also observed in the harmonic mean of the F1-scores. Note that the overall performance of the word2vec embedding is significantly worse compared to the fastText embedding. Also note that the higher level of overfitting of the word2vec model is reflected in the high values of precision, recall and F1-score in all the tasks in the training dataset, which is not repeated in validation.
Experiments with fastText embedding
Experiments with BERT embedding
We have also experimented with BERT embeddings on the same architecture proposed in Section 3. The experimental results with the BERT embedding are reported in the third group of four rows of Table 2.
From the results shown there, we can notice that for the BERT embedding the individual approach outperforms the combined approach in x entity prediction.
The results in y entity recognition are almost identical for both of the approaches. Thus both the F1-score of x entity recognition and the harmonic mean are superior in the individual approach, being 0.87 and 0.92 respectively, compared to 0.82 and 0.89 in the combined approach.
To summarize, the results with the BERT embedding are superior to the other two embeddings. The best achieved values are shown in bold faced fonts in the Table. Thus, we take the BERT embedding individual x and y entity prediction approach with bidirectional LSTM as the best performing model among those used in the experiments. With the best model, we have also tested its performance on the test dataset. The results are shown in the last row of Table 2.
Here, it is interesting to note that the learned model is not overfitting, as the performances on the validation set and the test set do not differ much.
Mapping Task
After recognizing the x and y entities with high precision and recall in stage 1, the second stage sets the target to map them in an ordered way. We have first used a model transferred from the best performing model of the first stage to see if that helps. However, the very low F1-score of 0.41 and auROC of 0.64 discouraged us from proceeding further in this way. It is evident that the same architecture is not suitable for the different stages due to the difference in the type of the task.
Note that this task is highly imbalanced, as the number of positive mappings is very small compared to negative mappings. Thus the model often gets biased towards the negative class and might show poor performance on the positive class. The results are presented in Table 3, where we report precision, recall and F1-score for both of the classes and also the auROC. Note that the results of the baseline model are encouraging, with a high auROC of 0.908. However, the positive class performance is poor compared to the negative class, which leaves room for improvement.
Next we have experimented with the supervised learning approach described in Section 3 using Support Vector Machine (SVM) and Random Forest classifiers. In Table 3, we notice that the performance in both of the classes is improved with this approach compared to the baseline model, while the performance in the negative class stays the same. Finally, we have tested the best performing Random Forest model on the test set and the results are shown in the last row of Table 3. We see that the performances on the test set are stable and similar to the validation set. The ROC curves for the training and validation sets of all models are given in Fig. 5.
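A minimal sketch of this supervised mapping step with scikit-learn is given below. The feature construction (only the signed token distance between a candidate x-y pair) and the tiny example arrays are assumptions standing in for whatever features the full system uses.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

# One row per candidate (x entity, y value) pair, e.g. the signed distance
# between their token positions; y is 1 for a true mapping (placeholder data).
X_train = np.array([[2], [3], [-5], [12], [1], [-9]])
y_train = np.array([1, 1, 0, 0, 1, 0])

rf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                            random_state=0).fit(X_train, y_train)
svm = SVC(probability=True, class_weight="balanced",
          random_state=0).fit(X_train, y_train)

# Compare the classifiers by auROC (the training pairs are reused here only
# to keep the sketch self-contained; the paper uses a held-out validation set).
for name, model in [("RandomForest", rf), ("SVM", svm)]:
    scores = model.predict_proba(X_train)[:, 1]
    print(name, roc_auc_score(y_train, scores))
```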
Chart Type Prediction Task
At the third stage, the task is to predict the suitable chart type from the given text. Note that all the texts in the dataset are suitable for a bar chart, and thus we exclude the bar chart from the classification models. We train two separate models: one for the pie chart and another for the line chart.
The architecture of the model is shown in Fig. 3. This model uses fastText embedding with bidirectional LSTM layers. The network architecture and structure are kept the same for both of the classifiers. The neural network has three hidden layers. The first two layers are the LSTM layers with 128 neurons each, followed by a dense layer of 512 neurons. The output layer is a simple sigmoid layer.
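A Keras sketch of one such binary chart-type classifier is given below. The sequence length, the embedding dimension, the dense-layer activation, and the way the pre-computed fastText vectors are fed in are assumptions; only the layer widths (two LSTM layers of 128 units, a dense layer of 512 units, a sigmoid output), the RMSprop optimiser, and the learning rates follow the description in the text.

```python
import tensorflow as tf

MAX_LEN, EMB_DIM = 303, 300  # assumed: maximum token count and fastText dimension

def build_chart_type_classifier(learning_rate):
    # Inputs are pre-computed fastText word vectors for each token.
    inputs = tf.keras.Input(shape=(MAX_LEN, EMB_DIM))
    x = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(128, return_sequences=True))(inputs)
    x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128))(x)
    x = tf.keras.layers.Dense(512, activation="relu")(x)       # activation assumed
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # pie (or line) vs. not
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate),
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])
    return model

pie_model = build_chart_type_classifier(learning_rate=4e-4)   # batch size 128 in the text
line_model = build_chart_type_classifier(learning_rate=1e-3)  # default learning rate, batch size 256
```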
Overall Performance
In order to discuss the overall performance of our work, we have created a pipeline, same as shown in Fig. 1.
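The end-to-end flow can be summarised by the sketch below; the three stage callables are hypothetical names standing in for the trained models described earlier, and the 0.5 decision threshold is an assumption.

```python
def text2chart_pipeline(text, stage1_model, stage2_model, stage3_models):
    """Hypothetical glue code: each stage_* callable wraps a trained model."""
    # Stage 1: recognise candidate x-axis and y-axis entities.
    x_entities, y_entities = stage1_model(text)
    # Stage 2: map x entities to y values; fall back to a 1-to-1 sequential
    # mapping when both sets have the same number of entities (N = M).
    if len(x_entities) == len(y_entities):
        pairs = list(zip(x_entities, y_entities))
    else:
        pairs = stage2_model(text, x_entities, y_entities)
    # Stage 3: a bar chart is always allowed; add pie/line when their
    # binary classifiers fire.
    chart_types = ["bar"]
    for name, clf in stage3_models.items():   # e.g. {"pie": ..., "line": ...}
        if clf(text) >= 0.5:
            chart_types.append(name)
    return pairs, chart_types
```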
Conclusion
In this paper we have presented Text2Chart, an automatic multi-staged technique that is able to generate charts from human written analytical text. Our technique has been tested on a dataset curated for this task. Despite the small size of the corpus, Text2Chart provides satisfactory results in every stage of automatic chart generation. One of the limitations of our work is the size of the dataset. With a larger dataset, we believe the methodology presented in this paper will provide further improved results. Text2Chart is currently limited to the prediction of only three basic chart types: bar charts, pie charts and line charts. It is possible to extend it to further types. Recently a dataset for chart-to-text has been proposed in [6]; it is possible to use that dataset for the reverse problem as well. We believe it is possible to tune and experiment further with more suitable types of neural architecture for all the stages to improve the overall accuracy.
We have used different combinations of word embeddings like word2vec, fastText and Bidirectional Encoder Representations from Transformers (BERT) with several classifiers or models like bidirectional Long Short Term Memory (LSTM) networks, feed forward neural networks, Support Vector Machines and Random Forest.
Figure 1: The overall methodology of Text2Chart.
Figure 2: Proposed Neural Architecture for Recognition of x-axis and y-axis Entities.
The details of the architecture and experiments are given in Section 4.5.

Experimental Analysis

Text2Chart is implemented using Tensorflow version 2.3. All the experiments have run using Google Colab and the cloud GPU provided with it. The hardware environment of our work has required a CPU of 2.3 GHz, GPU 12 GB, RAM 12.72 GB and Disk of 107 GB. All the experiments have run at least 5 times with different random seeds and only the average results are reported in this section. Source codes and the dataset of Text2Chart will be made available via a public repository (at the time of publication). In the rest of this section, first we describe the process of dataset construction in Section 4.1, then the performance evaluation methods and metrics are presented in Section 4.2. The detailed experimental results of the three stages and the overall performance analysis are next presented in the respective sections.
Figure 3: Proposed Neural Network Architecture for Chart Type Prediction.
For the axis entity recognition task in the first stage, we adopt the F1-score and its variant, the harmonic mean of F1-scores. We observe the Receiver Operating Characteristic (ROC) curve and the area under the curve (auROC) in order to summarize and compare the performances of the classifiers in the second stage of entity mapping. Finally, for chart type prediction, we adopt the Matthews Correlation Coefficient (MCC) evaluation metric, MCC being a more reliable statistical rate than F1-score and accuracy in binary classification evaluation for imbalanced datasets [20].
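For reference, all of these metrics are available in scikit-learn; the snippet below shows how the per-stage scores quoted in the tables can be reproduced from predictions. The arrays and the example F1 values are placeholders, not values from the experiments.

```python
from sklearn.metrics import f1_score, matthews_corrcoef, roc_auc_score

# Placeholder predictions for a single binary task.
y_true, y_pred = [1, 0, 1, 1, 0], [1, 0, 0, 1, 0]
y_score = [0.9, 0.2, 0.4, 0.8, 0.1]

f1_x, f1_y = 0.66, 0.85                              # example stage-1 per-task F1 scores
harmonic_f1 = 2 * f1_x * f1_y / (f1_x + f1_y)        # stage-1 summary metric

print("F1          :", f1_score(y_true, y_pred))
print("harmonic F1 :", round(harmonic_f1, 2))
print("MCC         :", matthews_corrcoef(y_true, y_pred))   # stage-3 metric
print("auROC       :", roc_auc_score(y_true, y_score))      # stage-2 metric
```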
For both of the approaches using fastText (individual and combined), we have used a neural architecture with 4 hidden layers and a dense output layer. The first two hidden layers consist of bidirectional LSTM layers of 512 neurons and 128 neurons, followed by a time distributed dense layer of 64 neurons and a dense hidden layer with 1024 neurons. Epoch and batch size are kept fixed at 8 for all the models considered here. The experimental results of the fastText experiments are given in the first four rows of Table 2.
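A Keras sketch of this token tagger is given below. The input representation (pre-computed fastText vectors), the sequence length, the activations, the optimiser, and the output encoding are assumptions; the layer widths follow the description above.

```python
import tensorflow as tf

MAX_LEN, EMB_DIM = 303, 300  # assumed maximum token count and fastText dimension

def build_axis_entity_tagger(n_tags=2):
    # n_tags = 2 for the individual approach (entity / not entity);
    # a combined x+y tagger would use a larger tag set.
    inputs = tf.keras.Input(shape=(MAX_LEN, EMB_DIM))
    x = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(512, return_sequences=True))(inputs)
    x = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(128, return_sequences=True))(x)
    x = tf.keras.layers.TimeDistributed(
        tf.keras.layers.Dense(64, activation="relu"))(x)
    x = tf.keras.layers.Dense(1024, activation="relu")(x)
    outputs = tf.keras.layers.TimeDistributed(
        tf.keras.layers.Dense(n_tags, activation="softmax"))(x)  # one label per token
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model

model = build_axis_entity_tagger()
# model.fit(X, y, batch_size=8, epochs=8)  # batch size and epochs as in the text
```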
However, in these experiments the network structure is different, with the same number of layers. Here too we have used two approaches: individual and combined. In the individual approach, the first two hidden layers of the neural architecture are bidirectional LSTMs with 1024 neurons each, followed by a time distributed dense layer with 1024 neurons and a dense layer with 256 neurons. In the case of x entity recognition, we have used a batch size of 2 and 80 epochs for training. In the case of y entity recognition, the batch size was 8. In the combined approach, the architecture differs only in the last hidden dense layer, where the number of neurons is 1024. We have used online training for this combined approach.
Figure 4: Distribution of positive and negative likelihood frequencies of the entity pairs over their distances; here positive and negative distances denote the sequential order of positions of the entities in the text.
The baseline model that we try here is based on the probability distribution of the positive and negative likelihoods of the mapped entities (Fig. 4). Note that there is an overlapping area between positive and negative occurrences over the distances. Also note that most of the mappings are at relatively short distances or proximities. Based on that, our baseline model is a simple argmax calculation of the likelihood based on Eq. (1). The results of the baseline model are presented in the first four rows of Table 3.
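A minimal sketch of such a likelihood baseline is given below. The exact form of Eq. (1) is not reproduced here, so the empirical-frequency estimate of the positive likelihood over signed distances is an assumption standing in for it.

```python
from collections import Counter

def fit_distance_likelihood(positive_distances):
    # Empirical frequency of true mappings as a function of the signed
    # token distance between an x entity and a y value.
    counts = Counter(positive_distances)
    total = sum(counts.values())
    return {d: c / total for d, c in counts.items()}

def map_entities(x_positions, y_positions, likelihood):
    # For every x entity pick the y value whose signed distance has the
    # highest positive likelihood (the argmax of the baseline model).
    pairs = []
    for xi, xpos in enumerate(x_positions):
        best = max(range(len(y_positions)),
                   key=lambda yi: likelihood.get(y_positions[yi] - xpos, 0.0))
        pairs.append((xi, best))
    return pairs
```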
However, the F1-score of the Random Forest classifier is slightly lower for the positive class, which is not that significant (0.77 vs 0.78). The fact is evident in the auROC, where we see a significant improvement achieved by the Random Forest classifier compared to SVM. The best values are shown in bold faced font in the table. Thus we conclude that Random Forest is the best performing model for stage 2.

Figure 5: Receiver Operating Characteristic Curves for training set performances of (a) Probability Model, (b) Support Vector Machines, (c) Random Forest, and for the validation set of (d) Probability Model, (e) Support Vector Machines, and (f) Random Forest.
Our pipeline merges all the stages of our work and outputs the results we have already discussed and shown in this section. After obtaining the final results, we have checked for all possible errors occurring after the completion of each stage. After completing stage 1, if both of the entity sets have the same number of entities (N = M) then we consider a 1-to-1 sequential mapping. The cumulative frequency of the error count for each of the stages is shown in Fig. 7. This plot shows how each stage cumulatively produces error in the pipeline. However, we notice that although we have a good number of samples without error, there is room to improve and, as shown in the figure, the most error-prone task is task 3 due to the poor performance in pie chart type prediction. We also show one partially correct and one fully correct chart example generated by Text2Chart in Table 5.

Figure 6: Receiver Operating Characteristic Curves for line chart classification in (a) training set, (b) validation set, (c) test set, and those of pie chart classification in (d) training set, (e) validation set and (f) test set.
Figure 7: Cumulative frequency of error of the three stages put in a pipeline on the test set.
Pie charts are suitable if the entities conform to a collection / composition. Line charts are suitable for the cases where the entities themselves form a continuous domain. For this stage, we have applied fastText word embeddings to build two models with LSTM layers and dense layers. Each model performs binary classification; one is to predict if a pie chart is suited for the text or not, and the other is for the line chart. When neither of these two chart types is fitting, only a bar chart is assigned to the text. The proposed architecture for stage 3 is shown in Fig. 3.
Text2Chart requires a specific dataset from which the text samples are suitable for recognizing the chart information. Here chart information refers to the x-axis entities and the corresponding y-axis values. The text samples must contain all these entities to construct the particular chart. We have collected text samples from Wikipedia, other statistical websites and crowd sourcing. We have used crowd sourcing to label the data so that the texts are labelled for all three stages. All the labelled data are then crosschecked by a team of volunteers and only the consensus labels are taken. In total, 717 text samples are taken in the final dataset with 30,027 words/tokens. The average length of the text samples is 53 words and the maximum length is 303 words in a single text. This final dataset is then split into the train, validation and test sets containing 464, 116 and 137 samples respectively. A summary of the dataset is shown in Table 1. Please note that in the first stage the token number is higher than the number of labels, since a particular x or y entity/label might consist of two words or tokens.
Table 1: Summary of datasets used in the experiments.

dataset      text samples   x tokens   y tokens   x labels   y labels   pairs   pie   line
Training     464            3411       3614       1984       1909       1984    73    58
Validation   116            985        1058       548        529        548     20    11
Test         137            988        1075       574        561        574     20    15

(The x/y token and label counts refer to the x, y entity prediction task, the pairs to the mapping task, and the pie/line counts to the chart type task.)
All the texts are labelled to be suitable for bar charts, and only the statistics for pie and line charts are shown in the table.
Table 2: Experimental results for the axis label prediction task in the first stage of Text2Chart.

model                 dataset      P(x)   R(x)   P(y)   R(y)   F1(x)   F1(y)   Harmonic F1
fastText individual   training     0.81   0.80   0.93   0.88   0.80    0.90    0.84
fastText individual   validation   0.68   0.64   0.89   0.81   0.66    0.85    0.74
fastText combined     training     0.81   0.73   0.89   0.97   0.77    0.93    0.84
fastText combined     validation   0.73   0.60   0.86   0.93   0.66    0.89    0.76
word2Vec individual   training     0.90   0.88   1.00   1.00   0.89    1.00    0.94
word2Vec individual   validation   0.72   0.62   0.79   0.77   0.67    0.78    0.72
word2Vec combined     training     0.99   0.99   1.00   1.00   0.99    1.00    0.99
word2Vec combined     validation   0.72   0.64   0.83   0.74   0.68    0.78    0.73
BERT individual       training     0.99   0.99   0.99   0.99   0.99    0.99    0.99
BERT individual       validation   0.89   0.86   0.95   0.98   0.87    0.97    0.92
BERT combined         training     0.99   1.00   0.99   1.00   0.99    0.99    0.99
BERT combined         validation   0.86   0.78   0.96   0.97   0.82    0.97    0.89
best                  test         0.85   0.82   0.96   0.98   0.84    0.97    0.89

(P = Precision and R = Recall for the x and y entities; Harmonic F1 is the harmonic mean of F1(x) and F1(y).)

The word2vec embedding represents the word tokens in the corpus by representing the words with common context in close proximity in the vector space as well. Similar to the experiments
Table 3: Experimental results for the mapping task in the second stage.

model           dataset      class     Precision   Recall   F1-score   Harmonic F1-score   auROC
baseline        training     0 (-ve)   0.94        0.94     0.94       0.84                0.908
                             1 (+ve)   0.76        0.76     0.76
baseline        validation   0 (-ve)   0.95        0.95     0.95       0.82                0.914
                             1 (+ve)   0.73        0.73     0.73
SVM             training     0 (-ve)   0.93        0.93     0.93       0.81                0.897
                             1 (+ve)   0.72        0.72     0.72
SVM             validation   0 (-ve)   0.96        0.96     0.96       0.86                0.924
                             1 (+ve)   0.78        0.78     0.78
Random Forest   training     0 (-ve)   0.95        0.95     0.95       0.85                0.913
                             1 (+ve)   0.77        0.77     0.77
Random Forest   validation   0 (-ve)   0.96        0.96     0.96       0.84                0.930
                             1 (+ve)   0.77        0.77     0.77
best            test         0 (-ve)   0.94        0.94     0.94       0.85                0.917
                             1 (+ve)   0.77        0.78     0.77

(The Harmonic F1-score and auROC are reported once per model and dataset, computed over both classes.)
Table 4: Experimental results for the chart type prediction task.

problem      dataset          Specificity   Sensitivity   MCC    auROC
Pie Chart    Training set     0.742         0.944         0.51   0.86
Pie Chart    Validation set   0.6945        0.714         0.32   0.66
Pie Chart    Test set         0.573         0.75          0.22   0.64
Line Chart   Training set     0.9634        0.963         0.96   0.96
Line Chart   Validation set   0.990         0.933         0.92   0.98
Line Chart   Test set         0.893         0.733         0.51   0.91

We have used the RMSprop algorithm to train the models. For pie chart recognition, we set the batch size to 128 and the learning rate to 4e-4. As we have a highly imbalanced dataset, we achieve good enough results in terms of MCC, scoring 0.22 in the test set as shown in Table 4. The obtained auROC for pie charts is 0.64 in the test set. We obtain a better result in terms of recall or sensitivity: 0.94 in the training set, 0.71 in the validation set and 0.75 in the test set. For line charts, we set the batch size to 256 and the learning rate remains at the default of 1e-3. In Table 4, we find outstanding results in terms of auROC: 0.96 in the training set, 0.98 in the validation set and over 0.91 in the test set. The obtained MCC in the train, validation and test sets is 0.96, 0.92 and 0.51 respectively, which is a better score than for the prediction of pie charts. The ROC analysis for both of the tasks is given in Fig. 6.
Table 5: Sample input and outputs of Text2Chart.

Input (sample text 1): Tzuyu is a gaming expert . She surveyed 200 individuals to judge the popularity of the video games among her all time favorites . After her survey she concluded that 25 people voted for World of Warcraft , 46 voted for Black Ops , 12 voted for Overwatch , 25 for Modern Warfare , 30 for PUBG , 50 for Sims and 40 for Assassin ' s Creed .
Output:
  x entities: ['World of Warcraft', 'Black Ops', 'Overwatch', 'Modern Warfare', 'PUBG', 'Sims', 'Assassin', 's Creed']
  y entities: ['25', '12', '25', '30', '50', '50', '40', '40']
  chart type: ['bar']

Input (sample text 2): Mr . Jamal worked in the Meteorological Department for 8 years . He noticed a strange thing in recent times . On certain days of the month , the weather varied strongly . He wrote down the information to make a pattern of the event . The information of the paper is as follows : on the 3rd day of the month the temperature is 36 degrees Celsius , 7th day is 45 degrees Celsius , 9th day is 18 degrees Celsius , 11th day is 21 degrees Celsius , 17th day is 9 degrees Celsius , 19th day is 45 degrees Celsius , 21st day is 36 degrees Celsius , 27th day is 21 degrees Celsius and 29th day is 45 degrees Celsius . He finds a weird pattern in these dates and makes a report and sends it to his senior officer .
Output:
  x entities: ['3rd day', '7th day', '9th day', '11th day', '17th day', '19th day', '21st day', '27th day', '29th day']
  y entities: ['36', '45', '18', '21', '9', '45', '36', '21', '45']
  chart type: ['bar', 'Line']
References

[1] Yang Liu and Mirella Lapata. Text summarization with pretrained encoders. arXiv preprint arXiv:1908.08345, 2019.
[2] A. Balaji, T. Ramanathan, and V. Sonathi. Chart-text: A fully automated chart image descriptor. arXiv preprint arXiv:1812.10636, 2018.
[3] Erik F. Sang and Fien De Meulder. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. arXiv preprint cs/0306050, 2003.
[4] Sutirtha Ghosh, Prasenjit Mukherjee, Baisakhi Chakraborty, and Rezaul Bashar. Automated generation of ER diagram from a given text in natural language. In 2018 International Conference on Machine Learning and Data Engineering (iCMLDE), pages 91-96. IEEE, 2018.
[5] Peter Shaw, Philip Massey, Angelica Chen, Francesco Piccinno, and Yasemin Altun. Generating logical forms from graph representations of text and entities. arXiv preprint arXiv:1905.08407, 2019.
[6] Jason Obeid and Enamul Hoque. Chart-to-text: Generating natural language descriptions for charts by adapting the transformer model. arXiv preprint arXiv:2010.09142, 2020.
[7] Qipeng Guo, Zhijing Jin, Xipeng Qiu, Weinan Zhang, David Wipf, and Zheng Zhang. CycleGT: Unsupervised graph-to-text and text-to-graph generation via cycle training. arXiv preprint arXiv:2006.04702, 2020.
[8] Weiwei Cui, Xiaoyu Zhang, Yun Wang, He Huang, Bei Chen, Lei Fang, Haidong Zhang, Jian-Guan Lou, and Dongmei Zhang. Text-to-viz: Automatic generation of infographics from proportion-related natural language statements. IEEE Transactions on Visualization and Computer Graphics, 26(1):906-916, 2019.
[9] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.
[10] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
[11] Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146, 2017.
[12] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
[13] Ichiro Kobayashi. Toward text based information processing: with an example of natural language modeling of a line chart. In IEEE SMC'99 Conference Proceedings. 1999 IEEE International Conference on Systems, Man, and Cybernetics (Cat. No. 99CH37028), volume 5, pages 202-207. IEEE, 1999.
[14] Y. P. Zhou and Chew Lim Tan. Learning-based scientific chart recognition. In 4th IAPR International Workshop on Graphics Recognition, GREC, pages 482-492. Citeseer, 2001.
[15] Weihua Huang, Chew Lim Tan, and Jiuzhou Zhao. Generating ground truthed dataset of chart images: Automatic or semi-automatic? In International Workshop on Graphics Recognition, pages 266-277. Springer, 2007.
[16] Tao Xu, Pengchuan Zhang, Qiuyuan Huang, Han Zhang, Zhe Gan, Xiaolei Huang, and Xiaodong He. AttnGAN: Fine-grained text to image generation with attentional generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1316-1324, 2018.
[17] Dae Hyun Kim, Enamul Hoque, and Maneesh Agrawala. Answering questions about charts and generating visual explanations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pages 1-13, 2020.
[18] Qianwen Wang, Zhutian Chen, Yong Wang, and Huamin Qu. Applying machine learning advances to data visualization: A survey on ML4VIS. arXiv preprint arXiv:2012.00467, 2020.
[19] Yuyu Luo, Xuedi Qin, Nan Tang, and Guoliang Li. DeepEye: Towards automatic data visualization. In 2018 IEEE 34th International Conference on Data Engineering (ICDE), pages 101-112. IEEE, 2018.
[20] Davide Chicco and Giuseppe Jurman. The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation. BMC Genomics, 21(1):1-13, 2020.
| []
|
[
"ALMA [CI] 3 P 1 − 3 P 0 observations of NGC 6240: a puzzling molecular outflow, and the role of outflows in the global α CO factor of (U)LIRGs",
"ALMA [CI] 3 P 1 − 3 P 0 observations of NGC 6240: a puzzling molecular outflow, and the role of outflows in the global α CO factor of (U)LIRGs"
]
| [
"Claudia Cicone [email protected] \nINAF -Osservatorio Astronomico di Brera\nVia Brera 2820121MilanoItaly\n",
"Paola Severgnini \nINAF -Osservatorio Astronomico di Brera\nVia Brera 2820121MilanoItaly\n",
"Padelis P Papadopoulos \nDepartment of Physics\nSection of Astrophysics, Astronomy and Mechanics\nAristotle University of Thessaloniki\n54124ThessalonikiMacedoniaGreece\n\nResearch Center for Astronomy\nAcademy of Athens\nSoranou Efesiou 4GR-115 27AthensGreece\n\nSchool of Physics and Astronomy\nCardiff University\nQueen's Buildings, The ParadeCF24 3AACardiffUK\n",
"Roberto Maiolino \nCavendish Laboratory\nUniversity of Cambridge\n19 J. J. Thomson AveCB3 0HECambridgeUK\n\nKavli Institute of Cosmology Cambridge\nMadingley RoadCB3 0HACambridgeUK\n",
"Chiara Feruglio \nINAF -Osservatorio Astronomico di Trieste\nvia G.B. Tiepolo 1134143TriesteItaly\n",
"Ezequiel Treister \nInstituto de Astrofisica\nFacultad de Fisica\nPontificia Universidad Catolica de Chile\nCasilla 306, Santiago 22Chile\n",
"George C Privon \nDepartment of Astronomy\nUniversity of Florida\n211 Bryant Space Sciences Center32611 FLGainesvilleUSA\n",
"Zhi-Yu Zhang \nEuropean Southern Observatory\nKarl-Schwarzschild-Strae 285748GarchingGermany\n\nInstitute for Astronomy\nUniversity of Edinburgh\nRoyal Observatory\nBlackford HillEH9 3HJEdinburghUK\n",
"Roberto Della Ceca \nINAF -Osservatorio Astronomico di Brera\nVia Brera 2820121MilanoItaly\n",
"Fabrizio Fiore \nINAF -Osservatorio Astronomico di Roma\nvia Frascati 3300078Monteporzio CatoneItaly\n",
"Kevin Schawinski \nInstitute for Particle Physics and Astrophysics\nETH Zurich\nWolfgang-Pauli-Str. 27CH-8093ZurichSwitzerland\n",
"Jeff Wagg \nSKA Organisation\nLower Withington MacclesfieldSK11 9DLCheshireUK\n",
"Claudia Cicone "
]
| [
"INAF -Osservatorio Astronomico di Brera\nVia Brera 2820121MilanoItaly",
"INAF -Osservatorio Astronomico di Brera\nVia Brera 2820121MilanoItaly",
"Department of Physics\nSection of Astrophysics, Astronomy and Mechanics\nAristotle University of Thessaloniki\n54124ThessalonikiMacedoniaGreece",
"Research Center for Astronomy\nAcademy of Athens\nSoranou Efesiou 4GR-115 27AthensGreece",
"School of Physics and Astronomy\nCardiff University\nQueen's Buildings, The ParadeCF24 3AACardiffUK",
"Cavendish Laboratory\nUniversity of Cambridge\n19 J. J. Thomson AveCB3 0HECambridgeUK",
"Kavli Institute of Cosmology Cambridge\nMadingley RoadCB3 0HACambridgeUK",
"INAF -Osservatorio Astronomico di Trieste\nvia G.B. Tiepolo 1134143TriesteItaly",
"Instituto de Astrofisica\nFacultad de Fisica\nPontificia Universidad Catolica de Chile\nCasilla 306, Santiago 22Chile",
"Department of Astronomy\nUniversity of Florida\n211 Bryant Space Sciences Center32611 FLGainesvilleUSA",
"European Southern Observatory\nKarl-Schwarzschild-Strae 285748GarchingGermany",
"Institute for Astronomy\nUniversity of Edinburgh\nRoyal Observatory\nBlackford HillEH9 3HJEdinburghUK",
"INAF -Osservatorio Astronomico di Brera\nVia Brera 2820121MilanoItaly",
"INAF -Osservatorio Astronomico di Roma\nvia Frascati 3300078Monteporzio CatoneItaly",
"Institute for Particle Physics and Astrophysics\nETH Zurich\nWolfgang-Pauli-Str. 27CH-8093ZurichSwitzerland",
"SKA Organisation\nLower Withington MacclesfieldSK11 9DLCheshireUK"
]
| []
| We present Atacama large millimeter/ submillimeter array (ALMA) and compact array (ACA) [CI] 3 P 1 − 3 P 0 ([CI](1-0)) observations of NGC 6240, which we combine with ALMA CO(2-1) and IRAM Plateau de Bure Interferometer CO(1-0) data to study the physical properties of the massive molecular (H 2 ) outflow. We discover that the receding and approaching sides of the H 2 outflow, aligned east-west, exceed 10 kpc in their total extent. High resolution (0.24 ) [CI](1-0) line images surprisingly reveal that the outflow emission peaks between the two active galactic nuclei (AGN), rather than on either of the two, and that it dominates the velocity field in this nuclear region. We combine the [CI](1-0) and CO(1-0) data to constrain the CO-to-H 2 conversion factor (α CO ) in the outflow, which is on average 2.1 ± 1.2 M (K km s −1 pc 2 ) −1 . We estimate that 60 ± 20 % of the total H 2 gas reservoir of NGC 6240 is entrained in the outflow, for a resulting mass-loss rate oḟ M out = 2500 ± 1200 M yr −1 ≡ 50 ± 30 SFR. This energetics rules out a solely star formation-driven wind, but the puzzling morphology challenges a classic radiative-mode AGN feedback scenario. For the quiescent gas we compute α CO = 3.2 ± 1.8 M (K km s −1 pc 2 ) −1 , which is at least twice the value commonly employed for (ultra) luminous infrared galaxies ((U)LIRGs). We observe a tentative trend of increasing r 21 ≡ L CO(2−1) /L CO(1−0) ratios with velocity dispersion and measure r 21 > 1 in the outflow, whereas r 21 1 in the quiescent gas. We propose that molecular outflows are the location of the warmer, strongly unbound phase that partially reduces the opacity of the CO lines in (U)LIRGs, hence driving down their global α CO and increasing their r 21 values. | 10.3847/1538-4357/aad32a | [
"https://arxiv.org/pdf/1807.06015v1.pdf"
]
| 119,369,070 | 1807.06015 | 7ba9f4b8c7560a995af08f68115123907eed1aee |
ALMA [CI] 3 P 1 − 3 P 0 observations of NGC 6240: a puzzling molecular outflow, and the role of outflows in the global α CO factor of (U)LIRGs
July 18, 2018
Claudia Cicone [email protected]
INAF -Osservatorio Astronomico di Brera
Via Brera 2820121MilanoItaly
Paola Severgnini
INAF -Osservatorio Astronomico di Brera
Via Brera 2820121MilanoItaly
Padelis P Papadopoulos
Department of Physics
Section of Astrophysics, Astronomy and Mechanics
Aristotle University of Thessaloniki
54124ThessalonikiMacedoniaGreece
Research Center for Astronomy
Academy of Athens
Soranou Efesiou 4GR-115 27AthensGreece
School of Physics and Astronomy
Cardiff University
Queen's Buildings, The ParadeCF24 3AACardiffUK
Roberto Maiolino
Cavendish Laboratory
University of Cambridge
19 J. J. Thomson AveCB3 0HECambridgeUK
Kavli Institute of Cosmology Cambridge
Madingley RoadCB3 0HACambridgeUK
Chiara Feruglio
INAF -Osservatorio Astronomico di Trieste
via G.B. Tiepolo 1134143TriesteItaly
Ezequiel Treister
Instituto de Astrofisica
Facultad de Fisica
Pontificia Universidad Catolica de Chile
Casilla 306, Santiago 22Chile
George C Privon
Department of Astronomy
University of Florida
211 Bryant Space Sciences Center32611 FLGainesvilleUSA
Zhi-Yu Zhang
European Southern Observatory
Karl-Schwarzschild-Strae 285748GarchingGermany
Institute for Astronomy
University of Edinburgh
Royal Observatory
Blackford HillEH9 3HJEdinburghUK
Roberto Della Ceca
INAF -Osservatorio Astronomico di Brera
Via Brera 2820121MilanoItaly
Fabrizio Fiore
INAF -Osservatorio Astronomico di Roma
via Frascati 3300078Monteporzio CatoneItaly
Kevin Schawinski
Institute for Particle Physics and Astrophysics
ETH Zurich
Wolfgang-Pauli-Str. 27CH-8093ZurichSwitzerland
Jeff Wagg
SKA Organisation
Lower Withington MacclesfieldSK11 9DLCheshireUK
Claudia Cicone
ALMA [CI] 3 P 1 − 3 P 0 observations of NGC 6240: a puzzling molecular outflow, and the role of outflows in the global α CO factor of (U)LIRGs
July 18, 2018 (Received June 29, 2018; Revised June 29, 2018; Accepted July 11, 2018). Submitted to ApJ.
Keywords: galaxies: active - galaxies: evolution - galaxies: individual (NGC 6240) - galaxies: ISM - submillimeter: ISM
We present Atacama large millimeter/ submillimeter array (ALMA) and compact array (ACA) [CI] 3 P 1 − 3 P 0 ([CI](1-0)) observations of NGC 6240, which we combine with ALMA CO(2-1) and IRAM Plateau de Bure Interferometer CO(1-0) data to study the physical properties of the massive molecular (H 2 ) outflow. We discover that the receding and approaching sides of the H 2 outflow, aligned east-west, exceed 10 kpc in their total extent. High resolution (0.24 ) [CI](1-0) line images surprisingly reveal that the outflow emission peaks between the two active galactic nuclei (AGN), rather than on either of the two, and that it dominates the velocity field in this nuclear region. We combine the [CI](1-0) and CO(1-0) data to constrain the CO-to-H 2 conversion factor (α CO ) in the outflow, which is on average 2.1 ± 1.2 M (K km s −1 pc 2 ) −1 . We estimate that 60 ± 20 % of the total H 2 gas reservoir of NGC 6240 is entrained in the outflow, for a resulting mass-loss rate oḟ M out = 2500 ± 1200 M yr −1 ≡ 50 ± 30 SFR. This energetics rules out a solely star formation-driven wind, but the puzzling morphology challenges a classic radiative-mode AGN feedback scenario. For the quiescent gas we compute α CO = 3.2 ± 1.8 M (K km s −1 pc 2 ) −1 , which is at least twice the value commonly employed for (ultra) luminous infrared galaxies ((U)LIRGs). We observe a tentative trend of increasing r 21 ≡ L CO(2−1) /L CO(1−0) ratios with velocity dispersion and measure r 21 > 1 in the outflow, whereas r 21 1 in the quiescent gas. We propose that molecular outflows are the location of the warmer, strongly unbound phase that partially reduces the opacity of the CO lines in (U)LIRGs, hence driving down their global α CO and increasing their r 21 values.
INTRODUCTION
Massive (M mol > 10 8 M ) and extended (r 1 kpc) outflows of cold and dense molecular (H 2 ) gas have been discovered in a large number of starbursts and active galactic nuclei (AGNs) (Turner 1985;Nakai et al. 1987;Sakamoto et al. 2006;Fischer et al. 2010;Feruglio et al. 2010;Sturm et al. 2011;Alatalo et al. 2011;Dasyra & Combes 2012;Veilleux et al. 2013;Spoon et al. 2013;Combes et al. 2013;Morganti et al. 2013;Feruglio et al. 2013b;Cicone et al. 2014;García-Burillo et al. 2014Zschaechner et al. 2016;Feruglio et al. 2017;Carniani et al. 2017;Barcos-Muñoz et al. 2018;Gowardhan et al. 2018;Fluetsch et al. 2018). Although so far limited mostly to local (ultra) luminous infrared galaxies ((U)LIRGs), these observations indicate that the massloss rates of H 2 gas are higher compared to the ionised gas phase participating in the outflows (Carniani et al. 2015;Fiore et al. 2017). Therefore, molecular outflows, by displacing and perhaps removing the fuel available for star formation, can have a strong impact on galaxy evolution. More luminous AGNs host more powerful H 2 winds, suggesting a direct link between the two (Cicone et al. 2014).
The presence of massive amounts of cold and dense H 2 gas outflowing at v 1000 km s −1 across kpc scales in galaxies is itself puzzling. A significant theoretical effort has gone into reproducing the properties of multiphase outflows in the context of AGN feedback models (Cicone et al. 2018). In one of the AGN radiative-mode scenarios, the outflows result from the interaction of fast highly ionised winds launched from the pc-scales with the kpc-scale interstellar medium (ISM), which occurs through a 'blast-wave' mechanism (Silk & Rees 1998;King 2010;Zubovas & King 2012;Faucher-Giguère & Quataert 2012;Costa et al. 2014;Gaspari & Sadowski 2017;Biernacki & Teyssier 2018). In this picture, because molecular clouds overtaken by a hot and fast wind are quickly shredded (Brüggen & Scannapieco 2016), it is more likely that the high-velocity H 2 gas forms directly within the outflow, by cooling out of the warmer gas (Zubovas & King 2014;Costa et al. 2015;Nims et al. 2015; Thompson et al. 2016;Richings & Faucher-Giguère 2018). An alternative scenario, not requiring shockwaves, is the direct acceleration of the molecular ISM by radiation pressure on dust (Thompson et al. * Marie Sk lodowska-Curie fellow 2015; Ishibashi & Fabian 2015;Costa et al. 2018). This mechanism is most efficient in AGNs deeply embedded in a highly IR optically thick medium, such as local (U)LIRGs.
In order to advance our theoretical understanding of galactic-scale molecular outflows, we need to place more accurate constraints on their energetics. Indeed, most current H 2 outflow mass estimates are based on a single molecular gas tracer (CO or OH), implying uncertainties of up to one order of magnitude (Veilleux et al. 2017;Cicone et al. 2018). The luminosity of the CO(1-0) line, which is optically thick in typical conditions of molecular clouds, can be converted into H 2 mass through an CO(1-0)-to-H 2 conversion factor (α CO ) calibrated using known sources and dependent on the physical state of the gas. For the molecular ISM of isolated (or only slightly perturbed) disk galaxies like the Milky Way, the conventional α CO is 4.3 M (K km s −1 pc 2 ) −1 (Bolatto et al. 2013). Instead, for merger-driven starbursts like most (U)LIRGs, which are characterised by a more turbulent and excited ISM, a lower α CO of ∼ 0.6 − 1.0 M (K km s −1 pc 2 ) −1 is often adopted (Downes & Solomon 1998;Yao et al. 2003;Israel et al. 2015). Such low α CO values have been ascribed to the existence, in the inner regions of these mergers, of a warm and turbulent 'envelope' phase of H 2 gas, not contained in self-gravitating clouds (Aalto et al. 1995). However, some recent analyses of the CO spectral line energy distributions (SLEDs) including high-J ( 3) transitions suggest that near-Galactic α CO values are also possible for (U)LIRGs, especially when a significant H 2 gas fraction is in dense, gravitationally-bound states (Papadopoulos et al. 2012a). Dust-based ISM mass measurements also deliver galactic-type α CO factors for (U)LIRGs, although they depend on the underlying assumptions used to calibrate the conversion (Scoville et al. 2016).
Molecular outflows can be significantly fainter than the quiescent ISM, and so multi-transition observations aimed at estimating their α CO are particularly challenging. Dasyra et al. (2016) and more recently Oosterloo et al. (2017), for the radio-jet driven outflow in IC 5063, derived a low opticallythin α CO of ∼ 0.3 M (K km s −1 pc 2 ) −1 , in line with theoretical predictions by Richings & Faucher-Giguère (2018). On the other hand, for the starburst-driven M82 outflow, Leroy et al. (2015) calculated 1 α 2−1 CO = 1 − 2.5 M (K km s −1 pc 2 ) −1 . The detection of high density gas in the starburst-driven outflow of NGC 253 would also favour an α CO higher than the optically thin value (Walter et al. 2017), and a similar conclusion may be reached for the outflow in Mrk 231, found to entrain a substantial amount of dense H 2 gas (Aalto et al. 2012(Aalto et al. , 2015Cicone et al. 2012;Lindberg et al. 2016).
An alternative method for measuring the molecular gas mass, independent of the α CO factor, is through a tracer such as the 3 P 1 − 3 P 0 transition of neutral atomic carbon (hereafter [CI](1-0)). This line, optically thin in most cases, has an easier partition function than molecules and excitation requirements similar to CO(1-0) 2 . More importantly [CI] is expected to be fully coexisting with H 2 Papadopoulos & Greve 2004). Therefore, by combining the information from CO(1-0) and [CI](1-0) it is possible to derive an estimate of the α CO value. Similar to any optically thin species used to trace H 2 (e.g. dust, 13 CO), converting the [CI](1-0) line flux into a mass measurement is plagued by the unavoidable uncertainty on its abundance. However, in this regard, recent calculations found not only that the average [C/H 2 ] abundance in molecular clouds is more robust than that of molecules such as CO, but also that [CI] can even trace the H 2 gas where CO has been severely depleted by cosmic rays (CRs, Bisbas et al. 2015Bisbas et al. , 2017.
In this work we use new Atacama large millimeter/ submillimeter array (ALMA) and Atacama compact array (ACA) observations of the [CI](1-0) line in NGC 6240 to constrain the physical properties of its molecular outflow. NGC 6240 is a merging LIRG hosting two AGNs with quasar-like luminosities (Puccetti et al. 2016). The presence of a molecular outflow was suggested by van der Werf et al. (1993) based on the detection of high-velocity wings of the ro-vibrational H2 v = 1 − 0 S(1) 2.12µm line and by Iono et al. (2007) based on the CO(3-2) kinematics, and it was later confirmed by Feruglio et al. (2013b) using IRAM PdBI CO(1-0) observations. This is one of the first interferometric [CI](1-0) observations of a local galaxy (see also Krips et al. 2016), and -to our knowledge -the first spatially-resolved [CI](1-0) observation of a molecular outflow in a quasar. Probing the capability of [CI](1-0) to image molecular outflows is crucial: besides being an alternative H2 tracer independent of the α CO factor, [CI](1-0) is also sensitive to CO-poor gas, which may be an important component of molecular outflows exposed to strong far-ultraviolet (UV) fields (Wolfire et al. 2010) or CR fluxes (Bisbas et al. 2015, 2017; Krips et al. 2016; González-Alfonso et al. 2018). Moreover, testing [CI](1-0) as a sensitive molecular probe in a local and well-studied galaxy such as NGC 6240 has a great legacy value for studies at z > 2, where the [CI] lines are very valuable tracers of the bulk of the molecular gas accessible with ALMA. [Footnote 1: By using the CO(2-1) transition. Footnote 2: CO(1-0) and [CI](1-0) are similar in critical density but E10/kb = 5.5 K for CO and 23 K for [CI]; however, as long as most of the H2 gas has Tk > 15−20 K, as expected, the E/kb difference between the two lines makes no real excitation difference in the level population.]
The paper is organised as follows: in § 2 we describe the data; in § 3.1 we present the CO(1-0), CO(2-1), and [CI](1-0) outflow maps and the [CI](1-0) line moment maps. In § 3.2- § 3.3 we identify the outflowing components of the molecular line emission and derive the α CO and r 21 values separately for the quiescent ISM and the outflow. The outflow energetics is constrained in § 3.4. In § 3.5 we study the variations of α CO and r 21 as a function of σ v and distance of the different spectral components from the nucleus. Our findings are discussed in § 4 and summarised in § 5. Throughout the paper we adopt a standard ΛCDM cosmological model with H 0 = 67.8 km s −1 Mpc −1 , Ω Λ = 0.692, Ω M = 0.308 (Planck Collaboration et al. 2016). At the distance of NGC 6240 (redshift z = 0.02448, luminosity distance D L = 110.3 Mpc), the physical scale is 0.509 kpc arcsec −1 . Uncertainties correspond to 1σ statistical errors. The units of α CO [M (K km s −1 pc 2 ) −1 ] are sometimes omitted.
OBSERVATIONS
The Band 8 observations of the [CI](1-0) (ν [CI] rest = 492.16065 GHz) emission line in NGC 6240 were carried out in May 2016 with the 12 m-diameter antennas of ALMA and in August 2016 with the 7 m-diameter antennas of the Atacama Compact Array (ACA) as part of our Cycle 3 programme 2015.1.00717.S (PI: Cicone). The ALMA observations were effectuated in a compact configuration with 40 antennas (minimum and maximum baselines, b min = 15 m, b max = 640 m), yielding an angular resolution (AR) of 0.24 and a maximum recoverable scale (MRS) of 2.48 . Only one of the two planned 0.7 h-long scheduling blocks was executed, and the total on-source time was 0.12 h. The PWV was 0.65 mm and the average system temperature was T sys = 612 K. The ACA observations were performed using nine antennas with b min = 9 m and b max = 45 m, resulting in AR=2.4 and MRS=14 . The total ACA observing time was 3.5 h, of which 0.7 h on source. The average PWV and T sys were 0.7 mm and 550 K, respectively. J1751+0939 and Titan were used for flux calibration, J1924-2914 for bandpass calibration, J1658+0741 and J1651+0129 for phase calibration.
We employed the same spectral setup for the ALMA and the ACA observations. Based on previous CO(1-0) observations of NGC 6240 (Feruglio et al. 2013b,a), and on the knowledge of the concurrence of the [CI](1-0) and CO(1-0) emissions, we expected the [CI](1-0) line to be significantly broad (full width at zero intensity, FWZI> 1000 km s −1 ). Therefore, to recover both the broad [CI](1-0) line and its adjacent continuum, we placed two spectral windows at a distance of 1.8 GHz, overlapping by 75 MHz in their central 1.875 GHz-wide full sensitivity part, yielding a total bandwidth of 3.675 GHz (2293 km s −1 ) centred at ν [CI] obs = 480.40045 GHz. After calibrating separately the ALMA and ACA datasets with the respective scripts delivered to the PI, we fit and subtracted the continuum in the uv plane. This was done through the CASA 3 task uvcontsub, by using a zeroth-order polynomial for the fit, and by estimating the continuum emission in the following linefree frequency ranges: 478.568< ν obs [GHz] <478.988 and 481.517< ν obs [GHz] <482.230. The line visibilities were then deconvolved using clean with Briggs weighting and robust parameter equal to 0.5. A spectral binning of ∆ν = 9.77 MHz (6 km s −1 ) was applied, and the cleaning masks were chosen interactively. In order to improve the image reconstruction, following the strategy adopted by Hacar et al. (2018), we used the cleaned ACA data cube corrected for primary beam as a source model to initialise the deconvolution of the ALMA line visibilities (parameter 'modelimage' in clean). The synthesised beams of the resulting ACA and ALMA image data cubes are 4.55 × 2.98 (PA=−53.57 deg) and 0.29 × 0.24 (PA= 113.18 deg), respectively. In addition, a lower resolution ALMA [CI](1-0) line data cube was produced by applying a tapering (outer taper of 1.4 ), which resulted in a synthesised beam of 1.28 ×1.02 (PA= 77.49 deg). Primary beam correction was applied to all datasets. We checked the accuracy of the ALMA and ACA relative flux scales by comparing the flux on the overlapping spatial scales between the two arrays, and found that they are consistent within the Band 8 calibration uncertainty of 15%.
As a last step, in order to maximise the uv coverage and the sensitivity to any extended structure possibly filtered out by the ALMA data, we combined the (tapered and non) ALMA image data cubes with the ACA one using the task feather. For a detailed explanation of feather we refer the reader to the CASA cookbook. The same steps were used to produce all the interferometric maps shown in this paper, i.e.: (i) clean of ACA visibilities followed by primary beam correction, (ii) clean of ALMA visibilities by using the ACA images as a source model followed by primary beam correction, and (iii) feather of the ACA and ALMA images. The resulting ACA+ALMA merged images inherit the synthesised beam and cell size (the latter set equal to 0.01 and 0.2 respectively for the higher and lower resolution data) of the corresponding input ALMA images. At the phase tracking centre (central beam), the 1σ sensitivities to line detection, calculated using the line-free spectral channels, are 1.4 mJy beam −1 and 5 mJy beam −1 per dv = 50 km s −1 spectral channel, respectively for the higher and lower resolution [CI](1-0) line data cubes. The sensitivity decreases slightly with distance from the phase center. At a radius of 5 , the [CI](1-0) line sensitivities per dv = 50 km s −1 spectral channel are 3.3 mJy beam −1 and 5.5 mJy beam −1 . [Footnote 3: Common Astronomy Software Applications (McMullin et al. 2007).]
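For orientation, a schematic CASA call sequence for these steps might look as follows. The task names are those mentioned in the text, while every file name, selection string, and numeric value below is an illustrative placeholder, and exact parameter sets differ between CASA versions.

```python
# Inside a CASA session; all argument values are placeholders.
uvcontsub(vis='ngc6240_band8.ms', fitorder=0,
          fitspw='0:478.568~478.988GHz;481.517~482.230GHz')

clean(vis='ngc6240_band8_aca.ms.contsub', imagename='aca_ci10',
      mode='velocity', width='6km/s', weighting='briggs', robust=0.5,
      niter=1000, interactive=True, pbcor=True)

clean(vis='ngc6240_band8_alma.ms.contsub', imagename='alma_ci10',
      mode='velocity', width='6km/s', weighting='briggs', robust=0.5,
      niter=1000, interactive=True, pbcor=True,
      modelimage='aca_ci10.image')   # ACA cube used to initialise the deconvolution

feather(imagename='merged_ci10.image', highres='alma_ci10.image',
        lowres='aca_ci10.image')
```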
In this paper we make use of the IRAM PdBI CO(1-0) data previously presented by Feruglio et al. (2013b,a). The CO(1-0) line image data cube used in this analysis has a synthesised beam of 1.42 ×1.00 (PA= 56.89 deg) and a cell size of 0.2 . The CO(1-0) 1σ line sensitivity per dv = 50 km s −1 spectral channel is 0.6 mJy beam −1 at the phase center, and 0.65 mJy beam −1 at a 5 radius.
Our analysis also includes ALMA Band 6 (programme 2015.1.00370.S, PI: Treister) snapshot (one minute on source) observations of NGC 6240 targeting the CO(2-1) transition, which were performed in January 2016 (PWV=1.2 mm) using the compact configuration (AR=1.2 , MRS=10 ). These observations were executed in support of the long baseline campaign carried out by Treister et al. (in prep). We calibrated the data using the script for PI, estimated the continuum from the line-free spectral ranges (224.106< ν obs [GHz] <224.279 and 225.780< ν obs [GHz] <225.971) and subtracted it in the uv plane. We deconvolved the line visibilities using clean with Briggs weighting (ro-bust=0.5) and applied a correction for primary beam. The final CO(2-1) cleaned data cube has a synthesised beam of 1.54 × 0.92 (PA= 60.59 deg) and a cell size of 0.2 . The 1σ CO(2-1) line sensitivity per dv = 50 km s −1 channel is 0.80 mJy beam −1 at the phase centre and 1 mJy beam −1 at a 5 radius.
In the analysis that follows, the comparison between the CO(1-0), CO(2-1), and [CI](1-0) line tracers in the molecular outflow of NGC 6240 will be done by using the lower resolution ALMA+ACA [CI](1-0) data cube, which matches in angular resolution (∼ 1.2 ) the IRAM PdBI CO(1-0) and ALMA CO(2-1) data. Unless specified, quoted errors include the systematic uncertainties on the measured fluxes due to flux calibration, which are 10% for the IRAM PdBI CO(1-0) and ALMA CO(2-1) data, and 15% for the ALMA [CI](1-0) line observations. We report the presence of some negative artefacts, especially in the cleaned CO(1-0) and CO(2-1) datacubes. These are due to the interferometric nature of the observations, which does not allow to properly recover all the faint extended emission in a source with a very bright central peak emission such as NGC 6240. However, the negative features lie mostly outside the region probed by our analysis and they are not expected to significantly affect our flux recovery, since the total CO(1-0) and CO(2-1) line fluxes are consistent with previous single-dish measurements (Costagliola et al. 2011;Papadopoulos et al. 2012b). Our total [CI](1-0) line flux is higher than that recovered by Papadopoulos & Greve (2004) by using the James Clerk Maxwell telescope (JCMT, FWHM beam = 10 ), but lower by 34 ± 15% than the flux measured by the Herschel space observatory (Papadopoulos et al. 2014). This indicates that some faint extended emission has been resolved out and/or that there is additional [CI](1-0) line emission outside the field of view of our observations.
DATA ANALYSIS AND RESULTS
Morphology of the extended molecular outflow and its launch region
With the aim of investigating the extent and morphology of the outflow, we produced interferometric maps of the CO(1-0), CO(2-1), and [CI](1-0) high-velocity emissions. The maps, shown in Fig. 1(a,b,c), were generated by merging together and imaging the uv visibilities corresponding to the blue-and red-shifted wings of the molecular lines, integrated respectively within v ∈ (−650, −200) km s −1 and v ∈ (250, 800) km s −1 . These are the velocity ranges that, following from the identification of the outflow components performed in § 3.3 (and further discussed in § 4.1 and Appendix B), are completely dominated by the emission from outflowing gas. The data displayed in panels (a,b,c) have a matched spatial resolution equal to ∼ 1.2 (details in § 2). Panels (d,e) of Fig. 1 show the maps of the blue and red [CI](1-0) line wings at the native spatial resolution of the ALMA Band 8 observations (0.24 , § 2).
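If one only needs quick image-plane maps of the line wings, rather than the uv-plane merging and imaging described above, an approximate sketch with the spectral-cube package could look as follows; the file name is a placeholder and the rest-frequency/velocity-convention handling is an assumption.

```python
import astropy.units as u
from spectral_cube import SpectralCube

cube = SpectralCube.read('ngc6240_ci10_cube.fits')  # placeholder file name
vcube = cube.with_spectral_unit(u.km / u.s, velocity_convention='optical',
                                rest_value=492.16065 * u.GHz)

# Integrate the blue and red line wings over the velocity ranges quoted in the text.
blue_wing = vcube.spectral_slab(-650 * u.km / u.s, -200 * u.km / u.s).moment(order=0)
red_wing = vcube.spectral_slab(250 * u.km / u.s, 800 * u.km / u.s).moment(order=0)

blue_wing.write('ci10_blue_wing.fits', overwrite=True)
red_wing.write('ci10_red_wing.fits', overwrite=True)
```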
The extended (> 5 kpc) components of the outflow are best seen in Fig.1(a,b,c), whereas panels (d,e) provide a zoomed view of the molecular wind in the inner 1-2 kpc. The bulk of the outflow extends eastward of the two AGNs, as already pointed out in the previous analysis of the CO(1-0) data done by Feruglio et al. (2013b). In addition, we identify for the first time a western extension of the molecular outflow, roughly aligned along the same east-west axis as the eastern component. At the sensitivity allowed by our data, we detect at a S/N> 5 CO emission features associated with the outflow up to a maximum distance of 13.3 (6.8 kpc) and 7.2 (3.7 kpc) from the nucleus, respectively in the east and west directions (Fig 1a,b,c). The [CI](1-0) map in Fig 1(c) shows extended structures similar to CO, although the limited field of view of the Band 8 observations does not allow us to probe emission beyond a radius of ∼ 7.5 . Feruglio et al. (2013b) hinted at the possibility that the redshifted CO detected in NGC 6240 could be involved in the feedback process, but did not explicitly ascribe it to the outflow because of the smaller spatial extent of the red wing with respect to the blue wing. In Figure 2 we directly compare the blue and red line wings using the ALMA CO(2-1) data. Based on their close spatial correspondence, whereby the red wing overlaps with the blue one across more than 7 kpc along the east-west direction, we conclude that both the redshifted and blue-shifted velocity components trace the same massive molecular outflow. It follows that the eastern and western sides of the outflow are detected in both their approaching and receding components. At east, the blue-shifted emission is brighter than the redshifted one and dominant beyond a 3.5 kpc radius.
The ALMA [CI](1-0) data can be used to identify with high precision the location of the inner portion of the molecular outflow. Figures 1(d,e) clearly show that the red wing peaks in the midpoint between the two AGNs, and that the blue wing has a maximum of intensity closer to the southern AGN, as already noted by Feruglio et al. (2013b). However, these data reveal for the first time that neither the blue nor the redshifted high velocity [CI](1-0) emissions peak exactly at the AGN positions. The blue wing has a maximum of intensity at RA(J2000) = 16:52:58.8946±0.0011s, Dec(J2000)=+02.24.03.52±0.02 , offset by 0.18 ±0.02 to the north-east with respect to the southern AGN. The red wing instead peaks at RA(J2000) = 16 : 52 : 58.9224±0.0007s, Dec(J2000)=+02.24.04.0158±0.007 , i.e. at an approximately equal distance of 0.8 from the two AGNs. The [CI](1-0) red and blue wing peaks are separated by 0.65 ±0.02 . This separation is consistent with the distance between the CO(2-1) peaks reported by Tacconi et al. (1999), although in that work they were interpreted as the signature of a rotating molecular gas disk.
The presence of such nuclear rotating H 2 structure has been largely debated in the literature, especially due to the very high CO velocity dispersion in this region (σ > 300 km s −1 ), and to the mismatch between the dynamics of H 2 gas and stars (Gerssen et al. 2004;Engel et al. 2010). Following Tacconi et al. (1999) and Bryant & Scoville (1999), if a rotating disk is present, its signature should appear at lower projected velocities than those imaged in Fig 1(d,e). In Figure 3 we show the high resolution intensity-weighted moment maps of the
[CI](1-0) line emission within −200 < v[km s −1 ] < 250.
The velocity field does not exhibit the characteristic butterfly pattern of a rotating disk, but it presents a highly asymmetric gradient whereby blue-shifted velocities dominate the southern emission, whereas nearsystemic and redshifted velocities characterise the northern emission. These features in the velocity field are correlated in both velocity and position with the high velocity wings (Figure 1(d,e)). Furthermore, the right panel of Fig. 3 shows that the velocity dispersion is uniform (50 σ v [km s −1 ] 80) throughout the entire source and enhanced (σ v ≥ 100 km s −1 ) in a central hourglassshaped structure extending east-west, which is the same direction of expansion of the larger-scale outflow. This structure has a high-σ v peak with σ v ≥ 150 km s −1 to the east and another possible peak to the west with σ v ≥ 130 km s −1 . Such high-σ v points coincide with the blue and red-shifted velocity peaks detected in the moment 1 map. Therefore, based on Fig. 3, we conclude that the molecular gas emission between the two AGNs is dominated by a nuclear outflow expanding eastwest and connected to the larger-scale outflow shown in Fig.1. The regions of enhanced turbulence may represent the places where the outflow opening angle widens up -hence increasing the line-of-sight velocity dispersion and velocity of the molecular gas.
Figure 2. Comparison between the CO(2-1) blue and red wing emissions in NGC 6240. For visualisation purposes, only positive contours starting from 5σ are shown, with 1σ= 0.33 mJy beam−1 for the blue wing (blue contours) and 1σ= 0.3 mJy beam−1 for the red wing (red contours). The corresponding interferometric maps including negative contours are displayed in Appendix A (Figure 7).

Table 1. Results of the simultaneous fit to the total spectra*

                              CO(1-0)          CO(2-1)          [CI](1-0)
Narrow component
  v† [km s−1]                 −9.1 ± 1.0       −9.1 ± 1.0       −9.1 ± 1.0
  σv [km s−1]                 101.0 ± 1.0      101.0 ± 1.0      101.0 ± 1.0
S_peak [mJy]                  253 ± 2          1234 ± 10        1030 ± 30
L′ [10^9 K km s−1 pc2]        7.33 ± 0.07      8.96 ± 0.07      1.64 ± 0.05

* The errors quoted in this table are purely statistical and do not include the absolute flux calibration uncertainty.
† We employ the optical Doppler definition. The fit allows for a global velocity shift of CO(2-1) and [CI](1-0) with respect to CO(1-0) to take into account the different spectral binning. The best-fit returns: v_CO(2−1) − v_CO(1−0) = 10.2 ± 0.9 km s−1 and v_[CI](1−0) − v_CO(1−0) = −18 ± 4 km s−1.

3.2. The αCO values estimated from the integrated spectra: a reference for unresolved studies

Figure 4 shows the CO(1-0), CO(2-1), and [CI](1-0) spectra extracted from the 12″ × 6″-size rectangular aperture reported in Fig 1(a), encompassing both the nucleus and the extended molecular outflow of NGC 6240. The analysis of these integrated spectra, described below, is aimed at deriving a source-averaged αCO for the quiescent and outflowing molecular ISM in NGC 6240. Such analysis is included here because it can be useful as a reference for unresolved observations, for example high redshift analogues of this merger. We stress however that the quality of our data, the proximity of the source, and its large spatial extent allow us to perform a much more detailed, spatially-resolved analysis. The latter will be presented in § 3.3 and delivers the most reliable αCO values for the outflow and the quiescent gas. The spectra in Fig. 4 were fitted simultaneously using two Gaussians to account for the narrow core and broad wings of the emission lines, by constraining the central velocity (v) and velocity dispersion (σv) of each Gaussian to be equal in the three transitions. Table 1 reports the best-fit results and the corresponding line luminosities calculated from the integrated fluxes following Solomon et al. (1997). The [CI](1-0) line luminosities listed in Table 1 are employed to measure the molecular gas mass (Mmol, including the contribution from Helium) associated to the narrow and broad line components. The expression for local thermodynamic equilibrium (LTE, i.e. uniform Tex) and optically thin emission (τ[CI](1−0) ≪ 1), assuming a negligible background (CMB temperature, T_CMB ≪ Tex) and the Rayleigh-Jeans approximation (hν[CI](1−0) ≪ kTex) is:

$$ M_{\rm mol}\,[M_\odot] = 4.31\times10^{-5}\; X_{\rm CI}^{-1}\,\left(1 + 3\,e^{-23.6/T_{\rm ex}[{\rm K}]} + 5\,e^{-62.5/T_{\rm ex}[{\rm K}]}\right)\, e^{23.6/T_{\rm ex}[{\rm K}]}\; L'_{\rm [CI](1-0)}\,[{\rm K\,km\,s^{-1}\,pc^{2}}], \qquad (1) $$

where XCI is the [CI/H2] abundance ratio and Tex is the excitation temperature of the gas (see detailed explanations by and Mangum & Shirley (2015)). We adopt Tex = 30 K and XCI = (3.0 ± 1.5) × 10−5, which are appropriate for (U)LIRGs (Weiß et al. 2003, 2005; Walter et al. 2011; Jiao et al. 2017). These assumptions will be further discussed in § 4.1.2.
By defining a [CI](1-0)-to-H2 conversion factor (α[CI]) in analogy with the commonly employed αCO factor (Bolatto et al. 2013), Eq 1 resolves into:

$$ M_{\rm mol}\,[M_\odot] \equiv \alpha_{\rm [CI]}\; L'_{\rm [CI](1-0)}\,[{\rm K\,km\,s^{-1}\,pc^{2}}], \quad {\rm with}\;\; \alpha_{\rm [CI]} = 9.43\,[M_\odot\,({\rm K\,km\,s^{-1}\,pc^{2}})^{-1}]. \qquad (2) $$
Using the values in Table 1 and Eq 2, we obtain a total molecular gas mass of Mmol = (1.5 ± 0.8) · 10^10 M⊙, of which (5 ± 3) · 10^9 M⊙ is in the narrow component, and (10 ± 5) · 10^9 M⊙ in the broad wings. We then use these Mmol values to estimate αCO:
$$ \alpha_{\rm CO} = M_{\rm mol}\,[M_\odot]\; \left(L'_{\rm CO(1-0)}\,[{\rm K\,km\,s^{-1}\,pc^{2}}]\right)^{-1}. \qquad (3) $$
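As a minimal numeric sketch of Eqs. 1-3, the snippet below reproduces the source-averaged numbers quoted in this section from the total line luminosities of Table 1, under the same assumptions (Tex = 30 K, XCI = 3 × 10−5); it is meant only as a consistency check, not as part of the analysis pipeline.

```python
# Minimal numeric sketch of Eqs. 1-3 using the total line luminosities of
# Table 1; all inputs are taken from the text (Tex = 30 K, XCI = 3e-5).
import numpy as np

def alpha_ci(Tex=30.0, Xci=3.0e-5):
    """[CI](1-0)-to-H2 conversion factor of Eqs. 1-2 [Msun (K km/s pc^2)^-1]."""
    Q = 1.0 + 3.0 * np.exp(-23.6 / Tex) + 5.0 * np.exp(-62.5 / Tex)  # partition function
    return 4.31e-5 / Xci * Q * np.exp(23.6 / Tex)

a_ci = alpha_ci()                        # ~9.43, as quoted after Eq. 2

# Total line luminosities from Table 1 [K km/s pc^2]
L_ci   = 1.64e9
L_co10 = 7.33e9
L_co21 = 8.96e9

M_mol    = a_ci * L_ci                   # Eq. 2: ~1.5e10 Msun
alpha_co = M_mol / L_co10                # Eq. 3: ~2.1 Msun (K km/s pc^2)^-1
r21      = L_co21 / L_co10               # CO(2-1)/CO(1-0) ratio discussed below: ~1.22

print(f"alpha_[CI] = {a_ci:.2f}, M_mol = {M_mol:.2e} Msun, "
      f"alpha_CO = {alpha_co:.2f}, r21 = {r21:.2f}")
```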
The results are reported in the first three rows of Table 2 for the total, narrow, and broad emissions in NGC 6240. The so-derived αCO factors differ between the narrow and broad line components, being a factor of 1.8 ± 0.5 lower in the latter 5 . In the narrow component, the αCO is significantly higher than the typical (U)LIRG value. As mentioned in § 1, higher αCO values become possible in (U)LIRGs if a significant fraction of the mass is 'hidden' in dense and bound H2 clouds. This is probably the case of NGC 6240, in which a large study using CO SLEDs from J = 1 − 0 up to J = 13 − 12 from the Herschel space observatory as well as multi-J HCN, CS, and HCO+ line data from ground-based observatories finds αCO ∼ 2 − 4 M⊙ (K km s−1 pc2)−1 (Papadopoulos et al. 2014), consistent with our estimates. Table 2 lists also the CO(2-1)/CO(1-0) luminosity ratios, defined as
$$ r_{21} \equiv L'_{\rm CO(2-1)} / L'_{\rm CO(1-0)}. \qquad (4) $$
We find r21 consistently ∼ 1.2 -hence higher than unity (at the 1.5σ level) -for both the narrow and broad Gaussian components. Since our total CO(1-0) and CO(2-1) fluxes are consistent with previous measurements (Papadopoulos et al. 2012b; Costagliola et al. 2011; Saito et al. 2018), we exclude that spatial filtering due to an incomplete uv coverage is significantly affecting the r21 values. As pointed out by Papadopoulos et al. (2012b), galaxy-averaged r21 > 1 values are not uncommon in (U)LIRGs and are indicative of extreme gas conditions. In this analysis of the integrated spectra we derived r21 > 1 in both the broad and narrow components. However, as we will show in § 3.3 and § 3.5, the spatially-resolved analysis will reveal that r21 > 1 values are typical of the outflowing gas and in general of higher-σv components, while the 'quiescent' ISM has r21 ∼ 1.
3.3. Spatially-resolved analysis: the average α CO of the quiescent and outflowing gas
The previous analysis (§ 3.2) was based on the spectral fit shown in Fig. 4, where we decomposed the total molecular line emission into a narrow and a broad Gaussian. In first approximation, these two spectral components can be respectively identified with the quiescent and outflowing molecular gas reservoirs of NGC 6240. However, the superb S/N and spatial resolution of our data allow us to take this analysis one step further and refine the definition of quiescent and outflowing components. This is done by including the spatially-resolved information provided by the interferometric data, as described below. We divide the central 12″ × 6″ region employed in the previous analysis into a grid of 13 squared boxes and use them as apertures to extract the corresponding CO(1-0), CO(2-1), and [CI](1-0) spectra. As shown in Fig. 1(a), the central nine boxes have a size of 2″ × 2″, while the external four boxes have a size of 3″ × 3″. The box spectra are presented in Appendix B (Figures 8, 9, and 10).
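The box-aperture extraction described above can be scripted in several ways; the sketch below shows one possible approach with spectral-cube and numpy. The file name, pixel coordinates, and box size are placeholders and do not correspond to the actual grid definition.

```python
# Illustrative sketch of extracting a mean spectrum within a square pixel box;
# the file name and box parameters are hypothetical.
import numpy as np
from spectral_cube import SpectralCube

cube = SpectralCube.read("ngc6240_co10.fits")

def box_spectrum(cube, x0, y0, half_width_pix):
    """Mean spectrum within a square box of pixels centred on (x0, y0)."""
    sub = cube[:, y0 - half_width_pix : y0 + half_width_pix + 1,
                  x0 - half_width_pix : x0 + half_width_pix + 1]
    vals = sub.filled_data[:].value            # (n_chan, ny, nx) array
    return np.nanmean(vals, axis=(1, 2))       # one value per spectral channel

vel = cube.spectral_axis                       # spectral axis of the cube
spec_c1 = box_spectrum(cube, x0=256, y0=256, half_width_pix=5)   # e.g. a central box
```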
For each box, the CO(1-0), CO(2-1), and [CI](1-0) spectra are fitted simultaneously with a combination of Gaussian functions tied to have the same line centres and widths for all three transitions. In the fitting procedure, we minimise the number of spectral components required to reproduce the line profiles, up to a maximum of four Gaussians per box. The Gaussian functions employed by the simultaneous fit span a wide range in FWHM and velocity, shown in Figure 5. The next step is to classify each of these components as 'systemic' or 'outflow'. In many local (U)LIRGs molecular outflows can be traced through components whose kinematical and spatial features deviate from a rotating molecular structure (Cicone et al. 2014; García-Burillo et al. 2014). However, in this source we do not detect any clear velocity gradient that may indicate the presence of a rotating molecular gas disk (Figure 3). Therefore, we adopt a different method and identify as 'quiescent' the gas probed by the spectral narrow line components that are detected throughout the entire source extent (Figures 8, 9, and 10). Our simultaneous fit to the CO(1-0), CO(2-1), and [CI](1-0) spectra returns for these narrow components typical FWHM and central velocities in the ranges: FWHM < 400 km s−1 and −200 < v [km s−1] < 250, consistent with what found by Feruglio et al. (2013a) 6 . Based on these results, we assume that all components with −200 < v [km s−1] < 250 and FWHM < 400 km s−1 trace quiescent gas that is not involved in the outflow. These constraints correspond to the region of the FWHM-v parameter space delimited by the blue-dashed lines in Fig. 5. All components outside this rectangular area are classified as 'outflow'. These assumptions are discussed in detail and validated in Appendix B, whereas more general considerations about our outflow identification method are reported in § 4.1.1. Using the results of the simultaneous fit, we measure, for each box and for each of the CO(1-0), CO(2-1), and [CI](1-0) transitions, the velocity-integrated fluxes apportioned in the 'systemic' and 'outflow' components. These are computed by summing the fluxes from the respectively classified Gaussian functions fitted to the molecular line profiles. For example, in the case of the central box (labelled as 'C1' in Figs. 8, 9, and 10), the simultaneous fit employs three Gaussians: a narrow one classified as 'systemic', and two additional ones classified as 'outflow'. The flux of the first Gaussian corresponds to the flux of the 'systemic' component for this box, whereas the total 'outflow' component flux is given by the sum of the fluxes of the other two Gaussians (the errors are added in quadrature). The velocity-integrated fluxes (total, systemic, outflow) are then converted into line luminosities and, from these, the corresponding αCO and r21 can be derived by following the same steps as in § 3.2 (Eq. 2-4). Table 2 (bottom three rows) lists the resulting mean values of αCO and r21 obtained from the analysis of all 13 boxes. In computing the mean, we only include the components detected at a S/N ≥ 3 in each of the transitions used to calculate αCO or r21, i.e. CO(1-0) and [CI](1-0) for the former, and CO(1-0) and CO(2-1) for the latter. The new αCO values derived for the systemic and outflowing components, respectively equal to 3.2 ± 1.8 and 2.1 ± 1.2, are perfectly consistent with the previous analysis based on the integrated spectra. Instead, this new analysis delivers different r21 values for the systemic (1.0 ± 0.2) and outflowing component (1.4 ± 0.3), although still consistent if considering the associated uncertainties (dominated by the flux calibration errors).

6 Feruglio et al. (2013a) analysed the CO(1-0) spectra extracted from different positions within NGC 6240 and found maximum velocity shift and FWHM of the narrow Gaussians of |v max sys| = 82 ± 4 km s−1 and FWHM max sys = 380 ± 150 km s−1.
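The following sketch illustrates the component bookkeeping described in this section: the systemic/outflow classification rule of Fig. 5, the summation of the velocity-integrated fluxes of the classified Gaussians, and the conversion to line luminosities following Solomon et al. (1997). The helper names, the example component parameters, and the adopted redshift and luminosity distance are illustrative assumptions rather than the values used in our analysis.

```python
# Illustrative sketch of the classification and flux apportioning of § 3.3.
# Component parameters (peak, v, FWHM) would come from the tied multi-Gaussian
# fits; the numbers, z, and D_L below are placeholders.
import numpy as np

FWHM_TO_SIGMA = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def is_systemic(v_kms, fwhm_kms):
    """Classification rule of Fig. 5: narrow, near-systemic components are 'systemic'."""
    return (-200.0 < v_kms < 250.0) and (fwhm_kms < 400.0)

def gauss_flux(peak_jy, fwhm_kms):
    """Velocity-integrated flux of a Gaussian component [Jy km/s]."""
    return peak_jy * fwhm_kms * FWHM_TO_SIGMA * np.sqrt(2.0 * np.pi)

def line_lum(flux_jykms, nu_obs_ghz, z=0.0245, DL_mpc=108.0):
    """L' following Solomon et al. (1997) [K km/s pc^2]; z and D_L are indicative."""
    return 3.25e7 * flux_jykms * nu_obs_ghz**-2 * DL_mpc**2 * (1.0 + z)**-3

# Hypothetical CO(1-0) components of one box: (peak [Jy], v [km/s], FWHM [km/s])
components = [(0.25, -10.0, 240.0), (0.06, 320.0, 500.0), (0.05, -350.0, 450.0)]

flux = {"systemic": 0.0, "outflow": 0.0}
for peak, v, fwhm in components:
    key = "systemic" if is_systemic(v, fwhm) else "outflow"
    flux[key] += gauss_flux(peak, fwhm)

L_sys = line_lum(flux["systemic"], nu_obs_ghz=112.5)   # CO(1-0) redshifted to ~112.5 GHz
L_out = line_lum(flux["outflow"],  nu_obs_ghz=112.5)
print(f"L'_sys = {L_sys:.2e}, L'_out = {L_out:.2e} K km/s pc^2")
```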
By using the CO(1-0) line data and summing the contribution from all boxes, including both the systemic and the outflowing components, we derive a total molecular gas mass of M tot mol = (2.1 ± 0.5) × 10 10 M . To compute the M mol within each box we adopt, when available, the 'global' α CO factor estimated for that same box, otherwise we use the mean value of α CO =2.5 ± 1.4 7 . Compared with previous works recovering the same amount of CO flux, our new M tot mol estimate is higher than in Tacconi et al. (1999) and Feruglio et al. (2013b), but consistent with Papadopoulos et al. (2014).
3.4. Molecular outflow properties
In this section we use the results of the spatiallyresolved spectral analysis presented in § 3.3 to constrain the mass (M out ), mass-loss rate (Ṁ out ), kinetic power (1/2Ṁ out v 2 ), and momentum rate (Ṁ out v) of the molecular outflow. We first select the boxes in which an outflow component is detected in the CO(1-0) spectrum with S/N≥ 3. As described in § 3.3, the outflow component is defined as the sum of all Gaussian functions employed by the simultaneous fit that lie outside the rectangular region of the FWHM-v parameter space shown in Fig. 5. With this S/N≥ 3 constraint, 12 boxes (that is, all except W1) are selected to have an outflow component in CO(1-0), and for each box 8 we measure: (i) The average outflow velocity (v out,i ), equal to the mean of the (moduli of the) central velocities of the individual Gaussians classified as 'outflow'.
(ii) The molecular gas mass in outflow (M out,i ), calculated by multiplying the L CO(1−0) of each outflow component by an appropriate α CO . In ten boxes the outflow component is detected with S/N≥ 3 also in the [CI](1-0) transition, hence for these boxes we can use their corresponding α CO factor (see § 3.3). For the remaining two boxes (E1 and W2), we adopt the galaxy-averaged outflow α CO of 2.1 ± 1.2 (Table 2).
(iii) The dynamical timescale of the outflow, defined as τ dyn,i = R i /v out,i , where R i is the distance of the centre of the box from RA=16:52:58.900, Dec=02.24.03.950. This definition cannot be applied to the central box (C1) because the so-estimated R would be zero, hence boosting the mass-loss rate to infinite. Therefore, for box C1, we conservatively assume that most of the outflow emission comes from a radius of 1 , hence we set R = 0.5 kpc. For all boxes we assume the uncertainty on R i to be 0.6 (0.3 kpc), which is half a beam size.
(iv) The mass-loss rateṀ out,i , equal to M out,i /τ dyn,i . All uncertainties are derived by error propagation.
The resulting total outflow mass and mass-loss rate, obtained by adding the contribution from all boxes, are respectively M out = (1.2 ± 0.3) × 10 10 M and dM out /dt = 2500 ± 1200 M yr −1 . As discussed in Appendix B, the largest contribution to bothṀ out and its uncertainty is given by the central box. Indeed, box C1 has at the same time the highest estimated M out and the smallest -and most uncertain -R, because the outflow is launched from within this region, likely close to the mid-point between the two AGNs as suggested by Fig. 1(d,e).
Similar to the mass-loss rate, we calculate the total kinetic power and momentum rate of the outflow by summing the contribution from all boxes with a CO(1-0) outflow component, and we obtain respectively: $\frac{1}{2}\dot{M}_{\rm out} v_{\rm out}^{2} \equiv \sum_i \frac{1}{2}\dot{M}_{{\rm out},i}\, v_{{\rm out},i}^{2} = (0.033 \pm 0.019)\, L_{\rm AGN}$ and $v\dot{M}_{\rm out} \equiv \sum_i v_i \dot{M}_{{\rm out},i} = (80 \pm 50)\, L_{\rm AGN}/c$. If all the gas carried by the outflow escaped the system and the mass-loss continued at the current rate, the depletion time-scale of the molecular gas reservoir in NGC 6240 would be τ_dep = 8 ± 4 Myr. All the relevant numbers describing the properties of the source and of the molecular outflow are reported in Table 3 and will be discussed in § 4.3 in the context of feedback models.
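A compact sketch of the per-box energetics defined in points (i)-(iv) above, and of the ratios listed in Table 3, is given below; the per-box inputs are placeholders, while the constants are standard physical constants.

```python
# Sketch of the per-box outflow energetics of § 3.4; the per-box inputs are
# placeholders, not the measured values.
import numpy as np

KMS = 1.0e5                 # cm/s
MSUN = 1.989e33             # g
YR = 3.156e7                # s
KPC_KMS_TO_MYR = 977.8      # 1 kpc / (1 km/s) expressed in Myr
C_CMS = 2.998e10            # cm/s

def box_energetics(L_co10_out, alpha_co, v_out_kms, R_kpc):
    """Outflow mass, dynamical time, mass-loss rate, momentum rate, kinetic power."""
    M_out = alpha_co * L_co10_out                        # [Msun]
    tau_dyn_myr = (R_kpc / v_out_kms) * KPC_KMS_TO_MYR   # [Myr]
    mdot = M_out / (tau_dyn_myr * 1.0e6)                 # [Msun/yr]
    mdot_cgs = mdot * MSUN / YR                          # [g/s]
    p_dot = mdot_cgs * v_out_kms * KMS                   # [g cm s^-2]
    e_dot = 0.5 * mdot_cgs * (v_out_kms * KMS) ** 2      # [erg/s]
    return M_out, tau_dyn_myr, mdot, p_dot, e_dot

# Hypothetical example for a single box at R = 1 kpc with v_out = 250 km/s
M_out, tau, mdot, p_dot, e_dot = box_energetics(1.0e9, 2.1, 250.0, 1.0)

L_AGN = 1.1e45              # erg/s, from Table 3
print(f"Mdot = {mdot:.0f} Msun/yr, Edot/L_AGN = {e_dot / L_AGN:.3f}, "
      f"pdot/(L_AGN/c) = {p_dot / (L_AGN / C_CMS):.1f}")
```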
Very stringent lower limits on the outflow energetics can be derived by assuming that its CO(1-0) emission is fully optically thin. For optically thin gas and T ex = 30 K, the α CO factor would be ∼ 0.34 (Bolatto et al. 2013), and the outflow mass and mass-loss rate would be M out = (1.98 ± 0.09) × 10 9 M and dM out /dt = 430±160 M yr −1 . We however stress that the assumption of fully optically thin CO(1-0) emission in the outflow is not supported by our data, which instead favour an α CO factor for outflowing gas that is intermediate between the optically thin and the optically thick values (for solar metallicities).
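The optically thin lower limit quoted above can be checked with a one-line rescaling of the outflow mass, as sketched below; small differences with respect to the quoted value reflect the per-box bookkeeping.

```python
# Quick consistency check of the optically thin lower limit: rescaling the
# outflow mass from the adopted alpha_CO = 2.1 to the optically thin value.
alpha_thick, alpha_thin = 2.1, 0.34      # Msun (K km/s pc^2)^-1
M_out = 1.2e10                           # Msun, Table 3
M_out_thin = M_out * alpha_thin / alpha_thick
print(f"M_out(thin) ~ {M_out_thin:.2e} Msun")   # ~1.9e9 Msun, close to the quoted value
```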
3.5. Physical properties of quiescent and outflowing gas
Using the results of the spatially-resolved analysis presented in § 3.3, we now study how the αCO and r21 parameters vary as a function of velocity dispersion (σv) and projected distance (d) from the nucleus of NGC 6240. The relevant plots are shown in Figure 6. To investigate possible statistical correlations, we conduct a Bayesian linear regression analysis of the relations in Fig. 6 following Kelly (2007) 9 .
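The fits shown in Fig. 6 use the Bayesian method of Kelly (2007), via the IDL routine linmix_err.pro. As a simple, non-equivalent stand-in, the sketch below fits a straight line while accounting for uncertainties on both axes with orthogonal distance regression from scipy; the data arrays are placeholders.

```python
# Errors-in-both-variables linear fit as a stand-in for the Kelly (2007)
# Bayesian method used in the paper; the measurements below are placeholders.
import numpy as np
from scipy import odr

def linear(beta, x):
    return beta[0] + beta[1] * x

# Placeholder measurements: e.g. r21 versus velocity dispersion of each component
x  = np.array([60.0, 90.0, 120.0, 180.0, 250.0])
sx = np.array([10.0, 10.0, 15.0, 20.0, 30.0])
y  = np.array([0.9, 1.0, 1.1, 1.3, 1.5])
sy = np.array([0.15, 0.15, 0.2, 0.2, 0.3])

data = odr.RealData(x, y, sx=sx, sy=sy)
fit = odr.ODR(data, odr.Model(linear), beta0=[0.8, 2.0e-3]).run()
intercept, slope = fit.beta
print(f"r21 = {intercept:.2f} + {slope:.1e} * sigma_v")
```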
The left panels of Fig. 6 do not indicate any statistically significant relation between α CO and either σ v or d. Instead, they show that the α CO factor is systematically higher -although formally only at a significance of 1.2σ (Table 2) -in the quiescent gas than in the outflow, regardless of the velocity dispersion of the clouds, or of their position with respect to the merger nucleus. For the non-outflowing components, the α CO factors are at least twice the so-called (U)LIRG value (Downes & Solomon 1998), and reach up to Galactic values. This result is consistent with the multi-transition analysis by Papadopoulos et al. (2014), and is likely due to the state of the dense gas phase that low-J CO lines alone cannot constrain, but which instead is accounted for when using [CI](1-0) as a molecular mass tracer. Nevertheless the outflowing H 2 gas has lower α CO values than the quiescent ISM. This is indeed expected from the ISM physics behind α CO for warm and strongly unbound gas states (Papadopoulos et al. 2012a), i.e. the type of gas that we expect to be embedded in outflows. In particular, in the case that molecular outflows are ubiquitous in (U)LIRGs as suggested by observations (Sturm et al. 2011;Veilleux et al. 2013;Spoon et al. 2013;Cicone et al. 2014), the outflow may be the location of the diffuse and warm molecular gas phase that is not contained in selfgravitating cooler clouds -a sort of 'intercloud' medium advocated by some of the previous analyses based solely on low-J CO, 13 CO line observations (Aalto et al. 1995;Downes & Solomon 1998). Furthermore, the flat trend between α CO and d observed in Fig. 6 does not support the hypothesis that the lower α CO values in (U)LIRGs are related to the collision of the progenitors' disks, since in this case we would naively expect the lower α CO clouds to be concentrated in the central regions of the merger. The α CO values measured for the outflow components are however significantly higher than the optically thin value, suggesting that not all of the outflowing material is diffuse and warm, but there may still be a significant amount of dense gas. These results are further discussed and contextualised in § 4.2.
The right panels of Fig. 6 show a weak correlation between the r21 and σv (correlation coefficient, ρ = 0.4 ± 0.2) and an anti-correlation with the distance, although only for the systemic/quiescent components (ρ = −0.7 ± 0.2). The corresponding best fit relations, of the form r21 = α + βx, plotted in Fig. 6, have (α, β) = (0.8 ± 0.2, 2.1 ± 1.4 × 10−3) for x = σv, and (α, β) = (1.5 ± 0.2, −0.33 ± 0.13) for x = d. The systemic ISM shows 0.8 ≲ r21 ≲ 1.4, whereas the outflow is characterised by higher ratios, with most components in the range 1.2 ≲ r21 ≲ 2.5, although we observe a large spread in r21 values at d > 2 kpc.
CO(2-1)/CO(1-0) luminosity ratios of r 21 ∼ 0.8 − 1.0 are typically found in the molecular disks of normal spiral galaxies (Leroy et al. 2009) and are indicative of optically thick CO emission with T kin ∼ 10 − 30 K (under LTE assumptions). Nevertheless such low-J CO line ratios, in absence of additional transitions, are well-known to be highly degenerate tracers of the average gas physical conditions. Higher-J data of CO, molecules with larger dipole moment, and isotopologues can break such degeneracies. Such studies exist for NGC 6240 (Greve et al. 2009;Meijerink et al. 2013;Papadopoulos et al. 2014), and found extraordinary states for the molecular gas, with average densities typically above 10 4 cm −3 and temperatures T kin ∼ 30 − 100 K.
On the contrary, global CO(2-1)/CO(1-0) ratios exceeding unity have a lower degree of degeneracy in terms of the extraordinary conditions that they imply for molecular gas, as they require warmer (T kin 100 K) and/or strongly unbound states (Papadopoulos et al. 2012b). In NGC 6240, optical depth effects are most likely at the origin of the r 21 > 1 values. More specifically, such ratios can result from highly non-virial motions (e.g. the large velocity gradients of the outflowing clouds), causing the CO lines to become partially transparent, as also supported by the tentative trend of increasing r 21 with σ v (Figure 6). This finding independently strengthens our explanation for the lower α CO factors derived for the outflowing gas, which are intermediate between an optically thin and an optically thick value (for typical solar CO abundances).
4. DISCUSSION
4.1. Assumptions and caveats of our analysis
Our results build, on the one hand, on the identification of the outflow components, and on the other hand on the assumption that CO(1-0) and [CI](1-0) trace the same molecular gas, implying that Mmol can be measured from [CI](1-0). In this section we further comment on these steps and discuss their caveats and limitations.
4.1.1. The outflow identification
The outflow identification is a fundamental step of our analysis, and leads to one of the most surprising findings, i.e. that 60 ± 20% of the molecular ISM in NGC 6240 belongs to the outflow. This unprecedented result may hold the key to finally understanding the extreme ISM of this source, which makes it an outlier even compared to other (U)LIRGs, as acknowledged by several authors (Meijerink et al. 2013;Papadopoulos et al. 2014;Israel et al. 2015). For example, Meijerink et al. (2013) suggested that the CO line emission in NGC 6240 is dominated by gas settling down after shocks, which would be consistent with gas cooling out of an outflow. A massive outflow would also explain why the gaseous and stellar kinematics are decoupled (Engel et al. 2010;Tacconi et al. 1999).
In § 3.3 we have ascribed to the outflow all spectral line components with FWHM> 400 km s −1 , v < −200 km s −1 or v > +250 km s −1 detected within the central 12 × 6 region investigated in this paper. However, the spatial information is also crucial for identifying outflowing gas, especially in a source undergoing a major merger, since the outflow signatures may be degenerate with gravity-driven dynamical effects. In the specific case of NGC 6240, as explained below and shown in detail in Appendix B, the high S/N and spatial resolution of our observations allow us to disentangle feedbackrelated effects from other mechanisms and reliably identify the outflow emission.
During a galaxy collision, high-v/high-σ v gas can be concentrated in the nuclear region as a consequence of gravitational torques, which cause a fraction of the gas to lose angular momentum and flow toward the center. At the same time, gravitational torques and tidal forces can drive out part of the gas from the progenitors' disks and form large-scale filaments denominated 'tidal tails' and 'bridges'. However, in the case of NGC 6240, these gravity-induced mechanisms can hardly explain the kinematics and morphology of the ∼ 10 kpc-scale, wide opening angle-emission shown in Figs. 1-3. In particular, the high-v/high-σ v structures revealed by the [CI](1-0) moment maps, which are correlated with features observed on much larger scales (see § 3.1), cannot be due to nuclear inflows. In this case, we would indeed expect the σ v of the gas to be enhanced toward the nucleus (or nuclei), rather than in offset positions that are several 100s of pc away from the nuclei or from the geometric center of the AGN pair (see for example the different signature of outflows and inflows in the velocity dispersion maps shown by Davies et al. (2014)). The hourglass-shaped configuration visibile in the [CI](1-0) velocity dispersion map is more suggestive of an outflow opening toward east and west, i.e. along the same directions of expansion of the high-v gas.
On larger scales, tidal tails or bridges produced in galaxy collisions may also affect the dynamical state of the ISM. However, the line-widths of the molecular emission from such filamentary structures are rather low (∼ 50 − 100 km s −1 , Braine et al. 2001). Therefore, in order to reproduce the spatially-and kinematically-coherent structure shown in Figs. 1, and especially the spatial overlap across several kpc between the highly blue-shifted and red-shifted emissions (Figure 2), one would need to postulate a very specific geometry where several tidal tails overlap along the line of sight across more than 10 kpc.
Based on these considerations, and on the detailed discussion reported in Appendix B, we conclude that other mechanisms such as rotating disks or gravityinduced dynamical motions, possibly also coexisting in NGC 6240, are unlikely to significantly affect our outflow energetics estimates.
4.1.2. Combining [CI](1-0) and CO(1-0) data to infer αCO and Mmol
The second key step of our analysis is to combine the [CI](1-0) and CO(1-0) line observations to derive molecular gas masses. As described in § 3.2-3.4, our strategy is to use the places where [CI](1-0) and CO(1-0) are both detected at a S/N≥ 3 to measure the corresponding α CO . Molecular gas masses are then computed by using the CO(1-0) data. In particular, we select components where CO(1-0) is detected at a S/N≥ 3 and convert L CO(1−0) into M mol by employing either the corresponding [CI]-derived α CO (possible only if [CI](1-0) is also detected with S/N≥ 3) or alternatively by using the mean α CO value appropriate for that component (i.e. 'global', 'systemic', or 'outflow', Table 2).
The fundamental underlying assumption is that [CI](1-0) and CO(1-0) trace the same material. Earlier theoretical works envisioned neutral Carbon to be confined in the external (low extinction A V ) layers of molecular clouds, hence to probe a different volume compared to CO. However, as discussed by , this theory was dismantled by observations finding a very good correlation between [CI] and CO as well as uniform [CI]/CO ratios across a wide range of Galactic environments, including regions shielded from FUV photons (e.g. Keene et al. 1985;Ojha et al. 2001;Tanaka et al. 2011). The few available observations of [CI] lines in local galaxies have further supported the concurrence of CO and [CI] in different physical conditions (Israel et al. 2015;Krips et al. 2016).
The good mixing of CO and [CI] could be a consequence of turbulence and/or cosmic rays. Turbulent diffusion can merge any [CI]-rich H2 phase (expected to prevail in low AV regions) with the more internal CO-rich H2 gas, hence uniforming the [CI]/CO abundance ratio throughout molecular clouds (Glover et al. 2015). Cosmic rays, by penetrating deep into molecular clouds and so destroying CO (but not H2) over larger volumes compared to FUV photons, can also help enrich the internal regions of clouds with neutral Carbon (Bisbas et al. 2015, 2017). Both mechanisms are expected to be efficient in (U)LIRGs and in their molecular outflows. The latter are (by definition) highly turbulent environments. Furthermore, cosmic rays originating in the starburst nuclei can leak along such outflows hence influencing the chemistry of their embedded ISM (see discussion in Papadopoulos et al. (2018), and recent results by González-Alfonso et al. (2018)). For these reasons, we can assume that CO and [CI] trace the same molecular gas, for both the quiescent and outflowing components of NGC 6240.
Thanks to the simple three-level partition function of neutral Carbon, and to its lines being optically thin in most cases (including NGC 6240, Israel et al. (2015)), the main sources of uncertainties for [CI]-based mass estimates are XCI and Tex (Eq 1). Previous observations indicate very little variations in XCI in the metal-enriched ISM of IR-luminous galaxies at different redshifts (Weiß et al. 2003, 2005; Danielson et al. 2011; Alaghband-Zadeh et al. 2013), including the extended (r > 10 kpc) circum-galactic medium of the Spiderweb galaxy (Emonts et al. 2018). In our calculations we assumed XCI = (3.0 ± 1.5) × 10−5 to take into account a systematic uncertainty associated with the [CI]/H2 abundance ratio. Because of the particular LTE partition function of neutral Carbon, [CI]-derived masses depend little on Tex for Tex ≳ 15 K. We set Tex = 30 K, which is consistent with the value that can be estimated from the global [CI]2-1/1-0 brightness temperature ratio measured in NGC 6240 (Papadopoulos et al. 2014).
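To illustrate the weak Tex dependence noted above, the short sketch below evaluates the [CI](1-0)-to-H2 conversion factor of Eqs. 1-2 over a range of excitation temperatures; it reuses only quantities already defined in § 3.2.

```python
# Small sketch showing that alpha_[CI] (Eqs. 1-2) varies only mildly for Tex >~ 15 K.
import numpy as np

def alpha_ci(Tex, Xci=3.0e-5):
    Q = 1.0 + 3.0 * np.exp(-23.6 / Tex) + 5.0 * np.exp(-62.5 / Tex)
    return 4.31e-5 / Xci * Q * np.exp(23.6 / Tex)

for Tex in (15.0, 20.0, 30.0, 50.0, 100.0):
    print(f"Tex = {Tex:5.1f} K  ->  alpha_[CI] = {alpha_ci(Tex):.2f}")
# The resulting values stay within ~25% of the Tex = 30 K value adopted here.
```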
Therefore, our assumptions regarding the conversion between [CI](1-0) line data and M mol are well justified by previous results. However, we caution that a giant galactic-scale outflow such as the one hosted by NGC 6240 constitutes an unprecedented environment for molecular gas clouds, and there is no comparable laboratory in our Galaxy that can be used as a reliable reference. The study of the physical conditions of such outflows has only just started, and this is the first time that the [CI](1-0) line emission from high-velocity gas components extending by several kpc has been imaged at high spatial resolution. Further investigation is needed, and our work constitutes just a starting point.
4.2. The role of outflows in the global αCO factor
The average α CO factors that we measure for the quiescent and outflowing components of the ISM in NGC 6240 (Table 2) are both higher than the classic (U)LIRG α CO . How can we reconcile this result with previous works advocating for significantly lower α CO values in (U)LIRGs? In the case of NGC 6240, our analysis has highlighted several effects that may have plagued previous α CO estimates:
1. The widespread presence of outflowing gas implies that, at any location within this merger, the molecular line emission includes a significant contribution from the outflow, with its overall lower α CO and higher r 21 . As a result, an analysis of the global ISM conditions (especially if based only on low-J CO lines, see also point 3 below) would get contaminated by the warm unbound H 2 envelopes in the outflow, and their larger L CO /M H2 ratios would drive down the global α CO estimate (Yao et al. 2003;Papadopoulos et al. 2012a).
2. The outflow dominates the velocity field of the H 2 gas throughout the entire source, including the central region ( Figure 3). The apparent nuclear north-south velocity gradient identified in previous CO line data (Tacconi et al. 1999;Bryant & Scoville 1999) is actually not compatible with ordered rotation once observed at higher spatial resolution, but it shows several features distinctive of the outflow. Therefore, the assumption that the molecular gas in this area is dominated by ordered motions is broken, making any dynamical mass estimate unreliable (if the outflow is not properly taken into account).
3. Previous analyses based only on low-J CO lines
have probably missed a substantial fraction of the denser gas phase that is instead accounted for when using the optically thin [CI](1-0) line as a gas mass tracer, or when probing the excitation of the ISM using high-J CO transitions and high density molecular gas tracers. Indeed, the α CO value derived for the quiescent gas reservoir is consistent with a significant contribution from a dense gas state. Even in the outflow, the average α CO is still significantly higher than the optically thin value, hence the presence of dense gas may not be negligible. A conspicuous dense gas phase has already been demonstrated in a few other galaxy-wide molecular outflows (Aalto et al. 2012;Sakamoto et al. 2014;García-Burillo et al. 2014;Alatalo et al. 2015), and its presence would make more likely the formation of stars within these outflows (Maiolino et al. 2017).
4. The molecular gas emission in NGC 6240 is clearly very extended -both spectrally and spatially. As a result, at least some of the previous interferometric observations (especially 'pre ALMA') may have been severely affected by incomplete uv coverages filtering out the emission on larger scales, hence impacting on the measured line fluxes and sizes. Furthermore, as already noted by Tacconi et al. (1999), an insufficient spectral bandwidth may have hindered a correct baseline and/or continuum fitting and subtraction. The latter can be an issue for both single dish and interferometric observations, including observations with ALMA if only one spectral window is employed to sample the line.
Since molecular outflows are a common phenomenon in local (U)LIRGs (Sturm et al. 2011;Veilleux et al. 2013;Spoon et al. 2013;Cicone et al. 2014;Fluetsch et al. 2018), at least some of the above considerations may be generalised to their entire class. Therefore, it is possible that the so-called (U)LIRG α CO factor is an artefact resulting from modelling the molecular ISM of such sources containing massive H 2 outflows.
4.3. An interplay of feedback mechanisms at work
The extreme spatial extent of its molecular outflow makes NGC 6240 one of the few sources -all powerful quasars -hosting H2 outflows with sizes of ≳ 10 kpc (Veilleux et al. (2017), see also the 30 kpc-size [CII]λ158µm outflow at z = 6.4 studied by Cicone et al. (2015)). In comparison, the H2 gas entrained in the well-studied starburst-driven winds of M 82 and NGC 253 reaches at maximum scales of ∼ 1 − 2 kpc (Walter et al. 2002, 2017). Furthermore, among all the large-scale molecular outflows discovered so far in quasar host galaxies, the outflow of NGC 6240 is the one that has been observed at the highest spatial resolution (∼ 120 pc). Indeed, the ALMA [CI](1-0) line data allowed us to probe deep into the nuclear region of the merger, close to the launching point of the molecular wind, and surprisingly revealed that the outflow emission peaks between the two AGNs rather than on either of the two. This is apparently at odds with an AGN radiative-mode feedback scenario, in which the multiphase outflow is expected to be generated close to the central engine (Costa et al. 2015).
Nevertheless, the role of the AGN(s) is certified by the extreme energetics of the molecular outflow, which has been constrained here with unprecedented accuracy. By comparing our M out and M tot mol estimates (Table 3), it appears that 60 ± 20 % of the molecular medium is involved in the outflow. The estimated mass-loss rate of 2500 ± 1200 M yr −1 corresponds to η ≡Ṁ out /SFR = 50 ± 30, whereas the lower limit oṅ M out , calculated using the optically thin α CO prescription, corresponds to η ≡Ṁ out /SFR = 9 ± 4. Such high mass loading factors are inconsistent with a purely star formation-driven wind. As a matter of fact, stellar feedback alone can hardly bear outflows with η much higher than unity. Cosmological hydrodynamical simulations incorporating realistic stellar feedback physics, by including mechanisms other than supernovae, can reach up to η ∼ 10 (Hopkins et al. 2012). However, because η in these simulations anti-correlates with the mass of the galaxy, the highest η values are generally predicted for dwarf galaxies, whereas for galaxies with baryonic masses of several 10 10 M such as NGC 6240, the η achievable by stellar feedback can be at maximum 2 − 3.
Based on its energetics, we can therefore rule out that the massive molecular outflow observed in NGC 6240 is the result of star formation alone. The outflow energetics can instead be fully accommodated within the predictions of AGN feedback models (Faucher-Giguère & Quataert 2012;Zubovas & King 2014;Costa et al. 2014). In addition, these models can explain the multiwavelength properties of NGC 6240. At optical wavelengths, NGC 6240 is known to host a ionised wind (Heckman et al. 1990), with large-scale superbubbles expanding by tens of kpc towards north-west and southeast (Veilleux et al. 2003;Yoshida et al. 2016). The Hα emission from the ionised wind shows a close spatial correspondence with the soft X-ray continuum, suggesting the presence of gas cooling out of a shocked medium (Nardini et al. 2013). Furthermore, Wang et al. (2014) detected a diffuse component in hard-X-ray continuum and FeXXV line emission, tracing T ∼ 7 × 10 7 K gas between the two AGNs (north-west of the southern nucleus, similar to the [CI](1-0) blue wing in Fig 1d), as well as in kpc-scale structures that are remarkably coincident with both the strong NIR H 2 emission (Max et al. 2005;van der Werf et al. 1993) and the Hα filaments.
At radio wavelengths, Colbert et al. (1994) reported the detection of non-thermal continuum emission with a steep spectrum extending by several kpc in an arclike structure west of the AGNs, later confirmed also by Baan et al. (2007), together with a possibly similar feature on the eastern side. This structure lacks a clear spatial correspondence with optical or NIR starlight (which excludes a starburst origin) and its complex morphology suggests a connection with the Hα outflow. Theoretically, the association between an AGN-driven wind and extended non-thermal radiation (due to relativistic electrons accelerated by the forward shock) has been predicted by Nims et al. (2015). Observationally, the rough alignment of the western arc-like structure discovered by Colbert et al. (1994) with the molecular outflow studied in this work would also support this hypothesis, although the current data do not allow us to probe the presence of H 2 outflowing gas at the exact position of the radio emission. The total radio power of this arclike feature is comparable with that of extended radio structures previously observed in radio-quiet AGNs with prominent outflows (Morganti et al. 2016). Future facilities like the Cherenkov Telescope Array (CTA) may reveal the γ-ray counterpart of the non-thermal emission, as expected for an AGN outflow shock (Lamastra et al. 2017).
In summary, all these multi-wavelength observational evidences point to a radiative-mode AGN feedback mechanism (Faucher-Giguère & Quataert 2012; Nims et al. 2015). However, at the same time, it is difficult to reconcile a classic model with an H 2 outflow whose emission does not peak on either of the two AGNs. A complex interplay of stellar and AGN feedback must be at work in this source (see also Müller-Sánchez et al. 2018), and we cannot exclude the additional contribution from compact radio-jets (Gallimore & Beswick 2004), which may be accelerating part of the cold material (Mukherjee et al. 2016). Finally, positive feedback may also be at work in NGC 6240. There is indeed a striking correspondence between (i) the morphology of the approaching side of the outflow north-west of the southern AGN (Fig. 1d), (ii) a peak of dust extinction, and (iii) a stellar population with unusually large stellar σ v and blue-shifted velocities (Engel et al. 2010). The latter, according to Engel et al. (2010), may have been formed recently as a result of crushing of molecular clouds, which could be related to the observed outflow event (Maiolino et al. 2017;Zubovas & King 2014) provided these stars are not older than a few Myr.
5. SUMMARY AND CONCLUSIONS
A powerful multiphase outflow shapes the distribution of gas in NGC 6240, and it is likely at the origin of many of the extraordinary features that for years have puzzled scientists studying this source. In this work we used new ALMA [CI](1-0) line observations, in combination with ALMA CO(2-1) and IRAM PdBI CO(1-0) line data, to study the morphology, energetics, and physical state of the molecular component of the outflow. Our main findings are:
• The molecular outflow extends by more than 10 kpc along the east-west direction, and it is clearly detected in both its approaching (blueshifted) and receding (red-shifted) sides. Its emission peaks between the two AGNs, rather than on either of the two. Furthermore, the outflow dominates the H 2 gas velocity field in the merger nucleus, as shown by the presence of a striking hourglass-shaped feature in the high-res (∼ 0.24 ) [CI](1-0) line moment 2 map. This high-σ v structure, aligned east-west, traces the launch base of the kpc-scale outflow. The outflow, with its large flux contribution to the molecular line emission in the nucleus, can explain both the high gas turbulence and the strong decoupling of stellar and gaseous kinematics evidenced in this source by previous works.
• We combined the [CI](1-0) and CO(1-0) line observations to derive the α CO factor in the outflow, which is on average α CO = 2.1 ± 1.2 M (K km s −1 pc 2 ) −1 . The information on the α CO , in conjunction with a spatiallyresolved spectral analysis of the molecular line emission, allowed us to constrain with unprecedented accuracy the energetics of the molecular outflow. We estimate that the outflow entrains M out = (1.2 ± 0.3) × 10 10 M , corresponding to 60 ± 20 % of the molecular reservoir of NGC 6240. The total mass-loss rate isṀ out = 2500 ± 1200 M yr −1 = 50 ± 30 SFR, which energetically rules out a solely star formation-driven wind.
• For the quiescent gas components, the α CO factors are on average higher than in the outflow (irrespective of their distance from the nucleus), with a mean value of α CO = 3.2 ± 1.8 M (K km s −1 pc 2 ) −1 , i.e. at least twice the so-called (U)LIRG value. This result is consistent with recent multi-transition ISM analyses and is likely due to a dense gas phase that cannot be constrained by using low-J CO lines alone, but which is instead accounted for when using [CI](1-0) as a molecular gas tracer.
• We observe a tentative trend of increasing r 21 ratios with σ v and measure r 21 > 1 values in the outflow ( r 21 = 1.4±0.3), while r 21 1 for quiescent gas. We explain the r 21 > 1 ratios with optical depth effects, whereby the highly non virial motions of the outflowing clouds cause the CO lines to become partially transparent.
• Based on the finding that lower α CO and higher r 21 values are typical of the outflowing clouds, we propose that molecular outflows are the location of the warm and strongly unbound phase -the 'intercloud medium' invoked by previous studies -that drives down the global α CO in (U)LIRGs. However, we note that the [CI]-based α CO factor derived for the outflow is higher than the optically thin value, suggesting that not all of the outflowing material is in such warm diffuse phase but that there may still be a significant amount of dense gas entrained.
• The outflow kinetic power and momentum rate, respectively equal to (0.033 ± 0.019) L AGN and (80 ± 50) L AGN /c, could be fully accommodated within the predictions of AGN 'blast-wave' feedback models. However, the puzzling outflow morphology, with a launch region situated between the two AGNs, and a direction of expansion perpendicular to the axis connecting the two nuclei, challenges a classic AGN feedback scenario. A complex interplay of stellar and AGN feedback processes must be at work in NGC 6240.
This project has received funding from the European Union's Horizon 2020 research and innovation pro-gramme under the Marie Sk lodowska-Curie grant agreement No 664931. The research leading to these results has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 730562 [RadioNet]. R.M. acknowledges ERC Advanced Grant 695671 'QUENCH' and support by the Science and Technology Facilities Council (STFC). E.T. acknowledges support from FONDECYT regular grant 1160999 and Basal-CATA PFB-06/2007. G.C.P. acknowledges support from the University of Florida. We thank the referee for his/her constructive report, which helped us improve the discussion of the results. This paper makes use of the following ALMA data: ADS/JAO.ALMA #2015.1.00717.S and #2015.1.00370.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. This publication makes use of observations carried out with the IRAM Plateau de Bure Interferometer. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain). C.C. thanks Sandra Burkutean for helping her with the combination of the ALMA and ACA B8 datacubes and Alvaro Hacar Gonzalez for suggesting to use the ACA B8 data as a source model in the cleaning of the ALMA B8 datacubes, which significantly improved the results.
Facilities: ALMA, IRAM PdBI
Software: CASA v4.6.0, GILDAS (Pety 2005)

APPENDIX

A. ADDITIONAL CO(1-0) AND CO(2-1) OUTFLOW MAPS

Interferometric maps of the CO(1-0) and CO(2-1) high velocity emissions are shown in Fig. 7 separately for the blue (left panels: a, c) and red (right panels: b, d) line wings. To produce these maps, the CO(1-0) and CO(2-1) uv visibilities have been integrated within the same velocity ranges as in Fig. 1. Figure 7 demonstrates that the blue and red wings of the low-J CO lines in NGC 6240 are both spatially extended on scales of several kpc, with their most extended features aligned preferentially along the east-west direction. The positive contours of the CO(2-1) blue and red line wing emissions shown in Fig. 7(c,d) are overplotted in Figure 2 to allow a direct comparison of their extent and morphology.

B.

The CO(1-0), CO(2-1), and [CI](1-0) spectra extracted from the grid shown in panel (a) of Fig 1 are presented in Figures 8, 9, and 10. The results of the simultaneous fitting procedure described in § 3.3 are overplotted on the data. Each spectrum is labelled with the box ID: IDs C1-C9 correspond to the central 2″ × 2″ boxes, whereas E1-E2 and W1-W2 are respectively the eastern and western 3″ × 3″ boxes. The spectral components resulting from the simultaneous fit are classified as 'outflow' or 'quiescent' according to their velocity shift and dispersion, as described in § 3.3. In the following we examine -case by case -the results of such outflow identification procedure (see also § 4.1.1 for a more general discussion).
It is already evident from Fig.1(a,b,c) that the bulk of the molecular line emission at high projected velocities traces a very extended, non-collimated structure aligned east-west, with a large overlap between the blue and redshifted sides ( Figure 2). As best seen in the high S/N CO(2-1) spectra in Fig. 9, the boxes E1, E2, W1, and W2, tracing gas at d > 2 kpc from the nucleus, exhibit broad spectral features -distinguishable from the narrow components -which in some cases (e.g. E1) dominate the total CO flux. Such broad wings are characterised by velocity shifts (v ∼ 300 − 600 km s −1 ) and dispersions (σ v ∼ 125 − 215 km s −1 ) that are much higher than the narrow components detected in the same spectra (v ∼ 30 − 180 km s −1 , σ v ∼ 30 − 100 km s −1 ). As a result, we identify the high-v, high-σ v spectral components at d > 2 kpc as due to an outflow.
Closer to the nucleus, the outflow identification becomes more challenging. However, once we have identified the broad components in E1, E2, W1, and W2 as part of an outflow, then it is natural to ascribe similar broad components -spatially aligned along the east-west axis -to the same outflow. In particular, C4 and C8 (east and west of the nucleus), C2-C3 (north-west of the nucleus, along the same direction as the high-v structure in Fig. 1e) and C6-C7 (south-east of the nucleus, along another direction of outflow expansion as shown in Fig. 1d) also show a broad component extending up to ∼ 1000 km s−1 on both the red- and blue-shifted sides. The outflow identification is more uncertain for boxes C5 and C9, where the wings are less prominent than in other regions, and the spatial alignment with the larger-scale outflow is not obvious. However, the contribution from these boxes to the total outflow mass and mass-loss rate is negligible. Indeed, without C5 and C9, we derive Mout = 1.1 × 10^10 M⊙ and dMout/dt = 2350 M⊙ yr−1, consistent with the values given in Table 3. Hence the uncertain outflow identification in boxes C5 and C9 does not affect our results.
We now discuss the central box C1, which alone contributes to: M^C1_out = (4 ± 2) × 10^9 M⊙ and dM^C1_out/dt = 1400 ± 1100 M⊙ yr−1. There are several arguments in support of a significant outflow contribution in this region: (i) the striking similarity between the C1 spectrum with that of the adjacent box C8; (ii) the notion that the outflow must originate from within this region, because it hosts two AGNs and most of the star formation activity; (iii) the [CI](1-0) emission at |v| > 200 km s−1 arising from within this region is spatially extended and follows the morphology of the larger-scale outflow (Figure 1(d,e)); (iv) in this region, the outflow dominates even the emission at low projected velocities, as shown by Fig. 3; (v) the αCO and r21 values calculated for the outflow components identified in box C1 (data points at d = 0 kpc in the bottom panels of Fig. 6) are consistent with the values measured in the larger-scale outflow.
Therefore, based on the spectral and spatial properties of the molecular line emission that we have ascribed to the outflow, we conclude that alternative mechanisms such as rotating disks and tidal tails are unlikely to significantly affect our outflow energetics estimates.
Figure 1. The extended NGC 6240 outflow observed using different molecular gas tracers. The outflow emission, integrated within v ∈ (−650, −200) km s−1 (blue wing) and v ∈ (250, 800) km s−1 (red wing) and combined together, is shown in the maps (a), (b), and (c) respectively for the CO(1-0), CO(2-1), and [CI](1-0) transitions. The three maps have matched spatial resolution (∼ 1.2″, details in § 2). Contours correspond to: (−3σ, 3σ, 6σ, 12σ, 24σ, 48σ, 150σ) with 1σ= 0.14 mJy beam−1 in panel (a); (−3σ, 3σ, 6σ, 24σ, 48σ, 200σ, 400σ) with 1σ= 0.23 mJy beam−1 in panel (b); (−3σ, 3σ, 6σ, 12σ, 24σ, 48σ) with 1σ= 1.23 mJy beam−1 in panel (c). Panels (d) and (e) show the maps of the [CI](1-0) blue and red wings at the original spatial resolution of the ALMA Band 8 data (0.24″, details in § 2). Contours correspond to (−3σ, 3σ, 6σ, 12σ, 18σ, 20σ) with 1σ= 1.1 mJy beam−1 in panel (d) and 1σ= 1 mJy beam−1 in panel (e). The black crosses indicate the VLBI positions of the AGNs from Hagiwara et al. (2011). The synthesised beams are shown at the bottom-left of each map. The grid encompassing the central 12″ × 6″ region and employed in the spectral analyses presented in § 3.2 and § 3.3 is drawn in panel (a).
Figure 3. Intensity-weighted moment maps of the [CI](1-0) line emission in the merger nucleus. The maps were computed from the higher resolution ALMA+ACA merged data cube (see § 2) by using the task immoments and by selecting the spectral range v ∈ (−200, 250) km s−1. Contours correspond to: [-100, -50, 0, 50, 100, 150] km s−1 (moment 1, central panel) and [50, 80, 100, 110, 130, 170, 180] km s−1 (moment 2, right panel).
Figure 4. Total CO(1-0), CO(2-1), and [CI](1-0) spectra extracted from a 12″ × 6″-size rectangular region centred at RA=16:52:58.900, Dec=02.24.03.950 and displayed in Fig. 1(a). The rms values per spectral channel are: 3.2 mJy (δv = 53 km s−1), 17 mJy (δv = 13 km s−1) and 78 mJy (δv = 49 km s−1), respectively for CO(1-0), CO(2-1), and [CI](1-0). The spectra were simultaneously fitted using two Gaussian functions (white dashed curves) tied to have the same velocity and width in all three transitions. The best fit results are reported in Table 1. The source-averaged r21 and αCO calculated from this fit are listed in Table 2.
Figure 5. FWHM as a function of central velocity of all Gaussian components employed in the simultaneous fitting of the CO(1-0), CO(2-1), and [CI](1-0) box spectra. The blue dashed rectangle constrains the region of the parameter space that we ascribe to the 'systemic' components.
Figure 6. αCO (left) and r21 (right) as a function of the average velocity dispersion (top) and of the distance from the nucleus (bottom) of the corresponding molecular line components. A detailed explanation on how αCO and r21 were calculated can be found in § 3.3. The y axis on the right side of the αCO plots shows the corresponding [CI](1-0)/CO(1-0) line luminosity ratio. The horizontal blue and red dashed lines are the mean values reported in Table 2 for the systemic and outflow components, respectively. The grey lines indicate the Milky Way αCO factor (Bolatto et al. 2013, left panels) and the average r21 = 0.8 measured in star forming galaxies (Leroy et al. 2009, right panels). The best-fits obtained from a Bayesian linear regression analysis following the method by Kelly (2007) are plotted using dot-dashed lines: black lines show the best fits to the total sample, whereas blue and red lines correspond to the fits performed separately on the systemic and outflowing components.
Figure 7. CO(1-0) (top row) and CO(2-1) (bottom row) interferometric maps of the outflow emission, integrated in the blue (a, c) and red (b, d) wings, by using the same velocity ranges as in Fig 1. The maps have matched spatial resolution (∼ 1.2″, details in § 2). Contours correspond to: (−3σ, −2σ, 2σ, 3σ, 6σ, 12σ, 24σ, 48σ, 60σ) with 1σ= 0.33 mJy beam−1 in panel (a) and 1σ= 0.39 mJy beam−1 in panel (b); (−3σ, 3σ, 10σ, 24σ, 48σ, 200σ, 350σ) with 1σ= 0.33 mJy beam−1 in panel (c) and 1σ= 0.3 mJy beam−1 in panel (d). Similar to Fig. 1, the black crosses indicate the VLBI positions of the AGNs from Hagiwara et al. (2011).
Table 2. αCO and r21 values*

                           αCO [M⊙ (K km s−1 pc2)−1]    r21
Total†                     2.1 ± 1.1                    1.22 ± 0.14
Total† narrow comp.        3.3 ± 1.8                    1.25 ± 0.18
Total† broad comp.         1.8 ± 0.9                    1.21 ± 0.17
Mean‡ global               2.5 ± 1.4                    1.17 ± 0.19
Mean‡ systemic comp.       3.2 ± 1.8                    1.0 ± 0.2
Mean‡ outflow comp.        2.1 ± 1.2                    1.4 ± 0.3

* Quoted errors are dominated by systematic uncertainties (e.g. absolute flux calibration errors, error on XCI).
† Calculated from the simultaneous fit to the total CO(1-0), CO(2-1), and [CI](1-0) spectra shown in Fig. 4, whose results are reported in Table 1 (details in § 3.2).
‡ Mean values calculated from the simultaneous fit to the CO(1-0), CO(2-1), and [CI](1-0) spectra extracted from the grid of 13 boxes shown in Fig 1(a), as explained in § 3.3. The corresponding spectral fits are shown in Appendix B (Figs 8, 9, 10).
Table 3. Summary of source and outflow properties

Source properties:
  L_TIR(8−1000µm) [erg s−1]                2.71 × 10^45 (a)
  L_Bol [erg s−1]                          3.11 × 10^45 (b)
  L_AGN [erg s−1]                          (1.1 ± 0.4) × 10^45 (c)
  α_AGN ≡ L_AGN/L_Bol                      0.35 ± 0.13
  SFR [M⊙ yr−1]                            46 ± 9 (d)
  M^tot_mol [M⊙]                           (2.1 ± 0.5) × 10^10 (e)

Molecular outflow properties (estimated in § 3.4):
  r_max [kpc]                              2.4 ± 0.3 †
  v_out [km s−1]                           250 ± 50 ‡
  σ_out [km s−1]                           220 ± 20 ‡
  τ_dyn [Myr]                              6.5 ± 1.8 ‡
  M_out [M⊙]                               (1.2 ± 0.3) × 10^10
  Ṁ_out [M⊙ yr−1]                          2500 ± 1200
  vṀ_out [g cm s−2]                        (3.1 ± 1.2) × 10^36
  1/2 Ṁ_out v^2 [erg s−1]                  (3.6 ± 1.6) × 10^43
  η ≡ Ṁ_out/SFR                            50 ± 30
  (vṀ_out)/(L_AGN/c)                       80 ± 50
  (1/2 Ṁ_out v^2)/L_AGN                    0.033 ± 0.019
  τ_dep ≡ M^tot_mol/Ṁ_out [Myr]            8 ± 4

Notes. (a) From the IRAS Revised Bright Galaxy Sample (Sanders et al. 2003); (b) L_Bol = 1.15 L_TIR, following Veilleux et al. (2009); (c) Total bolometric luminosity of the dual AGN system estimated from X-ray data by Puccetti et al. (2016); (d) SFR = (1 − α_AGN) × 10^−10 L_TIR, following Sturm et al. (2011); (e) Total molecular gas mass in the 12″ × 6″ region encompassing the nucleus and the outflow, derived in § 3.4. † Maximum distance at which we detect [CI](1-0) in the outflow at a S/N> 3, hence the quoted r_max should be considered a lower limit constraint allowed by current data. ‡ Mean values obtained from the analysis of all boxes.
Available at https://casa.nrao.edu
In estimating the error on this ratio we have ignored the systematic uncertainty on X CI , assuming it affects both α CO measurements in the same way.
This was the case for the three boxes (labelled as 'E1', 'W1' and 'W2' in Figs. 8-10) without a S/N≥ 3 detection of[CI](1-0).
All quantities relevant to the individual boxes are identified by an index i = 1, 12 (e.g. v out,i ) in order to distinguish them from the corresponding galaxy-integrated quantities (e.g. vout).
We used the IDL routine linmix err.pro
Figure 8. CO(1-0) spectra extracted from the grid of 13 boxes shown in panel (a) of Fig 1, with overplotted the results of the simultaneous fit procedure.
Figure 9. CO(2-1) spectra extracted from the grid of 13 boxes shown in panel (a) of Fig 1, with overplotted the results of the simultaneous fit procedure.
Figure 10. [CI](1-0) spectra extracted from the grid of 13 boxes shown in panel (a) of Fig 1, with overplotted the results of the simultaneous fit procedure.
REFERENCES

Aalto, S., Booth, R. S., Black, J. H., & Johansson, L. E. B. 1995, A&A, 300, 369
Aalto, S., Garcia-Burillo, S., Muller, S., et al. 2012, A&A, 537, A44
-. 2015, A&A, 574, A85
Alaghband-Zadeh, S., Chapman, S. C., Swinbank, A. M., et al. 2013, MNRAS, 435, 1493
Alatalo, K., Blitz, L., Young, L. M., et al. 2011, ApJ, 735, 88
Alatalo, K., Lacy, M., Lanz, L., et al. 2015, ApJ, 798, 31
Baan, W. A., Hagiwara, Y., & Hofner, P. 2007, ApJ, 661, 173
Barcos-Muñoz, L., Aalto, S., Thompson, T. A., et al. 2018, ApJL, 853, L28
Biernacki, P., & Teyssier, R. 2018, MNRAS, 475, 5688
Bisbas, T. G., Papadopoulos, P. P., & Viti, S. 2015, ApJ, 803, 37
Bisbas, T. G., van Dishoeck, E. F., Papadopoulos, P. P., et al. 2017, ApJ, 839, 90
Bolatto, A. D., Wolfire, M., & Leroy, A. K. 2013, ARA&A, 51, 207
Braine, J., Duc, P.-A., Lisenfeld, U., et al. 2001, A&A, 378, 51
Brüggen, M., & Scannapieco, E. 2016, ApJ, 822, 31
Bryant, P. M., & Scoville, N. Z. 1999, AJ, 117, 2632
Carniani, S., Marconi, A., Maiolino, R., et al. 2015, A&A, 580, A102
-. 2017, A&A, 605, A105
Cicone, C., Brusa, M., Ramos Almeida, C., et al. 2018, Nature Astronomy, 2, 176
Cicone, C., Feruglio, C., Maiolino, R., et al. 2012, A&A, 543, A99
Cicone, C., Maiolino, R., Sturm, E., et al. 2014, A&A, 562, A21
Cicone, C., Maiolino, R., Gallerani, S., et al. 2015, A&A, 574, A14
Colbert, E. J. M., Wilson, A. S., & Bland-Hawthorn, J. 1994, ApJ, 436, 89
Combes, F., García-Burillo, S., Casasola, V., et al. 2013, A&A, 558, A124
Costa, T., Rosdahl, J., Sijacki, D., & Haehnelt, M. G. 2018, MNRAS, 473, 4197
Costa, T., Sijacki, D., & Haehnelt, M. G. 2014, MNRAS, 444, 2355
-. 2015, MNRAS, 448, L30
Costagliola, F., Aalto, S., Rodriguez, M. I., et al. 2011, A&A, 528, A30
Danielson, A. L. R., Swinbank, A. M., Smail, I., et al. 2011, MNRAS, 410, 1687
Dasyra, K. M., & Combes, F. 2012, A&A, 541, L7
Dasyra, K. M., Combes, F., Oosterloo, T., et al. 2016, A&A, 595, L7
Davies, R. I., Maciejewski, W., Hicks, E. K. S., et al. 2014, ApJ, 792, 101
Downes, D., & Solomon, P. M. 1998, ApJ, 507, 615
Emonts, B. H. C., Lehnert, M. D., Dannerbauer, H., et al. 2018, MNRAS, 477, L60
Engel, H., Davies, R. I., Genzel, R., et al. 2010, A&A, 524, A56
Faucher-Giguère, C.-A., & Quataert, E. 2012, MNRAS, 425, 605
Feruglio, C., Fiore, F., Piconcelli, E., et al. 2013a, A&A, 558, A87
Feruglio, C., Maiolino, R., Piconcelli, E., et al. 2010, A&A, 518, L155
Feruglio, C., Fiore, F., Maiolino, R., et al. 2013b, A&A, 549, A51
Feruglio, C., Ferrara, A., Bischetti, M., et al. 2017, A&A, 608, A30
Fiore, F., Feruglio, C., Shankar, F., et al. 2017, A&A, 601, A143
Fischer, J., Sturm, E., González-Alfonso, E., et al. 2010, A&A, 518, L41
. A Fluetsch, R Maiolino, S Carniani, arXiv:1805.05352submitted to MNRASFluetsch, A., Maiolino, R., Carniani, S., et al. 2018, submitted to MNRAS, arXiv:1805.05352
. J F Gallimore, R Beswick, AJ. 127239Gallimore, J. F., & Beswick, R. 2004, AJ, 127, 239
. S García-Burillo, F Combes, A Usero, A&A. 56735A&AGarcía-Burillo, S., Combes, F., Usero, A., et al. 2014, A&A, 567, A125 -. 2015, A&A, 580, A35
. M Gaspari, A Sadowski, ApJ. 837149Gaspari, M., & Sadowski, A. 2017, ApJ, 837, 149
. J Gerssen, R P Van Der Marel, D Axon, AJ. 12775Gerssen, J., van der Marel, R. P., Axon, D., et al. 2004, AJ, 127, 75
. S C O Glover, P C Clark, M Micic, F Molina, MNRAS. 4481607Glover, S. C. O., Clark, P. C., Micic, M., & Molina, F. 2015, MNRAS, 448, 1607
. E González-Alfonso, J Fischer, S Bruderer, ApJ. 85766González-Alfonso, E., Fischer, J., Bruderer, S., et al. 2018, ApJ, 857, 66
. A Gowardhan, H Spoon, D A Riechers, ApJ. 85935Gowardhan, A., Spoon, H., Riechers, D. A., et al. 2018, ApJ, 859, 35
. T R Greve, P P Papadopoulos, Y Gao, S J E Radford, ApJ. 6921432Greve, T. R., Papadopoulos, P. P., Gao, Y., & Radford, S. J. E. 2009, ApJ, 692, 1432
. A Hacar, M Tafalla, J Forbrich, A&A. 61077Hacar, A., Tafalla, M., Forbrich, J., et al. 2018, A&A, 610, A77
. Y Hagiwara, W A Baan, H.-R Klöckner, AJ. 14217Hagiwara, Y., Baan, W. A., & Klöckner, H.-R. 2011, AJ, 142, 17
. T M Heckman, L Armus, G K Miley, ApJS. 74833Heckman, T. M., Armus, L., & Miley, G. K. 1990, ApJS, 74, 833
. P F Hopkins, E Quataert, N Murray, MNRAS. 4213522Hopkins, P. F., Quataert, E., & Murray, N. 2012, MNRAS, 421, 3522
. D Iono, C D Wilson, S Takakuwa, ApJ. 659283Iono, D., Wilson, C. D., Takakuwa, S., et al. 2007, ApJ, 659, 283
. W Ishibashi, A C Fabian, MNRAS. 45193Ishibashi, W., & Fabian, A. C. 2015, MNRAS, 451, 93
. F P Israel, M J F Rosenberg, P Van Der Werf, A&A. 57895Israel, F. P., Rosenberg, M. J. F., & van der Werf, P. 2015, A&A, 578, A95
. Q Jiao, Y Zhao, M Zhu, ApJL. 84018Jiao, Q., Zhao, Y., Zhu, M., et al. 2017, ApJL, 840, L18
. J Keene, G A Blake, T G Phillips, P J Huggins, C A Beichman, ApJ. 299967Keene, J., Blake, G. A., Phillips, T. G., Huggins, P. J., & Beichman, C. A. 1985, ApJ, 299, 967
. B C Kelly, ApJ. 6651489Kelly, B. C. 2007, ApJ, 665, 1489
. A R King, MNRAS. 4021516King, A. R. 2010, MNRAS, 402, 1516
. M Krips, S Martín, K Sakamoto, A&A. 5923Krips, M., Martín, S., Sakamoto, K., et al. 2016, A&A, 592, L3
. A Lamastra, N Menci, F Fiore, A&A. 60718Lamastra, A., Menci, N., Fiore, F., et al. 2017, A&A, 607, A18
. A K Leroy, F Walter, F Bigiel, AJ. 1374670Leroy, A. K., Walter, F., Bigiel, F., et al. 2009, AJ, 137, 4670
. A K Leroy, F Walter, P Martini, ApJ. 81483Leroy, A. K., Walter, F., Martini, P., et al. 2015, ApJ, 814, 83
. J E Lindberg, S Aalto, S Muller, A&A. 58715Lindberg, J. E., Aalto, S., Muller, S., et al. 2016, A&A, 587, A15
. R Maiolino, H R Russell, A C Fabian, Nature. 544202Maiolino, R., Russell, H. R., Fabian, A. C., et al. 2017, Nature, 544, 202
. J G Mangum, Y L Shirley, PASP. 127266Mangum, J. G., & Shirley, Y. L. 2015, PASP, 127, 266
. C E Max, G Canalizo, B A Macintosh, ApJ. 621738Max, C. E., Canalizo, G., Macintosh, B. A., et al. 2005, ApJ, 621, 738
J P Mcmullin, B Waters, D Schiebel, W Young, K Golap, Astronomical Society of the Pacific Conference Series. R. A. Shaw, F. Hill, & D. J. Bell376127Astronomical Data Analysis Software and Systems XVIMcMullin, J. P., Waters, B., Schiebel, D., Young, W., & Golap, K. 2007, in Astronomical Society of the Pacific Conference Series, Vol. 376, Astronomical Data Analysis Software and Systems XVI, ed. R. A. Shaw, F. Hill, & D. J. Bell, 127
. R Meijerink, L E Kristensen, A Weiß, ApJL. 76216Meijerink, R., Kristensen, L. E., Weiß, A., et al. 2013, ApJL, 762, L16
. R Morganti, W Frieswijk, R J B Oonk, T Oosterloo, C Tadhunter, A&A. 5524Morganti, R., Frieswijk, W., Oonk, R. J. B., Oosterloo, T., & Tadhunter, C. 2013, A&A, 552, L4
. R Morganti, S Veilleux, T Oosterloo, S H Teng, D Rupke, A&A. 59330Morganti, R., Veilleux, S., Oosterloo, T., Teng, S. H., & Rupke, D. 2016, A&A, 593, A30
. D Mukherjee, G V Bicknell, R Sutherland, A Wagner, MNRAS. 461967Mukherjee, D., Bicknell, G. V., Sutherland, R., & Wagner, A. 2016, MNRAS, 461, 967
. F Müller-Sánchez, R Nevin, J M Comerford, Nature. 556345Müller-Sánchez, F., Nevin, R., Comerford, J. M., et al. 2018, Nature, 556, 345
. N Nakai, M Hayashi, T Handa, PASJ. 39685Nakai, N., Hayashi, M., Handa, T., et al. 1987, PASJ, 39, 685
. E Nardini, J Wang, G Fabbiano, ApJ. 765141Nardini, E., Wang, J., Fabbiano, G., et al. 2013, ApJ, 765, 141
. J Nims, E Quataert, C.-A Faucher-Giguère, MNRAS. 4473612Nims, J., Quataert, E., & Faucher-Giguère, C.-A. 2015, MNRAS, 447, 3612
. R Ojha, A A Stark, H H Hsieh, ApJ. 548253Ojha, R., Stark, A. A., Hsieh, H. H., et al. 2001, ApJ, 548, 253
. T Oosterloo, J B Raymond Oonk, R Morganti, A&A. 60838Oosterloo, T., Raymond Oonk, J. B., Morganti, R., et al. 2017, A&A, 608, A38
. P P Papadopoulos, T G Bisbas, Zhang, arXiv:1804.09654MNRAS. ZPapadopoulos, P. P., Bisbas, T. G., & Zhang, Z. 2018, MNRAS, arXiv:1804.09654
. P P Papadopoulos, T R Greve, ApJL. 61529Papadopoulos, P. P., & Greve, T. R. 2004, ApJL, 615, L29
. P P Papadopoulos, W.-F Thi, S Viti, MNRAS. 351147Papadopoulos, P. P., Thi, W.-F., & Viti, S. 2004, MNRAS, 351, 147
. P P Papadopoulos, P Van Der Werf, E Xilouris, K G Isaak, Y Gao, ApJ. 75110Papadopoulos, P. P., van der Werf, P., Xilouris, E., Isaak, K. G., & Gao, Y. 2012a, ApJ, 751, 10
. P P Papadopoulos, P P Van Der Werf, E M Xilouris, MNRAS. 4262601Papadopoulos, P. P., van der Werf, P. P., Xilouris, E. M., et al. 2012b, MNRAS, 426, 2601
. P P Papadopoulos, Z.-Y Zhang, E M Xilouris, ApJ. 788153Papadopoulos, P. P., Zhang, Z.-Y., Xilouris, E. M., et al. 2014, ApJ, 788, 153
J Pety, SF2A-2005: Semaine de l'Astrophysique Francaise. F. Casoli, T. Contini, J. M. Hameury, & L. Pagani721Pety, J. 2005, in SF2A-2005: Semaine de l'Astrophysique Francaise, ed. F. Casoli, T. Contini, J. M. Hameury, & L. Pagani, 721
. P A R Ade, Planck CollaborationN Aghanim, Planck CollaborationA&A. 59413Planck Collaboration, Ade, P. A. R., Aghanim, N., et al. 2016, A&A, 594, A13
. S Puccetti, A Comastri, F E Bauer, A&A. 585157Puccetti, S., Comastri, A., Bauer, F. E., et al. 2016, A&A, 585, A157
. A J Richings, C.-A Faucher-Giguère, MNRAS. 4743673Richings, A. J., & Faucher-Giguère, C.-A. 2018, MNRAS, 474, 3673
. T Saito, D Iono, J Ueda, MNRAS. 47552Saito, T., Iono, D., Ueda, J., et al. 2018, MNRAS, 475, L52
. K Sakamoto, S Aalto, F Combes, A Evans, A Peck, ApJ. 79790Sakamoto, K., Aalto, S., Combes, F., Evans, A., & Peck, A. 2014, ApJ, 797, 90
. K Sakamoto, P T P Ho, A B Peck, ApJ. 644862Sakamoto, K., Ho, P. T. P., & Peck, A. B. 2006, ApJ, 644, 862
. D B Sanders, J M Mazzarella, D.-C Kim, J A Surace, B T Soifer, AJ. 1261607Sanders, D. B., Mazzarella, J. M., Kim, D.-C., Surace, J. A., & Soifer, B. T. 2003, AJ, 126, 1607
. N Scoville, K Sheth, H Aussel, ApJ. 82083Scoville, N., Sheth, K., Aussel, H., et al. 2016, ApJ, 820, 83
. J Silk, M J Rees, A&A. 3311Silk, J., & Rees, M. J. 1998, A&A, 331, L1
. P M Solomon, D Downes, S J E Radford, J W Barrett, ApJ. 478144Solomon, P. M., Downes, D., Radford, S. J. E., & Barrett, J. W. 1997, ApJ, 478, 144
. H W W Spoon, D Farrah, V Lebouteiller, ApJ. 775127Spoon, H. W. W., Farrah, D., Lebouteiller, V., et al. 2013, ApJ, 775, 127
. E Sturm, E González-Alfonso, S Veilleux, ApJL. 73316Sturm, E., González-Alfonso, E., Veilleux, S., et al. 2011, ApJL, 733, L16
. L J Tacconi, R Genzel, M Tecza, ApJ. 524732Tacconi, L. J., Genzel, R., Tecza, M., et al. 1999, ApJ, 524, 732
. K Tanaka, T Oka, S Matsumura, M Nagai, K Kamegai, ApJL. 74339Tanaka, K., Oka, T., Matsumura, S., Nagai, M., & Kamegai, K. 2011, ApJL, 743, L39
. T A Thompson, A C Fabian, E Quataert, N Murray, MNRAS. 449147Thompson, T. A., Fabian, A. C., Quataert, E., & Murray, N. 2015, MNRAS, 449, 147
. T A Thompson, E Quataert, D Zhang, D H Weinberg, MNRAS. 4551830Thompson, T. A., Quataert, E., Zhang, D., & Weinberg, D. H. 2016, MNRAS, 455, 1830
. B E Turner, ApJ. 299312Turner, B. E. 1985, ApJ, 299, 312
. P P Van Der Werf, R Genzel, A Krabbe, ApJ. 405522van der Werf, P. P., Genzel, R., Krabbe, A., et al. 1993, ApJ, 405, 522
. S Veilleux, A Bolatto, F Tombesi, ApJ. 84318Veilleux, S., Bolatto, A., Tombesi, F., et al. 2017, ApJ, 843, 18
. S Veilleux, P L Shopbell, D S Rupke, J Bland-Hawthorn, G Cecil, AJ. 1262185Veilleux, S., Shopbell, P. L., Rupke, D. S., Bland-Hawthorn, J., & Cecil, G. 2003, AJ, 126, 2185
. S Veilleux, D S N Rupke, D.-C Kim, ApJS. 182628Veilleux, S., Rupke, D. S. N., Kim, D.-C., et al. 2009, ApJS, 182, 628
. S Veilleux, M Meléndez, E Sturm, ApJ. 77627Veilleux, S., Meléndez, M., Sturm, E., et al. 2013, ApJ, 776, 27
. F Walter, A Weiß, D Downes, R Decarli, C Henkel, ApJ. 73018Walter, F., Weiß, A., Downes, D., Decarli, R., & Henkel, C. 2011, ApJ, 730, 18
. F Walter, A Weiss, N Scoville, ApJL. 58021Walter, F., Weiss, A., & Scoville, N. 2002, ApJL, 580, L21
. F Walter, A D Bolatto, A K Leroy, ApJ. 835265Walter, F., Bolatto, A. D., Leroy, A. K., et al. 2017, ApJ, 835, 265
. J Wang, E Nardini, G Fabbiano, ApJ. 78155Wang, J., Nardini, E., Fabbiano, G., et al. 2014, ApJ, 781, 55
. A Weiß, D Downes, C Henkel, F Walter, A&A. 42925Weiß, A., Downes, D., Henkel, C., & Walter, F. 2005, A&A, 429, L25
. A Weiß, C Henkel, D Downes, F Walter, A&A. 40941Weiß, A., Henkel, C., Downes, D., & Walter, F. 2003, A&A, 409, L41
. M G Wolfire, D Hollenbach, C F Mckee, ApJ. 7161191Wolfire, M. G., Hollenbach, D., & McKee, C. F. 2010, ApJ, 716, 1191
. L Yao, E R Seaquist, N Kuno, L Dunne, ApJ. 588771Yao, L., Seaquist, E. R., Kuno, N., & Dunne, L. 2003, ApJ, 588, 771
. M Yoshida, M Yagi, Y Ohyama, ApJ. 82048Yoshida, M., Yagi, M., Ohyama, Y., et al. 2016, ApJ, 820, 48
. Z.-Y Zhang, P P Papadopoulos, R J Ivison, Royal Society Open Science. 3160025Zhang, Z.-Y., Papadopoulos, P. P., Ivison, R. J., et al. 2016, Royal Society Open Science, 3, 160025
. L K Zschaechner, F Walter, A Bolatto, ApJ. 832142Zschaechner, L. K., Walter, F., Bolatto, A., et al. 2016, ApJ, 832, 142
. K Zubovas, A King, ApJL. 74534Zubovas, K., & King, A. 2012, ApJL, 745, L34
. K Zubovas, A R King, MNRAS. 439400Zubovas, K., & King, A. R. 2014, MNRAS, 439, 400
| []
|
[
"ONLINE ESTIMATION AND OPTIMIZATION OF UTILITY-BASED SHORTFALL RISK A PREPRINT",
"ONLINE ESTIMATION AND OPTIMIZATION OF UTILITY-BASED SHORTFALL RISK A PREPRINT"
]
| [
"Vishwajit Hegde ",
"Arvind Menon ",
"L A Prashanth ",
"K Jagannathan ",
"\nDepartment of Mechanical Engineering\nDepartment of Physics\nIndian Institute of Technology Madras Chennai\nIndia\n",
"\nDepartment of Computer Science\nIndian Institute of Technology Madras Chennai\nIndia\n",
"\nDepartment of Electrical Engineering\nIndian Institute of Technology Madras Chennai\nIndia\n",
"\nIndian Institute of Technology Madras Chennai\nIndia\n"
]
| [
"Department of Mechanical Engineering\nDepartment of Physics\nIndian Institute of Technology Madras Chennai\nIndia",
"Department of Computer Science\nIndian Institute of Technology Madras Chennai\nIndia",
"Department of Electrical Engineering\nIndian Institute of Technology Madras Chennai\nIndia",
"Indian Institute of Technology Madras Chennai\nIndia"
]
| []
| Utility-Based Shortfall Risk (UBSR) is a risk metric that is increasingly popular in financial applications, owing to certain desirable properties that it enjoys. We consider the problem of estimating UBSR in a recursive setting, where samples from the underlying loss distribution are available oneat-a-time. We cast the UBSR estimation problem as a root finding problem, and propose stochastic approximation-based estimations schemes. We derive non-asymptotic bounds on the estimation error in the number of samples. We also consider the problem of UBSR optimization within a parameterized class of random variables. We propose a stochastic gradient descent based algorithm for UBSR optimization, and derive non-asymptotic bounds on its convergence. | null | [
"https://export.arxiv.org/pdf/2111.08805v2.pdf"
]
| 244,270,521 | 2111.08805 | bb52d05dd6d2f848fc13331aada20debf2434e01 |
ONLINE ESTIMATION AND OPTIMIZATION OF UTILITY-BASED SHORTFALL RISK A PREPRINT
15 Feb 2023
Vishwajit Hegde
Arvind Menon
L A Prashanth
K Jagannathan
Department of Mechanical Engineering
Department of Physics
Indian Institute of Technology Madras Chennai
India
Department of Computer Science
Indian Institute of Technology Madras Chennai
India
Department of Electrical Engineering
Indian Institute of Technology Madras Chennai
India
Indian Institute of Technology Madras Chennai
India
ONLINE ESTIMATION AND OPTIMIZATION OF UTILITY-BASED SHORTFALL RISK A PREPRINT
15 Feb 2023
Keywords: Utility-based shortfall risk · risk-sensitive optimization · non-asymptotic bounds · UBSR estimation · UBSR optimization
Utility-Based Shortfall Risk (UBSR) is a risk metric that is increasingly popular in financial applications, owing to certain desirable properties that it enjoys. We consider the problem of estimating UBSR in a recursive setting, where samples from the underlying loss distribution are available oneat-a-time. We cast the UBSR estimation problem as a root finding problem, and propose stochastic approximation-based estimations schemes. We derive non-asymptotic bounds on the estimation error in the number of samples. We also consider the problem of UBSR optimization within a parameterized class of random variables. We propose a stochastic gradient descent based algorithm for UBSR optimization, and derive non-asymptotic bounds on its convergence.
Introduction
In several financial applications, it is necessary to understand risk sensitivity while maximizing the returns. Several risk measures have been studied in the literature, e.g., mean-variance, Value at Risk (VaR), Conditional Value at Risk (CVaR), distorted risk measure, and prospect theory. In Artzner et al. [1999], the authors consider four properties as desirable for a risk measure, namely positive homogeneity, translation invariance, sub-additivity, and monotonicity. They define a risk measure as being coherent if it possesses the aforementioned properties. In a related development, in Föllmer and Schied [2002], the authors chose to relax the sub-additivity and positive homogeneity requirements of a coherent risk measure, and instead impose a convexity condition on the underlying risk measure. Such a relaxation is justified in practical contexts where the risk is a non-linear function of the underlying random variable (e.g., a financial position). CVaR is a popular risk measure that comes under the umbrella of coherent risk measures. Utility-based shortfall risk (UBSR) Föllmer and Schied [2002] is a risk measure that is closely related to CVaR, and one that belongs to the class of convex risk measures. UBSR as a risk measure is preferable over CVaR for two reasons: (i) Unlike CVaR, UBSR is invariant under randomization; and (ii) UBSR involves a utility function that can be chosen to encode the risk associated with each value the r.v. X takes, while CVaR is concerned primarily with values of X beyond a certain quantile.
In real-world scenarios, the distribution of the underlying r.v. is seldom available in a closed form. Instead, one can obtain samples, which are used to estimate the chosen risk measure. Risk estimation has received a lot of attention in the recent past, cf. Kagrecha et al. [2019], Cassel et al. [2018], Pandey et al. [2021], Thomas and Learned-Miller [2019], Dunkel and Weber [2010], Bhat and Prashanth [2019], Prashanth et al. [2020, 2016], Wang and Gao [2010], Brown [2007], Mhammedi et al. [2020], Lee et al. [2020], with CVaR being the dominant choice for the risk measure.
In this paper, we focus on recursive estimation of UBSR, in a setting where data arrives in an online fashion. Estimation of UBSR has immediate applications in financial portfolio optimization, cf. Hu and Dali [2016]. Stochastic approximation Robbins and Monro [1951], Borkar [2008] is a procedure that is well-suited for the purpose of online estimation. In the context of UBSR estimation, our main contribution is the non-asymptotic analysis of a stochastic approximation-based estimation scheme. We cast the estimation of UBSR as a stochastic root finding problem, and derive 'finite-sample' bounds for this scheme. Our analysis assumes that the underlying objective satisfies a monotonicity condition. If the monotonicity parameter is known and is used in setting the step-size, the algorithm results in an O(1/n) rate of mean-squared error decay. We also develop another variant that employs a universal step-size, and results in a O(1/n α ) rate, where 0 < α < 1. In addition, we also obtain a 'high probability' result for the concentration of the estimation error.
Moving beyond UBSR estimation, we also consider the problem of optimizing UBSR within a parameterized class of random variables. The motivation for this problem lies in understanding the risk sensitivity in a portfolio management application Rockafellar and Uryasev [2000], Hu and Dali [2016]. Specifically, an investor could choose to distribute his/her capital among different assets, and the decision parameter governing the capital distribution is to be optimized to decide the best allocation. The utility function that goes into the definition of UBSR would encode the investor's risk preference, and the goal is to find the best decision parameter to minimize risk, as quantified by UBSR.
For the problem of UBSR optimization, we propose a stochastic gradient algorithm, and derive non-asymptotic bounds on its performance. Stochastic gradient (SG) methods have a long history, and non-asymptotic analysis of such schemes has garnered a lot of attention over the last decade, see Bottou et al. [2018] for a survey. Unlike in a classic SG setting, the UBSR optimization problem involves biased function measurements, which presents some technical challenges. Specifically, the UBSR estimation scheme is biased, in the sense that the estimation error does not have zero expectation. This is unlike in the classical SG settings, where the estimation error is assumed to be zero mean. In our setting, even though the estimation error is not zero-mean, the error can be reduced by increasing the batch size used for estimation. For the purpose of gradient estimation, we leverage the UBSR sensitivity formula derived in Hu and Dali [2016], and use a natural estimator of this quantity based on i.i.d. samples. By controlling the batch size, we derive an O(1/n) rate for the SG algorithm's mean-squared error to optimize the UBSR under a strongly convex objective. For the case of a convex objective, we obtain an O(1/√n) convergence rate for the objective function by employing a phase-wise step-size reduction scheme from Jain et al. [2021].
Related work. Stochastic approximation has been explored in the context of CVaR estimation in Bardou et al. [2009], Bercu et al. [2020], Costa and Gadat [2021]. Recursive estimation of quantiles, variances and medians has been considered earlier in Cardot et al. [2015, 2011], Godichon-Baggioni [2016], Costa and Gadat [2021]. UBSR was introduced in Föllmer and Schied [2002], and non-recursive estimation schemes for UBSR were proposed in Hu and Dali [2016]. A paper closely related to our work from a UBSR estimation viewpoint is Dunkel and Weber [2010], which uses a recursive estimation technique. The authors establish asymptotic convergence of their algorithm, and a 'central limit theorem' showing the asymptotic Gaussianity of the scaled estimation error. In contrast, we establish non-asymptotic, i.e., finite-sample bounds for the performance of our recursive estimation method, under similar technical assumptions as Dunkel and Weber [2010], Hu and Dali [2016]. Duchi et al. [2012], Balasubramanian and Ghadimi [2018] consider finite-sample analysis of zeroth order stochastic approximation, but they assume zero-mean noise on the function measurements, which is not the case for UBSR optimization considered here. Other related papers include Bhavsar and Prashanth [2022], Pasupathy et al. [2018], which consider stochastic approximation of an abstract objective function where the function measurements are biased, and the bias can be controlled through a batch size. In a recent paper Prashanth and Bhat [2020], the authors use the estimation scheme from Hu and Dali [2016] to establish concentration inequalities for UBSR estimation.
The rest of the paper is organized as follows: In Section 2, we define the notion of UBSR for a general random variable, and in Section 3, we formulate the estimation as well as optimization problems under a UBSR objective. In Section 4, we describe the stochastic approximation-based scheme for estimating the UBSR of a random variable, and present concentration bounds for this estimation scheme. In Section 5, we present a stochastic gradient algorithm for optimizing the UBSR in a parameterized class of random variables, and present a non-asymptotic bound that quantifies the convergence rate of this algorithm. In Sections 6-7, we provide proofs of the non-asymptotic bounds for UBSR estimation and optimization. Finally, in Section 8, we provide our concluding remarks.
Utility-based shortfall risk
Let X be a random variable, and ℓ(·) be a convex loss function. Let λ be a pre-specified "risk-level" parameter that lies in the interior of the range of ℓ. We first define an acceptance set as follows:
A := {X ∈ L^∞ : E[ℓ(−X)] ≤ λ},    (1)
where L^∞ represents the set of bounded random variables.
Using the acceptance set, the utility-based shortfall risk (UBSR) SR_λ(X) is defined by Föllmer and Schied [2002] as
SR_λ(X) := inf{t ∈ R : t + X ∈ A}.    (2)
For notational convenience, we have made the dependence of UBSR SR λ (X) on the loss function ℓ implicit. Intuitively, if X represents a financial position, then SR λ (X) denotes the minimum cash that has to be added to X so that it falls into the acceptable set A.
UBSR is a particular example of a convex risk measure Föllmer and Schied [2002], which is a generalization of a coherent risk measure Artzner et al. [1999]. In particular, a coherent risk measure satisfies sub-additivity and positivehomogeneity, and these two properties readily imply convexity.
As a risk measure, UBSR is preferable over the popular Value-at-Risk (VaR), owing to the fact that UBSR is convex. Another closely related risk measure is CVaR (Conditional Value at Risk), which is a coherent risk measure. UBSR has a few advantages over CVaR, namely (i) Unlike CVaR, UBSR is invariant under randomization. Formally, suppose X_1, X_2 are both acceptable, i.e., g(X_i) ≤ 0, i = 1, 2. If we use any Bernoulli variable Y to choose between X_1 and X_2, then we still have an acceptable financial position, or g(Y) ≤ 0; and (ii) UBSR involves a loss function that can be chosen to encode the risk associated with each value the r.v. X takes, while CVaR is concerned with values of X beyond VaR at a pre-specified level α. For a loss r.v. X in a financial application, it makes sense to associate more risk with larger losses, and this can be encoded using, for example, an exponential loss function. On the other hand, CVaR considers all losses beyond a certain threshold equally. UBSR has been used for credit risk management under the Normal Copula Model Gupton et al. [1997], which is the foundation of the CreditMetrics industry model, see also Bodie [1991], Dunkel and Weber [2010], Hu and Dali [2016] for usage of UBSR in the context of portfolio optimization.
We now present two examples for the loss function.
Example 2.1 The exponential loss function is defined as ℓ(x) = exp(βx). The UBSR for this loss function is closely related to relative entropy. More precisely,
SR_λ(X) = (1/β) (log E[exp(−βX)] − log λ).
Thus, minimizing UBSR in this case is equivalent to entropy minimization.
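As a brief sketch of why this closed form holds (our own remark, assuming the constraint in (1) binds at the infimum in (2)), one solves E[ℓ(−X − t*)] = λ for t*:

```latex
\mathbb{E}\!\left[e^{\beta(-X - t^*)}\right] = \lambda
\;\Longrightarrow\; e^{-\beta t^*}\,\mathbb{E}\!\left[e^{-\beta X}\right] = \lambda
\;\Longrightarrow\; t^* = \tfrac{1}{\beta}\Big(\log \mathbb{E}\!\left[e^{-\beta X}\right] - \log \lambda\Big).
```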
Example 2.2 With p > 1, let ℓ(x) = (1/p) x^p for x ≥ 0, and ℓ(x) = 0 otherwise.
The reader is referred to Section 4.9 of Föllmer and Schied [2016] for a detailed discussion of these sample loss functions.
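As a concrete illustration, the minimal sketch below (ours, not from the paper; the names exp_loss, poly_loss and ubsr_exp_closed_form are illustrative) implements the two example losses in NumPy, together with a sample version of the closed-form UBSR for the exponential loss.

```python
import numpy as np

def exp_loss(x, beta=1.0):
    # Example 2.1: l(x) = exp(beta * x)
    return np.exp(beta * x)

def poly_loss(x, p=2.0):
    # Example 2.2: l(x) = x^p / p for x >= 0, and 0 otherwise (p > 1)
    return np.maximum(x, 0.0) ** p / p

def ubsr_exp_closed_form(x_samples, beta, lam):
    # Sample analogue of SR_lambda(X) = (1/beta) * (log E[exp(-beta X)] - log lambda)
    return (np.log(np.mean(np.exp(-beta * x_samples))) - np.log(lam)) / beta
```

For instance, calling ubsr_exp_closed_form on a large sample of X approximates SR_λ(X) for the exponential loss directly from the closed form above.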
Problem formulation
In this paper, we focus on two problems concerning shortfall risk, namely (i) UBSR estimation, and (ii) UBSR optimization within a parameterized family of distributions. We define these two problems below.
Define the function
g(t) := E[ℓ(−X − t)] − λ.    (3)
We make the following assumption on the function g defined above.
Assumption 3.1 There exist t_l, t_u such that g(t_l) > 0 and g(t_u) < 0.
Under the above assumption, it can be shown using convexity and monotonicity of the loss function ℓ(·) that SR_{ℓ,λ}(X) is finite, and is also the unique root of the function g, i.e., the solution t* that satisfies g(t*) = 0 coincides with SR_{ℓ,λ}(X). Thus, the problem of UBSR estimation, i.e., estimating SR_{ℓ,λ}(X) of a r.v. X, can be cast as a root finding problem. We consider a setting where the expectation in the definition of g(·) cannot be explicitly evaluated. Instead, we have access to samples from the distribution of X, and we use a stochastic root-finding scheme for the UBSR estimation.
Next, we define the problem of UBSR optimization. Suppose that X belongs to a parameterized family of distributions {X(θ) : θ ∈ Θ}, where Θ is a compact and convex subset of R. The SR optimization problem for this parameterized class is given as
Find θ* ∈ arg min_{θ∈Θ} SR_λ(X(θ)).    (4)
For the sake of simplicity, we focus on the case of a scalar parameter θ. Again, assuming that we have access to samples from the distribution of X, we use a stochastic gradient descent technique for SR optimization.
UBSR estimation
We consider a setting where the expectation in the definition of the function g cannot be explicitly evaluated. Instead, we assume that we have access to samples from the distribution of X in an online fashion, and the goal is to have a recursive estimation scheme for UBSR.
Stochastic approximation Borkar [2008] is a class of algorithms for solving stochastic root-finding problems. UBSR estimation is a root-finding problem since one has to find a t* satisfying g(t*) = 0, or E[ℓ(−X − t*)] = λ. For this problem, Dunkel and Weber [2010] proposed a stochastic approximation scheme for estimating UBSR, assuming access to a quantile oracle. In practical applications, it may not be realistic to assume sample access from the quantile function of the underlying distribution. In contrast, we propose a simple stochastic approximation scheme that estimates UBSR using i.i.d. samples from the distribution of the r.v. X. Moreover, in Dunkel and Weber [2010], the authors perform an asymptotic convergence analysis, while we derive non-asymptotic bounds for UBSR estimation.
We propose a method to incrementally estimate UBSR using each additional sample. Specifically, we use the following update iteration:
t_n = Γ(t_{n−1} + a_n ĝ(t_{n−1})),    (5)
where ĝ(t) = ℓ(ξ_n − t) − λ is an estimate of g(t) using an i.i.d. sequence {ξ_i} from the distribution of −X, and Γ is a projection operator defined by Γ(x) = min(max(t_l, x), t_u). Such a projection operator has been used in the context of UBSR estimation earlier, cf. Dunkel and Weber [2010].
One could estimate UBSR for a fixed set of samples using either a stochastic root-finding recursion such as (5) above, or perform a sample-average approximation using a binary search, as proposed in Hu and Dali [2016]. In the latter work, the authors provide asymptotic convergence/rate guarantees, while using the stochastic root-finding approach, we establish non-asymptotic bounds as well. We prefer the root-finding approach due to its iterative nature, which would make it more widely applicable in machine learning applications with streaming data (e.g., multi-armed bandits). The concentration bounds that we derive for UBSR estimation below are relevant in such an application context.
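To make the recursion (5) concrete, here is a minimal sketch (our own illustration; the function name and defaults are ours, not from the paper). It covers both the a_k = c/k choice and the universal a_k = c/k^α choice analyzed below.

```python
import numpy as np

def estimate_ubsr(xi_samples, loss, lam, t_l, t_u, c=1.0, alpha=1.0, t0=0.0):
    """Projected stochastic approximation for UBSR, following (5).

    xi_samples : i.i.d. samples from the distribution of -X, processed one at a time
    loss       : the loss function l(.)
    lam        : the risk level lambda
    t_l, t_u   : endpoints of the projection interval containing SR_lambda(X)
    Step size a_k = c / k**alpha; alpha = 1 corresponds to Theorem 4.1 (picking c
    needs mu_1), while alpha in (0, 1) is the universal choice of Theorem 4.3, Case II.
    """
    t = t0
    for k, xi in enumerate(xi_samples, start=1):
        g_hat = loss(xi - t) - lam                      # one-sample estimate of g(t)
        t = np.clip(t + (c / k ** alpha) * g_hat,       # Gamma: project onto [t_l, t_u]
                    t_l, t_u)
    return t
```

For example, with the exponential loss one could call estimate_ubsr(-x_samples, lambda x: np.exp(x), lam=1.0, t_l=-10.0, t_u=10.0, alpha=0.7), where x_samples are draws of X.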
Main results
In addition to Assumption 3.1, we make the following assumptions for the bounds on UBSR estimation.
Assumption 4.1 |X| ≤ M_0 a.s.
Assumption 4.2 There exist µ_1, L_1 > 0 s.t. −L_1 < ℓ′(−t) ≤ −µ_1, for all t ∈ [t_l, t_u].
Assumption 4.3 Let ε_n = ĝ(t_n) − g(t_n). Then, there exists a σ > 0 such that E[ε_n^2] ≤ σ^2 for all n ≥ 1.
Previous works on UBSR estimation (cf. Hu and Dali [2016], Dunkel and Weber [2010]) make similar assumptions.
In Föllmer and Schied [2002], the definition of UBSR is for a r.v. that is bounded, justifying Assumption 4.1. In Assumption 4.2, we assume ℓ is differentiable, which implies that g is differentiable as well. It is easy to see that the loss functions in Examples 2.1 and 2.2 satisfy 4.2. Next, Assumption 4.3 requires that the underlying noise variance is bounded: a natural assumption in the context of an estimation problem.
The first result below is a non-asymptotic bound on the estimation error E[(t n − SR λ (X)) 2 ] for a stepsize choice that requires the knowledge of µ 1 from Assumption 4.2.
Theorem 4.1 Suppose Assumptions 3.1 to 4.3 hold. Setting the step size a_k = c/k with 1/2 < µ_1 c, we have
E[(t_n − SR_λ(X))^2] ≤ exp(L_1^2 c^2 π^2/6) [ (t_0 − SR_λ(X))^2/n^{2µ_1 c} + σ^2 2^{2µ_1 c} c^2/((2µ_1 c − 1) n) ].    (6)
Proof See Section 6.1.
Remark 4.1 The non-asymptotic bounds for UBSR estimation in this section, and for UBSR optimization in Section 5, are stated in a form similar to those for other stochastic approximation schemes, cf. Frikha and Menozzi [2012]. The first term on the RHS in the bound above concerns the initial error, i.e., the rate at which the algorithm 'forgets' the starting point t_1. The second term relates to the noise variance in UBSR estimation. From the bound above, together with the fact that 1/2 < µ_1 c, it is apparent that the initial error is forgotten faster than the error due to the noise. On a different note, from the bound in (6), it is apparent that E[(t_n − SR_λ(X))] scales linearly with the reciprocal of the monotonicity parameter µ_1.
Remark 4.2 In Dunkel and Weber [2010], the authors propose a stochastic approximation scheme that uses the quantile function of X. Letting t̄_n denote their stochastic approximation iterate, they establish that n^{1/2}(t̄_n − SR_λ(X)) is asymptotically normal, say N(0, ζ^2), for a step-size choice that requires the knowledge of g′(SR_λ(X)). Under mild regularity conditions (cf. Gerencsér [1999]), the asymptotic normality result implies n E(t̄_n − SR_λ(X))^2 converges to a constant that depends on ζ^2. The result we derived in Theorem 4.1 holds for all n, and does not require access to the quantile function of X. Nevertheless, our O(1/n) non-asymptotic bound is consistent with the asymptotic convergence rate of Dunkel and Weber [2010].
Next, we present a high probability bound for the SR estimation algorithm in (5).
Theorem 4.2 Suppose Assumptions 3.1 to 4.3 hold. Set the step size a_k = c/k with 1/2 < µ_1 c. Then, for any δ ∈ (0, 1), the following bound holds w.p. at least (1 − δ):
|t_n − SR_λ(X)| ≤ √(log(1/δ)/(C_1 n)) + exp(L_1^2 c^2 π^2/12) [ E[|t_1 − t*|]/n^{µ_1 c} + c σ 2^{2µ_1 c}/((2µ_1 c − 1)√n) ],    (7)
where
C_1 = (2µ_1 c − 1) exp(−L_1^2 c^2 π^2/6) / (2^{4µ_1 c + 4} c^2 L_1^2 M_0^2).
Proof See Section 6.2.
The two results presented above required the knowledge of the monotonicity parameter µ 1 , which is typically unknown in a risk-sensitive learning setting. We now present a bound on the UBSR estimation error under a universal stepsize, i.e., one which does not require the knowledge of µ 1 .
Theorem 4.3 Suppose Assumptions 3.1 to 4.3 hold. Choose an n_0 such that a_{n_0} L_1^2 < µ_1. Then, we have the following bounds for two different step sizes:
Case I: Set a_k = c/k. Then, for any n ≥ n_0,
E[(t_n − SR_λ(X))^2] ≤ C(n_0) [ E[(t_0 − SR_λ(X))^2] + σ^2 π^2/6 ] (1/n^{µ_1 c}) + K_1(n),
where C(n_0) = (1 + cL_1)^{2n_0} (n_0 + 1)^{µ_1 c} and
K_1(n) = O(1/n^{µ_1 c}) if µ_1 c < 1, O(log n/n) if µ_1 c = 1, and O(1/n) if µ_1 c > 1.
Case II: Set a_k = c/k^α for some α ∈ (0, 1). Then, for any n ≥ n_0,
E[(t_n − SR_λ(X))^2] ≤ C(n_0) [ E[(t_0 − SR_λ(X))^2] + σ^2 c^2 n_0 ] exp(−µ_1 c n^{1−α}/(1 − α)) + 2σ^2 c^2 (µ_1 c)^{α/(1−α)}/((1 − α) n^α),
where C(n_0) = (1 + cL_1)^{2n_0} exp(µ_1 c n_0^{1−α}/(1 − α)).
Proof The proof proceeds by dividing the analysis into two parts about n 0 . See Section 6.3 for the details.
A few remarks are in order.
Remark 4.3 For Case I, the estimation error can decay as 1/n if c is chosen such that 2µ_1 c > 1. However, if µ_1 is not known, such a choice may not be feasible. Indeed, the error can decay much slower if c is such that 2µ_1 c is much smaller than 1. For Case II above, the estimation error decays as 1/n^α, where α can be chosen arbitrarily close to 1 when deciding the step size, and this choice does not depend on µ_1. However, as α approaches 1, the first term grows in an unbounded manner. An advantage with the larger stepsize c/k^α in Case II is that the initial error is forgotten exponentially fast, whereas the corresponding rate is 1/n^{µ_1 c} for the stepsize c/k.
Remark 4.4 The step size in Case II above is typically used in conjunction with iterate averaging Polyak and Juditsky [1992], Ruppert [1991]. We can also use iterate averaging in this setting, but we can show that it does not improve the error decay rate derived for Case II without employing iterate averaging. From a practical perspective, outputting the 'last iterate' is often preferable over iterate averaging, especially when the latter does not improve the convergence rate appreciably.
Remark 4.5 The authors in Dunkel and Weber [2010] analyze an iterate-averaged variant of the SR estimation algorithm (5), while assuming the knowledge of g′(SR_λ(X)) for setting the step-size constant c. The rate they derive under this assumption is O(1/n) asymptotically. In comparison, our analysis is for a universal step-size, and we obtain a non-asymptotic bound of O(1/n^α), for α ∈ (0, 1). In practice, the knowledge of g′(SR_λ(X)) is seldom available, motivating the universal step-size choice. The rate we derive in this case is comparable to the one obtained in Fathi and Frikha [2013] for general stochastic approximation schemes.
The final result on UBSR estimation is a high probability bound for a universal stepsize choice.
Theorem 4.4 Suppose Assumptions 3.1 to 4.3 hold. Set the step size a_k = c/k^α with α ∈ (0, 1), and choose an n_0 such that L_1^2 a_{n_0} < µ_1. Then, for any δ ∈ (0, 1), and for any n ≥ n_0, we have the following bound w.p. at least (1 − δ):
|t_n − SR_λ(X)| ≤ C_2 exp(−µ_1 c n^{1−α}/(2(1 − α))) + C_3/n^{α/2},    (8)
where
C 2 = 8L 1 M 0 log (1/δ) (1 + c 2 L 2 1 ) n0+1 c 2 ) c 2 L 2 1 + C(n 0 ) (E[(t 0 − SR λ (X)) 2 ] + σ 2 c 2 n 0 ), and C 3 = 8L 1 M 0 log (1/δ) 2(µ 1 c) α 1−α c 2 (1 − α) + σ 2 2(2µ 1 c) α 1−α c 2 (1 − α) .
In the above, µ 1 , σ 2 , and L 1 are specified in Assumptions 4.2 and 4.3, while the constant C(n 0 ) is as defined in Theorem 4.3.
Proof See Section 6.4.
In the result above, we have chosen the stepsize to be c/k α as choosing c/k does not guarantee a O(1/n) rate (see Remark 4.3).
UBSR Optimization
Recall the UBSR optimization problem:
Find θ* ∈ arg min_{θ∈Θ} SR_λ(X(θ)),    (9)
where Θ is a compact and convex subset of R.
In this section, we devise a stochastic gradient algorithm that aims to solve the problem (9) using the following update iteration:
θ_{k+1} = θ_k − a_k h′_k(θ_k),    (10)
where a_k is a step-size parameter, and h′_k(θ_k) is an estimate of dSR_λ(θ)/dθ.
We operate in a risk-sensitive learning framework, i.e., we do not have direct access to UBSR SR λ (θ) and its derivative dSR λ (θ) dθ , for any θ. Instead, we can obtain samples of the underlying r.v. X(θ) corresponding to any parameter θ, and use these samples to form the estimate h ′ k (·). Let m k denote the number of i.i.d. samples in iteration k of (10) to estimate h ′ k (θ k ). In the section below, we describe the derivative estimation scheme, and subsequently present non-asymptotic bounds for the iterate governed by (10).
Estimation of UBSR derivative
We begin by presenting the expression for the derivative of SR λ (X(θ)) w.r.t. θ, derived in Hu and Dali [2016] :
Letting ξ = −X,
dSR_λ(θ)/dθ = A(θ)/B(θ),    (11)
where
A(θ) := E[ℓ′(ξ(θ) − SR_λ(θ)) ξ′(θ)],  and  B(θ) := E[ℓ′(ξ(θ) − SR_λ(θ))].
The expression above is derived by first interchanging the differentiation and integration operators in dSR_λ(θ)/dθ, and then invoking the implicit function theorem. The assumptions needed to justify these steps are given below.
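As a brief sketch of that argument (our own remark, assuming the interchange of differentiation and expectation is justified as discussed below): differentiating the identity E[ℓ(ξ(θ) − SR_λ(θ))] = λ with respect to θ and solving for dSR_λ(θ)/dθ yields (11).

```latex
\frac{d}{d\theta}\,\mathbb{E}\big[\ell(\xi(\theta) - SR_\lambda(\theta))\big]
= \mathbb{E}\Big[\ell'(\xi(\theta) - SR_\lambda(\theta))\Big(\xi'(\theta) - \tfrac{dSR_\lambda(\theta)}{d\theta}\Big)\Big] = 0
\;\Longrightarrow\;
\frac{dSR_\lambda(\theta)}{d\theta}
= \frac{\mathbb{E}\big[\ell'(\xi(\theta) - SR_\lambda(\theta))\,\xi'(\theta)\big]}{\mathbb{E}\big[\ell'(\xi(\theta) - SR_\lambda(\theta))\big]}.
```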
We now present a scheme for estimating the UBSR derivative dSR_λ(θ)/dθ, for a given θ. Suppose we are given samples {ξ_1, . . . , ξ_m} from the distribution of −X(θ) for a given parameter θ. Using these samples, we form a biased estimator h′_m(θ) of the UBSR derivative as follows:
h′_m(θ) = A_m/B_m,    (12)
where
A_m(θ) = (1/m) Σ_{i=1}^m ℓ′(ξ_i(θ) − t_m(θ)) ξ′_i(θ),   B_m(θ) = (1/m) Σ_{i=1}^m ℓ′(ξ_i(θ) − t_m(θ)).
In the above, t_m(θ) is an estimate of SR_λ(θ), which is obtained by running (5) for m iterations. Notice that the estimate defined above is a ratio of estimates for the quantities A(θ) and B(θ), which are used in the expression (11) for dSR_λ(θ)/dθ. Notice that A_m(θ) and B_m(θ) are not unbiased estimates of A(θ) and B(θ), since the UBSR estimate t_m(θ) is biased. Hence, it is apparent that h′_m(θ) is a biased estimate of the UBSR derivative. Even though our estimator A_m/B_m for the UBSR derivative is biased, we show in Lemma 5.1 that the expected estimation error is of the order O(1/√m), which implies our estimator converges to the UBSR derivative as the number of samples m tends to infinity.
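A minimal sketch of this estimator (our own illustration; the names are ours) is given below. It takes the samples ξ_i(θ), their pathwise derivatives ξ′_i(θ), and a UBSR estimate t_m(θ) obtained from (5).

```python
import numpy as np

def ubsr_derivative_estimate(xi, dxi_dtheta, loss_prime, t_m):
    """Batch estimate of dSR_lambda(theta)/dtheta, following (12).

    xi          : array of samples xi_i(theta) from the distribution of -X(theta)
    dxi_dtheta  : array of pathwise derivatives xi_i'(theta)
    loss_prime  : derivative l'(.) of the loss function
    t_m         : UBSR estimate t_m(theta) computed from the same batch via (5)
    """
    w = loss_prime(np.asarray(xi) - t_m)       # l'(xi_i - t_m)
    A_m = np.mean(w * np.asarray(dxi_dtheta))
    B_m = np.mean(w)                           # bounded below by eta under Assumption 5.3
    return A_m / B_m
```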
Assumptions. We make the following assumptions for analyzing the consistency property of the UBSR derivative estimate (12). Recall that ξ = −X.
Assumption 5.1 The partial derivatives ∂ℓ(ξ(θ) − t(θ))/∂θ and ∂ℓ(ξ(θ) − t(θ))/∂t exist w.p. 1, and there exists a β_1 > 0 such that E[(ℓ′(ξ(θ) − SR_λ(θ)))^2] ≤ β_1 < ∞, ∀θ ∈ Θ.
Assumption 5.2 The loss function ℓ(·) satisfies, w.p. 1, |ℓ′(ξ(θ) − t)| ≤ L_1 and |ℓ″(ξ(θ) − t)| ≤ L_2, ∀(θ, t) ∈ Θ × [t_l, t_u].
Assumption 5.3 The loss function ℓ(·) is twice differentiable, and for any θ ∈ Θ, ℓ′(ξ(θ) − SR_λ(θ)) > η w.p. 1.
Assumption 5.4 sup_{θ∈Θ} |ξ′(θ)| ≤ M_2, and ξ′ is L_3-Lipschitz for all θ ∈ Θ w.p. 1.
We now discuss the motivation behind the assumptions listed above. The second moment bounds in Assumption 5.1 facilitate a convergence rate result for the estimator (12), and a similar assumption has been made in Hu and Dali [2016] in the context of an asymptotic normality result. The Lipschitz conditions in Assumption 5.2 allow the interchange of expectation and differentiation operators in arriving at the expression (11) for the UBSR derivative, see also Hu and Dali [2016]. From the condition in Assumption 5.3 and the definition of B_m, it is apparent that B_m(θ) > η. Finally, the conditions in Assumptions 5.3 and 5.4, in conjunction with Assumption 5.2, ensure that the function ℓ′(ξ(θ) − SR_λ(θ)) ξ′(θ) is Lipschitz, and this in turn enables the derivation of a convergence rate result for the estimate (12). Note that the loss functions in Examples 2.1 and 2.2 satisfy all the assumptions.
We now present a rate result for the UBSR derivative estimate (12).
Lemma 5.1 Suppose Assumptions 3.1 to 4.2 hold for every θ ∈ Θ and Assumptions 5.1 to 5.4 hold. Then for all m ≥ 1, the UBSR derivative estimator (12) satisfies
E| h′_m(θ) − dSR_λ(θ)/dθ | ≤ C_4/√m,  and  E[ (h′_m(θ) − dSR_λ(θ)/dθ)^2 ] ≤ C_5,
where C_4 = √β_1 (L_1 L_3 + 2M_2 L_2) ς M_0^2/η^2 and C_5 = 8β_1 M_2^2 (L_1^2 + β_1)/η^4. Here the constants β_1, L_1, L_2, L_3, M_0, M_2 and η are as specified in Assumptions 4.1 and 5.1 to 5.4.
Proof The proof uses a connection between the deviation of the empirical mean of a r.v. from its true mean and the 1-Wasserstein distance between the empirical and true distribution functions. For a detailed proof, see Section 7.1.
Under assumptions similar to those listed above, the authors in Hu and Dali [2016] establish asymptotic consistency as well as normality results. In contrast, we establish a result in the non-asymptotic regime, with an O(1/√m) rate that matches the aforementioned asymptotic rate.
Non-asymptotic bounds for UBSR optimization
In this section, we derive non-asymptotic bounds for UBSR optimization using the biased derivative estimates given above. We treat the strongly convex case first and then generalize to any convex SR λ (·) in the subsequent subsection.
Strongly convex case
In this subsection, let us assume that the UBSR objective SR_λ(θ) is a strongly convex function, i.e.,
Assumption 5.5 For any θ ∈ Θ, the function h(θ) = SR_λ(θ) satisfies h″(θ) > µ_2, for some µ_2 > 0.
In order to derive a non-asymptotic bound for the last iterate, we need an upper bound on the second derivative of SR_λ(θ). This is provided in the following lemma.
Lemma 5.2 The absolute value of the second derivative of SR_λ(θ), |h″(θ)|, is bounded as
|h″(θ)| ≤ (2L_1 L_2 M_2^2 + L_1^2 L_3)/η^2 = L_4,    (13)
where L_1, L_2, L_3, M_2 and η are specified in Assumptions 5.2 to 5.4.
Proof The expression for h ′′ (θ) is obtained by differentiating h ′ (θ).
h″(θ) = d^2 SR_λ(θ)/dθ^2 = [ (dA/dθ) B − (dB/dθ) A ] / B^2,    (14)
where
A = E[ℓ′(ξ(θ) − SR_λ) ξ′(θ)],  B = E[ℓ′(ξ(θ) − SR_λ)],
dA/dθ = E[ℓ″(ξ(θ) − SR_λ) ξ′(θ)^2 + ℓ′(ξ(θ) − SR_λ) ξ″(θ)],  and  dB/dθ = E[ℓ″(ξ(θ) − SR_λ) ξ′(θ)].
Using Assumptions 5.2 to 5.4, we bound the absolute value of h″(θ) as follows:
|h″(θ)| ≤ [ |dA/dθ| |B| + |dB/dθ| |A| ] / B^2 ≤ [ (L_2 M_2^2 + L_1 L_3) L_1 + (L_2 M_2) L_1 M_2 ] / η^2 = (2L_1 L_2 M_2^2 + L_1^2 L_3)/η^2.
We now present a non-asymptotic bound for the last iterate θ n of the algorithm (10) with gradient estimates formed using (12). The batch size m used for gradient estimation is kept constant in each iteration k = 1, . . . , n. Using the results from Lemma 5.1 in conjunction with Assumption 5.5, we present a bound on the error E[ θ n − θ * 2 ] in the optimization parameter in the theorem below.
Theorem 5.1 Suppose Assumptions 3.1 to 4.2 hold for every θ ∈ Θ and Assumptions 5.1 to 5.5 hold. Let θ* denote the minimum of SR_λ(·). Set a_k = c/k in (10), with µ_2 c > 1/2. For each iteration of (10), let m denote the batch size used for computing the estimate (12) corresponding to the parameter θ_k, k = 1, . . . , n. Then, for all n ≥ 1, we have
E[|θ_n − θ*|^2] ≤ exp(c^2 L_4^2 π^2/6) [ 3|θ_0 − θ*|^2/n^{2µ_2 c} + C_6/n + C_7/m ],    (15)
where C_6 = 3 C_5 2^{2µ_2 c} c^2/(2µ_2 c − 1), and C_7 = 3 C_4^2 c^2 2^{5µ_2 c}/(µ_2 c)^2, with C_4 and C_5 as defined in Lemma 5.1.
The batch size could be chosen as a function of the horizon n. Results in a similar spirit, i.e., where a stochastic gradient algorithm is run for n iterations, and the parameters such as step-size and batch size are set as a function of n are common in the literature, cf. Ghadimi and Lan [2013], Balasubramanian and Ghadimi [2018].
Proof See Section 7.2.
The first term in (15) represents the initial error, and it is forgotten at a rate faster than O(1/n) since µ 2 c > 1/2. The overall rate for the algorithm would depend on the choice of the batch size m, and it is apparent that the error E[ θ n − θ * 2 ] does not vanish with a constant batch size. As in the case of Theorem 4.1, we observe that the error E[ θ n − θ * ] has an inverse dependence on the strong convexity parameter µ 2 .
We now present a straightforward corollary of the result in Theorem 5.1 with a batch size that ensures the error in the parameter vanishes asymptotically.
Corollary 5.1 Under conditions of Theorem 5.1, with m = n ρ for some ρ ∈ (0, 1], we have
E[|θ_n − θ*|^2] ≤ exp(c^2 L_4^2 π^2/6) [ 3|θ_0 − θ*|^2/n^{2µ_2 c} + C_6/n + C_7/n^ρ ] = O(1/n^ρ).
A few remarks are in order.
Remark 5.1 From the result in the corollary above, it is easy to see that the optimal choice of batch size is m = Θ(n), and this in turn ensures an O(1/n) rate of convergence for the stochastic gradient algorithm (10). With a biased derivative estimation scheme in a slightly different context, the authors in Atchade et al. [2014] show that an increasing batch size is necessary for the error of a gradient descent type algorithm to vanish. Finally, the O(1/n) bound in Theorem 5.1, which is for a setting where gradient estimates are biased, matches the minimax complexity result for strongly convex optimization with a stochastic first order oracle, cf. Agarwal et al. [2012].
Remark 5.2 In the result above, we have bounded the error E[|θ_n − θ*|^2] in the optimization parameter. Using Assumption 5.2 and m = Θ(n), we can also bound the optimization error E[SR_λ(θ_n)] − SR_λ(θ*) using Corollary 5.1 as follows:
E[SR_λ(θ_n)] − SR_λ(θ*) ≤ (L_2/2) E[|θ_n − θ*|^2] = O(1/n).
For achieving this O(1/n) rate, we used a batch size of Θ(n) in each iteration of (10), leading to a total sample complexity of Θ(n^2).
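Putting the pieces together, the following sketch (our own illustration; sample_xi and all argument names are illustrative assumptions) runs the recursion (10) with a batch of m = n samples per iteration, re-estimating the UBSR via (5) and its derivative via (12) at each step.

```python
import numpy as np

def ubsr_sgd(sample_xi, loss, loss_prime, lam, t_l, t_u, theta0, n, c=1.0):
    """Stochastic gradient recursion (10) with batch size m = n per iteration.

    sample_xi(theta, m) is assumed to return (xi, dxi_dtheta): m i.i.d. samples
    of -X(theta) together with their pathwise derivatives w.r.t. theta.
    The step size a_k = c/k matches Theorem 5.1 (which requires mu_2 * c > 1/2).
    """
    theta, m = theta0, n
    for k in range(1, n + 1):
        xi, dxi = sample_xi(theta, m)
        # UBSR estimate t_m(theta_k) via the projected recursion (5)
        t = 0.0
        for j, x in enumerate(xi, start=1):
            t = np.clip(t + (c / j) * (loss(x - t) - lam), t_l, t_u)
        # Derivative estimate (12) and gradient step
        w = loss_prime(np.asarray(xi) - t)
        grad = np.mean(w * np.asarray(dxi)) / np.mean(w)
        theta = theta - (c / k) * grad
    return theta
```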
Remark 5.3
To understand the deviation from the non-asymptotic analysis of a regular stochastic gradient algorithm (cf. Moulines and Bach [2011]), we provide a brief sketch of the proof of Theorem 5.1.
Letting M_k = ∫_0^1 h″(mθ_k + (1 − m)θ*) dm, and z_n = θ_n − θ*, we have z_k = z_{k−1}(1 − a_k M_{k−1}) − a_k ε_{k−1}, where ε_k = h′_m(θ_k) − h′(θ_k). Unlike the setting of Moulines and Bach [2011], the noise in the derivative estimate ε_k is biased, i.e., E[ε_k] ≠ 0. Now, unrolling the recursion above and taking expectations, we obtain
E[z_n^2] ≤ 3E[z_0^2] ∏_{k=1}^n (1 − a_k M_{k−1})^2 + 3E[ ( Σ_{k=1}^n a_k ε_{k−1} ∏_{j=k+1}^n (1 − a_j M_{j−1}) )^2 ]    (16)
≤ 3E[z_0^2] n^{−2µ_2 c} + 3 Σ_{k=1}^n (c^2/k^2) E[ε_{k−1}^2] (P_{k+1:n})^2   [term (I)]
  + 3 Σ_{k ≠ l} a_k a_l E[|ε_{l−1}|] E[|ε_{k−1}|] P_{k+1:n} P_{l+1:n}   [term (II)],    (17)
where P_{i:j} = ∏_{k=i}^j (1 − a_k M_{k−1}). In the above, we used strong convexity to bound the first term in (16). Term (II) in (17) is extra when compared to the analysis in the unbiased case. The rest of the proof uses the bounds obtained in Lemma 5.1 to bound terms (I) and (II) on the RHS of (17).
In this subsection, we relax the strong convexity assumption, and work with any convex SR λ (·) function:
Assumption 5.6 For any θ ∈ Θ, the function h(θ) = SR λ (θ) satisfies h ′′ (θ) ≥ 0.
Next, since Θ is assumed to be a compact and convex subset of R in the problem (9), it has a finite diameter, as specified in the assumption below.
Assumption 5.7 The set Θ satisfies |θ 1 − θ 2 | ≤ D, ∀ θ 1 , θ 2 ∈ Θ, for some D > 0.
The stochastic gradient descent expression is given as follows:
θ k+1 = Π Θ (θ k − a k h ′ m (θ k )),(18)
where a k is a step-size parameter, h ′ m (θ k ) is an estimate of dSR λ (θ) dθ using m samples and Π Θ is the projection on to the set Θ.
The analysis in the convex case is for an algorithm that requires the knowledge of the horizon n, which is the number of iterations for which (18) is run. Using the value of n, we employ the following step-size selection scheme from Jain et al. [2021]:
n i = n − ⌈2 −i n⌉, 0 ≤ i ≤ p, and n p+1 = n,(19)
p := inf{i : 2 −i n ≤ 1}. In essence, the above scheme splits the horizon n into p phases, and keeps the step-size constant within a given phase.
Theorem 5.2 Suppose Assumptions 3.1 to 4.2 hold for every θ ∈ Θ, Assumptions 5.1 to 5.4, 5.6 and 5.7 hold. Suppose the update in (18) is performed for n iterations with step-size a k and batch size m k set as follows:
a k = a 0 2 −i √ n , and m k = 2 i n,
for some constant a 0 when n i < k ≤ n i+1 , 0 ≤ i ≤ p with n i , p as defined in (19). Then for any n ≥ 4,
E[h(θ n ) − h(θ * )] ≤ K 2 √ n + K 3 n ,(21)
where K 2 = 4D 2 a0 + 39DC 4 + (10C 5 + 11B 2 )a 0 , K 3 = 16a 0 BC 4 and B = L1M2 η .
Proof See Section 7.3.
Remark 5.4 From the bound above, it is apparent that for obtaining the O 1 √ n rate, the sample complexity of the algorithm (18) is n 2 log n.
Comparison to optimization with an inexact gradient oracle
In this section, we compare our contributions in the context of UBSR optimization to previous works that consider stochastic gradient algorithms with inputs from an inexact gradient oracle. A few recent works on this topics are For invoking the results from either Bhavsar and Prashanth [2022] or Pasupathy et al. [2018] for UBSR optimization, one requires a non-asymptotic bound for UBSR estimation, which we derive in our paper. In particular, these references consider an abstract optimization setting where the objective function measurements are biased, and the bias can be controlled through a batch size parameter. The bounds in Section 4 would enable UBSR optimization through a stochastic gradient scheme, and the results from these two references would apply. The gradient estimation scheme in the aforementioned references is based on the idea of simultaneous perturbation (or in simpler terms, finite differences), which is a 'black-box' scheme, i.e., does not use the form/structure of the objective function. In contrast, we use the form of the UBSR objective, which in turn leads to an expression for its derivative. Using this expression, we form an estimate of UBSR derivative from i.i.d. samples, and then analyze the statistical properties of the 'direct' estimator in Lemma 5.1. Thus, the bounds we derive in Theorem 5.1 are specialized to the UBSR optimization problem, leading to precise constants. Finally, in Pasupathy et al. [2018], the authors only provide asymptotic rate results in the form of central limit theorems, while we study the UBSR optimization problem from a non-asymptotic viewpoint. The bounds we derive contain precise guidelines for choosing step-size and batch size parameters, which aid practical implementations.
Next, the stochastic optimization framework considered in Balasubramanian and Ghadimi [2018] is not directly applicable for UBSR optimization, as they assume that the objective function measurements have zero-mean noise, while UBSR estimation results in a noise component with a positive mean. The latter can be controlled using the batch size used for estimation. In Karimi et al. [2019], the authors derive a non-asymptotic bound of the order O(log n/ √ n) using a stochastic gradient algorithm for a biased stochastic optimization problem. However, their framework does not feature a batch size parameter, and their result requires the existence of a Lyapunov function. Finally, in Devolder [2011], the author considers a biased gradient oracle, and provides a O(1/n) bound. However, their results are not directly applicable for UBSR optimization, as they consider a deterministic bias parameter, and their result does not feature a tunable batch-size parameter.
6 Proofs for SR estimation
6.1 Proof of Theorem 4.1
From the update rule (5), and the fact that SR_λ(X) lies within the projected region [t_l, t_u], we obtain
z_n = t_n − SR_λ(X) = Γ(t_{n−1} + a_n(g(t_{n−1}) + ε_{n−1})) − Γ(SR_λ(X)) = Γ(t_{n−1} + a_n(g(t_{n−1}) + ε_{n−1})) − SR_λ(X).    (22)
For any k ≥ 1, define
J_k = ∫_0^1 g′(m t_k + (1 − m) SR_λ(X)) dm.    (23)
Using Assumption 4.2, we obtain J k ≤ −µ 1 , for all k ≥ 1. Using J n we can express g(t n ) as,
g(t_n) = ∫_0^1 g′(m t_n + (1 − m) SR_λ(X)) dm · (t_n − SR_λ(X)) = J_n z_n.
Squaring on both sides of (22), and using the fact that projection is non-expansive (see Lemma 10 in Vijayan and Prashanth [2021]), we obtain z 2 n ≤ [z n−1 + a n (g(t n−1 ) + ε n−1 )] 2 ≤ [z n−1 + a n (J n−1 z n−1 + ε n−1 )] 2 ≤ [z n−1 (1 + a n J n−1 ) + a n ε n−1 ] 2 ≤ z 2 n−1 (1 + a n J n−1 ) 2 + a 2 n ε 2 n−1 + 2z n−1 (1 + a n J n−1 )a n ε n−1 . Taking expectation E[z n |F n−1 ], where F n−1 is the sigma field generated by {t k , k < n}, and using E[ε n ] = 0, we obtain E[z 2 n ] ≤ (1 + a n J n−1 ) 2 E[z 2 n−1 ] + a 2 n E[ε 2 n−1 ] + 2E[z n−1 ](1 + a n J n−1 )a n E[ε n−1 ] ≤ (1 + a n J n−1 ) 2 E[z 2 n−1 ] + a 2 n E[ε 2 n−1 ]. Using Assumption 4.2, we have
E[z_n^2] ≤ (1 + a_n J_{n−1})^2 E[z_{n−1}^2] + a_n^2 σ^2 ≤ E[z_0^2] ∏_{k=1}^n (1 + a_k J_{k−1})^2 + σ^2 Σ_{k=1}^n [ a_k^2 ∏_{j=k+1}^n (1 + a_j J_{j−1})^2 ].    (24)
Using −L 1 ≤ J k ≤ −µ 1 , we have
(1 + a k J k ) 2 = 1 + 2a k J k + a 2 k J 2 k ≤ 1 − 2a k µ 1 + a 2 k L 2 1 ≤ exp −2µ 1 a k + L 2 1 a 2 k . Hence, we obtain E[z 2 n ] ≤ E[z 2 0 ] exp −2µ 1 n k=1 a k + L 2 1 n k=1 a 2 k + σ 2 n k=1 a 2 k exp −2µ 1 n j=k+1 a j + L 2 1 n j=k+1 a 2 j ≤ E[z 2 0 ] exp −2µ 1 c log n + L 2 1 c 2 π 2 6 + σ 2 n k=1 a 2 k exp −2µ 1 c log n k + 1 + L 2 1 c 2 π 2 6 (25) ≤ exp L 2 1 c 2 π 2 6 E[z 2 0 ] n 2µ1c + σ 2 n k=1 a 2 k n k + 1 −2µ1c ≤ exp L 2 1 c 2 π 2 6 E[z 2 0 ] n 2µ1c + σ 2 n −2µ1c n k=1 c 2 k 2 (k + 1) 2µ1c ≤ exp L 2 1 c 2 π 2 6 E[z 2 0 ] n 2µ1c + σ 2 2 n 2µ1c n k=1 c 2 k 2µ1c−2 (26) ≤ exp L 2 1 c 2 π 2 6 E[z 2 0 ] n 2µ1c + σ 2 2 4µ1c c 2 (2µ 1 c − 1) 1 n .
For the inequality in (25), we have used Σ_{k=1}^n 1/k^2 ≤ Σ_{k=1}^∞ 1/k^2 = π^2/6. For bounding the sum in (26), we used
(1/n^{2µ_1 c}) Σ_{k=1}^n k^{2µ_1 c − 2} ≤ (1/n^{2µ_1 c}) ∫_0^{n+1} k^{2µ_1 c − 2} dk ≤ (n + 1)^{2µ_1 c − 1}/(n^{2µ_1 c}(2µ_1 c − 1)) ≤ (2^{µ_1 c}/(2µ_1 c − 1)) (1/n).    (27)
Hence proved.
Proof of Theorem 4.2
We use the technique from Frikha and Menozzi [2012], and tailor the analysis to the SR estimation problem, instead of a general stochastic approximation scheme in Frikha and Menozzi [2012]. Moreoever, unlike the bounds in the aforementioned reference, we make all the constants explicit.
The centered form of the iterate z n = t n − SR λ (X) can be written as a telescoping sum as follows:
|z_n| − E[|z_n|] = Σ_{k=1}^n (g_k − g_{k−1}) = Σ_{k=1}^n D_k, where g_k = E[|z_n| | F_k], D_k = g_k − g_{k−1}, and F_k = σ(t_1, . . . , t_k).
Let t i j (t) denote the iterate at time instant j, given that t i = t. Using this notation, we have
E[|t i j+1 (t) − t i j+1 (t ′ )| 2 ] ≤ E[|t i j (t) − t i j (t ′ ) + a j (ĝ(t i j+1 (t)) −ĝ(t i j (t ′ ))| 2 ] ≤ E[|t i j (t) − t i j (t ′ )| 2 ] + 2a j E[t i j+1 (t) − t i j+1 (t ′ )]E[ĝ(t i j+1 (t)) −ĝ(t i j (t ′ ))]+ a 2 J E[|ĝ(t i j+1 (t)) −ĝ(t i j (t ′ ))| 2 ] ≤ (1 − 2µ 1 a j + a 2 j L 2 1 )E[|t i j (t) − t i j (t ′ )| 2 ]
. Unrolling the recursion above, we obtain
E[|t i n (t) − t i n (t ′ )| 2 ] ≤ |t − t ′ | 2 n j=1 (1 − 2µ 1 a j + a 2 j L 2 1 ), leading to E[|t n − SR λ (X)||t i = t] − E[|t n − SR λ (X)||t i = t ′ ] ≤ E[|t i n (t) − t i n (t ′ )|] ≤ |t − t ′ |( n−1 j=1 (1 − 2µ 1 a j + a 2 j L 2 1 )) 1/2 ≤ a i |ĝ −ĝ ′ |( n−1 j=1 (1 − 2µ 1 a j + a 2 j L 2 1 )) 1/2 ≤ Γ i |ĝ −ĝ ′ |. where Γ i = a i ( n−1 j=i (1 − 2µ 1 a j + a 2 j L 2 1 )) 1/2 , t = t i−1 + a iĝ , and t ′ = t i−1 + a iĝ ′ . Now, P(|z n | − E[|z n |] > ε) = P ( n k=1 D k > ε) ≤ exp(−λε)(E[exp(λ n k=1 D k )]) ≤ exp(−λε)E[exp(λ n−1 k=1 D k )]E[exp(λD n )|F n−1 ].(28)
From the proof passage in [Prashanth et al., 2021, p. 585], it can be seen that a Γ-Lipschitz function f of a r.v. Z satisfying |Z| ≤ M_0 is Γ²M_0²-sub-Gaussian, i.e.,
E[exp(λ(f (Z))] ≤ exp λ 2 Γ 2 M 2 0 2 .
Using Assumption 4.1, and the fact that ℓ is L_1-Lipschitz, we have that ĝ is L_1²M_0²-sub-Gaussian. Next, D_n is a Γ_n-Lipschitz function of ĝ, implying D_n is 4Γ_n²L_1²M_0²-sub-Gaussian. Using the bound above, we obtain E[exp(λD_n)|F_{n−1}] ≤ exp(2λ²Γ_n²L_1²M_0²). Plugging this bound into (28), followed by an optimization over λ, we obtain
P(|z n | − E[|z n |] > ε) ≤ exp(−λε) exp(2λ 2 L 2 1 M 2 0 n k=1 Γ 2 k ) ≤ exp − ε 2 16L 2 1 M 2 0 n k=1 Γ 2 k .(29)
We now specialize the bound in (29) using a_k = c/k, with 1/2 < µ_1 c. In particular, we first compute Σ_{k=1}^n Γ_k² for this step-size choice, and subsequently derive the high probability bound. Proceeding as in (25)-(27), we obtain
Σ_{k=1}^n Γ_k² ≤ exp(L_1² c² π²/6) Σ_{k=1}^n (c²/k²) ((k + 1)/n)^{2µ_1 c} ≤ exp(L_1² c² π²/6) (2^{4µ_1 c} c²/(2µ_1 c − 1)) (1/n).
Using the bound on n k=1 Γ 2 k in (29), we obtain
P(|z n | − E[|z n |] > ε) ≤ exp −C 1 nε 2 ,(30)
where C_1 = ((2µ_1 c − 1)/(2^{4µ_1 c + 4} c² L_1² M_0²)) exp(−L_1² c² π²/6). Using the bound on E[|z_n|] from Theorem 4.1 in (30), we have
P |z n | − E|z n | ≤ log (1/δ) cn + exp L 2 1 c 2 π 2 12 E[|t 1 − SR λ (X)|] n µ1c + cσ2 2µ1c (2µ 1 c − 1) √ n ≥ 1 − δ.
Hence proved.
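A rough empirical companion to this bound: the snippet below reuses the sr_estimate sketch given after Theorem 4.1 (so the same illustrative assumptions apply) and estimates the (1 − δ)-quantile of the deviation of t_n from its sample mean over independent runs, which can be compared against the O(1/√n) scale predicted by Theorem 4.2.

```python
import numpy as np

def deviation_quantile(num_runs=200, n=2_000, delta=0.1):
    """(1 - delta)-quantile of |t_n - mean(t_n)| over independent runs,
    using the sr_estimate sketch from the previous subsection."""
    rng = np.random.default_rng(0)
    beta, lam = 1.0, 0.5
    ell = lambda x: np.expm1(beta * x) / beta      # illustrative utility
    draw = lambda r: r.normal(0.0, 1.0)            # illustrative loss law
    est = np.array([sr_estimate(draw, ell, lam, t_l=-5.0, t_u=5.0,
                                c=2.0, n=n, rng=rng)
                    for _ in range(num_runs)])
    return np.quantile(np.abs(est - est.mean()), 1.0 - delta)
```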
6.3 Proof of Theorem 4.3
The passage leading up to (24) holds for any choice of stepsize, and does not require the knowledge of µ 1 for setting the stepsize constant c. Using (24) as the starting point, we have
E[z 2 n ] ≤ E[z 2 0 ] n k=1 (1 + a k J k−1 ) 2 + σ 2 n k=1 [a 2 k n j=k+1 (1 + a j J j−1 ) 2 ].(31)
We split the terms on the RHS above into two regimes: k < n 0 and k ≥ n 0 . From Assumption 4.2, we have |J k | < L 1 .
We shall now simplify (31) under two different stepsize choices.
Case I: a_k = c/k. Notice that
n k=1 (1 + a k J k−1 ) 2 = n0 k=1 (1 + a 2 k J 2 k−1 + 2a k J k−1 ) n k=n0+1 (1 + a k J k−1 ) 2 ≤ n0 k=1 (1 + a 2 k L 2 1 + 2a k L 1 ) n k=n0+1 (1 + a k J k−1 ) 2 ≤ (1 + cL 1 ) 2n0 e −µ1 n n 0 +1 a k ≤ (1 + cL 1 ) 2n0 e −µ1c log n n 0 +1 ≤ (1 + cL 1 ) 2n0 n 0 + 1 n µ1c ≤ C(n 0 ) 1 n µ1c ,
where C(n 0 ) = (1 + cL 1 ) 2n0 (n 0 + 1) µ1c .
We now handle the second term in (31) as follows:
Σ_{k=1}^n [a_k² Π_{j=k+1}^n (1 + a_j J_{j−1})²] ≤ (1 + cL_1)^{2n_0} ((n_0 + 1)/n)^{µ_1 c} Σ_{k=1}^{n_0−1} a_k² + Σ_{k=n_0}^n a_k² ((k + 1)/n)^{µ_1 c} ≤ (1 + cL_1)^{2n_0} (n_0 + 1)^{µ_1 c} (π²/6) (1/n^{µ_1 c}) + (1/n^{µ_1 c}) Σ_{k=n_0}^n (c²/k²) (k + 1)^{µ_1 c}. (32)
In the above, we used n k=1 a 2 k = n k=1 c 2 k 2 < c 2 π 2 6 to arrive at the inequality in (32).
We now simplify (32) based on the value of µ 1 c in the following three cases:
Case a: µ 1 c > 1
Using the bound in (27), we have n k=n0 c 2 k 2 ( k+1 n ) µ1c ≤ 2 µ 1 c c 2 (µ1c−1) 1 n . Substituting this bound in (32), we obtain
E[z 2 n ] ≤ C(n 0 ) E[z 2 0 ] + σ 2 π 2 6 1 n µ1c + σ 2 c 2 2 µ1c (µ 1 c − 1) 1 n .(33)
Case b: µ_1 c = 1. In this case, we have
Σ_{k=n_0}^n (c²/k²) ((k + 1)/n)^{µ_1 c} ≤ (2/n) Σ_{k=n_0}^n c²/k ≤ 2c² log(n + 1)/n.
Substituting the bound derived above in (32), we obtain
E[z 2 n ] ≤ C(n 0 ) E[z 2 0 ] + σ 2 π 2 6 1 n + 2σ 2 c 2 log(n + 1) n .(34)
Case c: µ_1 c < 1. In this case, we can infer that
1 n µ1c n k=n0 c 2 k 2 (k + 1) µ1c ≤ 2 µ1c n µ1c n k=n0 c 2 k (1+(1−µ1c)) ≤ 2 µ1c c 2 (1 − µ 1 c)n ,
leading to the following overall bound:
E[z 2 n ] ≤ C(n 0 ) E[z 2 0 ] + σ 2 π 2 6 1 n µ1c + σ 2 2 µ1c c 2 (1 − µ 1 c) 1 n .(35)
We now turn to analyzing the case when the stepsize a k is larger than c/k.
Case II: a_k = c/k^α for α ∈ (0, 1). First, we bound a factor in the first term of (31) as follows:
n k=1 (1 + a k J k−1 ) 2 = n0 k=1 (1 + a 2 k J 2 k−1 + 2a k J k ) n k=n0+1 (1 + a k J k−1 ) 2 ≤ (1 + cL 1 ) 2n0 exp −µ 1 n n0+1 a k ≤ (1 + cL 1 ) 2n0 exp − µ 1 c(n 1−α − n 1−α 0 ) 1 − α ≤ (1 + cL 1 ) 2n0 exp µ 1 cn 1−α 0 1 − α exp − µ 1 cn 1−α 1 − α ≤ C(n 0 ) exp − µ 1 cn 1−α 1 − α ,(36)
where C(n_0) = (1 + cL_1)^{2n_0} exp(µ_1 c n_0^{1−α}/(1 − α)).
We now bound the second term in (31) by splitting the term around n_0 as follows:
Σ_{k=1}^n [a_k² Π_{j=k+1}^n (1 + a_j J_{j−1})²]
≤ C(n 0 ) exp − µ 1 cn 1−α 1 − α n0−1 k=1 a 2 k + n k=n0 a 2 k exp − µ 1 c(n 1−α − k 1−α ) 1 − α ≤ C(n 0 )c 2 n 0 exp − µ 1 cn 1−α 1 − α + c 2 exp − µ 1 cn 1−α 1 − α n k=n0 k −2α exp µ 1 ck 1−α 1 − α (37) ≤ C(n 0 )c 2 n 0 exp − µ 1 cn 1−α 1 − α + 2(µ 1 c) α 1−α c 2 1 − α 1 n α .(38)
In arriving at (38), we have bounded the sum c² exp(−µ_1 c n^{1−α}/(1−α)) Σ_{k=n_0}^n k^{−2α} exp(µ_1 c k^{1−α}/(1−α)) in (37) by using arguments similar to those used in arriving at [Prashanth et al., 2021, Eq. (79)]. In particular, the latter bound uses Jensen's inequality and the convexity of f(x) = x^{−2α} exp(x^{1−α}). Substituting the bounds in (36) and (38) in (31), we obtain
E[z 2 n ] ≤ C(n 0 ) E[z 2 0 ] + σ 2 c 2 n 0 exp − µ 1 cn 1−α 1 − α + σ 2 2(µ 1 c) α 1−α c 2 (1 − α)n α .(39)
Hence proved.
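To make the dichotomy concrete: the schedule a_k = c/k attains the O(1/n) rate but only when c > 1/(2µ_1), whereas a_k = c/k^α with α ∈ (0, 1) needs no knowledge of µ_1 at the price of the slower O(1/n^α) rate. The following minimal Python helper (names are ours) can be dropped into the estimation sketch given after Theorem 4.1 to switch between the two choices.

```python
def step_size(k, c, alpha=None):
    """Step-size schedules compared in Theorems 4.1 and 4.3.

    alpha=None gives a_k = c/k (requires c > 1/(2*mu_1) for the O(1/n) rate);
    alpha in (0, 1) gives the 'universal' a_k = c/k**alpha, which needs no
    knowledge of mu_1 but only yields the slower O(1/n**alpha) rate.
    """
    return c / k if alpha is None else c / k ** alpha
```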
6.4 Proof of Theorem 4.4
Recall that n 0 is chosen such that for all n ≥ n 0 , we have c n α L 2 1 < µ 1 . Notice that
n k=1 Γ 2 k = n0−1 k=1 Γ 2 k + n k=n+0 Γ 2 k .(40)
We simplify the first term on the RHS as follows:
Σ_{k=1}^{n_0−1} Γ_k² = Σ_{k=1}^{n_0−1} a_k² Π_{j=k}^{n−1} (1 − 2µ_1 a_j + a_j² L_1²) = Σ_{k=1}^{n_0−1} a_k² (Π_{j=k}^{n_0−1} (1 − 2µ_1 a_j + a_j² L_1²)) (Π_{j=n_0}^{n} (1 − 2µ_1 a_j + a_j² L_1²))
≤ (1 + c 2 L 2 1 ) n0 n0−1 k=1 a 2 k (1 + c 2 L 2 1 ) −k n j=n0 (1 − a j (2µ 1 − a j L 2 1 )) ≤ (1 + c 2 L 2 1 ) n0 n0−1 k=1 a 2 k exp(−µ 1 n j=n0 a j ) ≤ (1 + c 2 L 2 1 ) n0 exp − µ 1 c(n 1−α − n 1−α 0 ) 1 − α n0−1 k=1 a 2 k (1 + c 2 L 2 1 ) −k ≤ (1 + c 2 L 2 1 ) n0+1 c 2 c 2 L 2 1 exp − µ 1 c(n 1−α − n 1−α 0 ) 1 − α .(41)
We now simplify the second term on the RHS of (40) as follows:
n k=n0 Γ 2 k = n k=n0 a 2 k ( n−1 j=k (1 − 2µ 1 a j + a 2 j L 2 1 )) ≤ n k=n0 a 2 k n j=k (1 − a j (2µ 1 − a j L 2 1 )) ≤ n k=n0 a 2 k exp(−µ 1 n j=k a j ) ≤ n k=n0 a 2 k exp − µ 1 c(n 1−α − k 1−α ) 1 − α ≤ exp − µ 1 cn 1−α 1 − α n k=n0 c 2 k 2α exp µ 1 ck 1−α 1 − α ≤ 2(µ 1 c) α 1−α c 2 1 − α 1 n α .(42)
Using (41) and (42) in (40), we obtain
n k=1 Γ 2 k = n0−1 k=1 Γ 2 k + n k=n0 Γ 2 k ≤ (1 + c 2 L 2 1 ) n0+1 c 2 c 2 L 2 1 exp − µ 1 c(n 1−α − n 1−α 0 ) 1 − α + 2(µ 1 c) α 1−α c 2 1 − α 1 n α .
Using the above bound in (29), we obtain
P(|z_n| − E[|z_n|] > ε) ≤ exp(−c n ε²). (43)
Using the bound on E[|z_n|] from Theorem 4.3 in (43), we obtain
P(|t_n − SR_λ(X)| ≤ C_2 exp(−µ_1 c n^{1−α}/(2(1 − α))) + C_3/n^{α/2}) ≥ 1 − δ,
where C 2 and C 3 are as defined in the theorem statement. Hence proved.
7 Proofs for SR optimization

7.1 Proof of Lemma 5.1
For a given t ∈ [t l , t u ], define
f m (t) = 1 m m i=1 ℓ ′ (ξ i (θ) − t) , and f (t) = E[ℓ ′ (ξ(θ) − t].
Let F denote the cumulative distribution function of ξ, and F_m denote the empirical distribution function, i.e., F_m(x) = (1/m) Σ_{i=1}^m I{ξ_i − t ≤ x}, for all x ∈ R. Then, we have f_m(t) = ∫ ℓ′ dF_m, and f(t) = ∫ ℓ′ dF.
Using the fact that ℓ′ is L_2-Lipschitz from Assumption 5.2, we obtain
|f_m(t) − f(t)| ≤ L_2 W_1(F_m, F), (44)
where W_1(F_1, F_2) = sup_h |E[h(X)] − E[h(Y)]|, with X ∼ F_1, Y ∼ F_2, and the supremum taken over all 1-Lipschitz functions h.
Applying Theorem 3.1 of Lei [2020] with p = 1, q = 2, d = 1 there, and using Assumption 4.1, we obtain
E[W_1(F_m, F)] ≤ ς M_0²/√m, leading to E|f_m(t) − f(t)| ≤ L_2 ς M_0²/√m.
In the above, ς is a universal constant.
Along similar lines, we can infer
E|f̃_m(t) − f̃(t)| ≤ (L_1 L_3 + M_2 L_2) ς M_0²/√m, (45)
where f̃_m(t) = (1/m) Σ_{i=1}^m ℓ′(ξ_i(θ) − t) ξ_i′(θ), and f̃(t) = E[ℓ′(ξ(θ) − t) ξ′(θ)].
Hence,
E|h′_m(θ) − dSR_λ(θ)/dθ| = E|A_m(θ)/B_m(θ) − A(θ)/B(θ)| ≤ (|B(θ)| E[|A_m(θ) − A(θ)|] + |A(θ)| E[|B_m(θ) − B(θ)|])/η² ≤ (|B(θ)| sup_{t∈[t_l,t_u]} E|f̃_m(t) − f̃(t)| + |A(θ)| sup_{t∈[t_l,t_u]} E[|f_m(t) − f(t)|])/η² ≤ (√β_1 (L_1 L_3 + M_2 L_2) ς M_0² + √β_1 L_2 ς M_0² M_2)/(η² √m),
where the final inequality used Assumptions 4.1, 5.1, 5.2 and 5.4. This proves the first claim.
For the second claim in the statement of the lemma, i.e.,
E h ′ m (θ) − dSR λ (θ) dθ 2 ≤ C 5 , we have E h ′ m (θ) − dSR λ (θ) dθ 2 = E A m (θ) B m (θ) − A(θ) B(θ) 2 = E B(θ)A m (θ) − A(θ)B(θ) + A(θ)B(θ) − A(θ)B m (θ) B m (θ)B 2 = E B(θ)(A m (θ) − A(θ)) − A(θ)(B m (θ) − B(θ)) B m (θ)B 2 ≤ 2B 2 (θ)E[|A m (θ) − A(θ)| 2 ] + 2A 2 (θ)E[|B m (θ) − B(θ)| 2 ] η 4 ≤ 4B 2 (θ)(E[A 2 m (θ)] + E[A 2 (θ)]) + 4A 2 (θ)(E[B 2 m (θ)] + E[B 2 (θ)]) η 4 ≤ 4β 1 (L 2 1 M 2 2 + β 1 M 2 2 ) + 4β 1 M 2 2 (L 2 1 + β 1 ) η 4 = 8β 1 M 2 2 (L 2 1 + β 1 ) η 4 = C 5 ,
where the final inequality used Assumptions 5.1, 5.2 and 5.4. Hence proved.
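As an illustration of the estimator analysed in Lemma 5.1, the following Python sketch forms the ratio A_m(θ)/B_m(θ) from a batch of m samples. The inner root-finding step producing the sample-level SR estimate t_m, and the availability of pathwise derivatives ξ_i′(θ), are assumptions made for concreteness; function and variable names are ours.

```python
import numpy as np
from scipy.optimize import brentq

def ubsr_grad_estimate(xi, dxi_dtheta, ell, ell_prime, lam, t_l, t_u):
    """Biased batch estimate of d SR_lambda(theta) / d theta.

    Sketch: h'_m(theta) = A_m / B_m with
      A_m = mean of ell'(xi_i - t_m) * xi_i'(theta),
      B_m = mean of ell'(xi_i - t_m),
    where t_m solves the sample-level equation mean ell(xi_i - t) = lam.
    `xi` and `dxi_dtheta` are arrays holding the m loss samples and their
    pathwise derivatives w.r.t. theta (assumed available).
    """
    # sample-level root of the (decreasing) map t -> mean ell(xi - t) - lam;
    # brentq assumes a sign change on [t_l, t_u]
    f = lambda t: np.mean(ell(xi - t)) - lam
    t_m = brentq(f, t_l, t_u)
    w = ell_prime(xi - t_m)
    A_m = np.mean(w * dxi_dtheta)
    B_m = np.mean(w)
    return A_m / B_m
```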
7.2 Proof of Theorem 5.1
We first rewrite the update rule (10) as follows:
θ_n = θ_{n−1} − a_n h′_m(θ_{n−1}) = θ_{n−1} − a_n (h′(θ_{n−1}) + ε_{n−1}), where ε_{n−1} = h′_m(θ_{n−1}) − h′(θ_{n−1}). Letting z_n = θ_n − θ*, we have z_n = z_{n−1} − a_n (h′(θ_{n−1}) + ε_{n−1}).
Let M_k = ∫_0^1 h″(m θ_k + (1 − m) θ*) dm. Then, h′(θ_n) = ∫_0^1 h″(m θ_n + (1 − m) θ*) dm · (θ_n − θ*) = M_n z_n, and z_n = z_{n−1}(1 − a_n M_{n−1}) − a_n ε_{n−1}.
Unrolling the equation above, we obtain
z n = z 0 n k=1 (1 − a k M k−1 ) − n k=1 [a k ε k−1 n j=k+1 (1 − a j M j−1 )].
Taking expectations, using Jensen's inequality together with the fact a − b 2 ≤ 3 a 2 + 3 b 2 , we obtain
E[ z n 2 ] ≤ 3E[ z 0 2 ] n k=1 (1 − a k M k−1 ) 2 + 3E[ n k=1 [a k ε k−1 n j=k+1 (1 − a j M j−1 )] 2 ≤ 3E[ z 0 2 ] n k=1 (1 − a k M k−1 ) 2 + 3E[( n k=1 [a k ε k−1 n j=k+1 (1 − a j M j−1 )) 2 ] ≤ 3E[ z 0 2 ](P 1:n ) 2 + 3E[( n k=1 a k ε k−1 P k+1:n ) 2 ] (where P i:j = j k=i (1 − a k M k−1 )) ≤ 3E[ z 0 2 ] exp c 2 L 2 4 π 2 6 n −2µ2c + 3E[( n l=1 n k=1
[a k a l ε l−1 ε k−1 P k+1:n P l+1:n )]
≤ 3E[ z 0 2 ] exp c 2 L 2 4 π 2 6 n −2µ2c + 3E[ n k=1
a 2 k ε 2 k−1 (P k+1:n ) 2 + n k =l a k a l ε l−1 ε k−1 P k+1:n P l+1:n ]
≤ 3E[ z 0 2 ] exp c 2 L 2 4 π 2 6 n −2µ2c + 3 n k=1 c 2 k 2 E[ε 2 k−1 ](P k+1:n ) 2 I + 3 n k =l a k a l E[|ε l−1 |]E[|ε k−1 |]P k+1:n P l+1:n II .(46)
We bound P 2 i:j as follows:
P 2 i:j = j k=i (1 − a k M k−1 ) 2 = j k=i (1 + a 2 k M 2 k−1 − 2a k M k−1 ) ≤ exp j k=i (a 2 k M 2 k−1 − 2a k M k−1 ) ≤ exp j k=i (a 2 k L 2 4 − 2a k µ 2 ) ≤ exp c 2 L 2 4 π 2 6 e − j k=i 2a k µ2 ≤ exp c 2 L 2 4 π 2 6 i j 2µ2c .
We now bound term (I) using Lemma 5.1 as follows:
I = n k=1 c 2 k 2 E[ε 2 k−1 ](P k+1:n ) 2 ≤ C 5 exp c 2 L 2 4 π 2 6 n k=1 c 2 k 2 k + 1 n 2µ2c ≤ C 5 exp c 2 L 2 4 π 2 6 2 2µ2c c 2 (2µ 2 c − 1) 1 n .(47)
Next, using Lemma 5.1, we bound the term (II) on the RHS of (46) as follows:
II = Σ_{k≠l} a_k a_l E[|ε_{l−1}|] E[|ε_{k−1}|] P_{k+1:n} P_{l+1:n} ≤ (C_4²/m) Σ_{k≠l} a_k a_l P_{k+1:n} P_{l+1:n}. (48)
The main claim follows by substituting the bounds obtained in (47) and (48) in (46).

7.3 Proof of Theorem 5.2

We state and prove three useful results in the following lemmas, which aid the proof of Theorem 5.2.

Lemma 7.1 Suppose Assumptions 3.1 to 4.2 hold for all θ ∈ Θ and Assumptions 5.1 to 5.4 hold. Then for all m ≥ 1,
E[|h′_m(θ)|] ≤ B + C_4/√m, and E[h′_m(θ)²] ≤ C_5 + 2BC_4/√m + B², where B = L_1 M_2/η.
Proof The proof of this lemma follows directly from Lemma 5.1. Let h′(θ) = dSR_λ(θ)/dθ. We first provide an upper bound for |h′(θ)|.
|h ′ (θ)| = |E[(ℓ ′ (ξ(θ) − h(θ))ξ ′ (θ)]| |E[(ℓ ′ (ξ(θ) − h(θ))]| ≤ |E[(ℓ ′ (ξ(θ) − h(θ))ξ ′ (θ)]| η ≤ |E[(ℓ ′ (ξ(θ) − h(θ))|ξ ′ (θ)|]| η ≤ L 1 M 2 η = B.(50)
Using the fact that |x| − |y| ≤ |x − y| for any x, y ∈ R followed by an application of Lemma 5.1, we obtain
E[|h ′ m (θ)|] ≤ E[|h ′ m (θ) − h ′ (θ)|] + E[|h ′ (θ)|] ≤ C 4 √ m + |h ′ (θ)|.(51)
Using (|x| − |y|) 2 ≤ (|x − y|) 2 for any x, y ∈ R, we obtain
E[h ′ m (θ) 2 ] ≤ E[(h ′ m (θ) − h ′ (θ)) 2 ] + 2E[|h ′ m (θ)|]|h ′ (θ)| − h ′ (θ) 2 ≤ C 5 + 2 C 4 √ m + |h ′ (θ)| |h ′ (θ)| − h ′ (θ) 2 = C 5 + 2 C 4 √ m |h ′ (θ)| + h ′ (θ) 2 ≤ C 5 + 2BC 4 √ m + B 2 ,
where the second inequality follows from Lemma 5.1 and (51). The last inequality follows from (50).
Lemma 7.2 Suppose Assumptions 3.1 to 4.2 hold for all θ ∈ Θ and Assumptions 5.1 to 5.4, 5.7 and 5.6 hold. Suppose that the update in (18) is performed for n steps with step-size sequence {a k } n k=1 . Then for any 1 < k 0 < k 1 ≤ n,
k1 k=k0 2a k E[h(θ k ) − h(θ k0 )] ≤ k1 k=k0 (2a k DA k + a 2 k B k ),(52)
where
A k = C4 √ m k , B k = C 5 + 2BA k + B 2 . Proof Let δ k = h ′ m (θ k ) − h ′ (θ k ) and ζ k = |θ k − θ k0 |. From (18), we obtain ζ 2 k+1 = (Π Θ (θ k − a k h ′ m (θ k )) − θ k0 ) 2 ≤ (θ k − a k h ′ m (θ k ) − θ k0 ) 2 (53) = ζ 2 k − 2a k h ′ m (θ k )(θ k − θ k0 ) + a 2 k h ′ m (θ k ) 2 = ζ 2 k − 2a k (δ k + h ′ (θ k ))(θ k − θ k0 ) + a 2 k h ′ m (θ k ) 2 = ζ 2 k − 2a k δ k (θ k − θ k0 ) − 2a k h ′ (θ k )(θ k − θ k0 ) + a 2 k h ′ m (θ k ) 2 .(54)
The inequality in (53) holds because θ k0 belongs to the set Θ, and the operator Π Θ is non-expansive.
Taking expectation on both sides of (54), and using Lemma 7.1, we obtain
E[ζ 2 k+1 ] ≤ E[ζ 2 k ] − 2a k E[h ′ (θ k )(θ k − θ k0 )] − 2a k E[δ k (θ k − θ k0 )] + a 2 k [C 5 + 2BC 4 √ m k + B 2 ] ≤ E[ζ 2 k ] − 2a k E[h ′ (θ k )(θ k − θ k0 )] + 2a k C 4 √ m k |θ k − θ k0 | + a 2 k [C 5 + 2BC 4 √ m k + B 2 ] = E[ζ 2 k ] − 2a k E[h ′ (θ k )(θ k − θ k0 )] + 2a k A k ζ k + a 2 k [C 5 + 2BA k + B 2 ] ≤ E[ζ 2 k ] − 2a k E[h(θ k ) − h(θ k0 )] + 2a k A k ζ k + a 2 k B k ,(55)
where the second inequality follows from Lemma 5.1, while the last inequality follows from Assumption 5.6. Rearranging the terms in (55), we obtain
2a k E[h(θ k ) − h(θ k0 )] ≤ E[ζ 2 k ] − E[ζ 2 k+1 ] + 2a k A k ζ k + a 2 k B k .
Summing over k = k_0 to k_1 and using Assumption 5.7 to bound ζ_k with D, we get (52).
Lemma 7.3 Suppose Assumptions 3.1 to 4.2 hold for all θ ∈ Θ and Assumptions 5.1 to 5.4, 5.6 and 5.7 hold. Then, with a k = a and m k = m, ∀k ≥ 1,
n k=1 E[h(θ k ) − h(θ * )] ≤ D 2 2a + 2nDA + naB 2 2 ,(56)where A = C4 √ m . Proof Let δ k = h ′ m (θ k ) − h ′ (θ k ) and ρ k+1 = θ k − a k (h ′ (θ k ) + δ k ). Using convexity of h(θ), we obtain h(θ k ) − h(θ * ) ≤ h ′ (θ k )(θ k − θ * ) = θ k − ρ k+1 a k − δ k (θ k − θ * ) = 1 a k (θ k − ρ k+1 − a k δ k )(θ k − θ * ) = 1 2a k (θ k − θ * ) 2 + (θ k − ρ k+1 − a k δ k ) 2 − (ρ k+1 − θ * + a k δ k ) 2 (57) = 1 2a k (θ k − θ * ) 2 − (ρ k+1 − θ * + a k δ k ) 2 + a k 2 h ′ (θ k ) 2 ,
where the equality in (57) is obtained using ab = 1 2 (a 2 + b 2 − (a − b) 2 ). Using h ′ (θ k ) ≤ B from (50), we obtain
h(θ k ) − h(θ * ) ≤ 1 2a k (θ k − θ * ) 2 − (ρ k+1 − θ * ) 2 − a 2 k δ 2 k − 2a k (ρ k+1 − θ * )δ k + a k 2 B 2 ≤ 1 2a k (θ k − θ * ) 2 − (ρ k+1 − θ * ) 2 − 2a k (ρ k+1 − θ * )δ k + a k 2 B 2 .
Taking expectations, and using (ρ k+1 − θ * ) 2 ≥ (θ k+1 − θ * ) 2 , we obtain
E[h(θ k ) − h(θ * )] ≤ 1 2a k E[(θ k − θ * ) 2 ] − E[(θ k+1 − θ * ) 2 ] − 2a k E[|θ k+1 − θ * ||δ k |] + a k 2 B 2 ≤ 1 2a k E[(θ k − θ * ) 2 ] − E[(θ k+1 − θ * ) 2 ] + 2a k A k E[|θ k+1 − θ * |] + a k 2 B 2 .(58)
By summing (58) over k, and using a k = a and m k = m along with the inequality |θ k − θ * | ≤ D, ∀k ≥ 1, we obtain (56).
Proof of Theorem 5.2:
For 0 ≤ i ≤ p + 1, define ν i as follows:
ν i = arg inf ni<k≤ni+1 E[h(θ k )], i ∈ [p + 1], and ν 0 = arg inf ⌈ n 4 ⌉<k≤n1 E[h(θ k )].(59)
The horizon n is split into p phases with each phase having a constant step-size and batch-size. We need to show that the final iterate θ n is close to an optimal θ * . Using ν p+1 = n, we obtain
E[h(θ n )] = E[h(θ ν0 )] + p i=0 E[h(θ νi+1 ) − h(θ νi )].(60)
In order to bound E[h(θ νi+1 ) − h(θ νi )], consider the case when i ≥ 1. Using Lemma 7.2 with k 0 = ν i and k 1 = n i+2 , we obtain
ni+2 k=νi 2a k E[h(θ k ) − h(θ νi )] n i+2 − ν i + 1 ≤ ni+2 k=νi (2a k DA k + a 2 k B k ) n i+2 − ν i + 1 ≤ 2a ni+1 DA ni+1 + a 2 ni+1 B ni+1 ,(61)
where the inequality in (61) follows from the fact that a_k is a nonincreasing sequence and m_k is a nondecreasing sequence, so that A_k and B_k are nonincreasing sequences as well. Also note that ν_i ≥ n_i + 1. Now we define the step-size a_k and the batch size m_k as polynomial functions of n as follows:
a k = a 0 2 −i n α1 , and m k = 2 i n α2 ,
for some positive constants a 0 , α 1 and α 2 when n i < k ≤ n i+1 , 0 ≤ i ≤ p. Substituting a k and m k in (61), we get ni+2 k=νi 2a k E[h(θ k ) − h(θ νi )] n i+2 − ν i + 1 ≤ 2DC 4 a 0 2 −3i/2 n α1+α2/2 + a 2 0 2 −2i n 2α1 C 5 + 2BC 4 2 i/2 n α2/2 + B 2 .
Next, we derive a lower bound for the expression on the left hand side of (63). Using E[h(θ k ) − h(θ νi )] ≥ 0 whenever n i < k ≤ n i+1 , we obtain
ni+2 k=νi 2a k E[h(θ k ) − h(θ νi )] n i+2 − ν i + 1 ≥ ni+2 k=ni+1+1 2a k E[h(θ k ) − h(θ νi )] n i+2 − ν i + 1 ≥ 2a ni+2 n i+2 − n i+1 n i+2 − n i E[h(θ νi+1 ) − h(θ νi )] ≥ 2a ni+2 5 E[h(θ νi+1 ) − h(θ νi )] = 2 −i a 0 5n α1 E[h(θ νi+1 ) − h(θ νi )],(64)
where the second inequality follows from the assumption E[h(θ νi+1 ) − h(θ νi )] ≥ 0, and the fact that n i+2 − n i+1 ≥ n i+2 −ν i +1. The last inequality follows from Lemma 4 of Bhavsar and Prashanth [2022]. Combining the inequalities in (63) and (64), we obtain E[h(θ νi+1 ) − h(θ νi )] ≤ 10DC 4 2 −i/2 n α2/2 + 5a 0 2 −i n α1 C 5 + 2BC 4 2 i/2 n α2/2 + B 2 .
The proof for the case when i = 0 is similar to the above. Using (65) in (60), we obtain
E[h(θ n )] ≤ E[h(θ ν0 )] + p i=0
10DC 4 2 −i/2 n α2/2 + 5a 0 2 −i n α1 C 5 + 2BC 4 2 i/2 n α2/2 + B 2 ≤ E[h(θ ν0 )] + ∞ i=0 10DC 4 2 −i/2 n α2/2 + 5a 0 2 −i n α1 C 5 + 2BC 4 2 i/2 n α2/2 + B 2 = E[h(θ ν0 )] + 10DC 4 n α2/2 (1 − 1/ √ 2) + 5(C 5 + B 2 )a 0 n α1 (1 − 1/2) + 10BC 4 a 0 n α1+α2/2 (1 − 2 −3/2 ) ≤ inf ⌈ n 4 ⌉≤k≤n1 E[h(θ k )] + 35DC 4 n α2/2 + 10(C 5 + B 2 )a 0 n α1 + 16BC 4 a 0 n α1+α2/2 .
For k ≤ n 1 , a k = a0 n α 1 and m k = n α2 . Using the fact that infimum is smaller than the weighted average, we obtain
inf ⌈ n 4 ⌉≤k≤n1 E[h(θ k ) − h(θ * )] ≤ 1 n 1 − ⌈ n 4 ⌉ + 1 n1 k=⌈ n 4 ⌉ E[h(θ k ) − h(θ * )] ≤ 2 n 1 n1 k=1 E[h(θ k ) − h(θ * )](67)≤ 2 n 1 D 2 n α1 2a 0 + 2n 1 DC 4 n α2/2 + n 1 a 0 B 2 2 (68) ≤ 4D 2 a 0 n 1−α1 + 4DC 4 n α2/2 + a 0 B 2 n α1 ,(69)
where (67) follows from n 1 ≤ 2(n 1 − ⌈ n 4 ⌉ + 1), (68) follows from Lemma 7.3 and (69) follows from the fact that n 1 ≥ n 4 . Using (69) in (66), we obtain the following:
E[h(θ n ) − h(θ * )] ≤ 4D 2 a 0 n 1−α1 + 39DC 4 n α2/2 + (10C 5 + 11B 2 )a 0 n α1 + 16BC 4 a 0 n α1+α2/2 .
The values of α_1 and α_2 that result in the tightest bound are 1/2 and 1, respectively. Substituting these values, we get the main claim of Theorem 5.2.
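A sketch of the resulting phase-wise scheme is given below. The phase boundaries n_i = n(1 − 2^{−i}) are an illustrative choice only (the precise construction follows Lemma 4 of Bhavsar and Prashanth [2022]), and grad_estimate stands for any batch-based gradient estimate such as the one sketched after Lemma 5.1; all names are ours.

```python
import numpy as np

def ubsr_sgd(theta0, grad_estimate, n, a0=1.0, proj=lambda x: x, rng=None):
    """Phase-wise SGD scheme in the spirit of Theorem 5.2.

    Within phase i the step size is a0 * 2**(-i) / sqrt(n) and the batch
    size is 2**i * n (alpha_1 = 1/2, alpha_2 = 1).  The phase boundaries
    n_i = n * (1 - 2**(-i)) are an illustrative choice, not the paper's.
    `grad_estimate(theta, m, rng)` returns a batch-of-m gradient estimate.
    """
    rng = rng or np.random.default_rng()
    p = int(np.log2(n)) if n > 1 else 1
    boundaries = [int(n * (1.0 - 2.0 ** (-i))) for i in range(p + 1)] + [n]
    theta = theta0
    for i in range(len(boundaries) - 1):
        a_k = a0 * 2.0 ** (-i) / np.sqrt(n)     # constant step size in phase i
        m_k = int(2 ** i * n)                   # constant batch size in phase i
        for _ in range(boundaries[i], boundaries[i + 1]):
            theta = proj(theta - a_k * grad_estimate(theta, m_k, rng))
    return theta
```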
8 Concluding Remarks
We considered the problem of estimating Utility Based Shortfall Risk (UBSR) in an online setting, when samples from the underlying loss distribution are available one sample at a time. We cast the UBSR estimation problem as a stochastic approximation based root finding scheme. We derived non-asymptotic convergence guarantees on the mean-squared error of our UBSR estimator for different step sizes. We also derived high probability bounds for the concentration of the estimation error.
Finally we considered the UBSR optimization problem, when the loss distribution belongs to a parameterized family. We proposed a stochastic gradient descent scheme, and derived non-asymptotic convergence guarantees under finite second moments. We faced the challenge of working with biased gradient estimates, which we addressed using batching. More broadly, the techniques developed in this work are applicable in a variety of settings, to characterize the finite sample performance of stochastic approximation and SGD algorithms.
As future work, it would be interesting to explore UBSR optimization in a risk-sensitive reinforcement learning setting. An orthogonal direction of future research is to extend the UBSR optimization algorithm to a vector parameter context, using a gradient estimation scheme based on finite differences, and the simultaneous perturbation method.
References

P. Artzner, F. Delbaen, J. Eber, and D. Heath. Coherent measures of risk. Mathematical Finance, 9(3):203-228, 1999.
H. Föllmer and A. Schied. Convex measures of risk and trading constraints. Finance and Stochastics, 6(4):429-447, 2002.
A. Kagrecha, J. Nair, and K. Jagannathan. Distribution oblivious, risk-aware algorithms for multi-armed bandits with unbounded rewards. In Advances in Neural Information Processing Systems, pages 11269-11278, 2019.
A. Cassel, S. Mannor, and A. Zeevi. A general approach to multi-armed bandits under risk criteria. In Proceedings of the 31st Conference On Learning Theory, pages 1295-1306, 2018.
A. K. Pandey, L. A. Prashanth, and S. P. Bhat. Estimation of spectral risk measures. In AAAI Conference on Artificial Intelligence, 2021.
P. Thomas and E. Learned-Miller. Concentration inequalities for conditional value at risk. In International Conference on Machine Learning, pages 6225-6233, 2019.
J. Dunkel and S. Weber. Stochastic root finding and efficient estimation of convex risk measures. Operations Research, 58(5):1505-1521, 2010.
Sanjay P. Bhat and L. A. Prashanth. Concentration of risk measures: A Wasserstein distance approach. Advances in Neural Information Processing Systems, 32:11762-11771, 2019.
L. A. Prashanth, K. Jagannathan, and R. K. Kolla. Concentration bounds for CVaR estimation: The cases of light-tailed and heavy-tailed distributions. In International Conference on Machine Learning, 2020.
L. A. Prashanth, J. Cheng, M. C. Fu, S. I. Marcus, and C. Szepesvári. Cumulative prospect theory meets reinforcement learning: prediction and control. In International Conference on Machine Learning, pages 1406-1415, 2016.
Y. Wang and F. Gao. Deviation inequalities for an estimator of the conditional value-at-risk. Operations Research Letters, 38(3):236-239, 2010.
D. B. Brown. Large deviations bounds for estimating conditional value-at-risk. Operations Research Letters, 35(6):722-730, 2007.
Z. Mhammedi, B. Guedj, and R. C. Williamson. PAC-Bayesian bound for the conditional value at risk. In Advances in Neural Information Processing Systems, volume 33, pages 17919-17930, 2020.
J. Lee, S. Park, and J. Shin. Learning bounds for risk-sensitive learning. In Advances in Neural Information Processing Systems, volume 33, pages 13867-13879, 2020.
Z. Hu and Z. Dali. Convex risk measures: efficient computations via Monte Carlo. Available at SSRN 2758713, 2016.
H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400-407, 1951.
V. Borkar. Stochastic Approximation: A Dynamical Systems Viewpoint. Cambridge University Press, 2008.
R. T. Rockafellar and S. Uryasev. Optimization of conditional value-at-risk. Journal of Risk, 2:21-42, 2000.
L. Bottou, F. E. Curtis, and J. Nocedal. Optimization methods for large-scale machine learning. SIAM Review, 60(2):223-311, 2018.
P. Jain, D. Nagaraj, and P. Netrapalli. Making the last iterate of SGD information theoretically optimal. SIAM Journal on Optimization, 31(2):1108-1130, 2021.
O. Bardou, N. Frikha, and G. Pagès. Computing VaR and CVaR using stochastic approximation and adaptive unconstrained importance sampling. Monte Carlo Methods and Applications, 15(3):173-210, 2009.
Bernard Bercu, Manon Costa, and Sébastien Gadat. Stochastic approximation algorithms for superquantiles estimation. arXiv preprint arXiv:2007.14659, 2020.
M. Costa and S. Gadat. Non asymptotic controls on a recursive superquantile approximation. Electronic Journal of Statistics, 15(2):4718-4769, 2021.
H. Cardot, P. Cénac, and A. Godichon. Online estimation of the geometric median in Hilbert spaces: non asymptotic confidence balls, 2015.
H. Cardot, P. Cénac, and P. Zitt. Efficient and fast estimation of the geometric median in Hilbert spaces with an averaged stochastic gradient algorithm, 2011.
A. Godichon-Baggioni. Estimating the geometric median in Hilbert spaces with stochastic gradient algorithms: Lp and almost sure rates of convergence. Journal of Multivariate Analysis, 146:209-222, 2016.
J. C. Duchi, M. I. Jordan, M. J. Wainwright, and A. Wibisono. Finite sample convergence rates of zero-order stochastic optimization methods. In Neural Information Processing Systems, pages 1448-1456, 2012.
K. Balasubramanian and S. Ghadimi. Zeroth-order (non)-convex stochastic optimization via conditional gradient and gradient updates. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pages 3459-3468, 2018.
Nirav Bhavsar and L. A. Prashanth. Non-asymptotic bounds for stochastic optimization with biased noisy gradient oracles. IEEE Transactions on Automatic Control, 2022 (to appear).
R. Pasupathy, P. Glynn, S. Ghosh, and F. S. Hashemi. On sampling rates in simulation-based recursions. SIAM Journal on Optimization, 28(1):45-73, 2018.
L. A. Prashanth and Sanjay P. Bhat. A Wasserstein distance approach for concentration of empirical risk estimates. arXiv preprint arXiv:1902.10709, 2020.
Greg M. Gupton, Christopher C. Finger, and Mickey Bhatia. CreditMetrics Technical Document. J.P. Morgan, New York, 1997.
Zvi Bodie. Shortfall risk and pension fund asset management. Financial Analysts Journal, 47(3):57-61, 1991.
Hans Föllmer and Alexander Schied. Stochastic Finance. de Gruyter, 2016.
N. Frikha and S. Menozzi. Concentration bounds for stochastic approximations. Electron. Commun. Probab., 17(47):1-15, 2012.
L. Gerencsér. Convergence rate of moments in stochastic approximation with simultaneous perturbation gradient approximation and resetting. IEEE Trans. Autom. Contr., 44(5):894-905, 1999.
B. T. Polyak and A. B. Juditsky. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838-855, 1992.
D. Ruppert. Stochastic approximation. Handbook of Sequential Analysis, pages 503-529, 1991.
M. Fathi and N. Frikha. Transport-entropy inequalities and deviation estimates for stochastic approximation schemes. Electronic Journal of Probability, 18, 2013.
S. Ghadimi and G. Lan. Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM J. Optim., 23:2341-2368, 2013.
Y. F. Atchade, G. Fort, and E. Moulines. On stochastic proximal gradient algorithms. arXiv preprint arXiv:1402.2365, 2014.
Alekh Agarwal, Peter L. Bartlett, Pradeep Ravikumar, and Martin J. Wainwright. Information-theoretic lower bounds on the oracle complexity of stochastic convex optimization. IEEE Transactions on Information Theory, 58(5):3235-3249, 2012.
E. Moulines and F. Bach. Non-asymptotic analysis of stochastic approximation algorithms for machine learning. Advances in Neural Information Processing Systems, 24:451-459, 2011.
B. Karimi, B. Miasojedow, E. Moulines, and H. Wai. Non-asymptotic analysis of biased stochastic approximation scheme. In Conference on Learning Theory, pages 1944-1974. PMLR, 2019.
O. Devolder. Stochastic first order methods in smooth convex optimization. Technical report, CORE, 2011.
Nithia Vijayan and L. A. Prashanth. Smoothed functional-based gradient algorithms for off-policy reinforcement learning: A non-asymptotic viewpoint. arXiv preprint arXiv:2101.02137, 2021.
L. A. Prashanth, N. Korda, and R. Munos. Concentration bounds for temporal difference learning with linear function approximation: the case of batch data and uniform sampling. Machine Learning, 110(3):559-618, 2021.
J. Lei. Convergence and concentration of empirical measures under Wasserstein distance in unbounded functional spaces. Bernoulli, 26(1):767-798, 2020.
| []
|
[
"Giant-atom entanglement in waveguide-QED systems including non-Markovian effect",
"Giant-atom entanglement in waveguide-QED systems including non-Markovian effect"
]
| [
"Xian-Li Yin \nDepartment of Physics\nSynergetic Innovation Center for Quantum Effects and Applications\nKey Laboratory of Low-Dimensional Quantum Structures and Quantum Control of Ministry of Education\nKey Laboratory for Matter Microstructure and Function of Hunan Province\nHunan Normal University\n410081ChangshaChina\n",
"Jie-Qiao Liao \nDepartment of Physics\nSynergetic Innovation Center for Quantum Effects and Applications\nKey Laboratory of Low-Dimensional Quantum Structures and Quantum Control of Ministry of Education\nKey Laboratory for Matter Microstructure and Function of Hunan Province\nHunan Normal University\n410081ChangshaChina\n"
]
| [
"Department of Physics\nSynergetic Innovation Center for Quantum Effects and Applications\nKey Laboratory of Low-Dimensional Quantum Structures and Quantum Control of Ministry of Education\nKey Laboratory for Matter Microstructure and Function of Hunan Province\nHunan Normal University\n410081ChangshaChina",
"Department of Physics\nSynergetic Innovation Center for Quantum Effects and Applications\nKey Laboratory of Low-Dimensional Quantum Structures and Quantum Control of Ministry of Education\nKey Laboratory for Matter Microstructure and Function of Hunan Province\nHunan Normal University\n410081ChangshaChina"
]
| []
| We study the generation of quantum entanglement between two giant atoms coupled to a common onedimensional waveguide. Here each giant atom interacts with the waveguide at two separate coupling points. Within the Wigner-Weisskopf framework for single coupling points, we obtain the time-delayed quantum master equations governing the evolution of the two giant atoms for three different coupling configurations: separated, braided, and nested couplings. For each coupling configuration, we consider both the Markovian and non-Markovian entanglement dynamics of the giant atoms, which are initially in two different separable states: single-and double-excitation states. Our results show that the generated entanglement depends on the phase shift, time delay, atomic initial state, and the coupling configuration. For the single-excitation initial state, there exists the steady-state entanglement for each coupling in both the Markovian and non-Markovian regimes due to the appearance of the dark state. For the double-excitation initial state, we observe entanglement sudden birth via adjusting the phase shift in both regimes. In particular, the maximally achievable entanglement for the nested coupling is about one order of magnitude larger than those of separate and braided couplings. We also find that the maximal entanglement for these three coupling configurations can be enhanced in the case of small time delays. This work can be utilized for the generation and control of entanglement in quantum networks based on giant-atom waveguide-QED systems, which have wide potential applications in quantum information processing. | null | [
"https://export.arxiv.org/pdf/2303.14746v1.pdf"
]
| 257,766,413 | 2303.14746 | 919e11fab93f682ab350c7b9e64881eb3a0a8081 |
Giant-atom entanglement in waveguide-QED systems including non-Markovian effect
26 Mar 2023
Xian-Li Yin
Department of Physics
Synergetic Innovation Center for Quantum Effects and Applications
Key Laboratory of Low-Dimensional Quantum Structures and Quantum Control of Ministry of Education
Key Laboratory for Matter Microstructure and Function of Hunan Province
Hunan Normal University
410081ChangshaChina
Jie-Qiao Liao
Department of Physics
Synergetic Innovation Center for Quantum Effects and Applications
Key Laboratory of Low-Dimensional Quantum Structures and Quantum Control of Ministry of Education
Key Laboratory for Matter Microstructure and Function of Hunan Province
Hunan Normal University
410081ChangshaChina
(Dated: March 28, 2023)
We study the generation of quantum entanglement between two giant atoms coupled to a common onedimensional waveguide. Here each giant atom interacts with the waveguide at two separate coupling points. Within the Wigner-Weisskopf framework for single coupling points, we obtain the time-delayed quantum master equations governing the evolution of the two giant atoms for three different coupling configurations: separated, braided, and nested couplings. For each coupling configuration, we consider both the Markovian and non-Markovian entanglement dynamics of the giant atoms, which are initially in two different separable states: single-and double-excitation states. Our results show that the generated entanglement depends on the phase shift, time delay, atomic initial state, and the coupling configuration. For the single-excitation initial state, there exists the steady-state entanglement for each coupling in both the Markovian and non-Markovian regimes due to the appearance of the dark state. For the double-excitation initial state, we observe entanglement sudden birth via adjusting the phase shift in both regimes. In particular, the maximally achievable entanglement for the nested coupling is about one order of magnitude larger than those of separate and braided couplings. We also find that the maximal entanglement for these three coupling configurations can be enhanced in the case of small time delays. This work can be utilized for the generation and control of entanglement in quantum networks based on giant-atom waveguide-QED systems, which have wide potential applications in quantum information processing.
I. INTRODUCTION
Quantum entanglement is the key resource for various quantum information applications [1][2][3], such as quantum key distribution [4,5], quantum dense coding [6], quantum teleportation [7], and quantum-computing technology [8]. The generation of quantum entanglement has been theoretically and experimentally studied in a variety of systems, such as optical systems [9], trapped ion systems [10,11], cavity quantum electrodynamics (QED) systems [12,13], circuit-QED systems [14][15][16], and waveguide-QED systems [17][18][19]. In particular, waveguide-QED systems provide an outstanding platform for generating long-distance quantum entanglement, and hence they can be regarded as a promising candidate for quantum information processing [20][21][22][23][24][25][26][27][28].
In traditional waveguide-QED systems, the atoms interact with the waveguide at single points and are commonly treated as point-like objects, an assumption known as the dipole approximation [29]. This approximation is valid in quantum optics when the atoms are much smaller than the wavelength of the coupled fields. However, recent experimental and theoretical advances on giant atoms [30] indicate that this approximation becomes invalid when superconducting qubits are coupled either to surface acoustic waves (SAWs) or to microwave waveguides at multiple coupling points. Typically, quantum interference takes place in coupled quantum systems with multiple coupling points. It has been shown that this interference effect gives rise to interesting physical phenomena, such as frequency-dependent Lamb shifts and relaxation rates [31], decoherence-free interatomic interaction [32][33][34][35], unconventional bound states [36][37][38][39][40], non-Markovian decay dynamics [41][42][43][44][45][46][47], and single-photon scattering [48][49][50][51].
* Corresponding author: [email protected]
Recently, many schemes have been proposed to generate long-range quantum entanglement between distant emitters in traditional waveguide-QED systems [52][53][54][55][56]. In addition, to increase the maximally achievable entanglement, chiral waveguide setups have also been used to generate twoqubit entanglement [57][58][59][60]. However, compared to the traditional waveguide-QED systems, quantum interference effects are more abundant and adjustable in giant-atom waveguide-QED systems. Consequently, the generation of entanglement between giant atoms can exhibit new features that cannot appear for small atoms [61,62]. Moreover, the non-Markovian retarded effect should be considered when the propagating time of photons between coupling points is comparable to the atomic lifetime. In this scenario, an interesting question is the study of the joint influence of quantum interference effect and the non-Markovian retarded effect on the entanglement generation.
In this paper we study the generation of quantum entanglement between two giant atoms with three different coupling configurations: the separate, braided, and nested couplings [32]. Here, each giant atom couples to a common waveguide at two separate coupling points. Based on the Wigner-Weisskopf theory [63] for single coupling points, we obtain the time-delayed quantum master equation of the two giant atoms for three different couplings. It can be shown that the time-delayed quantum master equation will reduce to the local quantum master equation derived through the SLH formalism in Ref. [32] under the Markovian limit (i.e., the case of the zero time delay). Concretely, we focus on the entan-glement dynamics of the two giant atoms by considering two different initially separable states. For a certain initial state and finite value of the time delay, we find that the entanglement dynamics between the two giant atoms can exhibit different features due to their different coupling configurations. In the case of the Markovian regime and the initial singleexcitation state, the maximally achievable entanglement of both the braided and nested couplings can exceed 0.5. However, for the separate coupling, the maximal entanglement can only reach 0.5, which is consistent with the small-atom case. When the giant atoms are initially in the double-excitation state, the maximal entanglement of the nested coupling is about one order of magnitude larger than those of the other two couplings. Moreover, we can observe the sudden birth of entanglement for these three couplings by adjusting the phase shift. In the case of the non-Markovian regime and the singleexcitation initial state, the generation of entanglement is delayed when the time delay takes a finite value. This indicates that the non-Markovian retarded effect works. Meanwhile, this non-Markovian effect also reduces the value of the stationary entanglement. For the double-excitation initial state, we find that the maximal entanglement for these three couplings can be enhanced for a small value of the time delay. As the time delay increases to much larger than the lifetime of the giant atoms, there is no entanglement generation in these three couplings for both the single-and double-excitation initial states.
The rest of this paper is organized as follows. In Sec. II, we introduce the physical system for two giant atoms coupled to a common waveguide and present the Hamiltonians. In Sec. III, we derive the time-delayed quantum master equations of the two giant atoms for three different coupling configurations and analyze the mechanism of entanglement generation of the two giant atoms. In Sec. IV, we study the entanglement dynamics between the two giant atoms in different phase shifts, time delays, and initial states. Finally, we conclude this paper in Sec. V.
II. SYSTEM AND HAMILTONIANS
We consider a two-giant-atom waveguide-QED system, in which each giant atom interacts with a common onedimensional waveguide through two separate coupling points, as shown in Fig. 1. By changing the arrangement of the coupling points, there are three different coupling configurations: separated [ Fig. 1(a)], braided [ Fig. 1(b)], and nested [ Fig. 1(c)] couplings. The locations of these coupling points are labelled by the coordinates x jn , with j = a, b marking the giant atoms and n = 1, 2 denoting the two coupling points of each atom. Under the rotating-wave approximation (RWA), the Hamiltonian of the system reads ( = 1) [51]
Ĥ = Ĥ_0 + Ĥ_I, (1)
where
Ĥ_0 = ω_0 Σ_{j=a,b} σ̂_j^+ σ̂_j^- + Σ_k ω_k ĉ_k^† ĉ_k, (2)
FIG. 1. Schematic of the coupling configurations for double two-level giant atoms with energy separation ω_0 interacting with a common waveguide: (a) separate, (b) braided, and (c) nested couplings. The positions of the coupling points are labelled by x_{jn}, with j = a, b and n = 1, 2 referring to the giant atoms and the coupling points, respectively. In all panels, the two giant atoms are initially prepared in two different separable states. Here θ_0 = k_0 d is the phase shift accumulated when a single photon propagates between neighboring coupling points of the giant atoms with the waveguide.
and
Ĥ_I = Σ_{j=a,b} Σ_{n=1,2} Σ_k (g_{jn} e^{i k x_{jn}} ĉ_k σ̂_j^+ + H.c.). (3)
Here ω_0 is the transition frequency between the excited state |e⟩_j and the ground state |g⟩_j of the giant atoms. The operator σ̂_j^+ = |e⟩_j⟨g| (σ̂_j^- = |g⟩_j⟨e|) is the raising (lowering) operator of the giant atom j, and ĉ_k (ĉ_k^†) is the annihilation (creation) operator of the propagating photons in the waveguide with wave vector k and frequency ω_k. The constant g_{jn} is the coupling strength at the coupling point x_{jn}. For simplicity, we consider the case where the coupling strengths at all coupling points are equal to g.
In the interaction picture with respect to Ĥ_0, the atom-waveguide coupling Hamiltonian becomes
V̂_I(t) = g Σ_{j=a,b} Σ_{n=1,2} [B̂(x_{jn}, t) σ̂_j^+ e^{iω_0 t} + B̂^†(x_{jn}, t) σ̂_j^- e^{−iω_0 t}], (4)
with B̂(x_{jn}, t) = Σ_k e^{i k x_{jn}} e^{−iω_k t} ĉ_k being the operator associated with the fields.
III. QUANTUM MASTER EQUATIONS OF THE TWO GIANT ATOMS
TABLE I. The superoperators appearing in the time-delayed quantum master equation (7) for the three different coupling configurations. The superoperators L_ind ϱ(t − n t_d) with n = 1, 2, and 3 represent the individual non-local evolution of the two giant atoms. The superoperators L_coll ϱ(t − n t_d) with n = 1, 2, and 3 describe the non-local exchanging interaction and collective decay of the two giant atoms.

L_ind ϱ(t − t_d):
  Separated: Σ_j { −iγ sin θ_0 [σ_j^+ σ_j^-, ϱ(t − t_d)] + 2γ cos θ_0 D[σ_j^-] ϱ(t − t_d) }
  Braided: 0
  Nested: −iγ sin θ_0 [σ_b^+ σ_b^-, ϱ(t − t_d)] + 2γ cos θ_0 D[σ_b^-] ϱ(t − t_d)
L_ind ϱ(t − 2t_d):
  Separated: 0
  Braided: Σ_j { −iγ sin(2θ_0) [σ_j^+ σ_j^-, ϱ(t − 2t_d)] + 2γ cos(2θ_0) D[σ_j^-] ϱ(t − 2t_d) }
  Nested: 0
L_ind ϱ(t − 3t_d):
  Separated: 0
  Braided: 0
  Nested: −iγ sin(3θ_0) [σ_a^+ σ_a^-, ϱ(t − 3t_d)] + 2γ cos(3θ_0) D[σ_a^-] ϱ(t − 3t_d)
L_coll ϱ(t − t_d):
  Separated: Σ_{i≠j} { −i(γ/2) sin θ_0 [σ_i^+ σ_j^-, ϱ(t − t_d)] + γ cos θ_0 (σ_i^- ϱ(t − t_d) σ_j^+ − (1/2)[σ_i^+ σ_j^-, ϱ(t − t_d)]_+) }
  Braided: Σ_{i≠j} { −i(3γ/2) sin θ_0 [σ_i^+ σ_j^-, ϱ(t − t_d)] + 3γ cos θ_0 (σ_i^- ϱ(t − t_d) σ_j^+ − (1/2)[σ_i^+ σ_j^-, ϱ(t − t_d)]_+) }
  Nested: Σ_{i≠j} { −iγ sin θ_0 [σ_i^+ σ_j^-, ϱ(t − t_d)] + 2γ cos θ_0 (σ_i^- ϱ(t − t_d) σ_j^+ − (1/2)[σ_i^+ σ_j^-, ϱ(t − t_d)]_+) }
L_coll ϱ(t − 2t_d):
  Separated: Σ_{i≠j} { −iγ sin(2θ_0) [σ_i^+ σ_j^-, ϱ(t − 2t_d)] + 2γ cos(2θ_0) (σ_i^- ϱ(t − 2t_d) σ_j^+ − (1/2)[σ_i^+ σ_j^-, ϱ(t − 2t_d)]_+) }
  Braided: 0
  Nested: identical to the separated-coupling entry
L_coll ϱ(t − 3t_d):
  Separated: Σ_{i≠j} { −i(γ/2) sin(3θ_0) [σ_i^+ σ_j^-, ϱ(t − 3t_d)] + γ cos(3θ_0) (σ_i^- ϱ(t − 3t_d) σ_j^+ − (1/2)[σ_i^+ σ_j^-, ϱ(t − 3t_d)]_+) }
  Braided: identical to the separated-coupling entry
  Nested: 0

To study quantum entanglement of the two giant atoms, we treat the fields in the waveguide as the environment of the atoms and derive a quantum master equation to govern the evolution of the two atoms. The formal master equation of the system in the interaction picture reads [64]
dρ̂_I(t)/dt = − ∫_0^t ds Tr_w {[V̂_I(t), [V̂_I(s), ρ̂_w ⊗ ρ̂_I(s)]]}, (5)
whereρ I (t) is the density matrix of the two atoms,ρ w is the density matrix of the fields in the waveguide, and Tr w {•} denotes taking trace over these fields. We consider the case where all the field modes in the waveguide are initially in the vacuum stateρ w = |∅ ∅| with |∅ representing the empty states. Then, we have
Tr w [B † (x jn , t)B(x jn , s)ρ w ] = 0.(6)
Using the Wigner-Weisskopf approximation at each single coupling point and assuming ω_k ≈ ω_0 + (k − k_0)υ_g, with k_0 (υ_g) being the wave vector (group velocity) of the field at frequency ω_0 [65,66], the dynamics of the two giant atoms in these three different coupling configurations are governed by the following unified time-delayed quantum master equation
dρ̂(t)/dt = L_loc ρ̂(t) + Σ_{n=1}^{3} (L_ind + L_coll) ϱ(t − n t_d). (7)
Hereafter, we drop the superscript "I" and always refer to the master equation in the interaction picture. In addition, we introduce the definition
̺(t − nt d ) ≡ρ(t − nt d )Θ(t − nt d ),(8)
where Θ(t) is the Heaviside step function. The superoperator
L locρ (t) = 2γD[σ − a ]ρ(t) + 2γD[σ − b ]ρ(t)(9)
describes the local dissipation of the giant atoms a and b with the damping rate γ = 4πg 2 /υ g , and
D[ô]ρ(t) =ôρ(t)ô † − (ô †ôρ (t) +ρ(t)ô †ô )/2(10)
is the standard Lindblad superoperator. The superoperators L_ind ϱ(t − n t_d) with n = 1, 2, and 3 describe the individual non-local evolution of the giant atoms. Here, the time delay t_d = d/υ_g is introduced, where we assume that the distances between neighboring coupling points are equal to d. The superoperators L_coll ϱ(t − n t_d) with n = 1, 2, and 3 describe the non-local exchanging interaction and collective decay of the giant atoms. The specific expressions of L_ind ϱ(t − n t_d) and L_coll ϱ(t − n t_d) for the three couplings in Fig. 1 are summarized in Table I. Note that the time-delayed quantum master equation for a single giant atom has been derived in Ref. [67]. It is interesting to point out that under the Markovian approximation n t_d → 0, the density operator ϱ(t − n t_d) is replaced by ρ̂(t). In this case, the nonlocal time-delayed quantum master equation (7) reduces to the following local master equation
ρ(t) = −i[Ĥ ′ ,ρ(t)] + j=a,b Γ jD [σ − j ]ρ(t) + i j Γ coll σ − iρ (t)σ + j − 1 2 [σ + iσ − j ,ρ(t)] + . (11)
Here the Hamiltonian takes the following form
H ′ = δω aσ + aσ − a + δω bσ + bσ − b + g ab (σ + aσ − b + H.c.),(12)
where δω_a and δω_b are the Lamb shifts of the two giant atoms and g_ab is the exchanging interaction strength. The parameters Γ_{j=a,b} and Γ_coll in Eq. (11) are the individual and collective decay rates of the giant atoms, respectively. We note that Eq. (11) is consistent with the quantum master equation derived by the SLH formalism in Ref. [32] after returning back to the Schrödinger picture. In addition, we neglect the nonradiative decay rate γ_nr and pure dephasing γ_φ of the giant atoms in Eq. (7), because these rates are much smaller than the coupling rate γ in realistic physical systems. For example, the quantity γ′ = γ_nr + γ_φ is generally at least ten times smaller than the decay rate γ in superconducting qubit systems [68,69].

Before studying the entanglement generation for a general case in the two-atom space, we first consider the entanglement generation in the Markovian limit γt_d → 0 when the two atoms are initially in the single-excitation and two-excitation states. When we consider the Markovian limit and restrict the system dynamics to the single-excitation subspace, the jump terms in the local quantum master equation (11) can be neglected to obtain the non-Hermitian effective Hamiltonian
H eff = j=a,b δω jσ + jσ − j + i j gσ + iσ − j − i 2 j=a,b Γ jσ + jσ − j − i 2 i j Γ collσ + iσ − j .(13)
In this case, the density matrix can be expressed aŝ ρ(t) = |ψ(t) ψ(t)|, and |ψ(t) is governed by the following Schrödinger equation
i ∂|ψ(t) ∂t =Ĥ eff |ψ(t) .(14)
According to Eq. (14), it is straightforward to analytically solve the system dynamics of the two giant atoms by assuming their general state in the single-excitation subspace as
|ψ(t) = c eg (t) |e a |g b + c ge (t) |g a |e b ,(15)
where c eg (t) and c ge (t) are the probability amplitudes. Using the Laplace transform and its inverse, we can obtain the analytical expressions of c eg (t) and c ge (t) under the corresponding initial conditions.
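For concreteness, the amplitudes in Eq. (15) can also be propagated numerically instead of via the Laplace transform. The following sketch exponentiates the 2 x 2 representation of Ĥ_eff in the {|e_a g_b⟩, |g_a e_b⟩} basis for the Markovian single-excitation sector; parameter and function names are ours, and the concurrence expression quoted in the comment is the standard single-excitation result 2|c_eg c_ge|.

```python
import numpy as np
from scipy.linalg import expm

def single_excitation_amplitudes(t, d_wa, d_wb, g_ab, Ga, Gb, Gcoll,
                                 c0=(1.0, 0.0)):
    """Amplitudes (c_eg, c_ge) of Eq. (15) evolved under the non-Hermitian
    effective Hamiltonian of Eq. (13), written in the
    {|e_a g_b>, |g_a e_b>} basis."""
    H = np.array([[d_wa - 0.5j * Ga,    g_ab - 0.5j * Gcoll],
                  [g_ab - 0.5j * Gcoll, d_wb - 0.5j * Gb]], dtype=complex)
    return expm(-1j * H * t) @ np.asarray(c0, dtype=complex)

# In this sector the concurrence reduces to C(t) = 2 |c_eg(t)| |c_ge(t)|.
```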
To study the entanglement generation involving the two-excitation component, we work in the collective-state representation, where the two-giant-atom system behaves as a single four-level system with states |ψ_2⟩ = |e⟩_a|e⟩_b, |ψ_0⟩ = |g⟩_a|g⟩_b, and |ψ_±⟩. According to the eigen-equation Ĥ′|ψ_±⟩ = E_±|ψ_±⟩, the expressions of the collective states |ψ_±⟩ are given by
|ψ ± = N ± δω a − δω b ± Ω g ab |e a |g b + 2|g a |e b ,(16)
with the corresponding eigenvalues
E ± = 1 2 (δω a + δω b ± Ω),(17)
where Ω = 4g 2 ab + (δω a − δω b ) 2 is the level shift induced by the exchanging interaction and the difference of the Lamb shifts of the two giant atoms. The normalization constants N ± in Eq. (16) are defined by
N ± = 4 + 1 g 2 ab (δω a − δω b ± Ω) 2 −1/2 .
(18) Figure 2 shows the energy-level diagram of the double twolevel giant atoms, including the levels and transition rates between different levels. To obtain the transition rates between these levels, we use the basis {|ψ 2 , |ψ + , |ψ − , |ψ 0 } to obtain the evolution of the diagonal elements of the quantum master equation (11) aṡ
ρ 22 (t) = −(Γ a + Γ b )ρ 22 (t), ρ ++ (t) = Γ 2+ ρ 22 (t) + Γ ++ ρ ++ (t) + Γ +− ρ +− (t) + Γ −+ ρ −+ (t), ρ −− (t) = Γ 2− ρ 22 (t) + Γ −− ρ −− (t) + Γ +− ρ +− (t) + Γ −+ ρ −+ (t), ρ 00 (t) = Γ +0 ρ ++ (t) + Γ +− ρ +− (t) + Γ −+ ρ −+ (t) + Γ −0 ρ −− (t).(19)
According to Eqs. (11) and (19), the transition rates are given by
Γ 2+ = Γ b α + − Γ a α − + 4g ab Γ coll 2Ω , Γ 2− = Γ a α + − Γ b α − − 4g ab Γ coll 2Ω , Γ +0 = −Γ ++ = Γ a α + − Γ b α − + 4g ab Γ coll 2Ω , Γ −0 = −Γ −− = Γ b α + − Γ a α − − 4g ab Γ coll 2Ω , Γ +− = Γ −+ = (Γ a − Γ b ) √ −α + α − + 2g ab Γ coll −α − α + − −α + α − 4Ω ,(20)
with α_± = (δω_a − δω_b) ± Ω. Equation (20) indicates that the transition rates between the collective states of the two giant atoms depend on the parameters g, δω_j, Γ_j, and Γ_coll, which can be adjusted by tuning the phase shift θ_0 or designing different coupling configurations. It is straightforward to prove that Γ_{+−} = Γ_{−+} = 0 for the separate and braided couplings, and hence there are no ρ_{+−}(t) and ρ_{−+}(t) terms in Eq. (19). After obtaining these transition rates, the entanglement generation can be understood based on Fig. 2. In addition, we would like to point out that the system evolves in the absence of external pumping. To generate long-lived maximally entangled states, one may drive the double-giant-atom waveguide-QED system with external fields [62]. Below, we will study the entanglement dynamics between the two giant atoms for three different coupling configurations in both the Markovian and non-Markovian regimes, in which the time delay is neglected and considered, respectively.
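Equation (11) can also be integrated numerically without further approximation. A minimal sketch of its right-hand side and a simple Runge-Kutta stepper are given below; all names are ours, and the Hamiltonian Ĥ′ together with the rates Γ_a, Γ_b, Γ_coll must be supplied from the configuration-dependent parameters (for instance, those quoted for the separate coupling in Sec. IV A).

```python
import numpy as np

# two-qubit lowering operators, basis ordering |g> = (1, 0), |e> = (0, 1)
_sm = np.array([[0.0, 1.0], [0.0, 0.0]])       # |g><e|
_id = np.eye(2)
SM = [np.kron(_sm, _id), np.kron(_id, _sm)]     # sigma_a^-, sigma_b^-

def master_eq_rhs(rho, H, Gammas, Gcoll):
    """Right-hand side of the local (Markovian) master equation (11)."""
    drho = -1j * (H @ rho - rho @ H)
    for j, L in enumerate(SM):                  # individual decay terms
        LdL = L.conj().T @ L
        drho += Gammas[j] * (L @ rho @ L.conj().T
                             - 0.5 * (LdL @ rho + rho @ LdL))
    for i in range(2):                          # collective decay terms
        for j in range(2):
            if i != j:
                X = SM[i].conj().T @ SM[j]      # sigma_i^+ sigma_j^-
                drho += Gcoll * (SM[i] @ rho @ SM[j].conj().T
                                 - 0.5 * (X @ rho + rho @ X))
    return drho

def evolve(rho0, H, Gammas, Gcoll, dt, steps):
    """Fourth-order Runge-Kutta integration of Eq. (11)."""
    rho = rho0.astype(complex)
    for _ in range(steps):
        k1 = master_eq_rhs(rho, H, Gammas, Gcoll)
        k2 = master_eq_rhs(rho + 0.5 * dt * k1, H, Gammas, Gcoll)
        k3 = master_eq_rhs(rho + 0.5 * dt * k2, H, Gammas, Gcoll)
        k4 = master_eq_rhs(rho + dt * k3, H, Gammas, Gcoll)
        rho = rho + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return rho
```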
IV. ENTANGLEMENT DYNAMICS BETWEEN TWO GIANT ATOMS
In this section, we study the entanglement generation between the two giant atoms for the three different coupling configurations shown in Fig. 1. To determine the entanglement dynamics of the giant atoms, we need to solve the quantum master equation of the reduced density operator ρ̂ describing the two giant atoms. The entanglement of the double two-level giant atoms can be quantified by the concurrence [70], which is defined as
C(t) = \max\left(0,\,\lambda_{1}-\lambda_{2}-\lambda_{3}-\lambda_{4}\right),  (21)
where the λ_i are the square roots of the eigenvalues (in descending order) of the matrix ρ̂(σ̂_y ⊗ σ̂_y)ρ̂*(σ̂_y ⊗ σ̂_y), with σ̂_y being the Pauli spin-flip operator. Note that C = 1 and C = 0 correspond to a maximally entangled state and a separable state, respectively. The time-delayed quantum master equation (7) can be numerically solved under given initial conditions. For each coupling configuration, we will consider that the two giant atoms are initially in the single-excitation state |ψ(0)⟩ = |e⟩_a|g⟩_b and the double-excitation state |ψ(0)⟩ = |e⟩_a|e⟩_b, respectively.
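For reference, a minimal sketch computing the concurrence of an arbitrary two-qubit density matrix is given below; it follows the standard Wootters convention just stated, in which the λ_i are the square roots of the eigenvalues of ρ̂(σ̂_y ⊗ σ̂_y)ρ̂*(σ̂_y ⊗ σ̂_y).

```python
import numpy as np

SIGMA_Y = np.array([[0.0, -1.0j], [1.0j, 0.0]])
YY = np.kron(SIGMA_Y, SIGMA_Y)

def concurrence(rho):
    """Wootters concurrence of a 4x4 two-qubit density matrix (Eq. (21))."""
    rho = np.asarray(rho, dtype=complex)
    R = rho @ YY @ rho.conj() @ YY
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R))))[::-1]   # descending order
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Quick check: the Bell state (|eg> + |ge>)/sqrt(2) gives C = 1.
psi = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2.0)   # basis {|ee>, |eg>, |ge>, |gg>}
print(concurrence(np.outer(psi, psi.conj())))          # ~ 1.0
```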
A. Entanglement generation between two separate giant atoms
We begin by considering the two separate giant atoms depicted in Fig. 1(a). In this case, the time-delayed quantum master equation is given by
\dot{\hat{\rho}}(t) = \hat{\mathcal{L}}_{\mathrm{loc}}\hat{\rho}(t) + \hat{\mathcal{L}}_{\mathrm{ind}}\hat{\varrho}(t-t_{d}) + \hat{\mathcal{L}}_{\mathrm{coll}}\hat{\varrho}(t-t_{d}) + \hat{\mathcal{L}}_{\mathrm{coll}}\hat{\varrho}(t-2t_{d}) + \hat{\mathcal{L}}_{\mathrm{coll}}\hat{\varrho}(t-3t_{d}),  (22)
where the local dissipation operator L̂_loc ρ̂(t) is given by Eq. (9). The superoperator L̂_ind ϱ̂(t − t d ) in Eq. (22) represents the non-local time evolution of the two separate giant atoms, with the frequency shift γ sin θ 0 and the damping rate 2γ cos θ 0 . The superoperator L̂_coll ϱ̂(t − nt d ) describes the non-local exchanging interaction and collective decay of the two giant atoms; for example, when n = 1, the exchanging interaction strength is γ sin(θ 0 )/2 and the damping rate is γ cos θ 0 . In the Markovian limit γt d → 0, the nonlocal superoperators in Eq. (22) become local. Then for the two separate giant atoms, we can obtain the Lamb shifts δω a = δω b = γ sin θ 0 , the exchanging coupling strength g ab = γ[sin θ 0 + 2 sin(2θ 0 ) + sin(3θ 0 )]/2, the individual decay rates Γ a = Γ b = 2γ(1 + cos θ 0 ), and the collective decay rate Γ coll = γ[cos θ 0 + 2 cos(2θ 0 ) + cos(3θ 0 )]. To see the entanglement generation, we consider that the two separate giant atoms are initially in two different separable states.
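As a small illustrative helper (not from the paper), the Markovian-limit quantities just quoted can be evaluated as functions of θ 0 :

```python
import numpy as np

def separate_coupling_params(theta0, gamma=1.0):
    """Markovian-limit parameters of the two separate giant atoms (Sec. IV A)."""
    d_w = gamma * np.sin(theta0)                                          # Lamb shifts
    g_ab = gamma * (np.sin(theta0) + 2 * np.sin(2 * theta0) + np.sin(3 * theta0)) / 2
    Gam = 2 * gamma * (1 + np.cos(theta0))                                # Gamma_a = Gamma_b
    Gcol = gamma * (np.cos(theta0) + 2 * np.cos(2 * theta0) + np.cos(3 * theta0))
    return d_w, g_ab, Gam, Gcol

# At theta_0 = (2n+1)*pi all rates vanish and the atoms decouple from the waveguide.
print(separate_coupling_params(np.pi))    # -> (0, 0, 0, 0) up to rounding
```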
In Fig. 3, we show the time evolution of the concurrences C (S ) eg and C (S ) ee as functions of the dimensionless quantities γt and θ 0 /π at various values of the time delay γt d . Note that the superscript "S " denotes the separate-coupling case and the subscript "eg" ("ee") corresponds to the atomic initial state |ψ(0) = |e a |g b (|e a |e b ). The left and right columns in Fig. 3 represent the cases of single- and double-excitation initial states, respectively. Figures 3(a)−3(f) show that, for a finite value of γt d , both C (S ) eg and C (S ) ee are modulated by the phase shift θ 0 . Meanwhile, the dependence of C (S ) eg and C (S ) ee on θ 0 is a 2π-periodic function. For a phase shift θ 0 ∈ [0, π], both C (S ) eg and C (S ) ee satisfy the relation C (S ) eg(ee) (t, θ 0 ) = C (S ) eg(ee) (t, 2π − θ 0 ). From Fig. 3(a) we can see that when |ψ(0) = |e a |g b and γt d = 0, C (S ) eg is zero at t = 0 since the initial state is separable, and then it increases gradually, except for some special phases θ 0 = (2n + 1)π, with an integer n. This is because when θ 0 = (2n + 1)π, the exchanging interaction strength g ab , individual decay rate Γ a (Γ b ), and collective decay rate Γ coll become zero. Then the two separate giant atoms are decoupled from the waveguide. Hence there is no entanglement generation between the two giant atoms. When θ 0 = (n + 1/2)π and 2nπ, we find that C (S ) eg tends asymptotically to a steady-state value 0.5 in the long-time limit. In this case, the generated entanglement does not decay since a dark state appears. We notice that the exchanging interaction strength is zero but the individual and collective decay rates are non-zero at θ 0 = (n + 1/2)π and 2nπ. For other phase shifts, such as θ 0 = π/4 and 3π/4, it can be seen that C (S ) eg decreases to zero after reaching its maximal value, as shown by the valleys in Fig. 3(a).
According to Eqs. (14) and (15), the concurrence C (S ) eg in the case of the single-excitation initial state and the Markovian limit can be analytically obtained as
C^{(S)}_{eg}(t) = e^{-2(1+\cos\theta_{0})\gamma t}\left|\sinh\!\left[4e^{2i\theta_{0}}\cos^{2}(\theta_{0}/2)\,\gamma t\right]\right|.  (23)
By substituting θ 0 = π/2 and 2π into Eq. (23), the concurrence becomes C (S ) eg (t) = (1 − e −4γt )/2 and C (S ) eg (t) = (1 − e −8γt )/2, respectively. For these two values of θ 0 , it is straightforward to find that C (S ) eg (t) approaches a steady-state value 0.5 at the rates 4γ and 8γ, respectively. It can be proved that, for the separate coupling, the individual decays and Lamb shifts satisfy the relations Γ a = Γ b and δω a = δω b . In this case, the transition rates Γ +− = Γ −+ = 0 and the states |ψ ±⟩ are reduced to the symmetric and antisymmetric states |±⟩ = (|e⟩ a |g⟩ b ± |g⟩ a |e⟩ b )/√2. In particular, by substituting θ 0 = 2π + |ε| (θ 0 = π/2 + |ε|) with |ε| ≪ 1 into Eq. (20), we obtain the transition rates Γ +0 ≈ 8γ and Γ −0 ≈ 0 (Γ +0 ≈ 4γ and Γ −0 ≈ 0). This means that, when θ 0 = 2π + |ε| (θ 0 = π/2 + |ε|), the state |ψ −⟩ becomes a dark state, which is completely decoupled from the waveguide. As a result, the concurrence C (S ) eg (t) does not decay and it reaches a stationary value C (S ) eg (t → ∞) = 0.5. Note that here we let θ 0 slightly deviate from 2π and π/2 to ensure Ω ≠ 0 in Eq. (20).
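The limiting behaviors quoted above can be verified directly from the reconstructed Eq. (23); a minimal check (illustrative, with γ set to 1) is:

```python
import numpy as np

def C_S_eg(t, theta0, gamma=1.0):
    """Concurrence of Eq. (23), separate coupling, initial state |e>_a |g>_b."""
    arg = 4.0 * np.exp(2j * theta0) * np.cos(theta0 / 2.0) ** 2 * gamma * t
    return np.exp(-2.0 * (1.0 + np.cos(theta0)) * gamma * t) * np.abs(np.sinh(arg))

t = np.linspace(0.0, 3.0, 7)
print(np.allclose(C_S_eg(t, np.pi / 2), (1 - np.exp(-4 * t)) / 2))   # theta_0 = pi/2 limit
print(np.allclose(C_S_eg(t, 2 * np.pi), (1 - np.exp(-8 * t)) / 2))   # theta_0 = 2*pi limit
```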
For the two separate giant atoms initially in the state |ψ(0) = |e a |e b , it can be seen from Fig. 3(b) that the entanglement dynamics exhibits some features different from Fig. 3(a). In this initial state, as shown by the decay process in Fig. 2, the two giant atoms first evolve into a mixture of two maximally entangled states |+ and |− [see Eq. (16)], and eventually decay to the ground state |ψ 0 = |g a |g b . In general, the mixture of the states |+ and |− is not an entangled state. From Fig. 3(b), we see that there is no entanglement generation for all values of θ 0 at the initial finite time. The concurrence C (S ) ee is created at later times for some phase shifts, due to the asymmetry between the two cascades shown in Fig. 2. This phenomenon is known as the entanglement sudden (delayed) birth [71][72][73]. For this coupling configuration and initial state, the maximal generated entanglement can only reach a small value C (S ) ee ≈ 0.03. When the time delay γt d is taken into account, the time evolution of the concurrences C (S ) eg and C (S ) ee can also be obtained by numerically solving the time-delayed quantum master equation (22). As shown in Figs. 3(c)−3(h), C (S ) eg and C (S ) ee are characterized by different features when γt d is adjusted from a small value (i.e., γt d = 0.1) to a large value (i.e., γt d = 1). In Figs. 3(c) and 3(d), we take γt d = 0.1, which corresponds to the case where the propagation time t d of photons between the neighboring coupling points is less than the lifetime 1/γ of each giant atom. As can be seen from Fig. 3(c), when the non-Markovian retarded effect exists, C (S ) eg is mainly created around θ 0 = 2nπ, (n + 1/2)π, and (2n + 1)π + |ε| with the increase of time. However, the maximally achievable steady-state values of C (S ) eg decrease compared with Fig. 3(a). We find that C (S ) eg exhibits slight oscillation before reaching its steady-state value at θ 0 = 2nπ. In addition, the steady-state value of C (S ) eg at θ 0 = 2nπ is smaller than that at θ 0 = (n + 1/2)π due to different quantum interference effects at these phase shifts, as shown in Fig. 3(c). For the initial state |ψ(0) = |e a |e b , the maximal value of C (S ) ee in Fig. 3(d) is largely enhanced but decays to zero faster when θ 0 is near 2nπ.
When the time delay is further increased to γt d = 1, the propagating time of photons between neighboring coupling points becomes comparable to the lifetime of the giant atoms. In this case, both the concurrences C (S ) eg and C (S ) ee exhibit stronger oscillation and more peaks. For the initial state |ψ(0) = |e a |g b , C (S ) eg is also created at later times in the presence of the time delay. In particular, C (S ) eg is created even when θ 0 = π, as shown in Fig. 3(e), which is a remarkable signature of the non-Markovian recovery phenomenon. The concurrence C (S ) ee in Fig. 3(f) is mainly created around θ 0 = 2nπ and its peak value decreases compared with Fig. 3(d). When we consider the limiting case γt d → ∞, the two giant atoms decay individually. Then C (S ) eg and C (S ) ee become independent of θ 0 and always retain their initial value C (S ) eg = C (S ) ee = 0, as shown in Figs. 3(g) and 3(h).
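For the non-Markovian results quoted above, the time-delayed master equation has to be integrated with memory of the past state. Below is a minimal fixed-step Euler sketch with a history buffer; the superoperators are left as user-supplied callables (their concrete forms follow from the earlier parts of the paper, not reproduced here), and a production calculation would use a dedicated delay-differential-equation solver.

```python
import numpy as np
from collections import deque

def integrate_delayed_master_eq(rho0, t_final, dt, L_loc, delayed_terms):
    """Naive fixed-step Euler integration of a time-delayed master equation.

    `L_loc` and the `L_k` in `delayed_terms` (a list of (delay, L_k) pairs) are
    hypothetical callables mapping a density matrix to its contribution to
    d(rho)/dt.  Delayed contributions are switched on only once t exceeds the
    corresponding delay, mimicking the retarded photon round trips.
    """
    n_steps = int(round(t_final / dt))
    max_lag = max(int(round(d / dt)) for d, _ in delayed_terms)
    history = deque(maxlen=max_lag + 1)
    rho = np.array(rho0, dtype=complex)
    for _ in range(n_steps):
        history.append(rho.copy())
        drho = L_loc(rho)
        for delay, L_k in delayed_terms:
            lag = int(round(delay / dt))
            if lag < len(history):             # term active only for t > delay
                drho = drho + L_k(history[-1 - lag])
        rho = rho + dt * drho
    return rho
```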
B. Entanglement generation between two braided giant atoms
We now turn to the case of two braided giant atoms, as shown in Fig. 1(b). According to Eq. (7) and Table I, the time-delayed quantum master equation for this coupling configuration is given by
\dot{\hat{\rho}}(t) = \hat{\mathcal{L}}_{\mathrm{loc}}\hat{\rho}(t) + \hat{\mathcal{L}}_{\mathrm{ind}}\hat{\varrho}(t-2t_{d}) + \hat{\mathcal{L}}_{\mathrm{coll}}\hat{\varrho}(t-t_{d}) + \hat{\mathcal{L}}_{\mathrm{coll}}\hat{\varrho}(t-3t_{d}),  (24)
where the local dissipation operator L̂_loc ρ̂(t) for the braided giant atoms is also given by Eq. (9). The second term on the right-hand side of Eq. (24) represents the non-local time evolution of the braided giant atoms, with the frequency shift γ sin(2θ 0 ) and the damping rate 2γ cos(2θ 0 ). The superoperator L̂_coll ϱ̂(t − t d ) [L̂_coll ϱ̂(t − 3t d )] describes the non-local exchanging interaction and collective decay of the braided atoms, with the exchanging interaction strength 3γ sin θ 0 /2 [γ sin(3θ 0 )/2] and the damping rate 3γ cos θ 0 [γ cos(3θ 0 )]. In the Markovian limit, we can obtain the local quantum master equation for the two braided giant atoms, with the effective exchanging interaction strength g ab = γ[3 sin θ 0 + sin(3θ 0 )]/2, the individual decay rates Γ a = Γ b = 2γ[1 + cos(2θ 0 )], and the collective decay rate Γ coll = γ[3 cos θ 0 + cos(3θ 0 )]. It has been shown that there exists an interatomic interaction without decoherence at θ 0 = (n + 1/2)π [32]. Below, we will show that this kind of interaction enables the entanglement dynamics of the two braided giant atoms to exhibit significant differences from those of the two other coupling configurations. Figures 4(a)−4(h) show the concurrences C (B) eg and C (B) ee versus the dimensionless quantities γt and θ 0 /π when the time delay γt d takes different values. In the left and right columns of Fig. 4, the two braided giant atoms are initially in the states |ψ(0) = |e a |g b and |e a |e b , respectively. We see that C (B) eg and C (B) ee in Figs. 4(a)−4(f) are phase dependent with a period of π, which is different from the case of the separate coupling. When the phase shift is in the region of θ 0 ∈ [0, π/2], we have the relation C (B) eg(ee) (t, θ 0 ) = C (B) eg(ee) (t, π − θ 0 ). In the case of γt d = 0 and |ψ(0) = |e a |g b , as shown in Fig. 4(a), the concurrence C (B) eg is characterized by an oscillating process when θ 0 is near (n + 1/2)π, whereas it approaches a steady-state value 0.5 at θ 0 = nπ in the long-time limit. The oscillation of C (B) eg near θ 0 = (n + 1/2)π is caused by the nonzero exchanging interaction. This is because when θ 0 → (n + 1/2)π, the collective decay rate and the exchanging interaction strength become Γ coll → 0 and g ab → γ, respectively. In this case, the concurrence C (B) eg is mainly characterized by an oscillating process. However, when θ 0 = 2nπ, we obtain g ab → 0 and Γ coll → 4γ, which leads to a non-oscillatory contribution to C (B) eg . According to these analyses, it can be seen that the concurrence C (B) eg depends on the two parameters g ab and Γ coll in different ways. The oscillation of C (B) eg is caused by the exchanging coupling g ab , whereas the non-oscillatory contribution of C (B) eg comes from the collective decay Γ coll [52]. When θ 0 ≠ nπ/2, we see that C (B) eg exhibits a fast increase followed by a very slow decay [for θ 0 → (2n + 1)π] or an oscillating decay [for θ 0 → (n + 1/2)π], as shown by the valleys in Fig. 4(a).
For the braided coupling, we can also obtain the analytical expression of C (B) eg (t) in the Markovian limit. In terms of Eqs. (14) and (15), we have
C^{(B)}_{eg}(t) = \frac{\left|\sinh\!\left[(3e^{i\theta_{0}}+e^{3i\theta_{0}})\gamma t\right]\right|}{e^{4\gamma t\cos^{2}\theta_{0}}}.  (25)
From Eq. (25), we find that when θ 0 = (n + 1/2)π and 2nπ, the concurrence is reduced to C (B) eg (t) = | sin(2γt)| and C (B) eg (t) = (1 − e −8γt )/2, respectively. Therefore, C (B) eg (t) exhibits a periodic oscillation in the range from zero to one with a period π/(2γ) when θ 0 = (n + 1/2)π, while it tends asymptotically to a steady-state value 0.5 at a rate 8γ when θ 0 = (2n + 1)π.
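The two limits of the reconstructed Eq. (25) can be checked in the same way (illustrative snippet, with γ = 1):

```python
import numpy as np

def C_B_eg(t, theta0, gamma=1.0):
    """Concurrence of Eq. (25), braided coupling, initial state |e>_a |g>_b."""
    num = np.abs(np.sinh((3 * np.exp(1j * theta0) + np.exp(3j * theta0)) * gamma * t))
    return num / np.exp(4 * gamma * t * np.cos(theta0) ** 2)

t = np.linspace(0.0, 3.0, 7)
print(np.allclose(C_B_eg(t, np.pi / 2), np.abs(np.sin(2 * t))))      # decoherence-free oscillation
print(np.allclose(C_B_eg(t, 2 * np.pi), (1 - np.exp(-8 * t)) / 2))   # monotonic approach to 0.5
```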
In Fig. 4(b), the two braided giant atoms are initially in the state |ψ(0) = |e a |e b , which shows that there is no entanglement generation at earlier times even when γt d = 0, but at some finite times the entanglement suddenly begins to be created for some values of θ 0 . Similar to the separate coupling, both the frequency shift and individual decay rate of each giant atom for the braided coupling are also equal, i.e., δω a = δω b = γ sin(2θ 0 ) and Γ a = Γ b = 2γ[1 + cos(2θ 0 )]. In this case, the entangled states |ψ ± given by Eq. (16) also become the symmetric and antisymmetric states |± . According to Eq. (20), the states |± decay to the ground state |ψ 0 = |g a |g b with different rates. In particular, we find that the maximal value of C (B) ee can reach about 0.03 by adjusting the value of θ 0 , which is consistent with the case of the separate coupling [see Fig. 3(b)]. However, the value of θ 0 corresponding to the maximally achievable C (B) ee and C (S ) ee is different due to the different coupling configurations. We also find that there is no entanglement generation for θ 0 = π and π/2. To explain this phenomenon, we substitute θ 0 = π + |ε| (π/2 + |ε|) into Eq. (20) to obtain Γ 2+ = Γ +0 = 0 and Γ 2− = Γ −0 = 8γ (Γ 2+ = Γ 2− = Γ +0 = Γ −0 = 0). This means that when θ 0 = π + |ε|, the cascade process on the left-hand side in Fig. 2 is forbidden. For the cascade process on the right-hand side, even though it is allowed, the population of |ψ 2 = |e a |e b decays to the ground state |ψ 0 very fast. Therefore, there is no entanglement generation. When θ 0 = π/2 + |ε|, both the left and right cascade processes are forbidden, and hence the population in the state |ψ 2 cannot decay into the states |± . Figures 4(c) and 4(d) show the concurrences C (B) eg and C (B) ee versus γt and θ 0 /π when γt d takes a small value 0.1. When the giant atoms are initially in the state |ψ(0) = |e a |g b , we find that the peak values of C (B) eg decrease gradually as time increases even when θ 0 = (n +1/2)π, as shown in Fig. 4(c). This is because the non-Markovian retarded effect starts to work in this case, where the decoherence-free interaction of the two braided giant atoms is partially suppressed. In addition, the steady-state value of C (B) eg at θ 0 = nπ is less than that of the Markovian case, as shown in Fig. 4(a). Different from the case of γt d = 0 in Fig. 4(b), the concurrence C (B) ee in Fig. 4(d) is only created around θ 0 = nπ, and it achieves larger values but decays to zero faster. From Fig. 4(d), we see that C (B) ee can also be created when θ 0 = nπ under the joint influence of the quantum interference effect and the non-Markovian retarded effect.
By increasing the time delay to γt d = 1, the concurrence C (B) eg exhibits stronger oscillation and more peaks when θ 0 is near 2nπ and (n + 1/2)π, as shown in Figs. 4(e) and 4(f). As time increases, C (B) eg is characterized by a slow nonexponential oscillating decay process around θ 0 = (n + 1/2)π. However, when θ 0 = 2nπ, C (B) eg approaches a steady-state value more slowly. For the initial state |ψ(0) = |e a |e b , there appear more oscillations and peaks in C (B) ee around θ 0 = 2nπ. With the increase of time, the peak values of C (B) ee decrease gradually. In the limit of a very large time delay, i.e., γt d → ∞, both C (B) eg and C (B) ee retain their initial values C (B) eg = C (B) ee = 0 [Figs. 4(g) and 4(h)], which is consistent with the separate-coupling case.
C. Entanglement generation between two nested giant atoms
Finally, we study the entanglement generation for the nested coupling, as shown in Fig. 1(c). According to Eq. (7) and Table I, the time-delayed quantum master equation of the two nested giant atoms is given by

\dot{\hat{\rho}}(t) = \hat{\mathcal{L}}_{\mathrm{loc}}\hat{\rho}(t) + \hat{\mathcal{L}}_{\mathrm{ind}}\hat{\varrho}(t-t_{d}) + \hat{\mathcal{L}}_{\mathrm{ind}}\hat{\varrho}(t-3t_{d}) + \hat{\mathcal{L}}_{\mathrm{coll}}\hat{\varrho}(t-t_{d}) + \hat{\mathcal{L}}_{\mathrm{coll}}\hat{\varrho}(t-2t_{d}).  (26)

The first term in Eq. (26) represents the local dissipation operator of the two nested giant atoms [see Eq. (9)]. The superoperator L̂_ind ϱ̂(t − t d ) [L̂_ind ϱ̂(t − 3t d )] describes the non-local time evolution of giant atom a (b), with the frequency shift γ sin θ 0 [γ sin(3θ 0 )] and the decay rate 2γ cos θ 0 [2γ cos(3θ 0 )], respectively. The superoperator L̂_coll ϱ̂(t − t d ) [L̂_coll ϱ̂(t − 2t d )] denotes the non-local exchanging interaction and collective decay of the nested giant atoms, with the exchanging interaction strength γ sin θ 0 [γ sin(2θ 0 )] and the decay rate 2γ cos θ 0 [2γ cos(2θ 0 )], respectively. In the Markovian limit, the local quantum master equation for the two nested giant atoms is given by Eq. (11), with the corresponding g ab = γ[sin θ 0 + sin(2θ 0 )], δω a = γ sin(3θ 0 ), δω b = γ sin θ 0 , Γ a = 2γ[1 + cos(3θ 0 )], Γ b = 2γ(1 + cos θ 0 ), and Γ coll = 2γ[cos θ 0 + cos(2θ 0 )]. Note that the frequency shifts and the individual decay rates of the two nested giant atoms are not equal. Compared with the other two coupling configurations, we will see that this feature allows the nested coupling to create greater entanglement in the case of the double-excitation initial state and the Markovian limit.
In Figs. 5(a)−5(h) we show the entanglement dynamics of the two nested giant atoms when the time delay γt d takes various values. The initial states are taken as |ψ(0) = |e a |g b and |e a |e b in the left and right columns of Fig. 5, respectively. Similar to the separate-coupling case, the concurrences C (N) eg and C (N) ee are also phase dependent with a period 2π and satisfy the relation C (N) eg(ee) (t, θ 0 ) = C (N) eg(ee) (t, 2π−θ 0 ) for θ 0 ∈ [0, π]. For the initial state |ψ(0) = |e a |g b and θ 0 = 2nπ, we find that C (N) eg has a steady-state value for a finite value of γt d , as shown in Figs. 5(a), 5(c), and 5(e). When the phase shift θ 0 → π, the concurrence C (N) eg is characterized by a very slow initial increase, which is the same as for the separate coupling [see Fig. 3(a)]. However, for the nested coupling, the maximal value of the generated entanglement in Fig. 5(a) can exceed 0.5 at some values of θ 0 . For example, it can be seen that the concurrence quickly reaches a peak value C (N) eg ≈ 0.78 at θ 0 = π/3, followed by a fast decay to zero. To explain this feature, we take θ 0 = π/3 to obtain g ab = √3γ, Γ a = 0, Γ b = 3γ, and Γ coll = 0. In this case, C (N) eg exhibits a fast increase (caused by the non-zero exchanging interaction strength) followed by a fast decay (due to the non-zero individual decay rate of giant atom b).
For the nested coupling configuration, the analytical expression for C (N) eg (t) can be derived from Eqs. (14) and (15) as
C^{(N)}_{eg}(t) = \frac{F(t,\theta_{0})}{4\cos\frac{\theta_{0}}{2}\left[(3\cos\theta_{0}-1)^{2}+4\sin^{2}\theta_{0}\right]},  (27)
where we introduce the function
F(t,\theta_{0}) = e^{-(A+D)\gamma t}\left[(1+e^{A\gamma t})(1-e^{B\gamma t})A + 2ie^{2i\theta_{0}}(1-e^{B\gamma t})(1-e^{B^{*}\gamma t})\sin\theta_{0}\right],  (28)
with
A = \frac{(5-2e^{i\theta_{0}}+e^{2i\theta_{0}})(e^{i\theta_{0}}+e^{2i\theta_{0}})}{2},\quad
B = 8e^{-4i\theta_{0}}\cos^{2}\frac{\theta_{0}}{2}\,(3\cos\theta_{0}+2i\sin\theta_{0}-1),\quad
D = 2+\cos\theta_{0}+\cos(3\theta_{0}).  (29)
By substituting θ 0 = 2nπ into Eq. (27), we have C (N) eg (t) = (1 − e −8γt )/2, which approaches a steady-state value 0.5 at a rate 8γ in the long-time limit. This feature is consistent with the separate-coupling case. When θ 0 = 2nπ, we find that the quantities for the two couplings all become δω a = δω b = 0, g ab = 0, and Γ a = Γ b = Γ coll = 4γ. Figure 5(b) depicts the time evolution of the concurrence C (N) ee and its dependence on the phase shift when γt d = 0. It can be seen that there is no entanglement generation at earlier times, and entanglement suddenly starts to be created at some finite time for some values of θ 0 . In particular, we find that the maximal generated entanglement between the two nested giant atoms initially in the state |ψ(0) = |e a |e b can reach a maximal value C (N) ee ≈ 0.37, which is about one order of magnitude larger than those of both the separate and braided couplings. In the previous discussions, we have shown that the two giant atoms have equal frequency shifts for the separate and braided couplings. However, for the nested giant atoms, their frequency shifts are δω a = γ sin(3θ 0 ) and δω b = γ sin θ 0 , respectively, which are not equal except for θ 0 = nπ and (2n + 1)π/4. For the initial state |ψ(0) = |e a |e b , we also find that, at some finite time, C (N) ee is characterized by a fast initial increase followed by a slow decay when θ 0 → π. To explain this feature, we substitute θ 0 = π + |ε| (such as 0.99π) into Eq. (20) to obtain Γ 2+ ≈ 0.004, Γ 2− ≈ 0.006, Γ +0 ≈ 0.01, and Γ −0 ≈ 0.00005, which satisfy the relation Γ +0 > Γ 2− > Γ 2+ ≫ Γ −0 . Therefore, when we adjust the phase shift to θ 0 → π, the decay process from the state |ψ + to |ψ 0 is faster than that from |ψ 2 to |ψ + . However, the decay process from |ψ − to |ψ 0 is much slower than that from |ψ 2 to |ψ − . The asymmetric decay between the two cascade processes leads to the entanglement generation between the two nested giant atoms.
When the time delay γt d is non-zero, the non-Markovian effect will affect the entanglement dynamics of the two nested giant atoms. For a small value γt d = 0.1, as shown in Fig. 5(c), the overall shape of the concurrence C (N) eg remains roughly unchanged, but both the maximally achievable values of C (N) eg at θ 0 ≠ 2nπ and the steady-state values of C (N) eg at θ 0 = 2nπ decrease. However, for the initial state |ψ(0) = |e a |e b , as shown in Fig. 5(d), we see that the maximally achievable value of C (N) ee is enhanced due to the joint influence of the quantum interference effect and the non-Markovian retardation effect.
By further increasing the time delay to γt d = 1, the non-Markovian retarded effect becomes stronger. In this case, the concurrence C (N) eg is created even when θ 0 = π, as shown in Fig. 5(e). Meanwhile, there appear new peaks in the valleys of C (N) eg . For the initial state |ψ(0) = |e a |e b , C (N) ee is created at later times when θ 0 is near π. In addition, the generation time of entanglement is more delayed compared with the Markovian limit in Fig. 5(b). We also see that the maximally achievable value of C (N) ee [Fig. 5(f)] decreases for the large value γt d = 1. Finally, we also find from Figs. 5(g) and 5(h) that there is no entanglement generation in the limit γt d → ∞.
Therefore, the entanglement generation of the two giant atoms depends on the atomic initial state, coupling configuration, and phase shift, when the time delay is within an appropriate range. This requires that the propagating time of photons between neighboring coupling points cannot be much larger than the lifetime of the giant atoms.
V. CONCLUSION
In conclusion, we have studied the entanglement generation between two giant atoms with three different coupling configurations. Using the Wigner-Weisskopf approach for single coupling points, we have obtained the time-delayed quantum master equation governing the dynamics of the two giant atoms. By neglecting and taking into account the time delay between the coupling points, we have considered the entanglement generation in the Markovian and non-Markovian regimes, respectively. In particular, we have analyzed the entanglement generation when the two giant atoms are initially in two different separable states. It has been shown that the entanglement generation between the two giant atoms depends on the coupling configuration, phase shift, time delay, and atomic initial state. For a certain initial state and coupling configuration of the two giant atoms, the entanglement dynamics is affected by the joint influence of quantum interference and the non-Markovian effect. However, we would like to remark that a time delay within an appropriate range is a significant condition for generating and controlling the entanglement. This work will pave the way for quantum information processing in giant-atom waveguide-QED systems including the non-Markovian effect.
FIG. 2. Scheme of the levels and decays for the collective states of the two giant atoms.
FIG. 3. Concurrences C (S ) eg and C (S ) ee as functions of the scaled evolution time γt and the scaled phase shift θ 0 /π at given values of γt d . In the left and right columns, the giant atoms are initially in the states |ψ(0) = |e a |g b and |e a |e b , respectively. In panels (a,b), (c,d), (e,f), and (g,h), we take the time delay γt d = 0, 0.1, 1, and ∞, respectively.
FIG. 4. Concurrences C (B) eg and C (B) ee as functions of the scaled evolution time γt and the scaled phase shift θ 0 /π when γt d takes various values. In the left and right columns, the giant atoms are initially in the states |ψ(0) = |e a |g b and |e a |e b , respectively. In panels (a,b), (c,d), (e,f), and (g,h), we take the time delay γt d = 0, 0.1, 1, and ∞, respectively.
FIG. 5. Concurrences C (N) eg and C (N) ee as functions of the scaled evolution time γt and the scaled phase shift θ 0 /π at given values of γt d . In the left and right columns, the giant atoms are initially in the states |ψ(0) = |e a |g b and |e a |e b , respectively. In panels (a,b), (c,d), (e,f), and (g,h), we take the time delay γt d = 0, 0.1, 1, and ∞, respectively.
TABLE I. Time nonlocal terms in Eq. (7).
Can quantummechanical description of physical reality be considered complete?. A Einstein, B Podolsky, N Rosen, Phys. Rev. 47777A. Einstein, B. Podolsky, and N. Rosen, Can quantum- mechanical description of physical reality be considered com- plete? Phys. Rev. 47, 777 (1935).
Quantum entanglement. R Horodecki, P Horodecki, M Horodecki, K Horodecki, Rev. Mod. Phys. 81865R. Horodecki, P. Horodecki, M. Horodecki, and K. Horodecki, Quantum entanglement, Rev. Mod. Phys. 81, 865 (2009).
F J Duarte, T S Taylor, Quantum Entanglement Engineering and Applications. LondonIOP PublishingF. J. Duarte and T. S. Taylor, Quantum Entanglement Engineer- ing and Applications (IOP Publishing, London, 2021).
Quantum Cryptography Based on Bell's Theorem. A K Ekert, Phys. Rev. Lett. 67661A. K. Ekert, Quantum Cryptography Based on Bell's Theorem, Phys. Rev. Lett. 67, 661 (1991).
Quantum key distribution using gaussianmodulated coherent states. F Grosshans, G Van Assche, J Wenger, R Brouri, N J Cerf, P Grangier, Nature. 421238F. Grosshans, G. Van Assche, J. Wenger, R. Brouri, N. J. Cerf, and P. Grangier, Quantum key distribution using gaussian- modulated coherent states, Nature (London) 421, 238 (2003).
Communication via One-and Two-Particle Operators on Einstein-Podolsky-Rosen States. C H Bennett, S J Wiesner, Phys. Rev. Lett. 692881C. H. Bennett and S. J. Wiesner, Communication via One-and Two-Particle Operators on Einstein-Podolsky-Rosen States, Phys. Rev. Lett. 69, 2881 (1992).
Teleporting an Unknown Quantum State via Dual Classical and Einstein-Podolsky-Rosen Channels. C H Bennett, G Brassard, C Crépeau, R Jozsa, A Peres, W K Wootters, Phys. Rev. Lett. 701895C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, and W. K. Wootters, Teleporting an Unknown Quantum State via Dual Classical and Einstein-Podolsky-Rosen Channels, Phys. Rev. Lett. 70, 1895 (1993).
The Physical Implementation of Quantum Computation. David P Divincenzo, Fortschr. Phys. 48771David P. DiVincenzo, The Physical Implementation of Quantum Computation, Fortschr. Phys. 48, 771 (2000).
Multiphoton entanglement and interferometry. J.-W Pan, Z.-B Chen, C.-Y Lu, H Weinfurter, A Zeilinger, M Żukowski, Rev. Mod. Phys. 84777J.-W. Pan, Z.-B. Chen, C.-Y. Lu, H. Weinfurter, A. Zeilinger, and M.Żukowski, Multiphoton entanglement and interferome- try, Rev. Mod. Phys. 84, 777 (2012).
Quantum dynamics of single trapped ions. D Leibfried, R Blatt, C Monroe, D Wineland, Rev. Mod. Phys. 75281D. Leibfried, R. Blatt, C. Monroe, and D. Wineland, Quan- tum dynamics of single trapped ions, Rev. Mod. Phys. 75, 281 (2003).
Observation of entanglement between a single trapped atom and a single photon. B B Blinov, D L Moehring, L M Duan, C Monroe, Nature. 428153B. B. Blinov, D. L. Moehring, L. M. Duan, and C. Monroe, Observation of entanglement between a single trapped atom and a single photon, Nature (London) 428, 153 (2004).
Colloquium: Manipulating quantum entanglement with atoms and photons in a cavity. J M Raimond, M Brune, S Haroche, Rev. Mod. Phys. 73565J. M. Raimond, M. Brune, and S. Haroche, Colloquium: Ma- nipulating quantum entanglement with atoms and photons in a cavity, Rev. Mod. Phys. 73, 565 (2001).
Cavity quantum electrodynamics. H Walther, B T H Varcoe, B G Englert, T Becker, Rep. Prog. Phys. 691325H. Walther, B. T. H. Varcoe, B. G. Englert, and T. Becker, Cavity quantum electrodynamics, Rep. Prog. Phys. 69, 1325 (2006).
Circuit quantum electrodynamics. A Blais, A L Grimsmo, S M Girvin, A Wallraff, Rev. Mod. Phys. 9325005A. Blais, A. L. Grimsmo, S. M. Girvin, and A. Wallraff, Circuit quantum electrodynamics, Rev. Mod. Phys. 93, 025005 (2021).
Cavity quantum electrodynamics for superconducting electrical circuits: An architecture for quantum computation. A Blais, R.-S Huang, A Wallraff, S M Girvin, R J Schoelkopf, Phys. Rev. A. 6962320A. Blais, R.-S. Huang, A. Wallraff, S. M. Girvin, and R. J. Schoelkopf, Cavity quantum electrodynamics for supercon- ducting electrical circuits: An architecture for quantum compu- tation, Phys. Rev. A 69, 062320 (2004).
Strong coupling of a single photon to a superconducting qubit using circuit quantum electrodynamics. A Wallraff, D I Schuster, A Blais, L Frunzio, R.-S Huang, J Majer, S Kumar, S M Girvin, R J Schoelkopf, Nature. 431162A. Wallraff, D. I. Schuster, A. Blais, L. Frunzio, R.-S. Huang, J. Majer, S. Kumar, S. M. Girvin, and R. J. Schoelkopf, Strong coupling of a single photon to a superconducting qubit using circuit quantum electrodynamics, Nature (London) 431, 162 (2004).
Colloquium: Strongly interacting photons in one-dimensional continuum. D Roy, C M Wilson, O Firstenberg, Rev. Mod. Phys. 8921001D. Roy, C. M. Wilson, and O. Firstenberg, Colloquium: Strongly interacting photons in one-dimensional continuum, Rev. Mod. Phys. 89, 021001 (2017).
Microwave photonics with superconducting quantum circuits. X Gu, A F Kockum, A Miranowicz, Y Liu, F Nori, Phys. Rep. 7181X. Gu, A. F. Kockum, A. Miranowicz, Y.-x. Liu, and F. Nori, Microwave photonics with superconducting quantum circuits, Phys. Rep. 718, 1 (2017).
Waveguide quantum electrodynamics: collective radiance and photon-photon correlations. A S Sheremet, M I Petrov, I V Iorsh, A V Poshakinskiy, A N Poddubny, Rev. Mod. Phys. 9515002A. S. Sheremet, M. I. Petrov, I. V. Iorsh, A. V. Poshakinskiy, and A. N. Poddubny, Waveguide quantum electrodynamics: collec- tive radiance and photon-photon correlations, Rev. Mod. Phys. 95, 015002 (2023).
The quantum internet. H J Kimble, Nature. 4531023H. J. Kimble, The quantum internet, Nature (London) 453, 1023 (2008).
Quantum Optics with Surface Plasmons. D E Chang, A S Sørensen, P R Hemmer, M D Lukin, Phys. Rev. Lett. 9753002D. E. Chang, A. S. Sørensen, P. R. Hemmer, and M. D. Lukin, Quantum Optics with Surface Plasmons, Phys. Rev. Lett. 97, 053002 (2006).
Strongly Correlated Two-Photon Transport in a One-Dimensional Waveguide Coupled to a Two-Level System. J.-T Shen, S Fan, Phys. Rev. Lett. 98153003J.-T. Shen and S. Fan, Strongly Correlated Two-Photon Trans- port in a One-Dimensional Waveguide Coupled to a Two-Level System, Phys. Rev. Lett. 98, 153003 (2007).
Correlated two-photon transport in a one-dimensional waveguide side-coupled to a nonlinear cavity. J.-Q Liao, C K Law, Phys. Rev. A. 8253836J.-Q. Liao and C. K. Law, Correlated two-photon transport in a one-dimensional waveguide side-coupled to a nonlinear cavity, Phys. Rev. A 82, 053836 (2010).
Two-Photon Scattering by a Driven Three-Level Emitter in a One-Dimensional Waveguide and Electromagnetically Induced Transparency. D Roy, Phys. Rev. Lett. 10653601D. Roy, Two-Photon Scattering by a Driven Three-Level Emit- ter in a One-Dimensional Waveguide and Electromagnetically Induced Transparency, Phys. Rev. Lett. 106, 053601 (2011).
Stimulated Emission from a Single Excited Atom in a Waveguide. E Rephaeli, S Fan, Phys. Rev. Lett. 108143602E. Rephaeli and S. Fan, Stimulated Emission from a Sin- gle Excited Atom in a Waveguide, Phys. Rev. Lett. 108, 143602(2012).
Photonic Circuits with Time Delays and Quantum Feedback. H Pichler, P Zoller, Phys. Rev. Lett. 11693601H. Pichler and P. Zoller, Photonic Circuits with Time Delays and Quantum Feedback, Phys. Rev. Lett. 116, 093601 (2016).
Colloquium: Quantum matter built from nanoscopic lattices of atoms and photons. D E Chang, J S Douglas, A González-Tudela, C.-L Hung, H J Kimble, Rev. Mod. Phys. 9031002D. E. Chang, J. S. Douglas, A. González-Tudela, C.-L. Hung, and H. J. Kimble, Colloquium: Quantum matter built from nanoscopic lattices of atoms and photons, Rev. Mod. Phys. 90, 031002 (2018).
Non-Markovian Collective Emission from Macroscopically Separated Emitters. K Sinha, P Meystre, E A Goldschmidt, F K Fatemi, S L Rolston, P Solano, Phys. Rev. Lett. 12443603K. Sinha, P. Meystre, E. A. Goldschmidt, F. K. Fatemi, S. L. Rolston, and P. Solano, Non-Markovian Collective Emission from Macroscopically Separated Emitters, Phys. Rev. Lett. 124, 043603 (2020).
D F Walls, G J Milburn, Quantum Optics. BerlinSpringer2nd ed.D. F. Walls and G. J. Milburn, Quantum Optics, 2nd ed. (Springer, Berlin, 2008).
Quantum optics with giant atoms-the first five years. A F Kockum, Mathematics for Industry. SingaporeSpringerA. F. Kockum, Quantum optics with giant atoms-the first five years, in Mathematics for Industry (Springer Singapore, Singa- pore, 2021), pp. 125-146.
Designing frequency-dependent relaxation rates and Lamb shifts for a giant artificial atom. A F Kockum, P Delsing, G Johansson, Phys. Rev. A. 9013837A. F. Kockum, P. Delsing, and G. Johansson, Designing frequency-dependent relaxation rates and Lamb shifts for a gi- ant artificial atom, Phys. Rev. A 90, 013837 (2014).
Decoherence-Free Interaction between Giant Atoms in Waveguide Quantum Electrodynamics. A F Kockum, G Johansson, F Nori, Phys. Rev. Lett. 120140404A. F. Kockum, G. Johansson, and F. Nori, Decoherence-Free Interaction between Giant Atoms in Waveguide Quantum Elec- trodynamics, Phys. Rev. Lett. 120, 140404 (2018).
Waveguide quantum electrodynamics with superconducting artificial giant atoms. B Kannan, M J Ruckriegel, D L Campbell, A F Kockum, J Braumüller, D K Kim, M Kjaergaard, P Krantz, A Melville, B M Niedzielski, A Vepsäläinen, R Winik, J L Yoder, F Nori, T P Orlando, S Gustavsson, W D Oliver, Nature. 583775B. Kannan, M. J. Ruckriegel, D. L. Campbell, A. F. Kockum, J. Braumüller, D. K. Kim, M. Kjaergaard, P. Krantz, A. Melville, B. M. Niedzielski, A. Vepsäläinen, R. Winik, J. L. Yoder, F. Nori, T. P. Orlando, S. Gustavsson, and W. D. Oliver, Waveg- uide quantum electrodynamics with superconducting artificial giant atoms, Nature (London) 583, 775 (2020).
Mechanism of decoherence-free coupling between giant atoms. A Carollo, D Cilluffo, F Ciccarello, Phys. Rev. Research. 243184A. Carollo, D. Cilluffo, and F. Ciccarello, Mechanism of decoherence-free coupling between giant atoms, Phys. Rev. Re- search 2, 043184 (2020).
Chiral quantum optics with giant atoms. A Soro, A F Kockum, Phys. Rev. A. 10523712A. Soro and A. F. Kockum, Chiral quantum optics with giant atoms, Phys. Rev. A 105, 023712 (2022).
Oscillating bound states for a giant atom. L Guo, A F Kockum, F Marquardt, G Johansson, Phys. Rev. Research. 243014L. Guo, A. F. Kockum, F. Marquardt, and G. Johansson, Os- cillating bound states for a giant atom, Phys. Rev. Research 2, 043014 (2020).
Tunable Chiral Bound States with Giant Atoms. X Wang, T Liu, A F Kockum, H.-R Li, F Nori, Phys. Rev. Lett. 12643602X. Wang, T. Liu, A. F. Kockum, H.-R. Li, and F. Nori, Tunable Chiral Bound States with Giant Atoms, Phys. Rev. Lett. 126, 043602 (2021).
Qubitphoton bound states in topological waveguides with long-range hoppings. C Vega, M Bello, D Porras, A González-Tudela, Phys. Rev. A. 10453522C. Vega, M. Bello, D. Porras, and A. González-Tudela, Qubit- photon bound states in topological waveguides with long-range hoppings, Phys. Rev. A 104, 053522 (2021).
Topology and retardation effect of a giant atom in a topological waveguide. W Cheng, Z Wang, Y.-X Liu, Phys. Rev. A. 10633522W. Cheng, Z. Wang, and Y.-X. Liu, Topology and retardation effect of a giant atom in a topological waveguide, Phys. Rev. A 106, 033522 (2022).
Oscillating bound states in non-Markovian photonic lattices. K H Lim, W K Mok, L C Kwek, Phys. Rev. A. 10723716K. H. Lim, W. K. Mok, and L. C. Kwek, Oscillating bound states in non-Markovian photonic lattices, Phys. Rev. A 107, 023716 (2023).
Giant acoustic atom: A single quantum system with a deterministic time delay. L Guo, A Grimsmo, A F Kockum, M Pletyukhov, G Johansson, Phys. Rev. A. 9553821L. Guo, A. Grimsmo, A. F. Kockum, M. Pletyukhov, and G. Johansson, Giant acoustic atom: A single quantum system with a deterministic time delay, Phys. Rev. A 95, 053821 (2017).
Nonexponential decay of a giant artificial atom. G Andersson, B Suri, L Guo, T Aref, P Delsing, Nat. Phys. 151123G. Andersson, B. Suri, L. Guo, T. Aref, and P. Delsing, Nonex- ponential decay of a giant artificial atom, Nat. Phys. 15, 1123 (2019).
Photonic simulation of giant atom decay. S Longhi, Opt. Lett. 453017S. Longhi, Photonic simulation of giant atom decay, Opt. Lett. 45, 3017 (2020).
Single-photon nonreciprocal excitation transfer with non-Markovian retarded effects. L Du, M.-R Cai, J.-H Wu, Z Wang, Y Li, Phys. Rev. A. 10353701L. Du, M.-R. Cai, J.-H. Wu, Z. Wang, and Y. Li, Single-photon nonreciprocal excitation transfer with non-Markovian retarded effects, Phys. Rev. A 103, 053701 (2021).
Giant atoms with timedependent couplings. L Du, Y.-T Chen, Y Zhang, Y Li, Phys. Rev. Research. 423198L. Du, Y.-T. Chen, Y. Zhang, and Y. Li, Giant atoms with time- dependent couplings, Phys. Rev. Research 4, 023198 (2022).
Non-Markovian disentanglement dynamics in double-giant-atom waveguide-QED systems. X.-L Yin, W.-B Luo, J.-Q Liao, Phys. Rev. A. 10663703X.-L. Yin, W.-B. Luo, J.-Q. Liao, Non-Markovian disentangle- ment dynamics in double-giant-atom waveguide-QED systems, Phys. Rev. A 106, 063703 (2022).
Collective radiance of giant atoms in non-Markovian regime. Q.-Y Qiu, Y Wu, X.-Y Lü, Sci. China Phys. Mech. Astron. 66224212Q.-Y. Qiu, Y. Wu, and X.-Y. Lü, Collective radiance of giant atoms in non-Markovian regime, Sci. China Phys. Mech. As- tron. 66, 224212 (2023).
Single-photon scattering and bound states in an atom-waveguide system with two or multiple coupling points. W Zhao, Z Wang, Phys. Rev. A. 10153855W. Zhao and Z. Wang, Single-photon scattering and bound states in an atom-waveguide system with two or multiple cou- pling points, Phys. Rev. A 101, 053855 (2020).
Single-photon frequency conversion via a giant Λ-type atom. L Du, Y Li, Phys. Rev. A. 10423712L. Du and Y. Li, Single-photon frequency conversion via a giant Λ-type atom, Phys. Rev. A 104, 023712 (2021).
Coherent single-photon scattering spectra for a giant-atom waveguide-QED system beyond the dipole approximation. Q Y Cai, W Z Jia, Phys. Rev. A. 10433710Q. Y. Cai and W. Z. Jia, Coherent single-photon scattering spec- tra for a giant-atom waveguide-QED system beyond the dipole approximation, Phys. Rev. A 104, 033710 (2021).
Single-photon scattering in a giant-molecule waveguide-QED system. X.-L Yin, Y.-H Liu, J.-F Huang, J.-Q Liao, Phys. Rev. A. 10613715X.-L. Yin, Y.-H. Liu, J.-F. Huang, and J.-Q. Liao, Single-photon scattering in a giant-molecule waveguide-QED system, Phys. Rev. A 106, 013715 (2022).
Entanglement of Two Qubits Mediated by One-Dimensional Plasmonic Waveguides. A González-Tudela, D Martin-Cano, E Moreno, L Martin-Moreno, C Tejedor, F J Garcia-Vidal, Phys. Rev. Lett. 10620501A. González-Tudela, D. Martin-Cano, E. Moreno, L. Martin- Moreno, C. Tejedor, and F. J. Garcia-Vidal, Entanglement of Two Qubits Mediated by One-Dimensional Plasmonic Waveg- uides, Phys. Rev. Lett. 106, 020501 (2011).
Persistent Quantum Beats and Long-Distance Entanglement from Waveguide-Mediated Interactions. H Zheng, H U Baranger, Phys. Rev. Lett. 110113601H. Zheng and H. U. Baranger, Persistent Quantum Beats and Long-Distance Entanglement from Waveguide-Mediated Inter- actions, Phys. Rev. Lett. 110, 113601 (2013).
Mesoscopic Entanglement Induced by Spontaneous Emission in Solid-State Quantum Optics. A González-Tudela, D Porras, Phys. Rev. Lett. 11080502A. González-Tudela and D. Porras, Mesoscopic Entanglement Induced by Spontaneous Emission in Solid-State Quantum Op- tics, Phys. Rev. Lett. 110, 080502 (2013).
Generation, manipulation, and detection of two-qubit entanglement in waveguide QED. C Gonzalez-Ballestero, E Moreno, F J G Vidal, Phys. Rev. A. 8942328C. Gonzalez-Ballestero, E. Moreno, and F. J. G. Vidal, Genera- tion, manipulation, and detection of two-qubit entanglement in waveguide QED, Phys. Rev. A 89, 042328 (2014).
Bound states and entanglement generation in waveguide quantum electrodynamics. P Facchi, M S Kim, S Pascazio, F V Pepe, D Pomarico, T Tufarelli, Phys. Rev. A. 9443839P. Facchi, M. S. Kim, S. Pascazio, F. V. Pepe, D. Pomarico, and T. Tufarelli, Bound states and entanglement generation in waveguide quantum electrodynamics, Phys. Rev. A 94, 043839 (2016).
Quantum Spin Dimers from Chiral Dissipation in Cold-Atom Chains. T Ramos, H Pichler, A J Daley, P Zoller, Phys. Rev. Lett. 113237203T. Ramos, H. Pichler, A. J. Daley, and P. Zoller, Quantum Spin Dimers from Chiral Dissipation in Cold-Atom Chains, Phys. Rev. Lett. 113, 237203 (2014).
Chiral route to spontaneous entanglement generation. C Gonzalez-Ballestero, A Gonzalez-Tudela, F J Garciavidal, E Moreno, Phys. Rev. B. 92155304C. Gonzalez-Ballestero, A. Gonzalez-Tudela, F. J. GarciaVidal, and E. Moreno, Chiral route to spontaneous entanglement gen- eration, Phys. Rev. B 92, 155304 (2015).
Multiqubit entanglement in bidirectional-chiral-waveguide QED. I M Mirza, J C Schotland, Phys. Rev. A. 9412302I. M. Mirza and J. C. Schotland, Multiqubit entanglement in bidirectional-chiral-waveguide QED, Phys. Rev. A 94, 012302 (2016)
Microresonators enhancing long-distance dynamical entanglement generation in chiral quantum networks. W K Mok, J B You, L C Kwek, D Aghamalyan, Phys. Rev. A. 10153861W. K. Mok, J. B. You, L. C. Kwek, and D. Aghamalyan, Microresonators enhancing long-distance dynamical entangle- ment generation in chiral quantum networks, Phys. Rev. A 101, 053861 (2020).
Entanglement preparation and nonreciprocal excitation evolution in giant atoms by controllable dissipation and coupling. H Yu, Z Wang, J.-H Wu, Phys. Rev. A. 10413720H. Yu, Z. Wang, and J.-H. Wu, Entanglement preparation and nonreciprocal excitation evolution in giant atoms by con- trollable dissipation and coupling, Phys. Rev. A 104, 013720 (2021).
Generation of Maximally Entangled Long-Lived States with Giant Atoms in a Waveguide. A C Santos, R Bachelard, Phys. Rev. Lett. 13053601A. C. Santos and R. Bachelard, Generation of Maximally En- tangled Long-Lived States with Giant Atoms in a Waveguide, Phys. Rev. Lett. 130, 053601 (2023).
M O Scully, M S Zubairy, Quantum Optics. CambridgeCambridge University PressM. O. Scully and M. S. Zubairy, Quantum Optics (Cambridge University Press, Cambridge, 1997).
H.-P Breuer, F Petruccione, The Theory of Open Quantum Systems. New YorkOxford University PressH.-P. Breuer and F. Petruccione, The Theory of Open Quantum Systems (Oxford University Press, New York, 2002).
Coherent Single Photon Transport in a One-Dimensional Waveguide Coupled with Superconducting Quantum Bits. J.-T Shen, S Fan, Phys. Rev. Lett. 95213001J.-T. Shen and S. Fan, Coherent Single Photon Transport in a One-Dimensional Waveguide Coupled with Superconducting Quantum Bits, Phys. Rev. Lett. 95, 213001 (2005).
Theory of single-photon transport in a single-mode waveguide. I. Coupling to a cavity containing a two-level atom. J.-T Shen, S Fan, Phys. Rev. A. 7923837J.-T. Shen and S. Fan, Theory of single-photon transport in a single-mode waveguide. I. Coupling to a cavity containing a two-level atom, Phys. Rev. A 79, 023837 (2009).
Spatial-nonlocality-induced non-Markovian electromagnetically induced transparency in a single giant atom. Y T Zhu, S Xue, R B Wu, W L Li, Z H Peng, M Jiang, Phys. Rev. A. 10643710Y. T. Zhu, S. Xue, R. B. Wu, W. L. Li, Z. H. Peng, and M. Jiang, Spatial-nonlocality-induced non-Markovian electromag- netically induced transparency in a single giant atom, Phys. Rev. A 106, 043710 (2022).
Microwave quantum optics with an artificial atom in one-dimensional open space. I.-C Hoi, C M Wilson, G Johansson, J Lindkvist, B Peropadre, T Palomaki, P Delsing, New J. Phys. 1525011I.-C. Hoi, C. M. Wilson, G. Johansson, J. Lindkvist, B. Per- opadre, T. Palomaki, and P. Delsing, Microwave quantum op- tics with an artificial atom in one-dimensional open space, New J. Phys. 15, 025011 (2013).
Coherent control of a multi-qubit dark state in waveguide quantum electrodynamics. M Zanner, T Orell, C M F Schneider, R Albert, S Oleschko, M L Juan, M Silveri, G Kirchmair, Nat. Phys. 18538M. Zanner, T. Orell, C. M. F. Schneider, R. Albert, S. Oleschko, M. L. Juan, M. Silveri, and G. Kirchmair, Coherent control of a multi-qubit dark state in waveguide quantum electrodynamics, Nat. Phys. 18, 538 (2022).
Entanglement of Formation of an Arbitrary State of Two Qubits. W K Wootters, Phys. Rev. Lett. 802245W. K. Wootters, Entanglement of Formation of an Arbitrary State of Two Qubits, Phys. Rev. Lett. 80, 2245 (1998).
Delayed sudden birth of entanglement. Z Ficek, R Tanaś, Phys. Rev. A. 7754301Z. Ficek and R. Tanaś, Delayed sudden birth of entanglement, Phys. Rev. A 77, 054301 (2008).
Retamal, Sudden Birth versus Sudden Death of Entanglement in Multipartite Systems. C E Lopez, G Romero, F Lastra, E Solano, J , Phys. Rev. Lett. 10180503C. E. Lopez, G. Romero, F. Lastra, E. Solano, and J. C. Re- tamal, Sudden Birth versus Sudden Death of Entanglement in Multipartite Systems, Phys. Rev. Lett. 101, 080503 (2008).
Sudden death and sudden birth of entanglement in common structured reservoirs. L Mazzola, S Maniscalco, J Piilo, K.-A Suominen, B M Garraway, Phys. Rev. A. 7942302L. Mazzola, S. Maniscalco, J. Piilo, K.-A. Suominen, and B. M. Garraway, Sudden death and sudden birth of entanglement in common structured reservoirs, Phys. Rev. A 79, 042302 (2009).
| []
|
[
"Iterative Visual Reasoning Beyond Convolutions",
"Iterative Visual Reasoning Beyond Convolutions"
]
| [
"Xinlei Chen \nCarnegie Mellon University\n\n",
"Li-Jia Li \nGoogle\n",
"Li Fei-Fei \nGoogle\n",
"Abhinav Gupta \nCarnegie Mellon University\n\n"
]
| [
"Carnegie Mellon University\n",
"Google",
"Google",
"Carnegie Mellon University\n"
]
| []
| We present a novel framework for iterative visual reasoning. Our framework goes beyond current recognition systems that lack the capability to reason beyond stack of convolutions. The framework consists of two core modules: a local module that uses spatial memory [4] to store previous beliefs with parallel updates; and a global graph-reasoning module. Our graph module has three components: a) a knowledge graph where we represent classes as nodes and build edges to encode different types of semantic relationships between them; b) a region graph of the current image where regions in the image are nodes and spatial relationships between these regions are edges; c) an assignment graph that assigns regions to classes. Both the local module and the global module roll-out iteratively and cross-feed predictions to each other to refine estimates. The final predictions are made by combining the best of both modules with an attention mechanism. We show strong performance over plain ConvNets, e.g. achieving an 8.4% absolute improvement on ADE [55] measured by per-class average precision. Analysis also shows that the framework is resilient to missing regions for reasoning. | 10.1109/cvpr.2018.00756 | [
"https://arxiv.org/pdf/1803.11189v1.pdf"
]
| 4,408,847 | 1803.11189 | 23f4be489bccb28601acac9776a7440100aa6ddd |
Iterative Visual Reasoning Beyond Convolutions
Xinlei Chen
Carnegie Mellon University
Li-Jia Li
Google
Li Fei-Fei
Google
Abhinav Gupta
Carnegie Mellon University
Iterative Visual Reasoning Beyond Convolutions
We present a novel framework for iterative visual reasoning. Our framework goes beyond current recognition systems that lack the capability to reason beyond stack of convolutions. The framework consists of two core modules: a local module that uses spatial memory [4] to store previous beliefs with parallel updates; and a global graph-reasoning module. Our graph module has three components: a) a knowledge graph where we represent classes as nodes and build edges to encode different types of semantic relationships between them; b) a region graph of the current image where regions in the image are nodes and spatial relationships between these regions are edges; c) an assignment graph that assigns regions to classes. Both the local module and the global module roll-out iteratively and cross-feed predictions to each other to refine estimates. The final predictions are made by combining the best of both modules with an attention mechanism. We show strong performance over plain ConvNets, e.g. achieving an 8.4% absolute improvement on ADE [55] measured by per-class average precision. Analysis also shows that the framework is resilient to missing regions for reasoning.
Introduction
In recent years, we have made significant advances in standard recognition tasks such as image classification [16], detection [37] or segmentation [3]. Most of these gains are a result of using feed-forward end-to-end learned ConvNet models. Unlike humans where visual reasoning about the space and semantics is crucial [1], our current visual systems lack any context reasoning beyond convolutions with large receptive fields. Therefore, a critical question is how do we incorporate both spatial and semantic reasoning as we build next-generation vision systems.
Our goal is to build a system that can not only extract and utilize hierarchy of convolutional features, but also improve its estimates via spatial and semantic relationships. But what are spatial and semantic relationships and how can they be used to improve recognition? Take a look at Fig. 1. An example of spatial reasoning (top-left) would be: if three regions out of four in a line are "window", then the fourth is also likely to be "window". An example of semantic reasoning (bottom-right) would be to recognize "school bus" even if we have seen few or no examples of it - just given examples of "bus" and knowing their connections. Finally, an example of spatial-semantic reasoning could be: recognition of a "car" on road should help in recognizing the "person" inside "driving" the "car".
Figure 1. Current recognition systems lack the reasoning power beyond convolutions with large receptive fields, whereas humans can explore the rich space of spatial and semantic relationships for reasoning: e.g. inferring the fourth "window" even with occlusion, or the "person" who drives the "car". To close this gap, we present a generic framework that also uses relationships to iteratively reason and build up estimates.
A key recipe to reasoning with relationships is to iteratively build up estimates. Recently, there have been efforts to incorporate such reasoning via top-down modules [38,48] or using explicit memories [51,32]. In the case of top-down modules, high-level features which have class-based information can be used in conjunction with low-level features to improve recognition performance. An alternative architecture is to use explicit memory. For example, Chen & Gupta [4] performs sequential object detection, where a spatial memory is used to store previously detected objects, leveraging the power of ConvNets for extracting dense context patterns beneficial for follow-up detections.
However, there are two problems with these approaches: a) both approaches use stack of convolutions to perform local pixel-level reasoning [11], which can lack a global reasoning power that also allows regions farther away to directly communicate information; b) more importantly, both approaches assume enough examples of relationships in the training data -so that the model can learn them from scratch, but as the relationships grow exponentially with increasing number of classes, there is not always enough data. A lot of semantic reasoning requires learning from few or no examples [14]. Therefore, we need ways to exploit additional structured information for visual reasoning.
In this paper, we put forward a generic framework for both spatial and semantic reasoning. Different from current approaches that are just relying on convolutions, our framework can also learn from structured information in the form of knowledge bases [5,56] for visual recognition. The core of our algorithm consists of two modules: the local module, based on spatial memory [4], performs pixel-level reasoning using ConvNets. We make major improvements on efficiency by parallel memory updates. Additionally, we introduce a global module for reasoning beyond local regions. In the global module, reasoning is based on a graph structure. It has three components: a) a knowledge graph where we represent classes as nodes and build edges to encode different types of semantic relationships; b) a region graph of the current image where regions in the image are nodes and spatial relationships between these regions are edges; c) an assignment graph that assigns regions to classes. Taking advantage of such a structure, we develop a reasoning module specifically designed to pass information on this graph. Both the local module and the global module roll-out iteratively and cross-feed predictions to each other in order to refine estimates. Note that, local and global reasoning are not isolated: a good image understanding is usually a compromise between background knowledge learned a priori and image-specific observations. Therefore, our full pipeline joins force of the two modules by an attention [3] mechanism allowing the model to rely on the most relevant features when making the final predictions.
We show strong performance over plain ConvNets using our framework. For example, we can achieve 8.4% absolute improvements on ADE [55] measured by per-class average precision, whereas simply making the network deeper only helps ∼1%.
Related Work
Visual Knowledge Base. Whereas past five years in computer vision will probably be remembered as the successful resurgence of neural networks, acquiring visual knowledge at a large scale -the simplest form being labeled instances of objects [39,30], scenes [55], relationships [25] etc.deserves at least half the credit, since ConvNets hinge on large datasets [44]. Apart from providing labels using crowd-sourcing, attempts have also been made to accumulate structured knowledge (e.g. relationships [5], ngrams [10]) automatically from the web. However, these works fixate on building knowledge bases rather than using knowledge for reasoning. Our framework, while being more general, is along the line of research that applies visual knowledge base to end tasks, such as affordances [56], image classification [32], or question answering [49]. Context Modeling. Modeling context, or the interplay between scenes, objects and parts is one of the central problems in computer vision. While various previous work (e.g. scene-level reasoning [46], attributes [13,36], structured prediction [24,9,47], relationship graph [21,31,52]) has approached this problem from different angles, the breakthrough comes from the idea of feature learning with Con-vNets [16]. On the surface, such models hardly use any explicit context module for reasoning, but it is generally accepted that ConvNets are extremely effective in aggregating local pixel-to-level context through its ever-growing receptive fields [54]. Even the most recent developments such as top-down module [50,29,43], pairwise module [40], iterative feedback [48,34,2], attention [53], and memory [51,4] are motivated to leverage such power and depend on variants of convolutions for reasoning. Our work takes an important next step beyond those approaches in that it also incorporates learning from structured visual knowledge bases directly to reason with spatial and semantic relationships. Relational Reasoning. The earliest form of reasoning in artificial intelligence dates back to symbolic approaches [33], where relations between abstract symbols are defined by the language of mathematics and logic, and reasoning takes place by deduction, abduction [18], etc. However, symbols need to be grounded [15] before such systems are practically useful. Modern approaches, such as path ranking algorithm [26], rely on statistical learning to extract useful patterns to perform relational reasoning on structured knowledge bases. As an active research area, there are recent works also applying neural networks to the graph structured data [42,17,27,23,35,7,32], or attempting to regularize the output of networks with relationships [8] and knowledge bases [20]. However, we believe for visual data, reasoning should be both local and global: discarding the twodimensional image structure is neither efficient nor effective for tasks that involve regions.
Reasoning Framework
In this section we build up our reasoning framework. Besides plain predictions p_0 from a ConvNet, it consists of two core modules that reason to predict. The first one, the local module, uses a spatial memory to store previous beliefs with parallel updates, and still falls within the regime of convolution-based reasoning (Sec. 3.1). Beyond convolutions, we present our key contribution - a global module that reasons directly between regions and classes represented as nodes in a graph (Sec. 3.2). Both modules build up estimation iteratively (Sec. 3.3), with beliefs cross-fed to each other. Finally, taking advantage of both local and global, we combine predictions from all iterations with an attention mechanism (Sec. 3.4) and train the model with sample re-weighting (Sec. 3.5) that focuses on hard examples (see Fig. 2).
Reasoning with Convolutions
Our first building block, the local module, is inspired from [4]. At a high level, the idea is to use a spatial memory S to store previously detected objects at the very location they have been found. S is a tensor with three dimensions. The first two, height H and width W , correspond to the reduced size (1/16) of the image. The third one, depth D (=512), makes each cell of the memory c a vector that stores potentially useful information at that location.
S is updated with both high-level and mid-level features. For high-level, information regarding the estimated class label is stored. However, just knowing the class may not be ideal -more details about the shape, pose etc. can also be useful for other objects. For example, it would be nice to know the pose of a "person" playing tennis to recognize the "racket". In this paper, we use the logits f before soft-max activation, in conjunction with feature maps from a bottom convolutional layer h to feed-in the memory.
Given an image region r to update, we first crop the corresponding features from the bottom layer, and resize it to a predefined square (7×7) with bi-linear interpolation as h. Since high-level feature f is a vector covering the entire region, we append it to all the 49 locations. Two 1×1 convolutions are used to fuse the information [4] and form our input features f r for r. The same region in the memory S is also cropped and resized to 7×7, denoted as s r . After this alignment, we use a convolutional gated recurrent unit (GRU) [6] to write the memory:
s'_r = u \circ s_r + (1 - u) \circ \sigma\left(W_f f_r + W_s (z \circ s_r) + b\right), (1)
where s'_r is the updated memory for r, u is the update gate, z is the reset gate, W_f, W_s and b are convolutional weights and biases, and \circ is the entry-wise product. σ(·) is an activation function. After the update, s'_r is placed back into S with another crop and resize operation 1 . Parallel Updates. Previous work [4] made sequential updates to the memory. However, sequential inference is inefficient and GPU-intensive - limiting it to only ten outputs per image [4]. In this paper we propose to update the regions in parallel as an approximation. In overlapping cases, a cell can be covered multiple times by different regions. When placing the regions back into S, we also calculate a weight matrix Γ where each entry γ_{r,c} ∈ [0, 1] keeps track of how much a region r has contributed to a memory cell c: 1 meaning the cell is fully covered by the region, 0 meaning not covered. The final value of an updated cell is the weighted average over all covering regions.
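To make the mechanics concrete, here is a minimal NumPy sketch of a gated write followed by the coverage-weighted parallel merge. All function names are our own, the gates u and z are passed in pre-computed (in the paper they come from convolutions on the inputs), and tanh stands in for the unspecified activation σ(·); this is an illustration of the idea, not the authors' implementation.

```python
import numpy as np

def gru_write(s_r, f_r, u, z, W_f, W_s, b):
    """Gated update of one cropped 7x7 memory patch s_r given fused input
    features f_r, following the structure of Eq. (1)."""
    return u * s_r + (1.0 - u) * np.tanh(f_r @ W_f + (z * s_r) @ W_s + b)

def merge_parallel(S, placed_updates, coverages):
    """Approximate parallel write-back: every cell of the memory S becomes the
    coverage-weighted average of the region updates that touch it; uncovered
    cells keep their old values."""
    num = np.zeros_like(S)
    den = np.zeros_like(S)
    for s_new, gamma in zip(placed_updates, coverages):  # gamma in [0,1], (H, W, 1)
        num += gamma * s_new
        den += gamma
    covered = den > 0
    S_out = S.copy()
    S_out[covered] = num[covered] / den[covered]
    return S_out
```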
The actual reasoning module, a ConvNet C of three 3×3 convolutions and two 4096-D fully-connected layers, takes S as the input, and builds connections within the local window of its receptive fields to perform prediction. Since the two-dimensional image structure and the location information is preserved in S, such an architecture is particularly useful for relationships with spatial reasoning.
Beyond Convolutions
Our second module goes beyond local regions and convolutions for global reasoning. Here the meaning of global is two-fold. First is spatial, that is, we want to let the regions farther away to directly communicate information with each other, not confined by the receptive fields of the reasoning module C. Second is semantic, meaning we want to take advantage of visual knowledge bases, which can provide relationships between classes that are globally true (i.e. commonsense) across images. To achieve both types of reasoning, we build a graph G = (N , E), where N and E denote node sets and edge sets, respectively. Two types of nodes are defined in N : region nodes N r for R regions, and class nodes N c for C classes.
As for E, three groups of edges are defined between nodes. First, for N_r, a spatial graph is used to encode spatial relationships between regions (E_{r→r}).

Figure 3. Illustration of directly passing information on a graph with multiple edge types. Here four nodes are linked with two edge types. Each node is represented as an input feature vector m_i (aggregated as M). A weight matrix W_j is learned for edge type j to transform the inputs. Then the adjacency matrix A_j is applied to pass information to linked nodes. Finally, the output G is generated by accumulating all edge types and applying an activation function.

Multiple types of edges are designed to characterize the relative locations. We begin with basic relationships such as "left/right" and "top/bottom", and define edge weights by measuring the pixel-level distance between the two regions. Note that we do not use the raw distance x directly, but instead normalize it to [0, 1] with a kernel κ(x) = exp(−x/∆) (where ∆ = 50 is the bandwidth), with the intuition that closer regions are more correlated. The edge weights are then used directly in the adjacency matrix of the graph. Additionally, we include edges to encode the coverage patterns (e.g. intersection over union, IoU [12]), which can be especially helpful when two regions overlap.
A second group of edges lies between regions and classes, where the assignment of a region to a class takes place. Such edges shoulder the responsibility of propagating beliefs from region to class (e_{r→c}) or backwards from class to region (e_{c→r}). Rather than only linking to the most confident class, we use the full soft-max score p to define the edge weights of the connections to all classes. The hope is that this delivers more information and is thus more robust to false assignments.
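As a small illustration of how such adjacency matrices could be built, the sketch below constructs one distance-kernel spatial edge type and the soft region-to-class assignment. It is a simplification under our own assumptions: the paper uses several spatial edge types (left/right, top/bottom, IoU), and the exact distance definition is not spelled out here, so pairwise center distances are used for concreteness.

```python
import numpy as np

def spatial_adjacency(centers, delta=50.0):
    """Region-to-region edge weights A[i, j] = exp(-dist(i, j) / delta) for one
    edge type; `centers` holds pixel coordinates of region centers, shape (R, 2)."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    return np.exp(-d / delta)

def assignment_adjacency(logits):
    """Region-to-class edge weights: the full soft-max score of each region over
    all classes, instead of a hard link to the most confident class. (R, C)."""
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)
```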
Semantic relationships from knowledge bases are used to construct the third group of edges between classes (E c→c ). Again, multiple types of edges can be included here. Classical examples are "is-kind-of" (e.g. between "cake" and "food"), "is-part-of" (e.g. between "wheel" and "car"), "similarity" (e.g. between "leopard" and "cheetah"), many of which are universally true and are thus regarded as commonsense knowledge for humans. Such commonsense can be either manually listed [39] or automatically collected [5]. Interestingly, even relationships beyond these (e.g. actions, prepositions) can help recognition [32]. Take "person ride bike" as an example, which is apparantly more of an imagespecific relationship. However, given less confident predictions of "person" and "bike", knowing the relationship "ride" along with the spatial configurations of the two can also help prune other spurious explanations. To study both cases, we experimented with two knowledge graphs in this paper: one created in-house with mostly commonsense edges, and the other also includes more types of relationships accumulated at a large-scale. For the actual graphs used in our experiments, please see Sec. 4.1 for more details. Now we are ready to describe the graph-based reasoning module R. As the input to our graph, we use M r ∈R R×D to denote the features from all the region nodes N r combined, where D (=512) is the number of feature channels. For each class node n c , we choose off-the-shelf word vectors [22] as a convenient representation, denoted as M c ∈R C×D . We then extend previous works [42,35] and pass messages directly on G (See Fig. 3). Note that, because our end-goal is to recognize regions better, all the class nodes should only be used as intermediate "hops" for better region representations. With this insight, we design two reasoning paths to learn the output features G r : a spatial path on which only region nodes are involved:
G_r^{spatial} = \sum_{e \in E_{r \to r}} A_e M_r W_e, (2)
where A_e ∈ R^{R×R} is the adjacency matrix of edge type e and W_e ∈ R^{D×D} is the corresponding weight matrix (biases are ignored for simplicity). The second reasoning path is a semantic one through class nodes:
G_c^{semantic} = \sum_{e \in E_{c \to c}} A_e \, \sigma\!\left(A_{e_{r \to c}} M_r W_{e_{r \to c}} + M_c W_c\right) W_e, (3)
where we first map regions to classes through A er→c and W er→c , combine the intermediate features with class features M c , and again aggregate features from multiple types of edges between classes. Finally, the output for regions G r are computed by merging these two paths:
G_r = \sigma\!\left(G_r^{spatial} + \sigma\!\left(A_{e_{c \to r}} G_c^{semantic} W_{e_{c \to r}}\right)\right), (4)
which first propagates semantic information back to regions, and then applies non-linear activation (See Fig. 4).
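The sketch below shows one round of the two reasoning paths in NumPy, collapsing each group of edge types to a single type for brevity and omitting biases; ReLU is used since the paper names it as the activation. The argument names and shapes (M_r: R×D region features, M_c: C×D class word vectors, A_rc: C×R soft assignment, A_cr: R×C back-assignment) are our own reading of Eqs. (2)-(4), not the released code.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def global_reasoning(M_r, M_c, A_rr, A_rc, A_cc, A_cr, W_rr, W_rc, W_c, W_cc, W_cr):
    """One pass of spatial + semantic graph reasoning (single edge type per group)."""
    # Spatial path (Eq. 2): region-to-region propagation.
    G_spatial = A_rr @ (M_r @ W_rr)                                   # (R, D)
    # Semantic path (Eq. 3): regions -> classes, then class-to-class edges.
    G_semantic = A_cc @ relu(A_rc @ (M_r @ W_rc) + M_c @ W_c) @ W_cc  # (C, D)
    # Merge (Eq. 4): propagate semantic features back to regions and combine.
    return relu(G_spatial + relu(A_cr @ G_semantic @ W_cr))           # (R, D)
```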
Just like convolution filters, the above-described paths can also be stacked, where the output G r can go through another set of graph operations -allowing the framework to perform joint spatial-semantic reasoning with deeper features. We use three stacks of operations with residual connections [16] in R, before the output is fed to predict.
Iterative Reasoning
A key ingredient of reasoning is to iteratively build up estimates. But how does information pass from one iteration to another? Our answer is explicit memory, which stores all the history from previous iterations. The local module uses the spatial memory S, and the global module uses another memory M but without spatial structure. At iteration i, S_i is followed by the convolutional reasoning module C to generate new predictions f_i^l for each region. Similarly, the global module also gives new predictions f_i^g from R. These new predictions, used as high-level features, can then be used to get the updated memories S_{i+1} and M_{i+1}. The new memories will lead to another round of updated predictions f_{i+1}, and the iteration goes on.
While one can do local and global reasoning in isolation, the two modules work best in conjunction. Therefore, for our full pipeline we want to join forces of both modules when generating the predictions. To this end, we introduce cross-feed connections. After reasoning, the local and global features are concatenated together to update the memories S_{i+1} and M_{i+1} using GRU. In this way, the spatial memory can benefit from global knowledge of spatial and semantic relationships, and the graph can get a better sense of the local region layouts.
Attention
Inspired by the recent work on attention [3], we make another modification at the model output. Specifically, instead of only generating scores f, the model also has to produce an "attention" value a that denotes the relative confidence of the current prediction compared to the ones from other iterations or modules. The fused output is then a weighted version of all predictions using attentions. Mathematically, if the model rolls out I times and outputs N = 2I + 1 predictions f_n (I local, I global and 1 from the plain ConvNet), using attentions a_n, the final output f is calculated as

f = \sum_n w_n f_n, \qquad w_n = \frac{\exp(-a_n)}{\sum_{n'} \exp(-a_{n'})}.
Note again that here f n is the logits before soft-max, which is then activated to produce p n . The introduction of attention allows the model to intelligently choose feasible predictions from different modules and iterations.
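A compact way to compute this fusion per region is sketched below; the stacking of the N predictions into one array and the shapes are our own choices for illustration.

```python
import numpy as np

def fuse_predictions(logits_list, attentions):
    """logits_list: N arrays of shape (R, C) (logits before soft-max);
    attentions: N arrays of shape (R,). Returns the fused logits per region."""
    f = np.stack(logits_list, axis=0)        # (N, R, C)
    a = np.stack(attentions, axis=0)         # (N, R)
    w = np.exp(-a)
    w = w / w.sum(axis=0, keepdims=True)     # w_n = exp(-a_n) / sum_n' exp(-a_n')
    return np.einsum('nr,nrc->rc', w, f)     # f = sum_n w_n f_n
```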
Training
Finally, the overall framework is trained end-to-end, with a total loss function that consists of: a) the plain ConvNet loss L_0; b) the local module losses L_i^l; c) the global module losses L_i^g; and d) the final prediction loss with attentions L_f. Since we want our reasoning modules to focus more on the harder examples, we propose to simply re-weight the examples in the loss, based on predictions from previous iterations. Formally, for region r at iteration i ≥ 1, the cross-entropy loss for both modules is computed as:
L_i(r) = \frac{\max(1 - p_{i-1}(r),\, \beta)}{\sum_{r'} \max(1 - p_{i-1}(r'),\, \beta)} \, \log(p_i(r)), (6)
where p_i(r) is the soft-max output for the ground-truth class, and β ∈ [0, 1] controls the entropy of the weight distribution: when β = 1, it is a uniform distribution; when β = 0, the entropy is minimized. In our experiments, β is set to 0.5. p_{i−1}(r) is used as a feature without back-propagation. For both the local and the global module, p_0(r) is the output from the plain ConvNet.
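The following sketch implements this re-weighting with the standard negative-log-likelihood sign convention (the sign is our assumption; the extracted equation above does not show it), treating the previous-iteration probabilities as constants.

```python
import numpy as np

def reweighted_loss(p_i, p_prev, gt, beta=0.5, eps=1e-8):
    """p_i, p_prev: (R, C) soft-max outputs at iterations i and i-1;
    gt: (R,) ground-truth class ids. Regions the previous iteration got
    wrong receive larger weights, floored at beta."""
    idx = np.arange(len(gt))
    w = np.maximum(1.0 - p_prev[idx, gt], beta)
    w = w / w.sum()                           # normalise weights over regions
    ce = -np.log(p_i[idx, gt] + eps)          # per-region cross-entropy
    return float(np.sum(w * ce))
```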
Experiments
In this section we evaluate the effectiveness of our framework. We begin with our experimental setups, which includes the datasets to work with (Sec. 4.1), the task to evaluate on (Sec. 4.2) and details of our implementation (Sec. 4.3). We discuss our results and analyze them in Sec. 4.4 and Sec. 4.5 respectively.
Datasets and Graphs
Datasets are biased [45]. For context reasoning we would naturally like to have scene-focused datasets [55] as opposed to object-focused ones [39]. To showcase the capabilities of our system, we need a densely labeled dataset with a large number of classes. Finally, one benefit of using a knowledge graph is to transfer across classes, therefore a dataset with a long-tail distribution is an ideal test-bed. Satisfying all these constraints, ADE [55] and Visual Genome (VG) [25], where regions are densely labeled in an open vocabulary, are the main picks of our study.
For ADE, we use the publicly released training set (20,210 images) for training, and split the validation set (2,000 images) into val-1k and test-1k with 1,000 images each. The original raw names are used due to a more detailed categorization [55]. We filter out classes with less than five instances, which leaves us with 1,484 classes. With the help of parts annotations in the dataset, a commonsense knowledge graph is created with five types of edges between classes: a) "is-part-of" (e.g. "leg" and "chair"); b) "is-kind-of" (e.g. "jacket" and "clothes"); c) "plural-form" (e.g. "tree" and "trees"); d) "horizontal-symmetry" (e.g. "left-arm" and "right-arm"); e) "similarity" (e.g. "handle" and "knob"). Notice that the first four types are directed edges, hence we also include their inverted versions.
For VG, the latest release (v1.4) is used. We split the entire set of 108,077 images into 100K, 4,077 and 4K images as the train, val and test sets. Similar pre-processing is done on VG, except that we use synsets [39] instead of raw names due to less consistent labels from multiple annotators. 3,993 classes are used. For the knowledge graph between classes, we take advantage of the relationship annotations in the set, and select the top 10 most frequent relationships to automatically construct edges beyond the commonsense relationships constructed for ADE. For each type of relationship, the edge weights are normalized so that each row of the adjacency matrix sums up to one. While this approach results in a noisier graph, it also allows us to demonstrate that our approach is scalable and robust to noise. Finally, we also show experiments on COCO [30]. However, since it is detection oriented - it has only 80 classes picked to be mutually exclusive, and covers a smaller percentage of labeled pixels - we only report results a) without the knowledge graph and b) without a test split (trainval35k [4] for training and minival for evaluation). This setup is for analysis purposes only.
Task and Evaluation
We evaluate our system on the task of region classification, where the goal is to assign labels to designated regions denoted by rectangular bounding boxes. For both training and testing, we use provided ground-truth locations. We picked this task for three reasons. The first one is on evaluation. As the number of classes increases in the vocabulary, missing labels are inevitable, which is especially severe for object parts (e.g. "rim", "arm") and related classes (e.g. "shoes" vs. "sneakers") where external knowledge is valuable. If there are missing labels, fair evaluation becomes much more difficult since accuracy becomes impossible to evaluate -cannot tell if a prediction is wrong, or the label itself is missing. Interestingly, such an issue also happens to other research areas (e.g. recommendation systems [41] and link prediction [28]). Borrowing ideas from them, a practical solution is to evaluate only on what we already know -in our case ground-truth regions. Second, although region classification is a simplified version of object detection and semantic segmentation, it maintains a richer set of la-bels, especially including "stuff" classes like "road", "sky", and object instances. Modeling "stuff-object" and instancelevel relationships is a crucial capability which would be missed in a pure detection/segmentation setting. Finally as our experiment will show (Sec. 4.5), while object detectors can be used off-the-shelf, the additional manually defined parameters and components (e.g. overlapping threshold for a region to be positive/negative, predefined scale/aspect ratio sets of anchors [37]) in its pipeline pose limitations on how much context can benefit. For example, after nonmaximal suppression (NMS), highly overlapping objects (e.g. "window" and "shutter") will be suppressed, and ironically this is exactly where context reasoning could have helped. On the other hand, by feeding fixed regions directly for end-to-end learning, we can at least factorize the recognition error from the localization one [19], and get a clean focus on how context can help discriminating confusing classes.
Since ADE is a segmentation dataset, we convert segmentation masks to bounding boxes. For object classes (e.g. "person"), each instance is given a separate box. Parts (e.g. "head") and parts-of-parts (e.g. "nose") are also included. For VG and COCO, boxes are used directly.
For evaluation, we use classification accuracy (AC) and average precision (AP) [12]. Note that since all the regions are fixed with known labels, there is no need to set a region overlap threshold for AP. Results can be aggregated in two ways: the first way ("per-class") computes metrics separately for each class in the set and takes the mean; since the final scores are all taken from a calibrated soft-max output, a second way ("per-instance") computes metrics simultaneously over all classes. Intuitively, "per-class" assigns more weight to instances from rare classes.
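For readers who want to reproduce the two aggregation modes, the sketch below is one plausible reading of them using scikit-learn; the exact evaluation protocol of the paper may differ in details, so treat this only as an illustration.

```python
import numpy as np
from sklearn.metrics import average_precision_score

def per_class_and_per_instance_ap(scores, labels):
    """scores: (N, C) calibrated soft-max scores for N ground-truth regions;
    labels: (N,) ground-truth class ids."""
    n, c = scores.shape
    onehot = np.zeros_like(scores)
    onehot[np.arange(n), labels] = 1.0
    # per-class: AP computed separately for each class with positives, then averaged
    aps = [average_precision_score(onehot[:, k], scores[:, k])
           for k in range(c) if onehot[:, k].any()]
    per_class = float(np.mean(aps))
    # per-instance: all (region, class) scores pooled into a single ranking
    per_instance = float(average_precision_score(onehot.ravel(), scores.ravel()))
    return per_class, per_instance
```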
Implementation Details
A simplified version of tf-faster-rcnn 2 is used to implement our baseline for region classification, with the region proposal branch and bounding box regression components removed. Unless otherwise noted, ResNet-50 [16] pre-trained on ImageNet [39] is used as our backbone image classifier, and images are enlarged to a shorter side of 600 pixels during both training and testing. Specifically, full-image shared convolutional feature maps are computed till the last conv4 layer. Then the ground-truth boxes are used as regions-of-interest to compute region-specific features (crop and resize to 7×7 without max-pool). All layers of conv5 and up are then adopted to obtain the final feature for the baseline prediction p_0. Batch normalization parameters are fixed.
For the local module, we use the last conv4 layer as our mid-level features to feed the spatial memory S. For the global module, mid-level features are the final conv5 (2048-D) layer after avg-pool. Both features are fused with the logits f before soft-max, and then fed into the memory cells. Word vectors from fastText [22] are used to represent each class, which extract sub-word information and generalize well to out-of-vocabulary words. ReLU is selected as the activation function. We roll out the reasoning modules 3 times and concurrently update all regions at each iteration, as more iterations do not offer more help.

Figure 5. Qualitative examples from ADE test-1k (best if zoomed-in). For regions highlighted in blue, the predictions from the baseline and our model are compared. Other regions are also listed to provide the context. For example, the "right-leg" is less confused with "left-leg" after reasoning (top-left); the "mouse" on the "desk" is predicted despite low resolution (top-third); and "detergent-dispenser" is recognized given the context of "washing-machine" (top-right). At bottom-right we show a failure case where context does not help "remote-control", probably because it has never appeared on the "night-table" before and no semantic relationship is there to help.
We apply stochastic gradient descent with momentum to optimize all the models, and use the validation set to tune hyper-parameters. Our final setups are: 5e-4 as the initial learning rate, reduced once (0.1×) during fine-tuning; 1e-4 as weight decay; 0.9 as momentum. For ADE, we train 320K iterations and reduce the learning rate at 280K. For VG and COCO the numbers are 640K/500K and 560K/320K, respectively 3 . We use a single image per step, and the only data augmentation technique used during training is left-right flipping 4 . No augmentation is used in testing.
Main Results
Quantitative results on ADE test-1k and VG test are shown in Tab. 1. Besides the plain ConvNet p_0, we also add three more baselines. First, we use ResNet-101 as the backbone to see whether the performance can benefit from deeper networks. Second, we increase the input image size to a shorter side of 800 pixels, which has been shown to help especially for small objects in context [29]. Finally, to check whether our performance gain is a result of more parameters, we include a model ensemble as the third baseline, where the predictions of two separate baseline models are averaged.
As can be seen, our reasoning modules are performing much better than all the baselines on ADE. The local module alone can increase per-class AP by 7.8 absolute points. Although the global module alone is not as effective (4.4% improvement), the performance gain it offers is complementary to the local module, and combining both modules we arrive at an AP of 48.5% compared to the baseline AP 40.1%. On the other hand, deeper network and larger input size can only help ∼1%, less than model ensembles. Additionally, our models achieve higher per-class metric gains than per-instance ones, indicating that rare classes get helped more -a nice property for learning from few examples. Some qualitative results are listed in Fig. 5.
We also report the speed for future reference. On Titan Xp, the final model on ADE trains at 0.344s per iteration, compared to the baseline ResNet-50 at 0.163s and ResNet-101 at 0.209s. For testing, our model takes 0.165s, whereas ResNet-50 0.136s, ResNet-101 0.156s. We believe the additional cost is minimal with regard to the extra accuracy.
We see a similar but less significant trend on VG. This can potentially be a result of noisier labels -for ADE (and COCO shown later), the per-instance AP and AC values are within 0.1%, intuitively suggesting that higher scores usually correspond to correct classifications. However, on VG the difference is at ∼0.5%, meaning more of the highly confident predictions are not classified right, which are likely caused by missing ground-truths.
Analysis
Our analysis is divided into two major parts. In the first part, we conduct a thorough ablative analysis of the framework we have built. Due to space limitations, we only report results on ADE here in Tab. 2; for more analysis on VG, please check our supplementary material.
As can be seen, re-weighting hard examples with Eq. 6 helps around 0.5% regardless of the reasoning modules. The spatial memory S is critical in the local module - if replaced by feeding the last conv4 layer directly, the performance drops almost to the baseline. The local context aggregator C is less influential for ADE since the regions, including background, are densely labeled. A different story takes place at the global module: removing the reasoning module R steeply drops performance, whereas further removing the memory M does not hurt much. Finally, for our full pipeline, removing cross-feeding and dropping the number of iterations both result in worse performance.

Missing Regions. So far we have shown results when all the regions are present. Next, we want to analyze whether our framework is robust to missing regions: if some percentage of regions are not used for reasoning. This will be a common scenario if we use our framework in the detection setting - the underlying region proposal network [37] may itself miss some regions. We perform this set of experiments on COCO, since its regions are object-focused. We test three variations. In the first variation, the same region classification pipeline is applied as-is. In the other two, we drop regions. While we could have done it randomly, we simulate the real-world scenario by using region proposals from faster R-CNN [37] (1190K/900K, minival detection mAP 32.4%) for testing, where 300 region proposals after NMS are applied to filter the ground-truth regions (max IoU>δ). Evaluation is only done on the remaining regions. Here we choose not to use the region proposals directly, since the model has seen ground-truth regions only. We test two variations: a) "pre", where the regions are filtered before inference, i.e. only the remaining ground-truths are fed for reasoning; b) "post", where regions are filtered after inference. Note that for the baseline, "pre" and "post" make no difference performance-wise.
The results are summarized in Tab. 3. Interestingly, despite lacking a knowledge graph, our global module works better than the local module with the region graph alone, likely due to its power to allow direct region-to-region communication even for farther-away pairs. Combining the two, we report a 3.7% absolute advantage on per-class AP over the baseline even with all classes being objects - no "stuff" classes involved. In Fig. 6, we vary δ from 0 to .9: with 0 keeping all regions and 0.9 dropping the most. As the trend shows, while the reasoning module suffers when regions are dropped, it is quite resilient and the performance degradation is smooth. For example (listed in Tab. 3), with an IoU threshold δ of 0.5 that recalls 78.1% of the ground-truth boxes, we still outperform the baseline by 2.4% in the "post" setting, and 2.2% in "pre" where not all regions can be fed for reasoning. The lower gap implies a) region proposals usually correspond to easy examples where less context is needed, and b) context reasoning frameworks like ours benefit from more known regions. At δ=.8 the recall (30.5%) is so small that it cannot afford much reasoning, and at δ=.9 (recall 3.9%), reasoning even hurts the performance.

Table 3. Results with missing regions when region proposals are used. COCO minival is used since it is more detection oriented. "pre" filters regions before inference, and "post" filters after inference.
Conclusion
We presented a novel framework for iterative visual reasoning. Beyond convolutions, it uses a graph to encode spatial and semantic relationships between regions and classes, and passes messages on the graph. We show strong performance over plain ConvNets, e.g. achieving an 8.4% absolute gain on ADE and 3.7% on COCO. Analysis also shows that our reasoning framework is resilient to missing regions caused by current region proposal approaches.
Figure 2. Overview of our reasoning framework. Besides a plain ConvNet that gives predictions, the framework has two modules to perform reasoning: a local one (Sec. 3.1) that uses spatial memory S_i, and reasons with another ConvNet C; and a global one (Sec. 3.2) that treats regions and classes as nodes in a graph and reasons by passing information among them. Both modules receive combined high-level and mid-level features, and roll out iteratively (Sec. 3.3) while cross-feeding beliefs. The final prediction f is produced by combining all the predictions f_i with attentions a_i (Sec. 3.4).

Figure 4. Two reasoning paths used in our global reasoning module R. Taking the region and class inputs M_r and M_c, the spatial path directly passes information in the region graph with region-to-region edges E_{r→r}, whereas the semantic path first assigns regions to classes with e_{r→c}, passes the information on to other classes with class-to-class edges E_{c→c}, and then propagates back. Final outputs are combined to generate the output region features G_r.

Figure 6. Trends of recall and per-class AP when varying the IoU threshold δ from 0 to .9 to drop regions. See text for details.
Table 1. Main results on ADE test-1k and VG test (%). AP is average precision, AC is classification accuracy. Values in parentheses show the improvement over the baseline.

ADE
  Method          per-instance AP   per-instance AC   per-class AP   per-class AC
  Baseline        67.0              67.0              40.1           33.2
  w/ ResNet-101   68.2              68.3              40.8           34.4
  w/ 800-input    68.2              68.2              41.0           34.3
  Ensemble        68.7              68.8              42.9           35.3
  Ours - Local    71.6 (+4.6)       71.7 (+4.7)       47.9 (+7.8)    38.7 (+5.7)
  Ours - Global   69.8 (+2.8)       69.8 (+2.8)       44.5 (+4.4)    36.8 (+3.6)
  Ours - Final    72.6 (+5.6)       72.6 (+5.6)       48.5 (+8.4)    39.5 (+6.3)

VG
  Method          per-instance AP   per-instance AC   per-class AP   per-class AC
  Baseline        49.1              49.6              16.9           12.1
  w/ ResNet-101   50.3              50.8              18.0           13.0
  w/ 800-input    49.5              50.0              17.0           12.2
  w/ Ensemble     50.2              50.7              17.7           12.3
  Ours - Local    51.4 (+2.3)       51.9 (+2.3)       18.8 (+1.9)    12.8 (+0.7)
  Ours - Global   50.9 (+1.8)       51.5 (+1.9)       18.3 (+1.4)    12.6 (+0.5)
  Ours - Final    51.7 (+2.6)       52.2 (+2.6)       19.1 (+2.2)    12.9 (+0.8)
Table 2. Ablative analysis on ADE test-1k (%). In the first row of each block we repeat the Local, Global and Final results from Tab. 1; see Sec. 4.5 for details on the other rows.

Local
  Analysis         per-instance AP   per-instance AC   per-class AP   per-class AC
  Ours - Local     71.6              71.7              47.9           38.7
  w/o re-weight    71.3              71.3              46.7           37.9
  w/o C            70.9              71.0              46.1           37.5
  w/o S            67.6              67.6              42.1           34.4

Global
  Ours - Global    69.8              69.8              44.5           36.8
  w/o re-weight    69.2              69.2              43.8           36.7
  w/o spatial      67.8              67.8              41.5           35.0
  w/o semantic     69.1              69.2              43.9           35.9
  w/o R            67.1              67.2              41.5           34.5
  w/o M & R        67.1              67.1              41.0           34.0

Final
  Ours - Final     72.6              72.6              48.5           39.5
  w/o re-weight    72.1              72.2              47.3           38.6
  w/o cross-feed   72.2              72.2              47.6           39.0
  2 iterations     71.9              72.0              48.1           39.0
Different from previous work[4] that introduces an inverse operation to put the region back, we note that crop and resize itself with proper extrapolation can simply meet this requirement.
https://github.com/endernewton/tf-faster-rcnn
3 Training longer still reduces cross-entropy, but drops both AP and AC.
4 The labels for class pairs like "left-hand" and "right-hand" are swapped for flipped images.
Acknowledgements: This work was supported in part by ONR MURI N000141612007. XC would also like to thank Shengyang Dai and Google Cloud AI team for support during the internship.
[1] I. Biederman, R. J. Mezzanotte, and J. C. Rabinowitz. Scene perception: Detecting and judging objects undergoing relational violations. Cognitive Psychology, 14(2):143-177, 1982.
[2] J. Carreira, P. Agrawal, K. Fragkiadaki, and J. Malik. Human pose estimation with iterative error feedback. In CVPR, 2016.
[3] L.-C. Chen, Y. Yang, J. Wang, W. Xu, and A. L. Yuille. Attention to scale: Scale-aware semantic image segmentation. In CVPR, 2016.
[4] X. Chen and A. Gupta. Spatial memory for context reasoning in object detection. arXiv preprint arXiv:1704.04224, 2017.
[5] X. Chen, A. Shrivastava, and A. Gupta. NEIL: Extracting visual knowledge from web data. In ICCV, 2013.
[6] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv:1412.3555, 2014.
[7] R. Das, A. Neelakantan, D. Belanger, and A. McCallum. Chains of reasoning over entities, relations, and text using recurrent neural networks. arXiv preprint arXiv:1607.01426, 2016.
[8] J. Deng, N. Ding, Y. Jia, A. Frome, K. Murphy, S. Bengio, Y. Li, H. Neven, and H. Adam. Large-scale object classification using label relation graphs. In ECCV, 2014.
[9] C. Desai, D. Ramanan, and C. C. Fowlkes. Discriminative models for multi-class object layout. IJCV, 95(1):1-12, 2011.
[10] S. K. Divvala, A. Farhadi, and C. Guestrin. Learning everything about anything: Webly-supervised visual concept learning. In CVPR, 2014.
[11] S. K. Divvala, D. Hoiem, J. H. Hays, A. A. Efros, and M. Hebert. An empirical study of context in object detection. In CVPR, 2009.
[12] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The PASCAL visual object classes (VOC) challenge. IJCV, 88(2):303-338, 2010.
[13] A. Farhadi, I. Endres, D. Hoiem, and D. Forsyth. Describing objects by their attributes. In CVPR, 2009.
[14] L. Fei-Fei, R. Fergus, and P. Perona. One-shot learning of object categories. TPAMI, 28(4):594-611, 2006.
[15] S. Harnad. The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1-3):335-346, 1990.
[16] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
[17] M. Henaff, J. Bruna, and Y. LeCun. Deep convolutional networks on graph-structured data. arXiv preprint arXiv:1506.05163, 2015.
[18] J. R. Hobbs, M. Stickel, P. Martin, and D. Edwards. Interpretation as abduction. In ACL, 1988.
[19] D. Hoiem, Y. Chodpathumwan, and Q. Dai. Diagnosing error in object detectors. In ECCV, 2012.
[20] Z. Hu, Z. Yang, R. Salakhutdinov, and E. P. Xing. Deep neural networks with massive learned knowledge. In EMNLP, 2016.
[21] J. Johnson, R. Krishna, M. Stark, L.-J. Li, D. Shamma, M. Bernstein, and L. Fei-Fei. Image retrieval using scene graphs. In CVPR, 2015.
[22] A. Joulin, E. Grave, P. Bojanowski, M. Douze, H. Jégou, and T. Mikolov. FastText.zip: Compressing text classification models. arXiv preprint arXiv:1612.03651, 2016.
[23] T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
[24] P. Krähenbühl and V. Koltun. Efficient inference in fully connected CRFs with Gaussian edge potentials. In NIPS, 2011.
[25] R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L.-J. Li, D. A. Shamma, et al. Visual Genome: Connecting language and vision using crowdsourced dense image annotations. arXiv:1602.07332, 2016.
[26] N. Lao, T. Mitchell, and W. W. Cohen. Random walk inference and learning in a large scale knowledge base. In EMNLP, 2011.
[27] Y. Li, D. Tarlow, M. Brockschmidt, and R. Zemel. Gated graph sequence neural networks. arXiv preprint arXiv:1511.05493, 2015.
[28] D. Liben-Nowell and J. Kleinberg. The link-prediction problem for social networks. Journal of the Association for Information Science and Technology, 58(7):1019-1031, 2007.
[29] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramid networks for object detection. arXiv:1612.03144, 2016.
[30] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014.
[31] C. Lu, R. Krishna, M. Bernstein, and L. Fei-Fei. Visual relationship detection with language priors. In ECCV, 2016.
[32] K. Marino, R. Salakhutdinov, and A. Gupta. The more you know: Using knowledge graphs for image classification. arXiv preprint arXiv:1612.04844, 2016.
[33] A. Newell. Physical symbol systems. Cognitive Science, 4(2):135-183, 1980.
[34] A. Newell, K. Yang, and J. Deng. Stacked hourglass networks for human pose estimation. In ECCV, 2016.
[35] M. Niepert, M. Ahmed, and K. Kutzkov. Learning convolutional neural networks for graphs. In ICML, 2016.
[36] D. Parikh and K. Grauman. Relative attributes. In ICCV, 2011.
[37] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. arXiv:1506.01497, 2015.
[38] O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional networks for biomedical image segmentation. In MICCAI, 2015.
[39] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. ImageNet large scale visual recognition challenge. IJCV, 115(3):211-252, 2015.
[40] A. Santoro, D. Raposo, D. G. Barrett, M. Malinowski, R. Pascanu, P. Battaglia, and T. Lillicrap. A simple neural network module for relational reasoning. arXiv preprint arXiv:1706.01427, 2017.
[41] B. Sarwar, G. Karypis, J. Konstan, and J. Riedl. Item-based collaborative filtering recommendation algorithms. In WWW, 2001.
[42] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini. The graph neural network model. TNN, 20(1):61-80, 2009.
[43] A. Shrivastava, R. Sukthankar, J. Malik, and A. Gupta. Beyond skip connections: Top-down modulation for object detection. arXiv:1612.06851, 2016.
[44] C. Sun, A. Shrivastava, S. Singh, and A. Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In ICCV, 2017.
[45] A. Torralba and A. A. Efros. Unbiased look at dataset bias. In CVPR, 2011.
[46] A. Torralba, K. P. Murphy, W. T. Freeman, M. A. Rubin, et al. Context-based vision system for place and object recognition. In ICCV, 2003.
[47] Z. Tu and X. Bai. Auto-context and its application to high-level vision tasks and 3D brain image segmentation. TPAMI, 32(10):1744-1757, 2010.
[48] S.-E. Wei, V. Ramakrishna, T. Kanade, and Y. Sheikh. Convolutional pose machines. In CVPR, 2016.
[49] Q. Wu, P. Wang, C. Shen, A. Dick, and A. van den Hengel. Ask me anything: Free-form visual question answering based on knowledge from external sources. In CVPR, 2016.
[50] S. Xie, X. Huang, and Z. Tu. Top-down learning for structured labeling with convolutional pseudoprior. In ECCV, 2016.
[51] C. Xiong, S. Merity, and R. Socher. Dynamic memory networks for visual and textual question answering. arXiv, 1603, 2016.
[52] D. Xu, Y. Zhu, C. B. Choy, and L. Fei-Fei. Scene graph generation by iterative message passing. arXiv preprint arXiv:1701.02426, 2017.
[53] Z. Yang, X. He, J. Gao, L. Deng, and A. Smola. Stacked attention networks for image question answering. In CVPR, 2016.
[54] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In ECCV, 2014.
[55] B. Zhou, H. Zhao, X. Puig, S. Fidler, A. Barriuso, and A. Torralba. Semantic understanding of scenes through the ADE20K dataset. arXiv preprint arXiv:1608.05442, 2016.
[56] Y. Zhu, C. Zhang, C. Ré, and L. Fei-Fei. Building a large-scale multimodal knowledge base system for answering visual queries. arXiv:1507.05670, 2015.
arXiv:hep-ph/0703306v1 29 Mar 2007

Nonperturbative dynamics in the colour-magnetic QCD vacuum

A. V. Nefediev and Yu. A. Simonov
Institute of Theoretical and Experimental Physics, B. Cheremushkinskaya 25, 117218 Moscow, Russia

In the deconfinement phase of QCD quarks and gluons interact with the dense stochastic colour-magnetic vacuum. We consider the dynamics of quarks in this deconfinement phase using the Field Correlators Method and derive an effective nonperturbative interquark potential, in addition to the usual perturbative short-ranged interaction. We find the resulting angular-momentum-dependent interaction to be attractive enough to maintain bound states and, for light quarks (and gluons), to cause emission of quark and gluon pairs. Possible consequences for the strong interacting quark-gluon plasma are briefly discussed.

1 The correlators D^E and D^H appear at the structure δ_ij in these colour-electric and colour-magnetic correlators, respectively (see also Eq. (5)), whereas D^{E,H}_1 parametrise the terms with derivatives.
Introduction
The picture of the strong interacting quark-gluon plasma (SQGP) seems to be adequate for explaining the recent data on ion-ion collisions [1]. It was found on the lattice [2] that colourmagnetic vacuum fields do not change across the phase transition, thus supporting the conjecture made in Ref. [3] that, above the temperature of deconfinement T c , the QCD vacuum loses its confining colour-electric part, while the colour-magnetic part remains intact. This idea can be economically expressed in the formalism of the Field Correlators Method (FCM) [4], where the Gaussian correlators of the colour-electric and colour-magnetic fields, E i (x)E j (y) and H i (x)H j (y) ( . . . denotes irreducible correlators), are parametrised through the correlation functions D E , D E 1 , D H , and D H 1 , respectively (see Ref. [4] for the details of the formalism) 1 . Notice that among those only D E vanishes above the T c [2,3]. The usual (time-like) string tension σ E and the so-called spatial string tension σ s are given by the double integrals from the corresponding correlators,
\sigma^E = \frac{1}{2} \int D^E(\xi)\, d^2\xi, \qquad \sigma_s \equiv \sigma^H = \frac{1}{2} \int D^H(\xi)\, d^2\xi. (1)
Both string tensions coincide below the T c but σ E is expected to vanish in the deconfinement phase, whereas σ H remains nearly constant in the vicinity of the critical temperature, both below and above the T c [5,6]. Moreover, σ H grows at large T , as √ σ H ∝ T g 2 (T ) [6], signalising that D H grows as O(T 2 g 4 (T )). At the same time it was conjectured in Ref. [7], and confirmed later on the lattice [2,8], that the non-confining correlator D E 1 does not vanish either above the T c and leads to strong interaction between quarks and gluons. In Refs. [8,9], bound states due to D E 1 in quark and gluonic systems where found in qualitative agreement with the lattice data, Ref. [10]. Many efforts based on the colour-electric type forces have been applied to clarify the dynamics of QCD above the deconfinement phase transition. Still, without the colourmagnetic forces, the dynamical picture of the SQGP is not complete since colour-electric forces cannot bind quarks and gluons for large angular momentum where colour-magnetic forces are most important. Thus the purpose of the present Letter is to clarify the matter and to study orbitally excited bound states of quarks above the T c . For the sake of simplicity, we consider only the confining correlators D E,H and neglect the other two, D E,H 1 . Furthermore, we study a strongly interacting system at the fixed temperature T , which can be either below or above the deconfinement temperature T c , and develop the standard Hamiltonian approach to it. We consider the situation when the colour-electric interaction is switched off above the deconfinement temperature and the dynamics of quarks and gluons is governed by the colour-magnetic forces only. Finally, we use the interaction under consideration to estimate the strength of the interaction in the SQGP by comparing the mean potential energy of light quarks (and gluons) in the plasma with their mean kinetic energy. We find the corresponding parameter to be of order σ H /T 2 . Since this parameter is large for quarks and even 9/2 times larger for gluons we conclude that SQGP is very strongly coupled and it should be viewed as a liquid, at least.
QCD string and the spin-independent interaction in the quark-antiquark system
In this section we derive the effective spin-independent quark-antiquark interaction arising due the QCD string formation. Following the method proposed in Refs. [11,12] we consider the gauge-invariant Green's function in the vacuum at temperature T :
G(x 1 , x 2 |y 1 , y 2 ) = Ψ out (x 1 , x 2 )Ψ † in (y 1 , y 2 ) ,(2)
where the wave functions of the initial and final colourless qq states are built with the help of the parallel transporter Φ(x, y) = P exp i x y dz µ A µ (z) . Using the standard path integral approach and the Feynman-Schwinger representation for the single-quark propagator, one arrives at the Green's function (2) in the form [11]:
G(x_1, x_2 | y_1, y_2) = \int_0^\infty ds \int (Dz)^w_{x_1 y_1}\, e^{-K} \int_0^\infty d\bar{s} \int (D\bar{z})^w_{x_2 y_2}\, e^{-\bar{K}}\, \langle \mathrm{Tr}\, W(C) \rangle, (3)
with the kinetic energies K = 1 4 s 0 dτż 2 µ andK = 1 4 s 0 dτż 2 µ and with the "winding path measure" (Dz) w xy taking into account Matsubara periodic boundary conditions. All spin-dependent terms are neglected in Eq. (3) -they will be restored below, in Section 3. The closed contour C runs over the quark trajectories and the dynamics of the system is defined by the averaged Wilson loop T rW (C) . In the Gaussian approximation, one finds that [4] 1
\left\langle \frac{1}{N_C}\, \mathrm{Tr}\, W(C) \right\rangle = \exp\left[ -\frac{1}{2} \int_S d\sigma_{\mu\nu}(x) \int_S d\sigma_{\lambda\rho}(x')\, \langle \mathrm{Tr}\, F_{\mu\nu}(x)\Phi(x,x')F_{\lambda\rho}(x')\Phi(x',x) \rangle \right], (4)
where S is the minimal surface bounded by the contour C. Keeping only the string-generating field strength correlators, we have
T rF µν (x)Φ(x, x ′ )F λρ (x ′ )Φ(x ′ , x) = (δ µλ δ νρ − δ µρ δ νλ )D((x − x ′ ) 2 ).(5)
In what follows we distinguish between the electric and magnetic contributions in Eq. (5), so that the structure dσ 0i dσ 0i enters Eq. (4) multiplied by D E , whereas the spatial part dσ jk dσ jk is accompanied by D H . The electric and magnetic string tensions are defined then according to Eq. (1).
We synchronise the quarks in the laboratory frame, putting x 10 = x 20 = t, and adopt the straight-line ansatz for the minimal string writing
dσ µν (x) = ε ab ∂ a w µ (t, β)∂ b w ν (t, β)dtdβ, {a, b} = {t, β},(6)
where 0 t t max , 0 β 1, and the profile function is defined by the trajectories of the quarks, w µ (t, β) = βx 1µ + (1 − β)x 2µ . For further convenience let us introduce two vectors:
r = x 1 − x 2 , ρ = [( x 1 − x 2 ) × (β˙ x 1 + (1 − β)˙ x 2 )] ≡ r ω,(7)
which allow one to write the differentials in a compact form,
dσ 0i (x)dσ 0i (x ′ ) = r(t) r(t ′ )dtdt ′ dβdβ ′ , dσ jk (x)dσ jk (x ′ ) = 2 ρ(t) ρ(t ′ )dtdt ′ dβdβ ′ .(8)
Presenting the averaged Wilson loop as
1 N C T rW (C) = e −J , J = J E + J H ,
one can write for the electric and magnetic contributions separately:
J E = tmax 0 dt dt ′ 1 0 dβ dβ ′ r(t) r(t ′ )D E ((x − x ′ ) 2 ),(9)J H = tmax 0 dt dt ′ 1 0 dβ dβ ′ ρ(t, β) ρ(t ′ , β ′ )D H ((x − x ′ ) 2 ).(10)
The correlation functions D E,H decrease in all directions of the Euclidean space-time with the correlation length T g which is measured on the lattice to be rather small, T g ≈ 0.2 ÷ 0.3 fm [13]. Therefore, only close points x and x ′ are correlated, so that one can neglect the difference between r(t) and r(t ′ ), ρ(t, β) and ρ(t ′ , β ′ ) in Eqs. (9), (10) and also write:
(x − x ′ ) 2 = (x(t, β) − x(t ′ , β ′ )) 2 = g ab ξ a ξ b , ξ a = t − t ′ , ξ b = β − β ′ .(11)
The induced metric tensor is g ab = g a δ ab , g 1 g 2 = det g = r 2 + ρ 2 = r 2 (1 + ω 2 ). Now, after an appropriate change of variables and introducing the string tensions, according to Eq. (1), one readily finds:
J E = σ E tmax 0 dt 1 0 dβ r 2 r 2 + ρ 2 = σ E r tmax 0 dt 1 0 dβ 1 √ 1 + ω 2 ,(12)J H = σ H tmax 0 dt 1 0 dβ ρ 2 r 2 + ρ 2 = σ H r tmax 0 dt 1 0 dβ ω 2 √ 1 + ω 2 .(13)
For σ E = σ H = σ the sum of J E and J H reproduces the well-known action of the Nambu-Goto string.
For further analysis we resort, as was done in Ref. [12], to the Hamiltonian description of the quark-antiquark system under consideration. We also turn over to Minkowski space-time.
Then the Lagrangian of the quark-antiquark system can be derived from the exponent in Eq. (3) in the form:
L = −m 1 1 −˙ x 2 1 − m 2 1 −˙ x 2 2 − σ E r 1 0 dβ 1 1 − [ n × (β˙ x 1 + (1 − β)˙ x 2 )] 2 + σ H r 1 0 dβ [ n × (β˙ x 1 + (1 − β)˙ x 2 )] 2 1 − [ n × (β˙ x 1 + (1 − β)˙ x 2 )] 2 , n = r r .(14)
Two particular cases of the Lagrangian (14) are of most interest. The first such case corresponds to equal masses, whereas in the other case one mass is assumed infinitely large. Then, using the standard technique, one can proceed to the Hamiltonian of the quark-antiquark system,
H = 1 ξ p 2 r + m 2 µ + µ + 1 0 dβ σ 2 1 r 2 2ν + ν 2 + σ 2 r + L 2 r 2 [ξµ + 2 1 0 dβν(β − ξ/2) 2 ] ,(15)
where ξ = 1 for the case of equal masses (m 1 = m 2 = m) and ξ = 2 for the case of the heavy-light system (m 1 → ∞, m 2 = m).
Here
σ 1 = σ H + η 2 (σ H − σ E ), σ 2 = 2η(σ E − σ H ).
The fields µ, ν(β), and η(β) are the auxiliary fields, also called in the literature the einbeins [14] 2 . Generally speaking, the einbein fields appear in the Lagrangian and, even in absence of the corresponding velocities, they can be considered as extra degrees of freedom introduced to the system. The einbeins can be touched upon when proceeding from the Lagrangian of the system to its Hamiltonian and thus they mix with the ordinary particles coordinates and momenta. Besides, in order to preserve the number of physical degrees of freedom, constrains are to be imposed on the system and then the formalism of constrained systems quantisation [15] is operative (see, for example, Ref. [16] for the open straight-line QCD string quantisation using this formalism). A nontrivial algebra of constraints and the process of disentangling the physical degrees of freedom and non-physical ones make the problem very complicated. In the meantime, a simpler approach to einbeins exists which amounts to considering all (or some) of them as variational parameters and thus to taking extrema in the einbeins either in the Hamiltonian or in its spectrum [17]. Being an approximate approach this technique appears accurate enough (see, for example, Ref. [18]) providing a simple but powerful and intuitive method of investigation. In this Letter we follow the given technique, so that extrema in all three einbeins are understood in the Hamiltonian (15). Notice that, for σ E = σ H = σ, the field η drops from the Hamiltonian and the standard expression for the string with quarks at the ends [12] readily comes out from Eq. (15). Notice that the kinetic part of the Hamiltonian (15) has a very clear structure: the radial motion of the quarks happens with the effective mass µ, whereas for the orbital motion the mass is somewhat different, containing the contribution of the inertia of the string.
We now take the extrema in ν(β) and η(β), approximating η(β) by a distribution uniform in β. Then the Hamiltonian (15) takes the form
$$H=\frac{1}{\xi}\left[\frac{p_r^2+m^2}{\mu}+\mu\right]+V_{SI}(r),\tag{16}$$
[Footnote 2] The einbein µ is introduced in the Lagrangian via the substitution 2√(AB) → A/µ + Bµ and allows one to simplify the quark kinetic term. The continuous einbein ν(β) enters through the same trick for the string term. Finally, the second continuous einbein η(β) appears due to the substitution A²/B → −Bη² + 2ηA. As soon as extrema are taken in all einbeins, the initial form of the Lagrangian is restored.

and the spin-independent potential reads:
$$V_{SI}(r)=\eta_0\,\sigma_E\,r+\left(\frac{1}{\eta_0}-\eta_0\right)\sigma_H\,r+\frac{1}{\xi}\,\mu y^2,\qquad \eta_0=\frac{y}{\arcsin y},\tag{17}$$
with y being the solution of the transcendental equation
$$\frac{l(l+1)}{\sigma_H r^2}=\frac{\xi}{4y}\Bigl[1+\eta_0^2\Bigl(1-\frac{\sigma_E}{\sigma_H}\Bigr)\Bigr]\left(\frac{1}{\eta_0}-\sqrt{1-y^2}\right)+\frac{\mu y}{\sigma_H r}.\tag{18}$$
The interested reader can find the details of a similar evaluation performed for the Hamiltonian (15) with σ E = σ H = σ in Ref. [19,20]. The remaining einbein µ is to be considered as the variational parameter to minimise the spectrum of the Hamiltonian (16). Obviously, the extremal value of µ depends on quantum numbers and acquires two contributions: one coming from the current quark mass m and the other, purely dynamical, contribution coming from the mean value of the radial component of the momentum p r . It is instructive to pinpoint the difference in the potential (17) below and above the T c .
At small r's, the potential (17) turns to the centrifugal barrier l(l + 1)/(ξµr 2 ), whereas its large-r behaviour differs dramatically for the temperatures below and above the T c . Indeed, the leading large-r contribution to the inter-quark potential corresponds to y ≪ 1 and, for T < T c , reads:
$$V_{\rm conf}(r)=\sigma_E\,r.\tag{19}$$
This is the linear confinement which is of a purely colour-electric nature and which admits angular-momentum-dependent corrections (see Refs. [12,19]).
In the deconfinement phase, at T > T c , the colour-electric part of the potential (17) vanishes, the leading long-range term coming from the angular-momentum-dependent part of the interaction:
$$V_{SI}(r)=\frac{3\,l(l+1)}{\xi^2\,\sigma_H\,r^3}+\dots\;.\tag{20}$$
Interestingly, in the deconfinement phase in absence of the confining potential, the spinindependent interaction becomes short-ranged decreasing as 1/r 3 at large inter-quark separations. This feature means the full compensation of the centrifugal barrier which would naively behave as 1/r 2 instead. The reason is obvious: at large inter-quark separations, the effective quark mass µ is to be compared to the "mass" of the string σr. The bound-state problem solved in the potential (19) gives a large value p r ∝ σ E r , so that, even for light (massless) quarks, their effective mass µ appears quite large (µ ≫ m). On the contrary, for light quarks and in absence of the strong confining interaction (19), the values of µ are small (µ ≈ m) and can be neglected as compared to the string contribution σ H r. This makes the spin-dependent terms in the effective inter-quark interaction important in this regime, as opposed to the confinement phase, where they give only small corrections to the bound states formed in the confining potential (19).
In the next section we turn to the derivation of spin-dependent contributions to the interquark potential.
Nonperturbative spin-dependent interactions
In this section we return to the Green's function of the quark-antiquark system (3) and restore spin-dependent terms. To this end we notice that the interaction of the quark spins with the background gluonic field is to be added at the exponent. It enters in the standard combination σ µν F µν , with σ µν = 1 4i (γ µ γ ν − γ ν γ µ ), and appears under the integral in the proper time τ . After averaging over the background field, the resulting expression for the Wilson loop can be again written in the form of Eq. (4),
$$\frac{1}{N_C}\,{\rm Tr}\,W(C)=\exp\left[-\frac{1}{2}\int_S d\pi_{\mu\nu}(x)\int_S d\pi_{\lambda\rho}(x')\,{\rm Tr}\,F_{\mu\nu}(x)\Phi(x,x')F_{\lambda\rho}(x')\Phi(x',x)\right],\tag{21}$$
but with the differential dπ µν containing the spinor part,
$$d\pi_{\mu\nu}(x)=ds_{\mu\nu}(x)-i\,\sigma_{\mu\nu}\,d\tau.\tag{22}$$
The nonperturbative spin-dependent interaction appears from the combination of differentials involving σ µν . We skip the details of the derivation which can be found, for example, in Refs. [21,22] and quote the result here. The leading spin-dependent term is the spin-orbit interaction (we omit contributions of D E,H 1 which bring about short-range terms O(r −3 )),
$$V_{SO}(r)=\left(\frac{\vec S_1\vec L_1}{2\mu_1^2}-\frac{\vec S_2\vec L_2}{2\mu_2^2}\right)\left(\frac{1}{r}\frac{dV_0}{dr}+\frac{2}{r}\frac{dV_1}{dr}\right),\tag{23}$$
where V i (r) can be expressed through the colour-electric and colour-magnetic field correlators,
$$\frac{1}{r}\frac{dV_0}{dr}=\frac{2}{r}\int_0^\infty d\tau\int_0^r d\lambda\,D^E(\tau,\lambda),\qquad \frac{2}{r}\frac{dV_1}{dr}=-\frac{4}{r}\int_0^\infty d\tau\int_0^r d\lambda\left(1-\frac{\lambda}{r}\right)D^H(\tau,\lambda).\tag{24}$$
Notice that this result [21, 22] is not based on a 1/m expansion; the only approximation made is the Gaussian approximation for the field correlators. The accuracy of this approximation was checked both at T = 0 [23] and at T > T_c [24] and found to be of the order of one percent.
Only the V_1 potential survives above the T_c. Besides, the corresponding µ-dependent denominator is to be corrected according to the discussion of the previous section: namely, one of the µ's is to be augmented by the string rotation term, yielding
$$V_{SO}(r)=-\frac{2\xi\,\vec S\vec L}{\mu r\,\bigl(\xi\mu+2\langle\nu(\beta-\xi/2)^2\rangle\bigr)}\int_0^\infty d\tau\int_0^r d\lambda\,D^H(\tau,\lambda)\left(1-\frac{\lambda}{r}\right),\tag{25}$$
where
$$\langle\nu(\beta-\xi/2)^2\rangle\equiv\int_0^1 d\beta\,\nu\,(\beta-\xi/2)^2=\frac{\sigma_H r}{2\xi^2 y^2}\bigl[1+\eta_0^2\bigr]\left(\frac{1}{\eta_0}-\sqrt{1-y^2}\right),\tag{26}$$
and S = S 1 + S 2 , for the light-light system, and S is the light-quark spin, for the heavy-light quarkonium.
Bound states of heavy quarks above the deconfinement temperature
In the previous sections we derived the effective nonperturbative inter-quark potential, including spin-independent terms and the spin-orbital interaction. It follows from Eq. (25) that above the deconfinement temperature, for the states with the total momentum J = l + S, S L > 0 and the potential V SO (r) becomes attractive, with a possibility to maintain bound states. Furthermore, its slow decrease as r → ∞ suggests that an infinite number of bound states exists, with the binding energies asymptotically approaching zero. Let us study these bound states in more detail. Hereafter σ E = 0 and we use the notation σ for the magnetic tension σ H . In view of an obvious similarity of the light-light and heavy-light cases (the difference manifesting itself only in numerical coefficients), we investigate numerically only the light-light system, as a paradigmatic example. Furthermore, for r ≫ T g , the potential (25) does not depend on the form of the correlator D H since
$$2\int_0^\infty d\tau\int_0^r d\lambda\,D^H(\tau,\lambda)\left(1-\frac{\lambda}{r}\right)\;\underset{r\gg T_g}{\approx}\;\sigma.\tag{27}$$
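To illustrate the saturation expressed by Eq. (27), the sketch below evaluates the double integral for an assumed Gaussian profile of the correlator, D_H(τ, λ) = D_H(0) exp[−(τ² + λ²)/T_g²], with the normalization fixed so that the r → ∞ limit equals a chosen σ; the profile and all numerical values are illustrative only.

```python
# Numerical check of Eq. (27): the integral saturates to sigma for r >> T_g.
# The Gaussian shape of D_H and the numbers below are illustrative assumptions.
import numpy as np
from scipy import integrate

hbarc = 0.1973                          # GeV*fm
T_g = 0.25 / hbarc                      # correlation length, converted to GeV^-1
sigma = 0.2                             # target magnetic tension, GeV^2 (illustrative)
D0 = 2.0 * sigma / (np.pi * T_g**2)     # fixes 2*int_0^inf int_0^inf D_H = sigma

def lhs(r):
    """Left-hand side of Eq. (27): 2 * int_0^inf dtau int_0^r dlam D_H * (1 - lam/r)."""
    integrand = lambda lam, tau: D0 * np.exp(-(tau**2 + lam**2) / T_g**2) * (1.0 - lam / r)
    val, _ = integrate.dblquad(integrand, 0.0, np.inf, 0.0, r)
    return 2.0 * val

for r_fm in (0.2, 0.5, 1.0, 2.0):
    r = r_fm / hbarc
    print(f"r = {r_fm:3.1f} fm : LHS = {lhs(r):.4f} GeV^2  (sigma = {sigma} GeV^2)")
```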
Finally, we neglect the perturbative part of the inter-quark interaction, since it is screened to a large extent and contributes only to short-ranged forces, whereas the effect discussed in this work is essentially a long-ranged one. Therefore we study the spectrum of bound states in the potential
$$V(r)=\left(\frac{\arcsin y}{y}-\frac{y}{\arcsin y}\right)\sigma r+\mu y^2-\frac{\sigma\,l}{\mu r\bigl(\mu+2\langle\nu(\beta-1/2)^2\rangle\bigr)},\tag{28}$$
which is the sum of the spin-independent term (17) and the spin-orbital term (25); y is the solution of Eq. (18) with σ E = 0. In Fig. 1 we plot the effective potential (28) for three values of the quark mass: m = 1GeV, 2GeV, and 3GeV. The resulting eigenenergy ε nrl (µ) is added then to the free part of the Hamiltonian (15),
$$M_{n_r l}(\mu)=\frac{m^2}{\mu}+\mu+\varepsilon_{n_r l}(\mu),\tag{29}$$
and this sum is minimised with respect to the einbein µ,
$$\left.\frac{\partial M_{n_r l}(\mu)}{\partial\mu}\right|_{\mu=\mu_0}=0,\qquad M_{n_r l}=M_{n_r l}(\mu_0).\tag{30}$$
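A minimal sketch of the extremization step (29)-(30) is given below. It uses the Coulomb-like approximation (31) discussed later, for which the eigenenergy is analytic, ε(µ) = −σ²l²/(4n²µ³) with n = n_r + l + 1, so the stationary point of Eq. (30) can be written in closed form. The values of σ, l, n_r and the quark masses are illustrative, and the resulting binding energies are not those of Table 2, which were obtained with the full potential (28).

```python
# Illustration of the variational step (29)-(30) with the analytic model
#   eps(mu) = -sigma^2 l^2 / (4 n^2 mu^3),   n = n_r + l + 1.
import numpy as np

sigma, l, n_r = 0.2, 1, 0                  # GeV^2, angular momentum, radial quantum number
n = n_r + l + 1
c = sigma**2 * l**2 / (4.0 * n**2)         # eps(mu) = -c / mu^3

def M(mu, m):
    return m**2 / mu + mu - c / mu**3      # Eq. (29) with the model eps(mu)

def mu_extremum(m):
    """Local-minimum branch of Eq. (30): mu^4 - m^2 mu^2 + 3c = 0."""
    disc = m**4 - 12.0 * c
    if disc < 0:
        return None                        # no extremum in mu for this quark mass
    return np.sqrt((m**2 + np.sqrt(disc)) / 2.0)

print("critical mass ~", (12.0 * c) ** 0.25, "GeV")
for m in (4.8, 1.4, 0.5):                  # illustrative quark masses, GeV
    mu0 = mu_extremum(m)
    if mu0 is None:
        print(f"m = {m}: no extremum in mu")
    else:
        print(f"m = {m}: mu0 = {mu0:.3f} GeV, binding E = {(M(mu0, m) - 2 * m) * 1e3:.3f} MeV")
```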
Table 1 (parameters): m_b [GeV], m_c [GeV], m_s [GeV], σ [GeV²], T_g [fm].

Table 2: The binding energy E_{n_r l} ≡ M_{n_r l} − 2m (in MeV) for the ground state and for the first radial excitation in the potential (28) with l = 1 for the bb, cc, and ss quarkonia.

              bb          cc         ss
  n_r = 0     -0.007      -0.19      -45
  n_r = 1     -5×10⁻⁴     -0.015     -2.7
In Table 1 we present the set of parameters used in our numerical calculations, whereas in Table 2 we give the results for the binding energy of the bb, cc, and ss quarkonia above the T_c for l = 1 and n_r = 0, 1. We thus confirm that, for l ≠ 0, the potential (28) does support bound states. The binding energy is small (|E_{n_r l}| ≪ T for the b and c quarks and |E_{n_r l}| ≲ T for the s quarks), so these bound states can dissociate easily.
Bound states of light quarks above the deconfinement temperature
In this section we turn to the problem of the binding of light quarks. The effective potential (28) takes different forms at different inter-quark separations, depending on whether the quark mass term µ or the "string mass" 2⟨ν(β − 1/2)²⟩ gives the dominating contribution, that is, for µ ≫ σr or µ ≪ σr. If large distances contribute most to the bound-state formation (the latter case), then V(r) = O(l(l + 1)/(σr³)) + O(l/(µr²)), where the first term comes from the spin-independent interaction (see Eq. (24)) and the other stems from the spin-orbit potential. The dependence of the binding energy on µ is then expected to be rather moderate, approximately as 1/µ. On the contrary, in the former case, with the string dynamics giving only a correction to the quark mass term, the potential (28) can be approximated as
$$V(r)\approx\frac{l(l+1)}{\mu r^2}-\frac{\sigma l}{\mu^2 r},\tag{31}$$
that is by the sum of the centrifugal barrier and the attractive Coulomb-like potential with the effective coupling
$$\alpha_{\rm eff}=\frac{\sigma l}{\mu^2}.\tag{32}$$
The corresponding eigenenergy can be found in any textbook in Quantum Mechanics and gives a stronger dependence on µ,
$$\varepsilon(\mu)\propto-\mu\,\alpha_{\rm eff}^2\propto-\frac{\sigma^2 l^2}{\mu^3}.\tag{33}$$
Let us consider the states with l = 1 and n r = 0. We follow now the procedure described in detail in the previous section, solve the full problem numerically, and find the dependence of the eigenenergy ε on the einbein µ to be
$$\varepsilon(\mu)\propto-\frac{1}{\mu^{2.79}}.\tag{34}$$
Comparing this to Eq. (33) we find a good agreement, with the small deviation in the power resulting from the proper string dynamics. We conclude therefore that the dynamics of the system develops at inter-quark separations T_g ≪ r ≲ m/σ (since, for the set of parameters given in Table 1, m ≳ σT_g, there is room for such separations).
Let us now discuss the procedure of minimisation of the spectrum (29) in µ. First of all, let us notice that, in the einbein field formalism, the calculation of the spectrum naively looks like a nonrelativistic calculation, due to the "nonrelativistic" form of the kinetic energy in the Hamiltonian once the einbein field µ is introduced. In the meantime, the full relativistic form of the quark kinetic energy is readily restored as µ takes its extremal value, and hence it is the procedure of taking the extremum in µ in the masses (29) that sums up an infinite series of relativistic corrections and thus restores the relativistic spectrum. For example, the relativistic ground-state eigenenergy E₀ = m√(1 − (Zα)²) of the one-body Dirac equation with the Coulomb potential −Zα/r can be reproduced exactly with the help of the einbein technique. Finally, one can visualise the form of µ by considering the effective Dirac equation for the light quark in the field of the static antiquark source. When written in the form of a second-order differential equation, it contains a spin-orbit term of the same form as given in Eq. (23), but with µ replaced by the combination ε + m + U − V, where U and V are the scalar and vector potentials, respectively. For light (massless) quarks this combination takes drastically different values below and above the deconfinement temperature. Indeed, in the confining phase of QCD, when spontaneous chiral symmetry breaking leads to a strong effective, dynamically generated scalar potential U, this effective "µ" is large. On the contrary, above the T_c, when U is small, "µ" is also small (it can even be negative, since the eigenenergy ε may have any sign).
We find numerically that the extremum in µ for M nrl (µ), Eq. (29), exists for m exceeding the value of approximately 0.22GeV (for the given σ = 0.2GeV), and no extremum exists for smaller values of the quark mass (see Fig. 2 for the dependence of the binding energy E nrl on the quark mass). This property of the bound state spectrum can be easily understood using the analogy with the bound state problem for the Dirac equation with the potential in the form of a deep square well or Coulomb potential discussed above. For example, for the Coulomb potential, a problem appears as the coupling exceeds unity -the well-known problem of Z > 137. From Eq. (32) we easily find this critical phenomenon to happen at m ≈ µ √ σ ≈ 0.4GeV. This estimate is in good agreement with the result of our direct numerical calculations quoted above.
Physically this situation means that many quark-antiquark and/or gluon pairs are formed and finally stabilise the vacuum. Formally the problem is not anymore a two-body problem, but rather many-body, so that many-body techniques are to be applied. For example, in electrodynamics with Z > 137, one can derive the resulting self-consistent field of the Thomas-Fermi type [25]. A similar situation can be expected in the deconfinement phase of QCD. In absence of the linear potential, the einbein µ (playing the role of the effective quark mass) is not anymore bounded from below by the values of order √ σ E ≃ 0.4GeV coming from the binding energy in the linearly rising potential. To see the onset of this phenomenon in the framework of our two-body (one-body for the heavy-light case) Hamiltonian, one should take into account the negative-energy part of the spectrum, when the full matrix form of the Hamiltonian is considered [26]. Indeed, the matrix structure of the Hamiltonian occurs in the path-integral formalism from the two-fold time-forward/backward motion described by the positive/negative values of µ. Off-diagonal terms in the matrix Hamiltonian produce the turning points in the particle trajectory and result in Z-graphs. Notice that the same is true for the glueballs and gluelumps since in this case equations are the same as for light-light and heavy-light quarkonia, respectively, but with the quark spin replaced by the gluon spin and σ H by 9 4 σ H . Concluding this section one can say that colour-magnetic (spin-dependent) interaction acting on light quark or gluonic systems enforces nonperturbative creation of light qq and gg pairs.
Discussion
The results obtained in this Letter allow us to comment on the general situation with the existence of bound quark-antiquark states in the deconfinement phase of QCD. First of all, contrary to naive expectations, the colour-Coulomb potential is screened down to a shortranged interaction and bound states appear due to nonperturbative colour-electric (see Refs. [8,9]) and colour-magnetic interactions in the vacuum. Indeed, although quark-antiquark pairs in the relative S-wave cannot be bound by such interactions for T 1.5T c (see, for example, Refs. [8,9,10]), pairs with a nonzero relative angular momentum can form bound states at all T > T c since σ H grows with T . Second, formation of such bound states above the T c is energetically favourable since it lowers the system energy as compared to the ensemble of free, unbound particles. For heavy quarks the binding is weak and the system dissociates easily. Finally, the dependence of the binding energy on the quark mass is strong -the corresponding eigenenergy for strange quarks is around 10÷100MeV rather then below 1MeV for the charmed and bottom quarks (see Table 2). The situation becomes even more dramatic for the lightest quarks. The effective inter-quark potential for light quark flavours (and gluons!) becomes extremely strong and may lead to pair creation -the effect similar to the critical phenomenon in QED for the centre charge Z exceeding 137. Vacuum polarisation effects become important and they lead to a complete rebuilding of the vacuum structure of the theory. We anticipate a similar phenomenon to take place for light quarks (and gluons) in QCD in the deconfinement phase.
It is important to notice that it was a separated quark-antiquark pair which was considered in this paper. In reality such quark-antiquark pairs are to be considered in the medium formed by other quarks and gluons, that is, as a part of the SQGP. As a measure of the interaction in the SQGP one can consider the ratio of the mean potential energy to the mean kinetic energy of the particles in the plasma, Γ = ⟨V⟩/⟨K⟩. It is easy to estimate that ⟨K⟩ ≃ T and ⟨V⟩ ≃ σ_H/T. This gives Γ = σ_H/T², and so this parameter is large for quarks and several times larger for gluons. Therefore, the SQGP is a strongly interacting medium which looks like a liquid rather than a gas. With the growth of the temperature the medium becomes more dense, and the mean distance between particles decreases. As this distance becomes comparable to the radius of the bound states discussed in this Letter, the latter will dissociate because of the screening effects. In other words, the hot medium plays the role of a natural cut-off for the effect of bound pair creation discussed above. Notice however that, for quark masses around 0.2 GeV, the radius of the bound state is of the order of one fm and is expected to decrease further with decreasing quark mass, even if the pair creation process is properly taken into account. This means that there is indeed room for such bound states at temperatures above the T_c. Breakup, with the growth of the temperature, of such high-l states for quarks, and especially for gluons which possess more degrees of freedom than quarks, may affect such characteristics of the plasma as its free energy and its entropy (for a recent attempt of explaining the near-T_c behaviour of these characteristics see Ref. [27]). This work is in progress and will be reported elsewhere.
Figure 1: The profile of the effective potential (28) for m = 1 GeV (solid line), m = 2 GeV (dashed line), and m = 3 GeV (dotted line).

Figure 2: The binding energy of the quark-antiquark system versus the mass of the quark, for l = 1 and n_r = 0 (first plot) and n_r = 1 (second plot).

Table 1: The set of parameters used for the numerical evaluation.
References

[1] B. Müller and J. L. Nagle, Ann. Rev. Nucl. and Part. Phys., 1 (2006).
[2] A. Di Giacomo, E. Meggiolaro, and H. Panagopoulos, Nucl. Phys. B483, 37 (1997); M. D'Elia, A. Di Giacomo, and E. Meggiolaro, Phys. Rev. D67, 114504 (2003).
[3] Yu. A. Simonov, JETP Lett. 55, 605 (1992).
[4] H. G. Dosch, Phys. Lett. B190, 177 (1987); H. G. Dosch and Yu. A. Simonov, Phys. Lett. B205, 339 (1988); Yu. A. Simonov, Nucl. Phys. B307, 512 (1988); A. Di Giacomo, H. G. Dosch, V. I. Shevchenko, and Yu. A. Simonov, Phys. Rep. 372, 319 (2002).
[5] C. Borgs, Nucl. Phys. B261, 455 (1985); E. Manowsakis and J. Polonyi, Phys. Rev. Lett. 58, 847 (1987).
[6] G. Boyd et al., Nucl. Phys. B469, 419 (1996); F. Karsch, E. Laermann, and M. Lutgemeier, Phys. Lett. B346, 94 (1995).
[7] Yu. A. Simonov, JETP Lett. 54, 249 (1991).
[8] A. Di Giacomo et al., arXiv:hep-ph/0512125, Phys. At. Nucl., in press.
[9] Yu. A. Simonov, Phys. Lett. B619, 293 (2005).
[10] P. Petreczky, arXiv:hep-lat/0409139.
[11] Yu. A. Simonov, Phys. At. Nucl. 58, 309 (1995).
[12] A. Yu. Dubin, A. B. Kaidalov, and Yu. A. Simonov, Phys. Lett. B323, 41 (1994).
[13] M. Campostrini, A. Di Giacomo, and G. Mussardo, Z. Phys. C25, 173 (1984); A. Di Giacomo and H. Panagopoulos, Phys. Lett. B285, 133 (1992); A. Di Giacomo, M. Maggiore, and S. Olejnik, Nucl. Phys. B347, 441 (1990); R. Haymaker and J. Wosick, Acta Phys. Pol. B21, 403 (1990); L. Del Debbio, A. Di Giacomo, and Yu. A. Simonov, Phys. Lett. B332, 113 (1995); ibid. B332, 362 (1995).
[14] L. Brink, P. Di Vecchia, and P. Howe, Nucl. Phys. B118, 76 (1977).
[15] P. A. M. Dirac, "Lectures on Quantum Mechanics", Belfer Graduate School of Science, Yeshiva University, New York (1964).
[16] Yu. S. Kalashnikova and A. V. Nefediev, Phys. Atom. Nucl. 60, 1389 (1997).
[17] Yu. A. Simonov, Phys. Lett. B226, 151 (1989).
[18] Yu. S. Kalashnikova, A. V. Nefediev, and Yu. A. Simonov, Phys. Rev. D64, 014037 (2001).
[19] V. L. Morgunov, A. V. Nefediev, and Yu. A. Simonov, Phys. Lett. B459, 653 (1999).
[20] F. Buisseret and C. Semay, Phys. Rev. D70, 077501 (2004).
[21] Yu. A. Simonov, in "Sense of Beauty in Physics", volume in honour of Adriano Di Giacomo, Pisa Univ. Press, 2006, p. 29; arXiv:hep-ph/0512242; A. M. Badalian and Yu. A. Simonov, Yad. Phys. 59, 2247 (1996).
[22] Yu. A. Simonov, in Proceedings of the XVII International School of Physics "QCD: Perturbative or Nonperturbative", Lisbon, 1999, edited by L. S. Ferreira, P. Nogueira, and J. I. Silva-Marcos (World Scientific, 2000), p. 60.
[23] V. I. Shevchenko and Yu. A. Simonov, Phys. Rev. Lett. 85, 1811 (2000); Int. J. Mod. Phys. A18, 127 (2003); G. S. Bali, Phys. Rev. D62, 114503 (2000); S. Deldar, Phys. Rev. D62, 034509 (2000).
[24] S. Gupta, K. Húbner, and O. Kaczmarek, arXiv:hep-lat/0608014.
[25] A. B. Migdal, D. N. Voskresenksy, and V. S. Popov, Pis'ma v ZhETF 24, 186 (1976); ZhETF 72, 834 (1977).
[26] Yu. A. Simonov, Phys. Atom. Nucl. 68, 709 (2005).
[27] D. Antonov, S. Domdey, and H.-J. Pirner, arXiv:hep-ph/0612256.
Extreme Beam-forming with Impedance Metasurfaces Featuring Embedded Sources and Auxiliary Surface Wave Optimization

Gengyu Xu, Student Member, IEEE, Vasileios G. Ataloglou, Student Member, IEEE, Sean V. Hum, Senior Member, IEEE, and George V. Eleftheriades, Fellow, IEEE

Index Terms - Antenna beam-forming, metasurfaces, MIMO antenna, 6G communications, integral equations, surface waves
We present the end-to-end design of compact passive and lossless metasurface (MTS) antennas with integrated feeds. The complete low-profile system consists of a single-layered reactive impedance MTS printed on top of a grounded dielectric substrate, and is fed by sources which are embedded inside the substrate. An accurate and efficient volume-surface integral equation-based model of the device is developed, and used as the basis for the rapid optimization of its performance. The optimized designs leverage tailored auxiliary surface waves supported by the impedance MTS to distribute the localized source power across their apertures. This facilitates the realization of extreme field transformations such as wide-angle beam-forming or sharedaperture beam-forming, with nearly 100% aperture efficiencies. The procedure also allows for arbitrary beam-shaping with complete main beam and side lobe control. We also derive several feasibility-related constraints, which can significantly enhance the power efficiency as well as the bandwidth of the MTS antennas when they are implemented in practice. Full-wave numerical simulations confirm the effectiveness of the presented approach, as well as the extreme field transformation capabilities of the synthesized designs.
I. INTRODUCTION
The research and development of antennas with extraordinary beam-forming capabilities have seen a resurgence owing to the advent of electromagnetic metasurfaces (MTSs) [1], which can be viewed as the two-dimensional analogue of metamaterials. They consist of dense arrays of scatterers (called meta-atoms) with judiciously designed electromagnetic polarizabilities. Due to their highly subwavelength thicknesses, MTSs can be theoretically modeled as infinitesimally thin sheets of effective electric and/or magnetic currents imposing abrupt discontinuities in the magnetic and/or electric fields. Full control over both fields can be achieved with the use of Huygens' metasurfaces (HMSs), which are electrically and magnetically polarizable [2], [3]. Even more types of fieldtransforming functionalities can be obtained by introducing cross-coupling between the electric and magnetic responses, thereby realizing so-called bianisotropic Huygens' metasurfaces (BMSs) [4]- [7].
The strong and efficient interaction between radio frequency electromagnetic waves with metals has led to the development of printed circuit board (PCB) metasurfaces whose constituent meta-atoms are fabricated by etching designed conductive patterns on dielectric substrates. Due to their compact form factor and their unprecedented ability to precisely control various aspects of the electromagnetic waves, such as the phase [8]- [11], amplitude [12], polarization [13]- [15] and frequency content [16], [17], PCB MTSs represent the ideal platform for realizing the next generation of antennas. They have already been successfully integrated into existing technologies such as transmitarray [18], [19] and reflectarray [20], [21] antennas.
To further reduce the overall profile of the system without compromising its performance, metasurfaces with integrated sources have been investigated. Periodically modulated impedance MTSs based on the extended principles of holography have been successfully designed to transform the surface waves from embedded dipoles into directive radiation [22]- [25]. By adding weak perturbations to the average surface impedance, shaped beams can also be produced. Some of the limitations of this approach can be lifted by directly synthesizing the desired aperture fields with the help of local-powerbalancing auxiliary surface waves [26], [27]. Alternatively, complete control of the antenna aperture fields can be obtained using HMSs [28], [29] or BMSs [30]- [33] with embedded sources, which offer the designer much more flexibility in terms of the obtainable radiation patterns. However, they are harder to implement and incur more ohmic loss due to their underlying complexity.
A class of embedded-source-fed passive beam-forming MTSs leveraging non-local electromagnetic interactions have been recently proposed [34]- [38]. In these designs, the feed is placed extremely close to the MTS. The localized source power is distributed across the MTS through tailored auxiliary surface waves (SWs), which enables the effective utilization of the entire physical aperture, regardless of its size. In contrast to holographic modulated impedance MTSs, these devices are capable of truly arbitrary beam-forming, since they do not assume any functional form for the aperture fields. In particular, some of the aforementioned designs [36], [38] leverage a single electric impedance MTS backed by a ground plane, meaning they are easier to fabricate and can be less lossy than other implementations that rely on multilayer transmissive HMSs. Their exceptional beam-forming capability can be explained by the fact that the ground plane shorts out the contribution of the effective electric currents to the far-field. Hence, the radiation pattern is solely dictated by the effective magnetic currents, which the electric impedance MTS supplies through its induced conduction currents and images (caused by the ground plane).
Passive and lossless MTSs leveraging auxiliary SWs benefit from a more rigorous design process than conventional ones, due to their reliance on tailored mutual interactions between meta-atoms. To address this, various approaches based on integral equations (IEs) have been developed [32], [34], [39]- [43]. Recently, these methods have been extended to account for the effect of dielectric and/or ground plane truncation [37], [38], [44]- [48], which can alter the beam shape through edge reflection and diffraction. Beside their accuracy, another major advantage of IE-based design approaches is that they are valid regardless of the homogeneity of the MTS. Hence, they can be utilized to design devices operating within the refractive (gradient MTS) regime or the diffractive regime. The latter, sometimes referred to as metagratings [49]- [52] or sparse MTSs [53]- [55], can be desirable due to their low complexity. Furthermore, they are well-suited for synthesizing devices that may have "forbidden regions" in which meta-atoms cannot be printed [56].
In this paper, we present the complete end-to-end design of compact passive and lossless ground-plane-backed impedance MTSs featuring embedded sources. Along with an accurate analysis technique based on a set of coupled volume-surface integral equations (VSIE), we develop an efficient and versatile accompanying optimization-based design method that can realize arbitrary beam-forming through exploitation of tailored auxiliary surface waves. Despite its simplicity, the proposed method can synthesize devices capable of extreme feats such as wide-angle or shared-aperture beam-forming while maintaining near-perfect aperture efficiency. We also derive optimization constraints based on practical considerations. Fullwave simulations with realistic devices confirm that, not only can the synthesized impedance MTSs form arbitrarily shaped beams, but they also exhibit significantly improved bandwidth and power efficiency, owing to the proposed constraints.
II. THEORETICAL MODEL
A. Proposed Architecture
The proposed architecture for the embedded-source-fed impedance metasurface is shown in Fig. 1. For simplicity, we restrict our attention to 1D beam-forming devices that are invariant along the x-direction. They can be envisioned as collections of "meta-wires" printed on top of grounded dielectric substrates with thickness h, width W (along the y-direction), and dielectric constant r . The wires have extremely subwavelength loading periodicity (Λ in Fig. 1), meaning they are homogenizable in their longitudinal (x) direction. However, we make no assumption on the homogeneity of the device along the y-direction, meaning that the wires can be sparsely and/or non-uniformly spaced. Furthermore, we do not assume an exact form for the embedded source(s) at this stage, other than that it consists of x-directed, x-invariant electric currents with angular frequency ω = 2πf . This then implies that the electric field everywhere only contains an x-component. Throughout this paper, a time convention of e jωt will be used.
To facilitate the efficient analysis and design of the device under consideration, we model the cross sections of its N meta-wires using hypothetical narrow homogeneous strips residing on the curve C w , as labeled in Fig. 2. Each strip has a surface electric impedance Z i (i ∈ [1, N ]), which is a function of the printed wire geometry. This is a good approximation as long as the wires have deeply sub-wavelength widths. The impedances of all the wires can be written into a N × 1 vector Z w . The cross section of the ground plane is modeled by the curve C g , while the cross section of the dielectric is modeled by the rectangular region S v .
[Fig. 1: antenna geometry with coordinate axes (x), (y), (z) and elevation angle θ.]
B. Volume-Surface Integral Equations Formulation
The 2D model in Fig. 2 allows us to predict and engineer the radiation pattern of the MTS using a modified version of the well known VSIE framework [57] originally proposed to model mixed conductive and dielectric structures. It was recently adopted to design metasurface antennas [36]- [38], [44]- [48]. For completeness, this approach is reviewed here, along with some of our proposed augmentations. According to the VSIE formulation, the total field radiated by the MTS antenna, E, is ascribed to the following four contributions:
[Fig. 2: 2D model of the antenna cross section: meta-wire contour C_w with impedances Z_i, Z_{i+1} at points (y_wn, z_wn), ground-plane contour C_g at points (y_gn, z_gn), dielectric region S_v, and the currents J_w and J_v.]
1) The source (incident) field E^i, radiated by the feed as if it was in free space;
2) The scattered field E^s_g, radiated by the induced conduction currents J_g on the ground plane;
3) The scattered field E^s_w, radiated by the induced conduction currents J_w on the meta-wires;
4) The scattered field E^s_v, radiated by the induced polarization currents J_v in the dielectric.
Computation of the total field amounts to solving for the unknown currents induced by E^i. To that end, a coupled system of integral equations for J_g, J_w, J_v can be written. It can be solved numerically using the method of moments.
The fields radiated by the unknown surface conduction currents can be written as
$$E^s_\iota(\vec\rho)=-\frac{k\eta}{4}\int_{C_\iota}H_0^{(2)}\bigl(k\,|\vec\rho-\vec\rho\,'|\bigr)\,J_\iota(\vec\rho\,')\,dy',\tag{1}$$
with ι ∈ {g, w}, ρ⃗ = ŷ y + ẑ z and ρ⃗′ = ŷ y′ + ẑ z′. Here, H_0^{(2)}(·) is the zeroth-order Hankel function of the second kind, and k = ω/c_o is the free-space wave number.
The fields radiated by the unknown volume polarization current in the dielectric can be written as
$$E^s_v(\vec\rho)=-\frac{k\eta}{4}\iint_{S_v}H_0^{(2)}\bigl(k\,|\vec\rho-\vec\rho\,'|\bigr)\,J_v(\vec\rho\,')\,dy'\,dz'.\tag{2}$$
The total tangential field must vanish on the ground plane, meaning
$$E^i+E^s_g+E^s_v+E^s_w=0\quad\text{on }C_g.\tag{3}$$
Furthermore, on meta-wire i, the total field must be proportional to J_w, with proportionality constant Z_i, according to Ohm's law. In other words,
$$E^i+E^s_g+E^s_v+E^s_w=Z_i\,J_w\quad\text{on meta-wire }i.\tag{4}$$
The electric field inside the dielectric must be proportional to the induced polarization current according to [58]
$$E^i+E^s_g+E^s_v+E^s_w=\frac{1}{j\omega(\epsilon_r-1)\epsilon_o}\,J_v\quad\text{in }S_v.\tag{5}$$
Together, (3)-(5) forms a set of VSIEs which can be easily solved using the method of moments. In this work, we employ the point-matching method, with the discretization scheme illustrated by the zoomed-in pictures in Fig. 2. The curve representing the ground plane (C g ) is divided into N g segments with width ∆ g , centered at ρ gn =ŷy gn +ẑz gn with n ∈ [1, N g ]. Similarly, each of the impedance strips representing the meta-wires is divided into N w contiguous segments with center coordinates ρ wn =ŷy wn +ẑz wn , yielding a total of N w = N w × N segments for the entire N -wire array. The width of the meta-wire segments are also chosen to be ∆ w . The cross section of the dielectric substrate is discretized into a 2D grid of rectangular (almost square) cells with area πr 2 o , centered at ρ vn =ŷy vn +ẑz vn with n ∈ [1, N v ]. The area is specified this way such that the rectangular cells can be approximated by circles with radius r o during numerical integration.
As illustrated in Fig. 2, the described discretization scheme allows one to expand the currents and the fields on S g and S w using 1D pulse basis functions. The currents and the fields in S v can be expanded using 2D pulse basis functions. Then, the VSIEs can be cast into the matrix form
$$\bar E^i=(\mathbf Z-\mathbf G)\,\bar J\triangleq\mathbf L\,\bar J,\tag{6}$$
where
$$\bar E^i=\begin{bmatrix}\bar E^i_g\\ \bar E^i_v\\ \bar E^i_w\end{bmatrix},\quad \bar J=\begin{bmatrix}\bar J_g\\ \bar J_v\\ \bar J_w\end{bmatrix},\quad \mathbf G=\begin{bmatrix}\mathbf G_{gg}&\mathbf G_{gv}&\mathbf G_{gw}\\ \mathbf G_{vg}&\mathbf G_{vv}&\mathbf G_{vw}\\ \mathbf G_{wg}&\mathbf G_{wv}&\mathbf G_{ww}\end{bmatrix},\quad \mathbf Z=\begin{bmatrix}\mathbf 0&\mathbf 0&\mathbf 0\\ \mathbf 0&\mathbf P&\mathbf 0\\ \mathbf 0&\mathbf 0&\mathbf Z_w\end{bmatrix}.\tag{7}$$
The vectorĒ i ι contains the incident (source) field sampled at the discretization points ρ ι , whileJ ι contains the sampled conduction or polarization currents. The blocks of the matrix G represent the self and mutual interaction between the various components of the impedance MTS antenna. They can be populated with (1) and (2). The integration over the ground plane and meta-wire segments can be performed numerically using the midpoint rule [59]. The integration over dielectric cells can be estimated as integrals over circles with equal area [60]. The singularities in the integrands associated with the self terms can be treated with the approximations [59], [60]
$$\mathbf G_{\iota\iota}[n][n]=\begin{cases}-\dfrac{k\eta\Delta_\iota}{4}\left[1-j\dfrac{2}{\pi}\log\!\left(\dfrac{1.781\,k\Delta_\iota}{4e}\right)\right], & \iota\in\{g,w\}\\[2ex]-\dfrac{\eta}{2k}\left[k r_o\,H_1^{(2)}(k r_o)-2j\right], & \iota=v.\end{cases}\tag{8}$$
Recall that r_o is the radius of a circle that has the same area as one rectangular dielectric cell. The matrix Z_w is diagonal, with entries corresponding to the effective impedances of the meta-wire segments:
$$\mathbf Z_w={\rm diag}\bigl(\bar Z_w[1],\cdots,\bar Z_w[1],\ \bar Z_w[2],\cdots,\bar Z_w[2],\ \cdots\bigr).\tag{9}$$
The matrix P is also a diagonal matrix, whose elements are proportional to the electric susceptibility of the substrate:
$$\mathbf P=\frac{1}{j\omega(\epsilon_r-1)\epsilon_o}\,\mathbf 1_{N_v\times N_v}.\tag{10}$$
Here, 1_{N_v×N_v} is the N_v × N_v identity matrix. Equation (6) can be interpreted as a function for J̄ in terms of Z̄_w, which can be used to determine the currents induced in the MTS by a known incident field Ē^i. The results then enable the evaluation of the total far-field radiated by the MTS through the use of the asymptotic expression of the Hankel function for large arguments [59]. When observing at some arbitrary distance from the origin (chosen to be 1 m in this study for convenience), and sampled at a set of N_θ discrete elevation angles, the far-field can be described by the following equation:
$$\bar E_{ff}=\bar E_{fi}+\bigl[\,\mathbf G_{fg}\ \ \mathbf G_{fv}\ \ \mathbf G_{fw}\,\bigr]\bar J,\qquad \mathbf G_{f\iota}[m][n]=-\frac{k\eta F}{4}\sqrt{\frac{2j}{\pi k}}\;e^{-jk}\,e^{jk(y_{\iota n}\sin\theta_m+z_{\iota n}\cos\theta_m)}.\tag{11}$$
Here, F = Δ_ι for ι ∈ {g, w}, and F = πr_o² if ι = v. The m-th entry of Ē_ff and Ē_fi corresponds to the total and the source far-field measured at the elevation angle θ_m ≜ 2πm/N_θ.
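As a sketch of how these matrix entries can be populated in practice, the helper functions below implement the self terms of Eq. (8) and a single far-field row entry of Eq. (11); the frequency and segment sizes are illustrative, and the functions reflect our own reading of the formulas rather than the authors' code.

```python
# Illustrative helpers for Eqs. (8) and (11); eta is the free-space impedance.
import numpy as np
from scipy.special import hankel2

eta = 376.730313668                    # free-space wave impedance [ohm]

def self_term_strip(k, delta):
    """Self term of Eq. (8) for ground-plane / meta-wire segments of width delta."""
    return -(k * eta * delta / 4) * (1 - 1j * (2 / np.pi) * np.log(1.781 * k * delta / (4 * np.e)))

def self_term_dielectric(k, r_o):
    """Self term of Eq. (8) for a dielectric cell modeled as a circle of radius r_o."""
    return -(eta / (2 * k)) * (k * r_o * hankel2(1, k * r_o) - 2j)

def far_field_entry(k, F, y_n, z_n, theta_m):
    """One entry of G_f_iota in Eq. (11) for a segment/cell centred at (y_n, z_n)."""
    return (-(k * eta * F / 4) * np.sqrt(2j / (np.pi * k))
            * np.exp(-1j * k) * np.exp(1j * k * (y_n * np.sin(theta_m) + z_n * np.cos(theta_m))))

f = 10e9                               # illustrative design frequency
k = 2 * np.pi * f / 3e8
print(self_term_strip(k, 0.7e-3), self_term_dielectric(k, 0.5e-3),
      far_field_entry(k, 0.7e-3, 0.0, 0.0, 0.0))
```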
To complete the analytical model, one can calculate the radiation intensity pattern of the antenna using the total far-zone electric field amplitude, according to
$$\bar U=\frac{1}{2\eta_o}\,\bar E_{ff}\odot\bar E_{ff}^{\,*},\tag{12}$$
where ⊙ denotes the element-wise product between vectors, and {·}* denotes complex conjugation. This then can be used to calculate various key antenna parameters such as the 2D directivity, defined as
$$\bar D=\frac{N_\theta\,\bar U}{\sum_{n=1}^{N_\theta}\bar U(\theta_n)}.\tag{13}$$
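For completeness, Eqs. (12)-(13) translate directly into a few lines of code; the snippet below is a generic sketch that accepts any sampled far-field vector and uses an arbitrary test pattern for illustration.

```python
# Sketch of Eqs. (12)-(13): radiation intensity and 2D directivity from the sampled far field.
import numpy as np

eta_0 = 376.730313668

def radiation_intensity(E_ff):
    return np.abs(E_ff) ** 2 / (2.0 * eta_0)      # element-wise |E|^2 / (2 eta_0), Eq. (12)

def directivity_2d(E_ff):
    U = radiation_intensity(E_ff)
    return len(U) * U / np.sum(U)                  # Eq. (13)

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
E_ff = np.cos(theta / 2) ** 4                      # arbitrary test pattern
print("peak directivity [dB]:", 10 * np.log10(directivity_2d(E_ff).max()))
```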
C. Acceleration With Kron Reduction
In this work, the main purpose for developing the VSIE framework is to enable the optimization of the radiation intensity patternŪ , with the meta-wire loadingsZ w as the tunable variables. Such a procedure necessitates the frequent computation ofŪ with updated values ofZ w . The most direct way to do so is to first find the induced currents in the updated device using (6), and then compute the new radiation pattern using (12). This approach is extremely inefficient, since it involves the inversion of a large matrix L, despite the fact that only its smallest block (Z w − G ww ) is updated. Depending on the size of the antenna and the thickness of the dielectric substrate (which dictate the size of G vv , the largest block of L), the computational burden can be prohibitive. Fortunately, this bottleneck can be resolved with the Kron reduction technique often used in power system analysis [61]. First, we rewrite (6) into the following form by regrouping its blocks:
$$\begin{bmatrix}\bar E^{i\prime}\\ \bar E^i_w\end{bmatrix}=\begin{bmatrix}\mathbf A & \mathbf B\\ \mathbf C & \mathbf Z_w-\mathbf G_{ww}\end{bmatrix}\begin{bmatrix}\bar J\,'\\ \bar J_w\end{bmatrix},\tag{14}$$
where Ē^{i′} and J̄′ stack the ground-plane and dielectric entries of Ē^i and J̄, respectively.
Rearranging (14) further, we obtain the Kron reduced system
$$\bar J_w=\mathbf S^{-1}\bigl(\bar E^i_w-\mathbf C\mathbf A^{-1}\bar E^{i\prime}\bigr),\tag{15}$$
where S ∈ C Nw×Nw is the Schur complement of A, given by
$$\mathbf S\triangleq\mathbf Z_w-\mathbf G_{ww}-\mathbf C\mathbf A^{-1}\mathbf B.\tag{16}$$
Additionally,
$$\bar J\,'=\begin{bmatrix}\bar J_g\\ \bar J_v\end{bmatrix}=\bigl(\mathbf A^{-1}+\mathbf A^{-1}\mathbf B\mathbf S^{-1}\mathbf C\mathbf A^{-1}\bigr)\bar E^{i\prime}-\mathbf A^{-1}\mathbf B\mathbf S^{-1}\bar E^i_w.\tag{17}$$
The far-field can be rewritten as
$$\bar E_{ff}=\bigl(\mathbf G_{fw}-[\,\mathbf G_{fg}\ \ \mathbf G_{fv}\,]\mathbf A^{-1}\mathbf B\bigr)\bar J_w+\bar E_{fi}+[\,\mathbf G_{fg}\ \ \mathbf G_{fv}\,]\mathbf A^{-1}\bar E^{i\prime}.\tag{18}$$
Evidently, each evaluation of the cost function using the reduced system requires only the inversion of a single N w × N w matrix, S. The only other matrix inverse (A −1 ) is static throughout the entire optimization process, because it does not contain Z w . Therefore, it can be stored and reused. Furthermore, (17) accelerates the far-field computation by reducing the size of the matrices being multiplied. In fact, it eliminates the need to explicitly evaluateJ , which contains the currents on the ground plane and in the dielectric.
With the method proposed in this section, the optimization process becomes extremely efficient, regardless of the exact algorithm used. This opens up the opportunity to explore more computationally expensive methods such as global optimization, which may yield better performing designs. Furthermore, one can discretize the dielectric substrate and the ground plane with very fine resolution in order to improve the accuracy of the results, without incurring too much additional computational cost during optimization.
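A schematic version of the accelerated solve is sketched below. The random matrices only stand in for the MoM blocks A, B, C and G_ww; the point is that A⁻¹ and CA⁻¹ are computed once, while each updated loading vector only requires a solve with the small Schur complement of Eq. (16).

```python
# Sketch of the Kron-reduced solve of Eqs. (15)-(16) (random stand-in matrices).
import numpy as np

rng = np.random.default_rng(0)
N_big, N_w = 400, 28                               # (ground + dielectric) unknowns, wire unknowns
A = rng.standard_normal((N_big, N_big)) + 1j * rng.standard_normal((N_big, N_big))
B = rng.standard_normal((N_big, N_w)) + 1j * rng.standard_normal((N_big, N_w))
C = rng.standard_normal((N_w, N_big)) + 1j * rng.standard_normal((N_w, N_big))
G_ww = rng.standard_normal((N_w, N_w)) + 1j * rng.standard_normal((N_w, N_w))
E_i = rng.standard_normal(N_big) + 0j              # incident field on ground + dielectric
E_i_w = rng.standard_normal(N_w) + 0j              # incident field on the wires

# static pieces: computed once and reused for every candidate Z_w
A_inv = np.linalg.inv(A)
CA_inv = C @ A_inv
rhs_static = CA_inv @ E_i

def wire_currents(Z_w_diag):
    """J_w for a given vector of wire loadings, Eq. (15)."""
    S = np.diag(Z_w_diag) - G_ww - CA_inv @ B      # Schur complement, Eq. (16)
    return np.linalg.solve(S, E_i_w - rhs_static)

J_w = wire_currents(-1j * np.linspace(25, 90, N_w))
print(J_w.shape)
```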
III. FAR-FIELD OPTIMIZATION
The closed-form expressions presented in Sec. II-B as well as the acceleration method proposed in Sec. II-C enable the rapid optimization of antenna characteristics. In this work, as a proof of concept, we demonstrate the ability for the compact MTS system to realize arbitrary beam-forming by shapinḡ U (Z w ) to match stipulated radiation patterns.
A. Optimization method
Different optimization methods can be used to determine the effective impedances of the meta-wiresZ w that produce a desired far-field radiation, based on (12), (15), (16) and (18).
For the examples presented in this work, we mainly rely on gradient-descent optimization, as implemented by the built-in function fmincon in MATLAB. This function is capable of locally minimizing a cost function while adhering to multiple linear and/or nonlinear constraints on the solution.
In order to provide a starting point Z̄_w^(0) in the gradient-descent optimization, a preliminary step needs to be performed. This preparatory step helped the optimizer converge to solutions with better pattern matching in the vast majority of the studied design examples. Specifically, following previously proposed methods [42], [47], [48], we try to determine the currents J̄_w that produce the desired far-field radiation, based on (12) and (18). We do this with an unconstrained gradient-descent optimization using the function fminunc in MATLAB. Then, a set of complex loadings that would give rise to the optimized J̄_w is calculated based on Ohm's law in (4). The real part of the impedances is discarded and the imaginary part is used as the starting point Z̄_w^(0) of the main gradient-descent optimization [42], [47], [48].
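The initialization just described can be summarized by the following schematic sketch, where scipy.optimize.minimize plays the role of MATLAB's fminunc; the operators T, E_f0, E_w_op, E_i_w and the target pattern are placeholders standing in for the quantities of Eqs. (4), (15) and (18), filled with random values only so that the snippet runs.

```python
# Schematic initialization: (i) find wire currents that radiate the desired pattern,
# (ii) convert them to loadings via Ohm's law and keep only the reactive part.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
N_w, N_theta = 28, 360
T = rng.standard_normal((N_theta, N_w)) + 1j * rng.standard_normal((N_theta, N_w))       # J_w -> far field
E_f0 = rng.standard_normal(N_theta) + 1j * rng.standard_normal(N_theta)                  # source far field
E_des = np.abs(np.sinc(np.linspace(-4, 4, N_theta)))                                     # desired |pattern|
E_w_op = rng.standard_normal((N_w, N_w)) + 1j * rng.standard_normal((N_w, N_w))          # field at wires per unit J_w
E_i_w = rng.standard_normal(N_w) + 1j * rng.standard_normal(N_w)                         # incident field at wires

def cost(x):
    J_w = x[:N_w] + 1j * x[N_w:]
    pattern = np.abs(E_f0 + T @ J_w)
    return np.sum((pattern / pattern.max() - E_des / E_des.max()) ** 2)

res = minimize(cost, np.zeros(2 * N_w), method="L-BFGS-B")
J_w = res.x[:N_w] + 1j * res.x[N_w:]

E_tot_w = E_i_w + E_w_op @ J_w            # total tangential field on the wires
Z_start = 1j * np.imag(E_tot_w / J_w)     # Ohm's law; real part discarded
print(Z_start[:5])
```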
For some designs that have more demanding specifications, this method may not be able to supply a good startingZ (0) w . Such scenarios demand more specialized treatments, which will be discussed as they arise.
B. Cost Function
The aim of the optimization is to match the radiation pattern U (θ m ) produced from the metasurface with a desired radiation patternŪ des (θ m ). Since it is not clear what the total power contained inŪ des (θ m ) should be, we choose to define the cost function in terms of normalized radiation patterns, as
$$F=\sum_{m=1}^{N_\theta}\left[\frac{\bar U(\theta_m)}{\max_{\theta_m}\{\bar U(\theta_m)\}}-\frac{\bar U_{des}(\theta_m)}{\max_{\theta_m}\{\bar U_{des}(\theta_m)\}}\right]^2.\tag{19}$$
Due to the normalization, (19) can also be interpreted as a measure of the difference between the desired directivity patternD des and the realized patternD.
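Equation (19) translates directly into a small helper function; the toy patterns at the end only demonstrate the normalization.

```python
# Direct transcription of the cost function in Eq. (19).
import numpy as np

def pattern_cost(U, U_des):
    """U, U_des: sampled radiation intensities over the same set of angles."""
    U = np.asarray(U, dtype=float)
    U_des = np.asarray(U_des, dtype=float)
    return np.sum((U / U.max() - U_des / U_des.max()) ** 2)

# toy usage: a slightly perturbed, rescaled copy of the target gives a small cost
theta = np.linspace(-np.pi, np.pi, 720)
U_des = np.exp(-(theta / 0.1) ** 2)
U = 0.8 * np.exp(-((theta - 0.01) / 0.1) ** 2)
print(pattern_cost(U, U_des))
```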
It should be noted that choosing to form the cost function this way, as we have done in this study, clearly puts the focus on the main beam compared to the side lobe values. However, even the shape of the side lobes will eventually match the expected one, if the algorithm converges to a low value of F .
The choice to match the normalized radiation pattern also means that the converged solution may not have the highest possible radiation efficiency. Indeed, there can be an infinite set of solutions satisfying the same constraints, but having different capacities for near-field reactive energy storage. There are multiple methods to deal with poorly radiating solutions. One way is to design a simple matching network for the input. Alternatively, in Sec. III-C, we introduce a constraint which can help guide the optimizer towards solutions with inherently high radiation efficiency.
C. Optimization constraints
In order for the optimizer to converge to solutions with desired characteristics, we introduce a set of linear and nonlinear constraints on the wire impedances. First, we require that the metasurface is passive and lossless. Therefore, the impedances should be purely imaginary, i.e. Re{Z n } = 0, ∀n. This constraint implies that the optimized device does not rely on engineered ohmic loss or power gain, both of which are difficult to control in practice. We also demand that the impedances be practically realizable with printed, periodically loaded copper traces which adhere to standard fabrication tolerances. By varying the width of a printed loading capacitor, an impedance range of [−j90, −j25] was acquired, as described in Appendix A. It is noted that interdigitated capacitors or other types of scatterers can increase this range. However, no significant improvements were observed in terms of the pattern matching for the designs considered in this paper. Therefore, the aforementioned impedance range was deemed sufficient.
A few more optional constraints can be introduced to converge to solutions with higher bandwidth and lower sensitivity. Having the currentsJ w at each iteration allows us to calculate the fields anywhere in the near-field region of the metasurface (including in the dielectric), as well as the radiated power. In general, it is advantageous to have as much radiated power as possible, while maintaining the field amplitudes and the stored energy close to the antenna at a relatively low level. With this in mind, we place a constraint on the quality factor (Q-factor) of the structure, calculated as:
$$Q\triangleq\frac{\omega_0\,W}{P_{rad}}<Q_{max},\tag{20}$$
where
$$W=\frac{1}{2}\iint\epsilon_0\,\epsilon_r(\vec\rho)\,|E(\vec\rho)|^2\,dy\,dz\tag{21}$$
serves as an estimate of the stored energy and P rad is the total radiated power as calculated from the radiation intensitȳ U . It is noted that the integration in (21) extends beyond the dielectric region, into the air region close to the metasurface. This allows for the capture of the confined surface waves which also contribute to energy storage. Lastly, we can put a constraint on the sensitivity of the farfield with respect to the wire impedances, by computing the derivative dĒ f f /dZ n . This constraint helps the optimizer to converge to solutions which are more robust against fabrication errors that may modify the effective wire impedances. Moreover, such a trait also relates to the frequency bandwidth of the device, since the effective impedances for realistic meta-wires are frequency-dispersive. As seen from (15) and (18), to calculate the derivative of the far-field with respect to each loading Z n , we basically have to calculate the term dS −1 /dZ n . This can be done analytically without any extra matrix inversion, by using the identity for the derivative of an inverse matrix:
$$\frac{d\mathbf S^{-1}}{dZ_n}=-\mathbf S^{-1}\,\frac{d\mathbf S}{dZ_n}\,\mathbf S^{-1}.\tag{22}$$
The inverse S −1 is already calculated in each iteration, while the remaining derivative (dS/dZ n ) reduces, from (16), to a diagonal matrix with N w non-zero entries (of −j) corresponding to the n th wire. This enables rapid calculation of the derivative of each far-field angle with respect to each wire.
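Combining Eqs. (15), (18) and (22), the derivative of the far-field with respect to the n-th loading can be written as dĒ_ff/dZ_n = −T_w S⁻¹ (dS/dZ_n) J̄_w, with T_w the bracketed far-field operator of Eq. (18). The sketch below evaluates this without ever forming the diagonal matrix dS/dZ_n explicitly; the arrays are random stand-ins so that the snippet runs, and the −j diagonal entries reflect a capacitive parametrization Z_n = −jX_n as in the text.

```python
# Sketch of the analytic far-field sensitivity based on Eq. (22).
import numpy as np

rng = np.random.default_rng(2)
N, seg_per_wire, N_theta = 28, 3, 360
N_w = N * seg_per_wire
S = rng.standard_normal((N_w, N_w)) + 1j * rng.standard_normal((N_w, N_w))        # Schur complement
T_w = rng.standard_normal((N_theta, N_w)) + 1j * rng.standard_normal((N_theta, N_w))  # J_w -> far field
J_w = rng.standard_normal(N_w) + 1j * rng.standard_normal(N_w)                    # wire currents

S_inv = np.linalg.inv(S)                       # already available from the forward solve

def dEff_dXn(n):
    mask = np.zeros(N_w)
    mask[n * seg_per_wire:(n + 1) * seg_per_wire] = 1.0   # segments belonging to wire n
    dS_dXn_Jw = -1j * mask * J_w                           # (dS/dX_n) J_w as a vector product
    return -T_w @ (S_inv @ dS_dXn_Jw)

sens = max(np.max(np.abs(dEff_dXn(n))) for n in range(N))
print("max |dE_ff/dX_n| =", sens)
```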
Having derived dĒ f f (θ m )/dZ n , the constraint on sensitivity can be stated as
$$T\triangleq\max_{n,\,\theta_m}\left|\frac{dE_{ff}(\theta_m)}{dZ_n}\right|<T_{max}.\tag{23}$$
As a strategy, the constraints on the passivity (Re{Z n } = 0) and the impedance range are always present during optimization. The other two nonlinear constraints, if desired, can be introduced in an additional step, after an initial round of optimization without them. The converged result of the first step is fed to the second, more stringent step, as the new starting point.
IV. NUMERICAL RESULTS
We verify the proposed embedded-source-fed impedance MTS and the developed design scheme with several numerical examples. The optimized devices are simulated in Ansys HFSS. Since they are x-invariant, we can predict their performance by taking a subwavelength-thin slice and placing it inside a parallel plate waveguide with perfectly matched terminations on all open sides. In this section, for simplicity, the meta-wires are modeled using thin strips with uniform surface impedances. We consider realistic devices with printed metallic wires in Sec. V.
A. Wide-angle Beam-forming with Near-perfect Aperture Efficiency
In this subsection, we demonstrate the ability of the proposed MTS topology to perform wide-angle beam-forming with near unity aperture efficiency. The size of the device is 7λ at the design frequency of 10 GHz. The impedance MTS on top of the grounded substrate (h = 2.54 mm, r = 3) only has 28 meta-wires, leading to a relatively large interelement spacing of λ/4. Each meta-wire is modeled using a narrow impedance strip with width 0.7 mm. The embedded source is chosen to be a single uniform line current located at
(y o , z o ) = (0, h/2).
The proposed device can be seen as a simplified version of the previously presented cavity-excited antennas [28], [30], featuring a dramatically reduced profile as well as a less complicated and sparser metasurface layer. Furthermore, the tasks of source power redistribution and aperture efficiency optimization, previously performed by a resonant cavity mode, is now accomplished by an auxiliary surface wave on the MTS. Despite the different working principle, the proposed MTS is still able to realize near-perfect illumination of arbitrarily large apertures with a single localized source [36], [38].
To begin the design, we first note that the incident field is simply a cylindrical wave centered at the source location ρ o =ŷy o +ẑz o . Hence the incident near-field vectorsĒ i ι can be easily populated followinḡ
$$\bar E^i_\iota[n]=-\frac{k\eta I_o}{4}\,H_0^{(2)}\bigl(k\,|\vec\rho_{\iota n}-\vec\rho_o|\bigr),\qquad \iota\in\{g,v,w\},\tag{25}$$
where I o is an arbitrarily chosen source current amplitude. The far-field radiated by the source is given bȳ
$$\bar E_{fi}[n]=-I_o\,\frac{k\eta}{4}\sqrt{\frac{2j}{\pi k}}\;e^{-jk}\,e^{jk(y_o\sin\theta_n+z_o\cos\theta_n)}\qquad\forall n.\tag{26}$$
It is desired to form a directive beam towards some arbitrary direction θ_o with the highest possible aperture efficiency. Hence, the target far-zone electric field is that associated with a uniform aperture with linear phase gradient k sin θ_o. It can be interpreted as the fields radiated by a sheet of phased surface current with uniform amplitude, given by
$$\bar E_{ff}^{des}=\mathbf G_{fg}\,\bar J_{des},\qquad \bar J_{des}[n]=J_o\,e^{-jk\,y_{gn}\sin\theta_o},\tag{27}$$
where J o is some arbitrary complex constant which will be dropped upon normalization in (19). It is chosen to be 1 [A/m] in this work. We optimize the meta-wire impedances to produce beams with output angles ranging from θ o = −60 • to θ o = 0 • with 15 • increments. The converged impedance values are plotted in Fig. 3. Importantly, due to the acceleration discussed in Sec. II-C and the sparsity of the MTS, each design only took about 20 seconds to converge to the optimum. The performance of the optimized designs is qualitatively demonstrated by the simulated near-field electric field distributions plotted in Fig. 4(a) and (b), which showcase the θ o = 0 • and θ o = −45 • cases respectively. Here, it can be seen that full utilization of the entire aperture is achieved despite its large electrical size and the poor inherent illumination provided by the embedded source.
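For reference, the excitation and the target used in this example can be assembled as in the sketch below, which populates Eq. (25) on a sample grid and builds the phased aperture current of Eq. (27); the sampling grid itself is illustrative.

```python
# Sketch of Eqs. (25) and (27) for the embedded line-source excitation and the target current.
import numpy as np
from scipy.special import hankel2

c0, f = 3e8, 10e9
k = 2 * np.pi * f / c0
lam = c0 / f
eta = 376.730313668
I_o, J_o = 1.0, 1.0
h = 2.54e-3
y_o, z_o = 0.0, h / 2                                  # embedded line source

# sample points (here: ground-plane segment centres as an example grid)
y_g = np.linspace(-3.5 * lam, 3.5 * lam, 281)
z_g = np.zeros_like(y_g)

E_i_g = -(k * eta * I_o / 4) * hankel2(0, k * np.hypot(y_g - y_o, z_g - z_o))   # Eq. (25)

theta_o = np.deg2rad(-45.0)
J_des = J_o * np.exp(-1j * k * y_g * np.sin(theta_o))                            # Eq. (27)
print(E_i_g[:3], J_des[:3])
```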
A more quantitative study of the MTS designs is conducted through an examination of their simulated 2D directivity D(θ), plotted in Fig. 5. Here it is shown that the internally excited MTS can easily form highly directive beams up to −60° while maintaining side lobe levels (SLLs) below −14 dB. Since the source is placed at the center of the device, it is evident that the exceptional beam-forming ability extends to positive values of θ as well. The realized beams can be compared against the directivity of a uniform aperture of the same width (W = 7λ) radiating in the direction of θ, which is equal to
$$D_{uni}(\theta)=\frac{2\pi W}{\lambda}\cos\theta.\tag{28}$$
Since the directivity of all simulated beams follows this envelope exactly, one can conclude that each of the designs presented in this section exhibits near-unity aperture efficiency (ξ_apt), which is defined as
$$\xi_{apt}\triangleq\frac{D(\theta_o)}{D_{uni}(\theta_o)}.\tag{29}$$
Calculations using (29) reveal that each of the presented designs indeed has an aperture efficiency of at least 99%.
To verify the contribution of surface waves to the performance of the impedance MTS antennas, we plot the Fourier spectrum of the electric field (Ẽ x (k y )) for the θ o = 0 • design in Fig. 6. The Fourier transform was performed in a plane 0.1λ above the meta-wires. The red shaded region in Fig. 6 corresponds to the invisible region (|k y | > k), in which the surface waves reside. Although the spectral content in this region does not directly shape the far-field radiation pattern, they exert an indirect influence by modifying the aperture field distribution. As seen by the high amplitude spectra inside the invisible region in Fig. 6, the optimized design relies on surface waves to achieve its near perfect aperture illumination.
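The spectral check can be reproduced with a simple FFT of the sampled aperture field, as sketched below with a toy field containing one radiating and one slow-wave component; in an actual design the field would come from the near-field solution on a line 0.1λ above the meta-wires.

```python
# Sketch of the aperture-field spectrum with the invisible region |k_y| > k marked.
import numpy as np

c0, f = 3e8, 10e9
k = 2 * np.pi * f / c0
lam = c0 / f

y = np.linspace(-3.5 * lam, 3.5 * lam, 1024)
dy = y[1] - y[0]
E_ap = np.exp(-1j * 0.3 * k * y) + 0.4 * np.exp(-1j * 1.6 * k * y)   # toy: radiating + surface-wave part

ky = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(y.size, d=dy))
spectrum = np.fft.fftshift(np.fft.fft(E_ap)) * dy
invisible = np.abs(ky) > k                                           # surface-wave (evanescent) region
print("fraction of spectral energy in the invisible region:",
      np.sum(np.abs(spectrum[invisible]) ** 2) / np.sum(np.abs(spectrum) ** 2))
```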
B. Multi-input Multi-output Metasurfaces
In this section, we demonstrate a multi-input multi-output (MIMO) beam-forming MTS which is capable of generating two independent beams when one of its two designated embedded sources are excited. Unlike single-input multi-beam systems, the MIMO MTSs are capable of distinguishing the signals picked up by different beams [24]. This feature makes them ideal for realizing high-capacity communication links.
For illustrative purposes, we focus our attention on a dualinput dual-output system. Since the two output beams must share the same physical aperture, it is expected that they each suffer from degraded directivity. However, as we will show, it is possible to obtain significantly higher aperture efficiency for each beam with the proposed optimization-based method, as compared to that achievable with physical aperture partitioning.
The design of the MIMO MTS requires some simple reformulation of the optimization problem. All derived formulae as well as the Kron reduction technique discussed in Sec. II are still valid. However, we now have a set of two independent input fieldsĒ i {1,2} , which give rise to two different output radiation patternsŪ {1,2} . If we denote the desired output pattern corresponding to input {1, 2} asŪ des,{1,2} , then a new cost function can be formulated as
$$F_M=\sum_{i=1}^{2}\alpha_i\sum_{m=1}^{N_\theta}\left[\frac{\bar U_i[m]}{\max_m\{\bar U_i[m]\}}-\frac{\bar U_{des,i}[m]}{\max_m\{\bar U_{des,i}[m]\}}\right]^2.\tag{30}$$
Here, α i is a weight that allows the designer to place emphasis on pattern matching for one of the two input-output (I/O) pairs. In this study, we use equal weight for both I/O pairs.
The main difficulty in the present optimization problem is the identification of a good starting point. A reasonable strategy is to first treat the MIMO MTS as a single-input single-output device whose input is Ē^i_1 + Ē^i_2 and whose desired output is Ū_des,1 + Ū_des,2. This enables the use of the strategy discussed in Sec. III to obtain a rational starting point. However, we found that this method, while serviceable, does not guarantee the best results. On the other hand, a two-step optimization process, consisting of a global optimization which generates a starting point for a subsequent local search, produced much better performing designs. In this study, the global optimization is performed using the built-in particle swarm optimization (PSO) routine in MATLAB, while the local search is done with gradient descent (fmincon).
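The two-step strategy can be mimicked outside MATLAB as well. The sketch below uses SciPy's differential evolution as a stand-in for the global PSO stage and an L-BFGS-B polish in place of fmincon; the cost function is a dummy placeholder for the VSIE-based evaluation of (30), and the variable count and bounds follow the MIMO example.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

# Placeholder cost: in the actual design flow this would evaluate Eq. (30)
# through the VSIE solver for a given vector of meta-wire reactances.
def cost(x_reactances):
    return np.sum((x_reactances + 100.0) ** 2)   # dummy convex stand-in

n_wires = 21                                     # symmetric 42-wire MTS -> 21 unknowns
bounds = [(-200.0, -25.0)] * n_wires             # reactance limits [Ohm], Sec. IV-B

# Step 1: global search (the paper uses MATLAB PSO; differential evolution is a stand-in).
global_res = differential_evolution(cost, bounds, maxiter=50, popsize=15, seed=0)

# Step 2: gradient-based local refinement from the global optimum (fmincon analogue).
local_res = minimize(cost, global_res.x, bounds=bounds, method="L-BFGS-B")
print(local_res.x[:5])
```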
To demonstrate the effectiveness of the proposed approach, we design a 7λ-wide impedance MTS (h = 2.54 mm, ε_r = 3) consisting of 42 meta-wires (each λ/40 wide), operating at 10 GHz. The sparse λ/6 spacing between meta-wires means that the device can be easily implemented using realistic PCB MTSs. The two inputs are assumed to be two line sources located at (y_o, z_o) = (±3λ, h/2). The corresponding desired outputs are two beams directed at ±20° with 100% aperture efficiency. Their far-zone electric fields can be obtained via (27).
Since the device is symmetric, only 21 optimization variables need to be considered. To accommodate the more demanding design specifications, we extend the impedance limit of the meta-wires to [−j200, −j25]. This range is still easily obtainable using physical meta-wires, as one simply needs to increase the gap between the printed capacitor plates in order to realize more negative values of reactance. Fig. 7 depicts the converged Z_w after one round of PSO with a swarm size of 50 and an inertia range of [0.1, 0.5], followed by a gradient-descent search.
The synthesized device is simulated in HFSS, giving the near-field distributions in Fig. 8. Here, the two subfigures depict the fields radiated by the MTS when one of its two designated inputs is excited. It appears that the two outputs, despite having to share the same physical aperture, are each able to achieve almost full effective aperture utilization.
The above assertion is quantitatively verified via an examination of the simulated 2D directivity patterns plotted in Fig. 9. Here, the dashed red and blue curves correspond to the desired beams produced by uniform apertures. The solid red (blue) curve shows the realized directivity of the MTS when the line source positioned at −3λ (+3λ) is excited. Both output beams exhibit only a 0.47 dB drop in directivity when compared to the desired beams, indicating aperture efficiencies of approximately 90%. This is significantly higher than the efficiency obtainable through heuristic techniques such as physical aperture partitioning.
The aperture electric field spectrum corresponding to the second output beam (20 • ) is plotted in Fig. 10. Due to the symmetry of the device, the spectrum for the other output is simply a reflection of the plotted curve about k y = 0. As with the example examined in Sec. IV-A, we observe large Fourier components corresponding to tailored auxiliary surface waves inside the shaded invisible region. There is a high peak on the negative k y side of the spectrum, indicating strong SWs travelling towards the negative y-direction. This agrees with intuition, since the main role served by the SWs in this mode of operation is to deliver the source power from the right side to the left side of the aperture.
V. NUMERICAL RESULTS WITH REALISTIC META-WIRES
In this section, we examine the physical realization of impedance MTSs featuring embedded sources and investigate their performance in terms of power efficiency and bandwidth. In order to converge to solutions that exhibit higher bandwidth and milder sensitivity with respect to the wire impedances, we utilize all the constraints described in Sec. III-C. After the optimization has been performed, the impedance loadings of each meta-wire are implemented with printed capacitors that are characterized in an aperiodic simulation, as detailed in App. A. These steps complete the end-to-end design of the embedded-source-fed impedance MTSs.
Two design examples are presented, namely, an impedance MTS realizing a Chebyshev pattern with −20 dB side lobe level and impedance MTSs which radiate two distinct beams in two different directions.
A. Chebyshev array pattern
In this design example, we realize a Chebyshev pattern with a constant side lobe level of −20 dB using a metasurface of width W = 7λ. The metasurface consists of 42 wires placed on top of a grounded dielectric substrate with height h = 3.04 mm and dielectric constant ε_r = 3, while the source is located at (y_o, z_o) = (0, h/2). The geometrical parameters have been chosen so that two Rogers RO3003 substrates of 1.52 mm thickness bonded together could realize the device. To determine the required far-field radiation, 14 omnidirectional virtual sources are assumed, with their currents calculated based on the Chebyshev design method.
The optimization is performed in two steps. In the first step, the impedances are constrained only by their passivity (Re{Z_n} = 0) and range ([−j90, −j25]). The converged solution is then used as the starting point of a second step that includes constraints on the Q-factor and sensitivity based on Q_max = 40 and T_max = 40 in (20) and (23), respectively. The converged values for the wire impedances of both steps are shown in Fig. 11. These impedance values are then related to the width of the printed loading capacitors based on the design curve in Fig. 17, and simulations are performed in HFSS. The wire traces and ground plane are modelled with 18 µm copper, and dielectric losses are introduced to the substrate through a loss tangent of tan(δ) = 0.001.
One half of a slice of the final optimized device implemented using realistic printed meta-wires is shown in Fig. 12(a). Due to symmetry, the other half of the device is not shown. Its simulated near-field profile Re{E x } is plotted in Fig. 12(b). As seen, auxiliary surface waves are developed in the near-field of the metasurface, supported by the capacitive sheet above the grounded substrate. These surface waves symmetrically carry power towards the edges of the MTS antenna, so that an amplitude-taper required by a virtual Chebyshev array is obtained.
From the HFSS simulation results, we also extract the spectrum of the electric field in a plane 0.1λ above the wires. We repeat this for the converged solutions of both optimization steps, and plot the resultant spectra, normalized with respect to Ẽ_x(k_y = 0), in Fig. 12(c). Although both solutions produce significant evanescent waves, it is clear that the additional constraints of the second optimization step succeeded in reducing the near-field reactive energy, which leads to a better radiation efficiency.
To assess the fidelity of the formed beam, we plot the simulated radiation pattern in Fig. 13, along with the target pattern and the pattern predicted by the VSIE framework. As observed in Fig. 13, the three patterns match each other very well. The deviation in simulated directivity compared to that of the targeted Chebyshev pattern is less than 0.3 dB. The realized side lobe level is −19.9 dB, which is very close to the specified SLL of −20 dB.
Notably, the excellent pattern matching was realized without any additional geometrical tuning of the printed capacitors. This is because the design curve of Fig. 17 characterizes the intrinsic property of a single isolated meta-wire [42]. The mutual coupling between meta-wires is rigorously and efficiently accounted for by the VSIE framework. In other words, once established, the same design curve can be reused regardless of the relative placement of the wires, as long as other geometrical and electromagnetic parameters remain unchanged. This feature of the VSIE-based design scheme means that it can be much more efficient and accurate than conventional methods which rely on inaccurate periodic surrogate models to characterize the unit-cells.
Fig. 13. Radiation pattern of the Chebyshev antenna array at 10 GHz. VSIE (red curve) refers to the radiation pattern obtained from the optimized abstract wire loadings and HFSS (black curve) refers to the result from realistic simulations using loaded wires and including all power losses. Both match well with the targeted Chebyshev pattern (blue curve).
The realistic design allows for the investigation of the losses and the bandwidth. Specifically, losses are estimated to be around 7.4% (3.3% ohmic losses in the wires and the ground plane and 4.1% in the dielectric). This constitutes a promising result, as multi-layer transmissive metasurfaces using auxiliary surface waves for similar functionality exhibit significantly higher losses [41]. In addition, the frequency variation of directivity is plotted in Fig. 14, showing a 3-dB bandwidth of slightly over 6% for the final optimized solution. The high power efficiency and acceptable bandwidth (considering the presence of surface waves) are attributed not only to the simplicity of the structure, consisting of wires etched on a thin dielectric substrate, but also to the additional nonlinear constraints that minimize the amplitude of the developed surface waves in the vicinity of the metasurface antenna. As a comparison, if we had concluded our optimization after the first step, the losses with a realistic structure would be 61.5% and the 3-dB bandwidth would be 2.6%, as shown in Fig. 14. This inspires confidence that even more broadband solutions can be obtained if tighter constraints are put in (20) and (23). However, this may require either slightly degrading the accuracy of the realized radiation pattern at the nominal frequency, or loosening some constraints (e.g., tolerating a higher SLL).
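For reference, a bandwidth figure such as the one quoted above can be extracted from a sampled directivity-versus-frequency curve with a few lines of Python; the curve used below is synthetic and only illustrates the bookkeeping, not the Fig. 14 data.

```python
import numpy as np

def bandwidth_3db(freqs_ghz, directivity_dbi):
    """Fractional 3-dB bandwidth of a directivity-vs-frequency curve."""
    d = np.asarray(directivity_dbi, dtype=float)
    peak = d.max()
    above = np.asarray(freqs_ghz)[d >= peak - 3.0]
    return (above.max() - above.min()) / 10.0     # normalized to the 10 GHz design frequency

# Synthetic example curve (placeholder, not the Fig. 14 data):
f = np.linspace(9.4, 10.6, 121)
d = 20.0 - 80.0 * (f - 10.0) ** 2
print(f"3-dB bandwidth ~ {100 * bandwidth_3db(f, d):.1f} %")
```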
B. Single-input Multi-output Metasurfaces
In this example, we examine metasurfaces, fed by a single embedded source, that realize two beams in two predefined directions. As in the previous example, the frequency is set to 10 GHz and the metasurface consists of 42 wires extending along an aperture size of W = 7λ. The substrate has a height of h = 3.04 mm and dielectric constant ε_r = 3, while the source is placed at (y_o, z_o) = (0, h/2). The desired radiation pattern is obtained by superimposing two uniform phased current sheets following (27):
$J_{\mathrm{des}}[n] = A_1 J_o e^{-j k y_{g_n} \sin\theta_1} + A_2 J_o e^{-j k y_{g_n} \sin\theta_2} e^{j\xi}. \quad (31)$
Here, A 1 , A 2 dictate the directivity of the two beams, θ 1 , θ 2 are the two output angles, and ξ is a constant phase that slightly modifies the side lobes of the total pattern by affecting the interference of the two beams. For our examples, we use A 1 = A 2 = 1 resulting in desired patterns with two equally directive beams. It is emphasized that both beams originate from a single source, unlike the MIMO example presented in Sec. IV-B. In a sense, the present MTSs can be perceived as single-input multiple-output (SIMO) systems, as two spatially separated receivers could receive the same transmitted signal. With that in mind, we present two design examples with the specifications shown in Table I.
TABLE I. Design specifications for the two SIMO MTSs.
           θ_1      θ_2      ξ
Design A   −45°     −10°     π/2
Design B   −45°     22.5°    0
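As an illustration, (31) with the Table I specifications can be evaluated directly; in the sketch below the wire positions y_gn, the wavelength, and the λ/6 spacing are assumptions consistent with the 7λ, 42-wire aperture described above.

```python
import numpy as np

wavelength = 0.03                                 # 10 GHz (assumed)
k = 2 * np.pi / wavelength
n_wires = 42
y_g = (np.arange(n_wires) - (n_wires - 1) / 2) * (wavelength / 6)   # lambda/6 spacing (assumed)

def j_des(theta1_deg, theta2_deg, xi, A1=1.0, A2=1.0, J_o=1.0):
    """Desired two-beam current distribution of Eq. (31)."""
    t1, t2 = np.deg2rad(theta1_deg), np.deg2rad(theta2_deg)
    return (A1 * J_o * np.exp(-1j * k * y_g * np.sin(t1))
            + A2 * J_o * np.exp(-1j * k * y_g * np.sin(t2)) * np.exp(1j * xi))

j_design_a = j_des(-45.0, -10.0, np.pi / 2)       # Design A of Table I
j_design_b = j_des(-45.0, 22.5, 0.0)              # Design B of Table I
```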
The loading impedances of the wires for each case are determined based on the same two-step approach. First, a gradient-descent optimization is run, aiming at passive loadings in the [−j90, −j25] range. The converged solutions are then used as a starting point in the second optimization step with the additional constraints Q_max = 50 and T_max = 65. These limits were selected to obtain the best compromise between pattern fidelity, device bandwidth, and power efficiency. The final values for the meta-wire impedances in each of the two cases are depicted in Fig. 15. These impedance values are mapped to printed capacitor sizes based on Fig. 17 in Appendix A.
Simulations using realistic wires and losses are performed in HFSS. To assess the beam-forming accuracy, the realized far-field patterns for both designs are plotted in Fig. 16. The patterns predicted by the VSIE framework are also included for comparison. Evidently, both designs produce output beams at their designated angles. For all four output beams, the full-wave simulated (maximum) directivity values differ from the desired values by no more than 0.6 dB. Furthermore, both designs have side lobe levels less than −11.5 dB, which are very close to the desired side lobes when converted to linear scale.
The nonlinear constraint on the Q-factor helped us obtain power efficiencies of 92.3% for Design A and 91.0% for Design B. The power efficiency is limited by dielectric losses (4.2% for A and 4.5% for B) and ohmic losses in copper (3.5% for A and 4.5% for B). For each design, we examine its worst-case 3-dB bandwidth, i.e., the bandwidth of its narrower-band beam. For Design A and Design B, these are evaluated to be 4.5% and 3.9%, respectively.
VI. CONCLUSION
We presented the design of a complete low-profile beam-forming platform consisting of a single-layered, ground-plane-backed impedance MTS fed by embedded sources. Through the use of an efficient optimization-based design strategy derived from volume-surface integral equations, several devices with extreme beam-forming capabilities were synthesized. By harnessing tailored auxiliary surface waves, they were able to realize optimal aperture illumination regardless of their physical size or the source type, while demonstrating useful functionalities such as wide-angle or MIMO beam-forming. To bring the proposed devices one step closer to physical realization, we derived several feasibility-related constraints which serve to enhance their bandwidths and robustness against fabrication errors, while reducing ohmic and dielectric losses. With the help of a simple mapping procedure between theoretical MTS models and realistic PCB-compatible devices, we constructed several designs capable of arbitrary beam-shaping or SIMO beam-forming using simple capacitively loaded printed copper traces. Full-wave simulations corroborate the exceptional theoretically predicted beam-forming capabilities, while confirming the effectiveness of the proposed feasibility constraints.
APPENDIX A CHARACTERIZATION OF WIRE IMPEDANCES
The meta-wires comprising the unit-cells of our MTSs are periodically loaded in the longitudinal direction (every λ/8) with printed capacitors, as seen in the inset of Fig. 17. By varying the width W c of the capacitor, different effective wire impedances are acquired.
To characterize the impedance of a particular wire design, we employ a modified version of a method previously used to characterize meta-wires suspended in air [42]. We first consider a hypothetical device which has the same grounded dielectric substrate and source as the actual MTS design, but with only a single wire printed at (y, z) = (0, h). Through a full-wave simulation in HFSS, the total field radiated by this single-wire device is evaluated along a near-field observation segment. We then consider the VSIE model for this device, in which the printed wire is replaced by a homogeneous sheet with unknown surface impedance Z w . With the method presented in Sec. II, it is very easy to evaluate the theoretically expected electric field along the near-field observation segment for various values of Z w . The value of Z w that gives the best match between VSIE and HFSS results is designated as the effective impedance of the meta-wire design under consideration. Importantly, this method characterizes the wire in the presence of the grounded dielectric substrate, which has a non-negligible impact on the effective capacitance of the printed capacitor.
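The matching step itself amounts to a one-dimensional search over candidate impedances. The sketch below illustrates it with a dummy analytic stand-in for the VSIE near-field prediction (the real solver of Sec. II must be substituted); only the search logic is meaningful.

```python
import numpy as np

def vsie_near_field(z_w, y_obs):
    """Stand-in for the VSIE prediction of Sec. II: returns the electric field along the
    observation segment for a single wire with sheet impedance z_w. This is a dummy
    analytic model used only to make the sketch executable."""
    return np.exp(-1j * 2 * np.pi * y_obs) * 100.0 / (100.0 + 1j * z_w)

def extract_wire_impedance(e_ref, y_obs, z_candidates):
    """Pick the sheet impedance whose predicted near field best matches the HFSS reference."""
    errors = [np.linalg.norm(vsie_near_field(z, y_obs) - e_ref) / np.linalg.norm(e_ref)
              for z in z_candidates]
    best = int(np.argmin(errors))
    return z_candidates[best], errors[best]

y_obs = np.linspace(-0.05, 0.05, 101)                   # observation segment [m] (assumed)
e_ref = vsie_near_field(-55j, y_obs)                    # pretend this came from HFSS
z_best, err = extract_wire_impedance(e_ref, y_obs, 1j * np.arange(-200.0, -24.0, 1.0))
print(z_best, err)
```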
Aiming at the realistic implementation of the design example in Sec. V-A, we characterize the loaded wires above a dielectric of height h = 3.04 mm and ε_r = 3. The ground plane is also present and the whole structure has a finite width of 7λ. The extracted effective impedances for various values of the capacitor width W_c are plotted in Fig. 17. As expected, widening the printed loading increases the capacitance or, equivalently, decreases the effective reactance. For applications that require more negative reactances, it is sufficient to increase the gap of the capacitor. More positive values can be obtained with other types of wires, such as those loaded with meandering inductors [42]. However, for our design examples, the obtained range of [−j90, −j25] Ω was adequate. In addition, no significant change was observed when adding or removing copper losses, indicating that the wires operate away from their resonance.
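In a design flow, the extracted curve of Fig. 17 is used in reverse: given an optimized reactance, one interpolates the required capacitor width. The sketch below shows this lookup with invented sample points standing in for the actual design curve.

```python
import numpy as np

# Hypothetical samples of a Fig. 17-style design curve: capacitor width [mm] vs. reactance [Ohm].
w_c_samples = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])             # placeholder widths
x_samples = np.array([-90.0, -70.0, -55.0, -43.0, -33.0, -25.0])   # placeholder reactances

def width_for_reactance(x_target):
    """Interpolate the printed-capacitor width realizing a target wire reactance."""
    # np.interp requires monotonically increasing x, which holds here (reactance rises with width).
    return np.interp(x_target, x_samples, w_c_samples)

optimized_reactances = np.array([-80.0, -60.0, -35.0])              # e.g. a few entries of Fig. 11
print(width_for_reactance(optimized_reactances))
```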
Fig. 3. Optimized meta-wire impedances for wide-angle beam-forming with perfect aperture efficiency.
Fig. 4. Simulated near-field electric field distributions corresponding to (a) θ_o = 0° and (b) θ_o = −45°.
Fig. 5. Simulated 2D directivity for various designs demonstrating wide-angle beam-forming capability.
Fig. 6. Electric field spectrum for the impedance MTS antenna with θ_o = 0°, evaluated at a plane 0.1λ above the meta-wires. The invisible region is shaded in red.
Fig. 7. Optimized meta-wire reactance for the MIMO MTS.
Fig. 8. Simulated near-field electric field of the MIMO MTS when one of its two inputs is excited.
Fig. 9. Output directivity of the MIMO MTS when one of its two inputs is excited.
Fig. 10. Electric field spectrum for the MIMO impedance MTS antenna when its input located at 3λ is excited, evaluated at a plane 0.1λ above the meta-wires. The invisible region is shaded in red.
Fig. 11. Optimized impedances of the meta-wires. The solution from Step 1 does not adhere to any nonlinear constraints, while the solution at Step 2 is constrained in terms of Q-factor and sensitivity of the far-field.
Fig. 12. (a) Right half of one slice of the optimized Chebyshev beam-forming impedance MTS implemented using printed metallic wires, (b) simulated near-field profile Re{E_x}, and (c) spectrum of the total electric field at a plane 0.1λ above the meta-wires (invisible region is shaded in red).
Fig. 14. Frequency variation of the maximum directivity obtained from realistic wire simulations in HFSS. The final optimized solution (red curve) exhibits a wider bandwidth compared to the initial solution (blue curve) due to the additional constraints.
Fig. 15. Optimized meta-wire impedances for the two SIMO MTS designs.
Fig. 16. Radiation patterns for (a) Design A and (b) Design B. The full-wave simulated results (black) show good agreement with the optimized VSIE (red) and the desired (blue) radiation patterns in terms of the beam angles and peak directivities.
Fig. 17. Extracted wire impedance values for capacitively-loaded wires as a function of the width W_c (inset: geometry of the printed loading; absolute dimensions are in mm).
Fig. 2. Theoretical model for the cross section of the impedance MTS antenna.
O. Quevedo-Teruel et al., "Roadmap on metasurfaces," Journal of Optics, vol. 21, p. 073002, 2019.
M. Selvanayagam and G. V. Eleftheriades, "Discontinuous electromagnetic fields using orthogonal electric and magnetic currents for wavefront manipulation," Opt. Express, vol. 21, pp. 14409-14429, Jun 2013.
C. Pfeiffer and A. Grbic, "Metamaterial Huygens' surfaces: tailoring wave fronts with reflectionless sheets," Phys. Rev. Lett., vol. 110, p. 197401, May 2013.
C. Pfeiffer and A. Grbic, "Bianisotropic metasurfaces for optimal polarization control: Analysis and synthesis," Phys. Rev. Applied, vol. 2, p. 044011, Oct 2014.
K. Achouri, M. A. Salem, and C. Caloz, "General metasurface synthesis based on susceptibility tensors," IEEE Transactions on Antennas and Propagation, vol. 63, no. 7, pp. 2977-2991, 2015.
A. Epstein and G. V. Eleftheriades, "Arbitrary power-conserving field transformations with passive lossless omega-type bianisotropic metasurfaces," IEEE Transactions on Antennas and Propagation, vol. 64, no. 9, pp. 3880-3895, 2016.
V. S. Asadchy, A. Díaz-Rubio, and S. A. Tretyakov, "Bianisotropic metasurfaces: physics and applications," Nanophotonics, vol. 7, no. 6, pp. 1069-1094, 2018.
M. Chen, E. Abdo-Sánchez, A. Epstein, and G. V. Eleftheriades, "Theory, design, and experimental verification of a reflectionless bianisotropic Huygens' metasurface for wide-angle refraction," Phys. Rev. B, vol. 97, p. 125433, Mar 2018.
G. Lavigne, K. Achouri, V. S. Asadchy, S. A. Tretyakov, and C. Caloz, "Susceptibility derivation and experimental demonstration of refracting metasurfaces without spurious diffraction," IEEE Transactions on Antennas and Propagation, vol. 66, no. 3, pp. 1321-1330, 2018.
G. Xu, S. V. Hum, and G. V. Eleftheriades, "Augmented Huygens' metasurfaces employing baffles for precise control of wave transformations," IEEE Transactions on Antennas and Propagation, vol. 67, no. 11, pp. 6935-6946, 2019.
M. Chen and G. V. Eleftheriades, "Omega-bianisotropic wire-loop Huygens' metasurface for reflectionless wide-angle refraction," IEEE Transactions on Antennas and Propagation, vol. 68, no. 3, pp. 1477-1490, 2020.
X. Wan, S. Jia, T. Cui, et al., "Independent modulations of the transmission amplitudes and phases by using Huygens metasurfaces," Scientific Reports, vol. 6, p. 25639, 2016.
T. Niemi, A. O. Karilainen, and S. A. Tretyakov, "Synthesis of polarization transformers," IEEE Transactions on Antennas and Propagation, vol. 61, no. 6, pp. 3102-3111, 2013.
H.-X. Xu, S. Tang, G.-M. Wang, T. Cai, W. Huang, Q. He, S. Sun, and L. Zhou, "Multifunctional microstrip array combining a linear polarizer and focusing metasurface," IEEE Transactions on Antennas and Propagation, vol. 64, no. 8, pp. 3676-3682, 2016.
M. Kim and G. V. Eleftheriades, "Design and experimental demonstration of impedance-matched circular-polarization-selective surfaces with spin-selective phase modulations," Phys. Rev. Applied, vol. 13, p. 014009, Jan 2020.
G. Xu, S. V. Hum, and G. V. Eleftheriades, "A technique for designing multilayer multistopband frequency selective surfaces," IEEE Transactions on Antennas and Propagation, vol. 66, no. 2, pp. 780-789, 2018.
G. Xu, G. V. Eleftheriades, and S. V. Hum, "Generalized synthesis technique for high-order low-profile dual-band frequency selective surfaces," IEEE Transactions on Antennas and Propagation, vol. 66, no. 11, pp. 6033-6042, 2018.
H.-X. Xu, T. Cai, Y.-Q. Zhuang, Q. Peng, G.-M. Wang, and J.-G. Liang, "Dual-mode transmissive metasurface and its applications in multibeam transmitarray," IEEE Transactions on Antennas and Propagation, vol. 65, no. 4, pp. 1797-1806, 2017.
L. W. Wu, H. F. Ma, Y. Gou, R. Y. Wu, Z. X. Wang, M. Wang, X. Gao, and T. J. Cui, "High-transmission ultrathin Huygens' metasurface with 360° phase control by using double-layer transmitarray elements," Phys. Rev. Applied, vol. 12, p. 024012, Aug 2019.
T. Cai, G.-M. Wang, X.-L. Fu, J.-G. Liang, and Y.-Q. Zhuang, "High-efficiency metasurface with polarization-dependent transmission and reflection properties for both reflectarray and transmitarray," IEEE Transactions on Antennas and Propagation, vol. 66, no. 6, pp. 3219-3224, 2018.
F. Yang, R. Deng, S. Xu, and M. Li, "Design and experiment of a near-zero-thickness high-gain transmit-reflect-array antenna using anisotropic metasurface," IEEE Transactions on Antennas and Propagation, vol. 66, no. 6, pp. 2853-2861, 2018.
G. Minatti, M. Faenzi, E. Martini, F. Caminita, P. De Vita, D. González-Ovejero, M. Sabbadini, and S. Maci, "Modulated metasurface antennas for space: synthesis, analysis and realizations," IEEE Transactions on Antennas and Propagation, vol. 63, no. 4, pp. 1288-1300, 2015.
G. Minatti, F. Caminita, E. Martini, M. Sabbadini, and S. Maci, "Synthesis of modulated-metasurface antennas with amplitude, phase, and polarization control," IEEE Transactions on Antennas and Propagation, vol. 64, no. 9, pp. 3907-3919, 2016.
D. González-Ovejero, G. Minatti, G. Chattopadhyay, and S. Maci, "Multibeam by metasurface antennas," IEEE Transactions on Antennas and Propagation, vol. 65, no. 6, pp. 2923-2930, 2017.
M. Faenzi, G. Minatti, D. González-Ovejero, F. Caminita, E. Martini, C. D. Giovampaola, and S. Maci, "Metasurface antennas: new models, applications and realizations," Scientific Reports, vol. 9, p. 10178, 2019.
D.-H. Kwon, "Modulated reactance surfaces for leaky-wave radiation based on complete aperture field synthesis," IEEE Transactions on Antennas and Propagation, vol. 68, no. 7, pp. 5463-5477, 2020.
D.-H. Kwon, "Modulated scalar reactance surfaces for endfire radiation pattern synthesis," IEEE Transactions on Antennas and Propagation, pp. 1-1, 2021.
A. Epstein, J. Wong, and G. V. Eleftheriades, "Cavity-excited Huygens' metasurface antennas for near-unity aperture illumination efficiency from arbitrarily large apertures," Nat. Comm., vol. 7, p. 10360, 2016.
M. Kim and G. V. Eleftheriades, "Guided-wave-excited binary Huygens' metasurfaces for dynamic radiated-beam shaping with independent gain and scan-angle control," Phys. Rev. Applied, vol. 15, p. 054037, May 2021.
A. Epstein and G. V. Eleftheriades, "Arbitrary antenna arrays without feed networks based on cavity-excited omega-bianisotropic metasurfaces," IEEE Transactions on Antennas and Propagation, vol. 65, no. 4, pp. 1749-1756, 2017.
E. Abdo-Sánchez, M. Chen, A. Epstein, and G. V. Eleftheriades, "A leaky-wave antenna with controlled radiation using a bianisotropic Huygens' metasurface," IEEE Transactions on Antennas and Propagation, vol. 67, no. 1, pp. 108-120, 2019.
V. G. Ataloglou, A. H. Dorrah, and G. V. Eleftheriades, "Design of compact Huygens' metasurface pairs with multiple reflections for arbitrary wave transformations," IEEE Transactions on Antennas and Propagation, vol. 68, no. 11, pp. 7382-7394, 2020.
G. Xu, G. V. Eleftheriades, and S. V. Hum, "Discrete-fourier-transform-based framework for analysis and synthesis of cylindrical omega-bianisotropic metasurfaces," Phys. Rev. Applied, vol. 14, p. 064055, Dec 2020.
V. G. Ataloglou and G. V. Eleftheriades, "Surface-waves optimization for beamforming with a single omega-bianisotropic Huygens' metasurface," in 2020 IEEE International Symposium on Antennas and Propagation and North American Radio Science Meeting, pp. 905-906, 2020.
V. G. Ataloglou, M. Chen, M. Kim, and G. V. Eleftheriades, "Microwave Huygens' metasurfaces: Fundamentals and applications," IEEE Journal of Microwaves, vol. 1, no. 1, pp. 374-388, 2021.
J. Budhu and A. Grbic, "Passive metasurface antenna with perfect aperture efficiency," in 2021 Fifteenth International Congress on Artificial Materials for Novel Wave Phenomena (Metamaterials), pp. 070-072, 2021.
J. Budhu, L. Szymanski, and A. Grbic, "Design of passive and lossless single layer metasurfaces for far field beamforming," 2021. arXiv: 2112.03250.
G. Xu, S. V. Hum, and G. V. Eleftheriades, "Extreme beam-forming with metagrating-assisted planar antennas," 2021. arXiv: 2110.13000.
S. Pearson and S. V. Hum, "Optimization of scalar and bianisotropic electromagnetic metasurface parameters satisfying far-field criteria," 2020. arXiv: 2011.09016.
T. Brown, C. Narendra, Y. Vahabzadeh, C. Caloz, and P. Mojabi, "On the use of electromagnetic inversion for metasurface design," IEEE Transactions on Antennas and Propagation, vol. 68, no. 3, pp. 1812-1824, 2020.
V. G. Ataloglou and G. V. Eleftheriades, "Arbitrary wave transformations with Huygens' metasurfaces through surface-wave optimization," IEEE Antennas and Wireless Propagation Letters, vol. 20, no. 9, pp. 1750-1754, 2021.
J. Budhu and A. Grbic, "Perfectly reflecting metasurface reflectarrays: Mutual coupling modeling between unique elements through homogenization," IEEE Transactions on Antennas and Propagation, vol. 69, no. 1, pp. 122-134, 2021.
T. Brown and P. Mojabi, "Cascaded metasurface design using electromagnetic inversion with gradient-based optimization," IEEE Transactions on Antennas and Propagation, pp. 1-1, 2021.
J. Budhu, A. Grbic, and E. Michielssen, "Dualband stacked metasurface reflectarray," in 2020 IEEE International Symposium on Antennas and Propagation and North American Radio Science Meeting, pp. 821-822, 2020.
J. Budhu, A. Grbic, and E. Michielssen, "Design of multilayer, dual-band metasurface reflectarrays," in 2020 14th European Conference on Antennas and Propagation (EuCAP), pp. 1-4, 2020.
J. Budhu and A. Grbic, "Passive reflective metasurfaces for far-field beamforming," in 2021 15th European Conference on Antennas and Propagation (EuCAP), pp. 1-4, 2021.
J. Budhu and A. Grbic, "Fast and accurate optimization of metasurfaces with gradient descent and the woodbury matrix identity," 2021. arXiv: 2108.02762.
J. Budhu, E. Michielssen, and A. Grbic, "The design of dual band stacked metasurfaces using integral equations," 2021. arXiv: 2103.03676.
O. Rabinovich and A. Epstein, "Analytical design of printed circuit board (PCB) metagratings for perfect anomalous reflection," IEEE Transactions on Antennas and Propagation, vol. 66, no. 8, pp. 4086-4095, 2018.
O. Rabinovich and A. Epstein, "Arbitrary diffraction engineering with multilayered multielement metagratings," IEEE Transactions on Antennas and Propagation, vol. 68, no. 3, pp. 1553-1568, 2020.
G. Xu, S. V. Hum, and G. V. Eleftheriades, "Dual-band reflective metagratings with interleaved meta-wires," IEEE Transactions on Antennas and Propagation, vol. 69, no. 4, pp. 2181-2193, 2021.
G. Xu, G. V. Eleftheriades, and S. V. Hum, "Analysis and design of general printed circuit board metagratings with an equivalent circuit model approach," IEEE Transactions on Antennas and Propagation, vol. 69, no. 8, pp. 4657-4669, 2021.
A. M. H. Wong and G. V. Eleftheriades, "Perfect anomalous reflection with a bipartite Huygens' metasurface," Phys. Rev. X, vol. 8, p. 011036, Feb 2018.
V. Popov, S. N. Burokur, and F. Boust, "Conformal sparse metasurfaces for wavefront manipulation," Phys. Rev. Applied, vol. 14, p. 044007, Oct 2020.
V. Popov, B. Ratni, S. N. Burokur, and F. Boust, "Non-local reconfigurable sparse metasurface: Efficient near-field and far-field wavefront manipulations," Advanced Optical Materials, vol. 9, no. 4, p. 2001316, 2021.
M. Salucci, A. Gelmini, G. Oliveri, N. Anselmi, and A. Massa, "Synthesis of shaped beam reflectarrays with constrained geometry by exploiting nonradiating surface currents," IEEE Transactions on Antennas and Propagation, vol. 66, no. 11, pp. 5805-5817, 2018.
C. Lu and W. Chew, "Electromagnetic scattering from material coated PEC objects: a hybrid volume and surface integral equation approach," in IEEE Antennas and Propagation Society International Symposium. 1999 Digest. Held in conjunction with: USNC/URSI National Radio Science Meeting, vol. 4, pp. 2562-2565, 1999.
R. Harrington, Field Computation by Moment Methods. IEEE Press series on electromagnetic waves, Macmillan, 1968.
C. Balanis, Advanced Engineering Electromagnetics, 2nd Edition. Wiley, 2012.
J. Richmond, "Scattering by a dielectric cylinder of arbitrary cross section shape," IEEE Transactions on Antennas and Propagation, vol. 13, no. 3, pp. 334-341, 1965.
J. Grainger and W. Stevenson, Power System Analysis. Electrical engineering series, McGraw-Hill, 1994.
| []
|
[
"LABEL-EFFICIENT SEMANTIC SEGMENTATION WITH DIFFUSION MODELS",
"LABEL-EFFICIENT SEMANTIC SEGMENTATION WITH DIFFUSION MODELS"
]
| [
"Dmitry Baranchuk \nYandex Research\n\n",
"Ivan Rubachev \nYandex Research\n\n",
"Andrey Voynov \nYandex Research\n\n",
"Valentin Khrulkov \nYandex Research\n\n",
"Artem Babenko \nYandex Research\n\n"
]
| [
"Yandex Research\n",
"Yandex Research\n",
"Yandex Research\n",
"Yandex Research\n",
"Yandex Research\n"
]
| []
| Denoising diffusion probabilistic models have recently received much research attention since they outperform alternative approaches, such as GANs, and currently provide state-of-the-art generative performance. The superior performance of diffusion models has made them an appealing tool in several applications, including inpainting, super-resolution, and semantic editing. In this paper, we demonstrate that diffusion models can also serve as an instrument for semantic segmentation, especially in the setup when labeled data is scarce. In particular, for several pretrained diffusion models, we investigate the intermediate activations from the networks that perform the Markov step of the reverse diffusion process. We show that these activations effectively capture the semantic information from an input image and appear to be excellent pixel-level representations for the segmentation problem. Based on these observations, we describe a simple segmentation method, which can work even if only a few training images are provided. Our approach significantly outperforms the existing alternatives on several datasets for the same amount of human supervision. The source code of the project is publicly available.Published as a conference paper at ICLR 2022 2. We design a simple semantic segmentation approach that exploits these representations and outperforms the alternatives in the few-shot operating point.3. We compare the DDPM-based representations with their GAN-based counterparts on the same datasets and demonstrate the advantages of the former in the context of semantic segmentation.RELATED WORKIn this section, we briefly describe the existing lines of research relevant to our work.Diffusion models(Sohl-Dickstein et al., 2015;Ho et al., 2020)are a class of generative models that approximate the distribution of real images by the endpoint of the Markov chain which originates from a simple parametric distribution, typically a standard Gaussian. Each Markov step is modeled by a deep neural network that effectively learns to invert the diffusion process with a known Gaussian kernel. Ho et al. highlighted the equivalence of diffusion models and score matching(Song & Ermon, 2019;, showing them to be two different perspectives on the gradual conversion of a simple known distribution into a target distribution via the iterative denoising process. Very recent works(Nichol, 2021;Dhariwal & Nichol, 2021)have developed more powerful model architectures as well as different advanced objectives, which led to the "victory" of DDPM over GANs in terms of generative quality and diversity. DDPM have been widely used in several applications, including image colorization ), super-resolution (Saharia et al., 2021Li et al., 2021b), inpainting (Song et al., 2021, and semantic editing(Meng et al., 2021). In our work, we demonstrate that one can also successfully use them for semantic segmentation. | null | [
"https://arxiv.org/pdf/2112.03126v3.pdf"
]
| 244,908,617 | 2112.03126 | 42f2271cebb7f272b0066c1f22d33381f139ee68 |
LABEL-EFFICIENT SEMANTIC SEGMENTATION WITH DIFFUSION MODELS
Dmitry Baranchuk
Yandex Research
Ivan Rubachev
Yandex Research
Andrey Voynov
Yandex Research
Valentin Khrulkov
Yandex Research
Artem Babenko
Yandex Research
LABEL-EFFICIENT SEMANTIC SEGMENTATION WITH DIFFUSION MODELS
Published as a conference paper at ICLR 2022
Denoising diffusion probabilistic models have recently received much research attention since they outperform alternative approaches, such as GANs, and currently provide state-of-the-art generative performance. The superior performance of diffusion models has made them an appealing tool in several applications, including inpainting, super-resolution, and semantic editing. In this paper, we demonstrate that diffusion models can also serve as an instrument for semantic segmentation, especially in the setup when labeled data is scarce. In particular, for several pretrained diffusion models, we investigate the intermediate activations from the networks that perform the Markov step of the reverse diffusion process. We show that these activations effectively capture the semantic information from an input image and appear to be excellent pixel-level representations for the segmentation problem. Based on these observations, we describe a simple segmentation method, which can work even if only a few training images are provided. Our approach significantly outperforms the existing alternatives on several datasets for the same amount of human supervision. The source code of the project is publicly available.Published as a conference paper at ICLR 2022 2. We design a simple semantic segmentation approach that exploits these representations and outperforms the alternatives in the few-shot operating point.3. We compare the DDPM-based representations with their GAN-based counterparts on the same datasets and demonstrate the advantages of the former in the context of semantic segmentation.RELATED WORKIn this section, we briefly describe the existing lines of research relevant to our work.Diffusion models(Sohl-Dickstein et al., 2015;Ho et al., 2020)are a class of generative models that approximate the distribution of real images by the endpoint of the Markov chain which originates from a simple parametric distribution, typically a standard Gaussian. Each Markov step is modeled by a deep neural network that effectively learns to invert the diffusion process with a known Gaussian kernel. Ho et al. highlighted the equivalence of diffusion models and score matching(Song & Ermon, 2019;, showing them to be two different perspectives on the gradual conversion of a simple known distribution into a target distribution via the iterative denoising process. Very recent works(Nichol, 2021;Dhariwal & Nichol, 2021)have developed more powerful model architectures as well as different advanced objectives, which led to the "victory" of DDPM over GANs in terms of generative quality and diversity. DDPM have been widely used in several applications, including image colorization ), super-resolution (Saharia et al., 2021Li et al., 2021b), inpainting (Song et al., 2021, and semantic editing(Meng et al., 2021). In our work, we demonstrate that one can also successfully use them for semantic segmentation.
INTRODUCTION
Denoising diffusion probabilistic models (DDPM) (Sohl-Dickstein et al., 2015;Ho et al., 2020) have recently outperformed alternative approaches to model the distribution of natural images both in the realism of individual samples and their diversity (Dhariwal & Nichol, 2021). These advantages of DDPM are successfully exploited in applications, such as colorization , inpainting , super-resolution (Saharia et al., 2021;Li et al., 2021b), and semantic editing (Meng et al., 2021), where DDPM often achieve more impressive results compared to GANs.
So far, however, DDPM were not exploited as a source of effective image representations for discriminative computer vision problems. While the prior literature has demonstrated that various generative paradigms, such as GANs (Donahue & Simonyan, 2019) or autoregressive models (Chen et al., 2020a), can be used to extract the representations for common vision tasks, it is not clear if DDPM can also serve as representation learners. In this paper, we provide an affirmative answer to this question in the context of semantic segmentation.
In particular, we investigate the intermediate activations from the U-Net network that approximates the Markov step of the reverse diffusion process in DDPM. Intuitively, this network learns to denoise its input, and it is not clear why the intermediate activations should capture semantic information needed for high-level vision problems. Nevertheless, we show that on certain diffusion steps, these activations do capture such information, and therefore, can potentially be used as image representations for downstream tasks. Given these observations, we propose a simple semantic segmentation method, which exploits these representations and works successfully even if only a few labeled images are provided. On several datasets, we show that our DDPM-based segmentation method outperforms the existing baselines for the same amount of supervision.
To sum up, the contributions of our paper are:
1. We investigate the representations learned by the state-of-the-art DDPM and show that they capture high-level semantic information valuable for downstream vision tasks.
2. We design a simple semantic segmentation approach that exploits these representations and outperforms the alternatives in the few-shot operating point.
3. We compare the DDPM-based representations with their GAN-based counterparts on the same datasets and demonstrate the advantages of the former in the context of semantic segmentation.
RELATED WORK
In this section, we briefly describe the existing lines of research relevant to our work.
Diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020) are a class of generative models that approximate the distribution of real images by the endpoint of a Markov chain which originates from a simple parametric distribution, typically a standard Gaussian. Each Markov step is modeled by a deep neural network that effectively learns to invert the diffusion process with a known Gaussian kernel. Ho et al. (2020) highlighted the equivalence of diffusion models and score matching (Song & Ermon, 2019), showing them to be two different perspectives on the gradual conversion of a simple known distribution into a target distribution via an iterative denoising process. Very recent works (Nichol, 2021; Dhariwal & Nichol, 2021) have developed more powerful model architectures as well as different advanced objectives, which led to the "victory" of DDPM over GANs in terms of generative quality and diversity. DDPM have been widely used in several applications, including image colorization, super-resolution (Saharia et al., 2021; Li et al., 2021b), inpainting (Song et al., 2021), and semantic editing (Meng et al., 2021). In our work, we demonstrate that one can also successfully use them for semantic segmentation.
Image segmentation with generative models is an active research direction at the moment, however, existing methods are primarily based on GANs. The first line of works (Voynov & Babenko, 2020;Voynov et al., 2021;Melas-Kyriazi et al., 2021) is based on the evidence that the latent spaces of the state-of-the-art GANs have directions corresponding to effects that influence the foreground/background pixels differently, which allows producing synthetic data to train segmentation models. However, these approaches are currently able to perform binary segmentation only, and it is not clear if they can be used in the general setup of semantic segmentation. The second line of works (Zhang et al., 2021;Tritrong et al., 2021;Xu, 2021;Galeev et al., 2020) is more relevant to our study since they are based on the intermediate representations obtained in GANs. In particular, the method proposed in (Zhang et al., 2021) trains a pixel class prediction model on these representations and confirms their label efficiency. In the experimental section, we compare the method from (Zhang et al., 2021) to our DDPM-based one and demonstrate several distinctive advantages of our solution.
Representations from generative models for discriminative tasks. The usage of generative models, as representation learners, has been widely investigated for global prediction (Donahue & Simonyan, 2019;Chen et al., 2020a), and dense prediction problems (Zhang et al., 2021;Tritrong et al., 2021;Xu, 2021;Xu et al., 2021). While previous works highlighted the practical advantages of these representations, such as out-of-distribution robustness (Li et al., 2021a), generative models as representation learners receive less attention compared to alternative unsupervised methods, e.g., based on contrastive learning (Chen et al., 2020b). The main reason is probably the difficulty of training a high-quality generative model on a complex, diverse dataset. However, given the recent success of DDPM on Imagenet (Deng et al., 2009), one can expect that this direction will attract more attention in the future.
REPRESENTATIONS FROM DIFFUSION MODELS
In the following section, we investigate the image representations learned by diffusion models. First, we provide a brief overview of the DDPM framework. Then, we describe how to extract features with DDPM and investigate what kind of semantic information these features might capture.
Background. Diffusion models transform noise x T ∼N (0, I) to the sample x 0 by gradually denoising x T to less noisy samples x t . Formally, we are given a forward diffusion process:
$q(x_t \mid x_{t-1}) := \mathcal{N}\left(x_t;\ \sqrt{1-\beta_t}\, x_{t-1},\ \beta_t I\right), \quad (1)$
for some fixed variance schedule β_1, . . . , β_T.
Figure 1: (1) x_0 → x_t by adding noise according to q(x_t | x_0); (2) extracting feature maps from a noise predictor ε_θ(x_t, t); (3) collecting pixel-level representations by upsampling the feature maps to the image resolution and concatenating them; (4) using the pixel-wise feature vectors to train an ensemble of MLPs to predict a class label for each pixel.
Importantly, a noisy sample x t can be obtained directly from the data x 0 :
$q(x_t \mid x_0) := \mathcal{N}\left(x_t;\ \sqrt{\bar{\alpha}_t}\, x_0,\ (1-\bar{\alpha}_t) I\right), \qquad x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon, \quad \epsilon \sim \mathcal{N}(0, I), \quad (2)$
where $\alpha_t := 1 - \beta_t$ and $\bar{\alpha}_t := \prod_{s=1}^{t} \alpha_s$. A pretrained DDPM approximates the reverse process:
$p_\theta(x_{t-1} \mid x_t) := \mathcal{N}\left(x_{t-1};\ \mu_\theta(x_t, t),\ \Sigma_\theta(x_t, t)\right). \quad (3)$
In practice, rather than predicting the mean of the distribution in Equation (3), the noise predictor network ε_θ(x_t, t) predicts the noise component at step t; the mean is then a linear combination of this noise component and x_t. The covariance predictor Σ_θ(x_t, t) can either be a fixed set of scalar covariances or be learned as well (the latter was shown to improve the model quality (Nichol, 2021)).
The denoising model ε_θ(x_t, t) is typically parameterized by different variants of the UNet architecture (Ronneberger et al., 2015), and in our experiments we investigate the state-of-the-art one proposed in (Dhariwal & Nichol, 2021).
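For illustration, the forward corruption of Equation (2), which produces the input x_t fed to the noise predictor, can be written in a few lines of PyTorch; the linear β schedule below is a common choice and is assumed rather than taken from the checkpoints used in this work.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)              # an assumed linear schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)     # \bar{alpha}_t = prod_s (1 - beta_s)

def q_sample(x0, t, noise=None):
    """Draw x_t ~ q(x_t | x_0) as in Eq. (2)."""
    if noise is None:
        noise = torch.randn_like(x0)
    a_bar = alphas_bar[t]
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise, noise

x0 = torch.rand(1, 3, 256, 256) * 2 - 1            # image scaled to [-1, 1]
x_t, eps = q_sample(x0, t=150)                      # noisy input for a fixed step t = 150
```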
Extracting representations. For a given real image x_0 ∈ R^{H×W×3}, one can compute T sets of activation tensors from the noise predictor network ε_θ(x_t, t). The overall scheme for a timestep t is presented in Figure 1. First, we corrupt x_0 by adding Gaussian noise according to Equation (2). The noisy x_t is used as an input to ε_θ(x_t, t), parameterized by the UNet model. The UNet's intermediate activations are then upsampled to H × W with bilinear interpolation. This allows treating them as pixel-level representations of x_0.
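A hedged sketch of this feature-extraction step is given below. The attribute unet.decoder_blocks and the call signature of the noise predictor are hypothetical placeholders for whatever the actual checkpoint exposes; the essential operations are the forward hooks, the bilinear upsampling to the image resolution, and the channel-wise concatenation.

```python
import torch
import torch.nn.functional as F

def extract_pixel_features(unet, x_t, t, block_indices=(5, 6, 7, 8, 12), size=(256, 256)):
    """Collect upsampled activations from selected decoder blocks of a noise predictor.
    `unet.decoder_blocks` is a hypothetical attribute; the real checkpoint's module
    names must be substituted."""
    activations = []
    hooks = [unet.decoder_blocks[b].register_forward_hook(
                 lambda _m, _inp, out, acts=activations: acts.append(out))
             for b in block_indices]
    with torch.no_grad():
        unet(x_t, torch.tensor([t]))               # run one reverse-step prediction
    for h in hooks:
        h.remove()
    upsampled = [F.interpolate(a, size=size, mode="bilinear", align_corners=False)
                 for a in activations]
    return torch.cat(upsampled, dim=1)             # (B, C_total, H, W) pixel-level features
```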
REPRESENTATION ANALYSIS
We analyze the representations produced by the noise predictor ε_θ(x_t, t) for different t. We consider the state-of-the-art DDPM checkpoints trained on the LSUN-Horse and FFHQ-256 datasets¹.
The intermediate activations from the noise predictor capture semantic information. For this experiment, we take a few images from the LSUN-Horse and FFHQ datasets and manually assign each pixel to one of the 21 and 34 semantic classes, respectively. Our goal is to understand whether the pixel-level representations produced by DDPM effectively capture the information about semantics. To this end, we train a multi-layer perceptron (MLP) to predict the pixel semantic label from its features produced by one of the 18 UNet decoder blocks on a specific diffusion step t. Note that we consider only the decoder activations because they also aggregate the encoder activations through the skip connections. MLPs are trained on 20 images and evaluated on 20 hold-out ones. The predictive performance is measured in terms of mean IoU.
Figure 3: The evolution of predictive performance of DDPM-based pixel-wise representations on the LSUN-Horse dataset for classes with the smallest (Left) and largest (Right) average areas. The predictive performance for small-sized objects starts growing later in the reverse process. The deeper blocks are more informative for larger objects and the shallower blocks are more informative for smaller objects. A similar evaluation for other datasets is provided in Appendix A.
The evolution of predictive performance across the different blocks and diffusion steps t is presented in Figure 2. The blocks are numbered from the deepest to the shallowest. Figure 2 shows that the discriminability of the features produced by the noise predictor ε_θ(x_t, t) varies for different blocks and diffusion steps. In particular, the features corresponding to the later steps of the reverse diffusion process typically capture semantic information more effectively. In contrast, the ones corresponding to the early steps are generally uninformative. Across different blocks, the features produced by the layers in the middle of the UNet decoder appear to be the most informative on all diffusion steps.
Also, we separately consider small-sized and large-sized semantic classes based on the average area in the annotated dataset. Then, we evaluate mean IoU for these classes independently across the different UNet blocks and diffusion steps. The results on LSUN-Horse are in Figure 3. As expected, the predictive performance for large-sized objects starts growing earlier in the reverse process. The shallower blocks are more informative for smaller objects, while the deeper blocks are more so for the larger ones. In both cases, the most discriminative features still correspond to the middle blocks.
Figure 2 implies that for certain UNet blocks and diffusion steps, similar DDPM-based representations correspond to the pixels of the same semantics. Figure 4 shows the k-means clusters (k=5) formed by the features extracted by the FFHQ checkpoint from the blocks {6, 8, 10, 12} on the diffusion steps {50, 200, 400, 600, 800}, and confirms that clusters can span coherent semantic objects and object-parts. In the block B=6, the features correspond to coarse semantic masks. At the other extreme, the features from B=12 can discriminate between fine-grained face parts but carry less semantic meaning for coarse fragmentation. Across different diffusion steps, the most meaningful features correspond to the later ones. We attribute this behavior to the fact that on the earlier steps of the reverse process, the global structure of a DDPM sample has not yet emerged; therefore, it is hardly possible to predict segmentation masks at this stage. This intuition is qualitatively confirmed by the masks in Figure 4. For t=800, the masks poorly reflect the content of actual images, while for smaller values of t, the masks and images are semantically coherent.
DDPM-BASED REPRESENTATIONS FOR FEW-SHOT SEMANTIC SEGMENTATION
The potential effectiveness of the intermediate DDPM activations observed above implies their usage as image representations for dense prediction tasks. Figure 1 schematically presents our overall approach for image segmentation, which exploits the discriminability of these representations. In more detail, we consider a few-shot semi-supervised setup, when a large number of unlabeled images $\{X_1, \ldots, X_N\} \subset \mathbb{R}^{H \times W \times 3}$ from the particular domain are available, and only for $n$ training images $\{X_1, \ldots, X_n\} \subset \mathbb{R}^{H \times W \times 3}$ the groundtruth $K$-class semantic masks $\{Y_1, \ldots, Y_n\} \subset \mathbb{R}^{H \times W \times \{1, \ldots, K\}}$ are provided.
As a first step, we train a diffusion model on the whole {X 1 , . . . , X N } in an unsupervised manner.
Then, this diffusion model is used to extract the pixel-level representations of the labeled images using the subset of the UNet blocks and diffusion steps t. In this work, we use the representations from the middle blocks B={5, 6, 7, 8, 12} of the UNet decoder and later steps t={50, 150, 250} of the reverse diffusion process. These blocks and time steps are motivated by the insights from Section 3.1 but intentionally not tuned for each dataset.
While the feature extraction at the particular time step is stochastic, we fix the noise for all timesteps t and ablate this in Section 4.1. The extracted representations from all blocks B and steps t are upsampled to the image size and concatenated, forming the feature vectors for all pixels of the training images. The overall dimension of the pixel-level representations is 8448.
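The following sketch (again our illustration, building on the hypothetical `extract_pixel_features` helper above) assembles the per-pixel feature vectors by looping over the chosen blocks and diffusion steps with a single fixed noise tensor, as described in this section.

```python
import torch

def pixelwise_representation(unet, diffusion, x0,
                             blocks=(5, 6, 7, 8, 12), steps=(50, 150, 250)):
    """Build per-pixel feature vectors by concatenating upsampled activations
    over the chosen decoder blocks and diffusion steps.

    Uses the hypothetical `extract_pixel_features` helper sketched earlier and
    fixes one noise tensor for all timesteps, mirroring the deterministic
    option described in the text.
    """
    noise = torch.randn_like(x0)           # sampled once, shared across all t
    per_step = [
        extract_pixel_features(unet, x0, t, diffusion, blocks=blocks, noise=noise)
        for t in steps
    ]
    feats = torch.cat(per_step, dim=1)     # (B, C_total, H, W); 8448 channels
                                           # in the paper's configuration
    b, c, h, w = feats.shape
    # One feature vector per pixel, ready for the pixel classifiers.
    return feats.permute(0, 2, 3, 1).reshape(b * h * w, c)
```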
Then, following (Zhang et al., 2021), we train an ensemble of independent multi-layer perceptrons (MLPs) on these feature vectors, which aim to predict a semantic label of each pixel available for training images. We adopt the ensemble configuration and training settings from (Zhang et al., 2021) and exploit them across all other methods in our experiments, see Appendix C for details.
To segment a test image, we extract its DDPM-based pixel-wise representations and use them to predict the pixel labels by the ensemble. The final prediction is obtained by majority voting.
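A minimal sketch of the pixel-classifier ensemble and majority voting is given below. The MLP layout and hyperparameters follow the description in Appendix C (two hidden layers with ReLU and batch normalization, 10 MLPs, Adam with learning rate 0.001, batch size 64), while the exact layer ordering and the simplified data handling are our own choices for illustration.

```python
import torch
import torch.nn as nn

class PixelMLP(nn.Module):
    # Two hidden layers with ReLU and batch normalization, as in Appendix C.
    def __init__(self, in_dim=8448, hidden=(256, 128), num_classes=34):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden[0]), nn.ReLU(), nn.BatchNorm1d(hidden[0]),
            nn.Linear(hidden[0], hidden[1]), nn.ReLU(), nn.BatchNorm1d(hidden[1]),
            nn.Linear(hidden[1], num_classes),
        )

    def forward(self, x):
        return self.net(x)

def train_ensemble(features, labels, num_models=10, num_classes=34, epochs=4):
    # `features`: (N_pixels, feature_dim), `labels`: (N_pixels,) semantic labels.
    models = []
    for _ in range(num_models):
        model = PixelMLP(features.shape[1], num_classes=num_classes)
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for i in range(0, len(features), 64):            # batch size 64
                xb, yb = features[i:i + 64], labels[i:i + 64]
                opt.zero_grad()
                loss_fn(model(xb), yb).backward()
                opt.step()
        models.append(model)
    return models

@torch.no_grad()
def predict(models, features):
    # Majority vote over the per-model argmax predictions.
    votes = torch.stack([m(features).argmax(dim=1) for m in models])  # (M, N)
    return votes.mode(dim=0).values
```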
EXPERIMENTS
This section experimentally confirms the advantage of the DDPM-based representations for the semantic segmentation problem. We start from a thorough comparison to the existing alternatives and then dissect the reasons for the DDPM success by additional analysis.
Datasets. In our evaluation, we mainly work with the "bedroom", "cat" and "horse" categories from LSUN (Yu et al., 2015) and FFHQ-256 (Karras et al., 2019). As a training set for each dataset, we consider several images for which the fine-grained semantic masks are collected following the protocol from (Zhang et al., 2021). For each dataset, a professional assessor was hired to annotate train and test samples. We denote the collected datasets as Bedroom-28, FFHQ-34, Cat-15, Horse-21, where the number corresponds to the number of semantic classes. Additionally, we consider two datasets, which, in contrast to others, have publicly available annotations and sizable evaluation sets:
• ADE-Bedroom-30 is a subset of the ADE20K dataset (Zhou et al., 2018), where we extract only images of bedroom scenes with the 30 most frequent classes. We resize each image to 256 for the smaller side and then crop them to obtain the 256×256 samples.
• CelebA-19 is a subset of the CelebAMask-HQ dataset (Lee et al., 2020), which provides the annotation for 19 facial attributes. All images are resized to 256 resolution.
The number of annotated images for each dataset are in Table 1. Other details are in Appendix E.
Methods. In the evaluation, we compare our method (denoted as DDPM) to several prior approaches which tackle the few-shot semantic segmentation setup. First, we describe the baselines that produce a large set of annotated synthetic images to train a segmentation model:
• DatasetGAN (Zhang et al., 2021) - this method exploits the discriminability of pixel-level features produced by GANs. In more detail, assessors annotate a few GAN-produced images. Then, the latent codes of these images are used to obtain the intermediate generator activations, which are considered as pixel-level representations. Given these representations, a classifier is trained to predict a semantic label for each pixel. This classifier is then used to label new synthetic GAN images, which, for their part, serve as a training set for the DeepLabV3 segmentation model (Chen et al., 2017). For each dataset, we increase the number of synthetic images until the performance on the validation set saturates. Following (Zhang et al., 2021), we also remove 10% of synthetic samples with the most uncertain predictions.
• DatasetDDPM mirrors the DatasetGAN baseline with the only difference being that GANs are replaced with DDPMs. We include this baseline to compare the GAN-based and DDPM-based representations in the same scenario.
Note that our segmentation method described in Section 3.2 is more straightforward compared to DatasetGAN and DatasetDDPM since it does not require auxiliary steps of the synthetic dataset generation and training the segmentation model on it.
Then, we consider a set of baselines that allow extracting intermediate activations from the real images directly and use them as pixel-level representations similarly to our method. In contrast to DatasetGAN and DatasetDDPM, these methods can potentially be beneficial due to the absence of the domain gap between real and synthetic images.
• MAE (He et al., 2021) - one of the state-of-the-art self-supervised methods, which learns a denoising autoencoder to reconstruct missing patches. We use ViT-Large (Dosovitskiy et al., 2021) as a backbone model and reduce the patch size to 8×8 to increase the spatial dimensions of the feature maps. We pretrain all models on the same datasets as DDPM using the official code 2 . The feature extraction for this method is described in Appendix F.
• SwAV (Caron et al., 2020) - one more recent self-supervised approach. We consider a twice wider ResNet-50 model for evaluation. All models are pretrained on the same datasets as DDPM also using the official source code 3 . The input image resolution is 256.
• GAN Inversion employs the state-of-the-art method (Tov et al., 2021) to obtain the latent codes for real images. We map the annotated real images to the GAN latent space, which allows computing the intermediate generator activations and using them as pixel-level representations.
Main results. The comparison of the methods in terms of the mean IoU measure is presented in Table 2. The results are averaged over 5 independent runs for different data splits. We also report per class IoUs in Appendix D. Additionally, we provide several qualitative examples of segmentation with our method in Figure 5. Below we highlight several key observations:
• The proposed method based on the DDPM representations significantly outperforms the alternatives on most datasets.
• The MAE baseline is the strongest competitor to the DDPM-based segmentation and demonstrates comparable results on the FFHQ-34 and Cat-15 datasets.
• The SwAV baseline underperforms compared to the DDPM-based segmentation. We attribute this behavior to the fact that this baseline is trained in the discriminative fashion and can suppress the details, which are needed for fine-grained semantic segmentation. This result is consistent with the recent findings in (Cole et al., 2021), which shows that the state-of-the-art contrastive methods produce representations, which are suboptimal for fine-grained problems.
• DatasetDDPM outperforms its counterpart DatasetGAN against most benchmarks. Note that both these methods use the DeepLabV3 network. We attribute this superiority to the higher quality of DDPM synthetics, therefore, a smaller domain gap between synthetic and real data.
• On most datasets, DDPM outperforms the DatasetDDPM competitor. We provide an additional experiment to investigate this in the discussion section below.
Overall, the proposed DDPM-based segmentation outperforms the baselines that exploit alternative generative models and also the baselines trained in the self-supervised fashion. This result highlights the potential of using the state-of-the-art DDPMs as strong unsupervised representation learners.
DISCUSSION
The effect of training on real data. The proposed DDPM method is trained on annotated real images, while DatasetDDPM and DatasetGAN are trained on synthetic ones, which are typically less natural, diverse, and can lack objects of particular classes. Moreover, synthetic images are harder for human annotation since they might have some distorted objects that are difficult to assign to a particular class. In the following experiment, we quantify the performance drop caused by training on real or synthetic data. Specifically, Table 3 reports the performance of the DDPM approach trained on real, DDPM-produced and GAN-produced annotated images. As can be seen, training on real images is very beneficial on the domains where the fidelity of generative models is still relatively low, e.g., LSUN-Cat, which indicates that annotated real images are a more reliable source of supervision. Moreover, if the DDPM method is trained on synthetic images, its performance becomes on par with DatasetDDPM. On the other hand, when trained on GAN-produced samples, DDPM significantly outperforms DatasetGAN. We attribute this to the fact that DDPMs provide more semantically-valuable pixel-wise representations compared to GANs.
Sample-efficiency. In this experiment, we evaluate the performance of our method when it utilizes less annotated data. We provide mIoU for four datasets in Table 4. Importantly, DDPM is still able to outperform most baselines in Table 2, using significantly less supervision.
The effect of stochastic feature extraction. Here, we investigate whether our method can benefit from the stochastic feature extraction described in Section 3.2. We consider the deterministic case, when the noise $\epsilon \sim \mathcal{N}(0, I)$ is sampled once and used in (2) to obtain $x_t$ for all timesteps $t$ during both training and evaluation. Then, we compare it to the following stochastic options:
First, different $\epsilon_t$ are sampled for different timesteps $t$ and shared during the training and evaluation. Second, one samples different noise for all timesteps at each training iteration; during the evaluation the method also uses unseen noise samples.
Table 5: Performance of the DDPM-based method for different feature extraction variations. All considered stochastic options provide a similar mIoU to the deterministic one.
The results are provided in Table 5. As one can see, the difference in the performance is marginal. We attribute this behavior to the following reasons:
• Our method uses later t of the reverse diffusion process where the noise magnitude is low.
• Since we exploit the deep layers of the UNet model, the noise might not affect the activations from these layers significantly.
Robustness to input corruptions. In this experiment, we investigate the robustness of DDPM-based representations. First, we learn pixel classifiers on the clean images using the DDPM, SwAV and MAE representations on the Bedroom-28 and Horse-21 datasets. Then, 18 diverse corruption types, adopted from (Hendrycks & Dietterich, 2019), are applied to test images. Each corruption has five levels of severity. In Figure 6, we provide mean IoUs computed over all corruption types for 1, 3, 5 levels of severity, denoted as "weak", "medium" and "strong", respectively.
One can observe that the proposed DDPM-based method demonstrates higher robustness and preserves its advantage over the SwAV and MAE models even for severe image distortions.
CONCLUSION
This paper demonstrates that DDPMs can serve as representation learners for discriminative computer vision problems. Compared to GANs, diffusion models allow for a straightforward computation of these representations for real images, and one does not need to learn an additional encoder, which maps images to the latent space. This DDPM advantage and superior generative quality provide state-of-the-art performance in the few-shot semantic segmentation task. A notable limitation of the DDPM-based segmentation is the requirement of high-quality diffusion models trained on the dataset at hand, which can be challenging for complex domains, like ImageNet or MSCOCO. However, given the rapid research progress on DDPM, we expect they will reach these milestones in the near future, thereby extending the range of applicability for the corresponding representations.
MLP architecture. We adopt the MLP architecture from (Zhang et al., 2021). Specifically, we use MLPs with two hidden layers with ReLU nonlinearity and batch normalization. The sizes of hidden layers are 128 and 32 for datasets with a number of classes less than 30, and 256 and 128 for others.
Also, we evaluate the performance of the proposed method for twice wider / deeper MLPs on the Bedroom-28 and FFHQ-34 datasets and do not observe any noticeable difference, see Table 7. In Figure 10, we report the statistics of classes computed over annotated real images as well as annotated synthetic images produced by GAN and DDPM.
F EXTRACTING REPRESENTATIONS FROM MAE
To obtain pixelwise representations, we apply the model to a fully observed image (mask ratio = 0) of resolution 256 and extract feature maps from the deepest 12 ViT-L blocks. The feature maps from each block have 1024×32×32 dimensions. Similarly to other methods, we upsample the extracted feature maps to 256×256 and concatenate them. The overall dimension of the pixel representation is 12288.
In addition, we investigated other feature extraction strategies and got the following observations:
1. Including activations from the decoder did not provide any noticeable gains;
2. Extracting activations right after self-attention layers caused slightly inferior performance;
3. Extracting activations from every second encoder block also provided slightly worse results.
Figure 1: Overview of the proposed method.
Figure 4: Examples of k-means clusters (k=5) formed by the features extracted from the UNet decoder blocks {6, 8, 10, 12} on the diffusion steps {50, 200, 400, 600, 800}. The clusters from the middle blocks spatially span coherent semantic objects and parts.
Figure 5: Examples of segmentation masks predicted by our method on the test images along with the groundtruth annotated masks.
Figure 6: mIoU degradation for different image corruption levels on the Bedroom-28 and Horse-21 datasets. DDPM demonstrates higher robustness and preserves its advantage for all distortion levels.
Figure 9: Per class IoUs for DatasetGAN, DatasetDDPM and DDPM.
Figure 10: Number of instances of each semantic class in the annotated real and synthetic train sets.
Table 1: Number of annotated images for each dataset used in our evaluation.
Table 2: The comparison of the segmentation methods in terms of mean IoU. (*) On CelebA-19 and ADE-Bedroom-30, we evaluate models pretrained on FFHQ-256 and LSUN-Bedroom, respectively.
• GAN Encoder - while GAN Inversion struggles to reconstruct images from LSUN domains, we also consider the activations of the pretrained GAN encoder used for GAN Inversion.
• VDVAE (Child, 2021) - state-of-the-art autoencoder model. The intermediate activations are extracted from both encoder and decoder and concatenated. While there are no pretrained models on the LSUN datasets, we evaluate this model only on the publicly available checkpoint 4 on FFHQ-256. Note that VAEs are still significantly inferior to GANs and DDPMs on LSUN.
• ALAE (Pidhorskyi et al., 2020) adopts the StyleGANv1 generator and adds an encoder network to the adversarial training. We extract features from the encoder model. In our evaluation, we use publicly available models on LSUN-Bedroom and FFHQ-1024 5 .
Generative pretrained models. In our experiments, we use the state-of-the-art StyleGAN2 (Karras et al., 2020) models for the GAN-based baselines and the state-of-the-art pretrained ADMs (Dhariwal & Nichol, 2021) for our DDPM-based method. Since there is not a pretrained model for FFHQ-256, we train it ourselves using the official implementation 6 . For evaluation on the ADE-Bedroom-30 dataset, we use the models (including the baselines) pretrained on LSUN-Bedroom. For CelebA-19, we evaluate the models trained on FFHQ-256.
Table 3: Performance of DDPM-based segmentation when trained on real and synthetic images. When trained on DDPM-produced data, DDPM demonstrates comparable performance to DatasetDDPM. When trained on GAN-produced data, DDPM still significantly outperforms DatasetGAN, but the gap between them reduces.
Table 4: Evaluation of the proposed method with a different number of labeled training data. Even using less annotated data, DDPM still outperforms most baselines in Table 2.
Share Train/Test   Share for t   Bedroom-28   FFHQ-34
        +               +        49.3 ± 1.9   59.1 ± 1.4
        +               -        49.1 ± 2.2   59.3 ± 1.5
        -               -        48.9 ± 1.6   59.3 ± 1.4
Table 6: Performance of DatasetDDPM and DatasetGAN for 10K−50K synthetic images in the training dataset. Mean IoU of both methods saturates at 30K−50K of synthetic data.
C TRAINING SETUP
The ensemble of MLPs consists of 10 independent models. Each MLP is trained for ∼4 epochs using the Adam optimizer (Kingma & Ba, 2015) with 0.001 learning rate. The batch size is 64. This setting is used for all methods and datasets.
Table 7: Performance of the proposed method for twice wider / deeper MLP architecture within the ensemble. More expressive MLPs do not improve the performance.
D PER CLASS IOUS
[Figure 9/10 panels: per-class bar charts for (a) Horse-21, (b) Cat-15, and FFHQ-34; legend: DatasetDDPM, DDPM, DatasetGAN; y-axis: IoU.]
Bedroom-28: [bed, footboard, headboard, side rail, carpet, ceiling, chandelier, curtain, cushion, floor, table, table top, picture, pillow, lamp column, lamp shade, wall, window, curtain rod, window frame, chair, picture frame, plinth, door, pouf, wardrobe, plant, table staff]
FFHQ-34: [background, head, cheek, chin, ear, helix, lobule, bottom lid, eyelashes, iris, pupil, sclera, tear duct, top lid, eyebrow, forehead, frown, hair, sideburns, jaw, moustache, inferior lip, oral commissure, superior lip, teeth, neck, nose, ala of nose, bridge, nose tip, nostril, philtrum, temple, wrinkles]
Cat-15: [background, back, belly, chest, leg, paw, head, ear, eye, mouth, tongue, tail, nose, whiskers, neck]
Horse-21: [background, person, back, barrel, bridle, chest, ear, eye, forelock, head, hoof, leg, mane, muzzle, neck, nostril, tail, thigh, saddle, shoulder, leg protection]
CelebA-19: [background, cloth, ear r, eye g, hair, hat, l brow, l ear, l eye, l lip, mouth, neck, neck l, nose, r brow, r ear, r eye, skin, u lip]
ADE-Bedroom-30: [bed, floor, table, lamp, ceiling, painting, windowpane, pillow, curtain, cushion, door, chair, cabinet, chest, mirror, rug, armchair, book, sconce, plant, wardrobe, clock, light, flower, vase, fan, box, shelf, television]
E.2 CLASS STATISTICS
1 https://github.com/openai/guided-diffusion
2 https://github.com/facebookresearch/mae
3 https://github.com/facebookresearch/swav
4 https://github.com/openai/vdvae
5 https://github.com/podgorskiy/ALAE
6 https://github.com/openai/guided-diffusion
Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. arXiv preprint arXiv:2006.09882, 2020.
Liang-Chieh Chen, George Papandreou, Florian Schroff, and Hartwig Adam. Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587, 2017.
Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, and Ilya Sutskever. Generative pretraining from pixels. In ICML, 2020a.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In ICML, 2020b.
Rewon Child. Very deep VAEs generalize autoregressive models and can outperform them on images. In ICLR, 2021.
Elijah Cole, Xuan Yang, Kimberly Wilber, Oisin Mac Aodha, and Serge Belongie. When does contrastive visual representation learning work? arXiv preprint arXiv:2105.05837, 2021.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, pp. 248-255, 2009.
Prafulla Dhariwal and Alex Nichol. Diffusion models beat GANs on image synthesis. 2021.
Jeff Donahue and Karen Simonyan. Large scale adversarial representation learning. In NeurIPS, 2019.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021.
Danil Galeev, Konstantin Sofiiuk, Danila Rukhovich, Mikhail Romanov, Olga Barinova, and Anton Konushin. Learning high-resolution domain-specific representations with a GAN generator. In S+SSPR, 2020.
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. arXiv preprint arXiv:2111.06377, 2021.
Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. In ICLR, 2019.
Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. 2020.
Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In CVPR, pp. 4401-4410, 2019.
Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of StyleGAN. In CVPR, pp. 8107-8116, 2020.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
Cheng-Han Lee, Ziwei Liu, Lingyun Wu, and Ping Luo. MaskGAN: Towards diverse and interactive facial image manipulation. In CVPR, 2020.
Daiqing Li, Junlin Yang, Karsten Kreis, Antonio Torralba, and Sanja Fidler. Semantic segmentation with generative models: Semi-supervised learning and strong out-of-domain generalization. In CVPR, 2021a.
Haoying Li, Yifan Yang, Meng Chang, Huajun Feng, Zhihai Xu, Qi Li, and Yueting Chen. SRDiff: Single image super-resolution with diffusion probabilistic models. 2021b.
Luke Melas-Kyriazi, Christian Rupprecht, Iro Laina, and Andrea Vedaldi. Finding an unsupervised image segmenter in each of your deep generative models. arXiv preprint arXiv:2105.08127, 2021.
Chenlin Meng, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon. SDEdit: Image synthesis and editing with stochastic differential equations. 2021.
Alex Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In ICML, 2021.
Stanislav Pidhorskyi, Donald A. Adjeroh, and Gianfranco Doretto. Adversarial latent autoencoders. In CVPR, 2020.
Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In MICCAI, pp. 234-241, 2015.
Chitwan Saharia, Jonathan Ho, William Chan, Tim Salimans, David J. Fleet, and Mohammad Norouzi. Image super-resolution via iterative refinement. 2021.
Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In ICML, 2015.
Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. In NeurIPS, 2019.
Yang Song and Stefano Ermon. Improved techniques for training score-based generative models. In NeurIPS, 2020.
Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. 2021.
Omer Tov, Yuval Alaluf, Yotam Nitzan, Or Patashnik, and Daniel Cohen-Or. Designing an encoder for StyleGAN image manipulation. arXiv preprint arXiv:2102.02766, 2021.
Nontawat Tritrong, Pitchaporn Rewatbowornwong, and Supasorn Suwajanakorn. Repurposing GANs for one-shot semantic part segmentation. In CVPR, 2021.
Andrey Voynov and Artem Babenko. Unsupervised discovery of interpretable directions in the GAN latent space. In ICML, 2020.
Andrey Voynov, Stanislav Morozov, and Artem Babenko. Object segmentation without labels with large-scale generative models. In ICML, 2021.
Jianjin Xu and Changxi Zheng. Linear semantics in generative adversarial networks. In CVPR, 2021.
Yinghao Xu, Yujun Shen, Jiapeng Zhu, Ceyuan Yang, and Bolei Zhou. Generative hierarchical features from synthesizing images. In CVPR, 2021.
Fisher Yu, Yinda Zhang, Shuran Song, Ari Seff, and Jianxiong Xiao. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.
Yuxuan Zhang, Huan Ling, Jun Gao, Kangxue Yin, Jean-Francois Lafleche, Adela Barriuso, Antonio Torralba, and Sanja Fidler. DatasetGAN: Efficient labeled data factory with minimal human effort. In CVPR, 2021.
Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Semantic understanding of scenes through the ADE20K dataset. International Journal of Computer Vision, 127:302-321, 2018.
| [
"https://github.com/openai/guided-diffusion",
"https://github.com/facebookresearch/mae",
"https://github.com/facebookresearch/swav",
"https://github.com/openai/vdvae",
"https://github.com/podgorskiy/ALAE",
"https://github.com/openai/guided-diffusion"
]
|
[
"CacheNet: A Model Caching Framework for Deep Learning Inference on the Edge",
"CacheNet: A Model Caching Framework for Deep Learning Inference on the Edge"
]
| [
"Yihao Fang, Student Member, IEEE",
"Shervin Manzuri Shalmani, Student Member, IEEE",
"Rong Zheng, Senior Member, IEEE"
]
| []
| []
| The success of deep neural networks (DNN) in machine perception applications such as image classification and speech recognition comes at the cost of high computation and storage complexity. Inference of uncompressed large scale DNN models can only run in the cloud with extra communication latency back and forth between cloud and end devices, while compressed DNN models achieve real-time inference on end devices at the price of lower predictive accuracy. In order to have the best of both worlds (latency and accuracy), we propose CacheNet, a model caching framework. CacheNet caches low-complexity models on end devices and high-complexity (or full) models on edge or cloud servers. By exploiting temporal locality in streaming data, high cache hit and consequently shorter latency can be achieved with no or only marginal decrease in prediction accuracy. Experiments on CIFAR-10 and FVG have shown CacheNet is 58 − 217% faster than baseline approaches that run inference tasks on end devices or edge servers alone. | null | [
"https://arxiv.org/pdf/2007.01793v1.pdf"
]
| 220,347,463 | 2007.01793 | 7cbc458373a924de5971ad5f3d07cd5ab5cda6c4 |
CacheNet: A Model Caching Framework for Deep Learning Inference on the Edge
Yihao Fang, Student Member, IEEE
Shervin Manzuri Shalmani, Student Member, IEEE
Rong Zheng, Senior Member, IEEE
CacheNet: A Model Caching Framework for Deep Learning Inference on the Edge
Index Terms: Edge Computing, Deep Learning, Computer Vision, Model Caching
The success of deep neural networks (DNN) in machine perception applications such as image classification and speech recognition comes at the cost of high computation and storage complexity. Inference of uncompressed large scale DNN models can only run in the cloud with extra communication latency back and forth between cloud and end devices, while compressed DNN models achieve real-time inference on end devices at the price of lower predictive accuracy. In order to have the best of both worlds (latency and accuracy), we propose CacheNet, a model caching framework. CacheNet caches low-complexity models on end devices and high-complexity (or full) models on edge or cloud servers. By exploiting temporal locality in streaming data, high cache hit and consequently shorter latency can be achieved with no or only marginal decrease in prediction accuracy. Experiments on CIFAR-10 and FVG have shown CacheNet is 58 − 217% faster than baseline approaches that run inference tasks on end devices or edge servers alone.
INTRODUCTION
In recent years, deep neural networks (DNN) have achieved tremendous success in perception applications such as image classification, speech recognition, target tracking and machine translation. In many cases, they outperform human beings in accuracy. However, such high accuracy comes at the cost of high computation and storage complexity due to large model sizes. For instance, ResNet-152 contains 152 layers and over 60M parameters. Inference using such large-scale DNN models cannot be accomplished on end devices with limited computation power and storage in real-time. As a result, many model compression techniques have been proposed to reduce the size of DNNs, often at the expense of prediction performance [15], [18]. Therefore, application developers face a dilemma: choose between a highly accurate model that can only run in the cloud with the extra communication latency of uploading raw input data and getting the results back, or local execution of compressed models with reduced accuracy.
Is it possible to get the best of both worlds? In other words, can we achieve a good trade-off between latency and prediction accuracy? This question has to some degree been answered by partitioning approaches [6], [12], [20]. They mainly fall into two paradigms: 1) model partitioning: concurrent computing among edge nodes and/or end devices [6], which collaboratively perform inference in parallel for a particular sensor input; 2) computation partitioning: partitioning between edge and cloud, which takes a pre-trained deep model and decides at run-time, based on the computation capability of local and cloud compute nodes and communication overheads, where portions of computation should reside [12]. The inference time of both paradigms is clearly lower bounded by the smaller (or smallest) of the inference times on the end device and a cloud node (or on all end devices/edge nodes). Furthermore, for computation partitioning, since DNN models tend to be sequential, the possible ways of partitioning are limited.
In this work, we take a drastically new approach in addressing the trade-off between latency and prediction accuracy of DNNs. Our approach is motivated by two observations of perception applications with inputs from natural scenes or human interactions. First, although such applications may need to handle a large number of input classes over time, the set of classes commonly encountered is much smaller. For instance, an average English speaking person uses about 4000 words in daily life out of 171,476 words listed in the second edition of the Oxford English Dictionary. Second, there exists strong temporal locality in the types of inputs encountered in a short period of time. This is especially true for vision processing, where rich redundancy exists among consecutive video frames [3], [10], [16], [21].
To exploit these two properties, we propose CacheNet, a model caching framework for deep learning inference on edge. CacheNet is inspired by caching in the memory hierarchy. In computer architecture, the memory hierarchy separates computer storage (e.g., register, cache, random access memory, etc.) based on response time [17]. Caching increases data retrieval performance (e.g. faster response time) by reusing previously retrieved and computed data in the storage. Analogous to the memory hierarchy, end devices are closer to data sources and thus have faster response time but lower storage capacity; while an edge server has more storage capacity but relatively longer network latency. However, unlike the memory hierarchy that only stores data, CacheNet stores DNN models. To mitigate the limited computation power on end devices, only down-sized models with high confidence in the current input data are stored. Thanks to the temporal locality and the small number of frequently observed classes, the cached model only needs to be replaced infrequently.
In short, CacheNet combines model partitioning with caching. Instead of training a single large-scale model, CacheNet generates multiple small submodels, each capturing a partition of the knowledge represented by the large model. In the proposed architecture, the end device is responsible for selecting a locally cached model and performing the inference; whereas the edge server stores the baseline model and submodels, and is responsible for handling "cache misses" when there are sufficient changes in input data. CacheNet is agnostic to the architecture of a baseline deep model. Both the number of submodels and the baseline deep model can be specified by users.
We have implemented CacheNet in TensorFlow, TensorFlow Lite and NCNN. Here, TensorFlow is a high-performance framework for neural network training, while TensorFlow Lite and NCNN are lightweight inference frameworks optimized for edge computing. CacheNet has been evaluated on a variety of end devices and two different datasets (CIFAR-10 [13] and FVG [22]). We found that CacheNet outperforms end-device-only and edge-server-only approaches in inference time without compromising inference accuracy. For CIFAR-10, CacheNet is 2.2 times faster than the end-device-only approach and 58% faster than edge-server-only; for FVG, it is 1.5 times faster than end-device-only and 71% faster than edge-server-only.
The rest of the paper is organized as follows. Section 2 describes works related to CacheNet from two perspectives: caching and partitioning. An overview of our approach is given in Section 3, from requirements to system-level design. In Section 4, we elaborate on aspects of training CacheNet and formalize CacheNet mathematically. Details of inference are provided in Section 5, from partition selection on the edge server to cache replacement on end devices. Section 6 provides evaluations of CacheNet on multiple end devices including Jetson TX2, Jetson Nano, and Raspberry Pi 4. The conclusion and future works are stated in Section 7.
RELATED WORKS
Existing algorithmic approaches to accelerate machine learning inference on end devices mainly fall into three categories, namely: i) model compression, ii) computation and model partitioning, and iii) reduction of computation in machine learning pipelines. The three categories of approaches are orthogonal to one another and can be applied jointly. Among the three, the latter two are closer to CacheNet and will be discussed in further detail in this section.
Computation and Model Partitioning
Partitioning splits a known neural network model into multiple parts to be executed either sequentially or concurrently on the edge and cloud. It can be performed between layers. By trading off the time spent offloading computation to the cloud against the time spent in local computation on the edge, a shorter latency can be achieved [12].
A more sophisticated computation partitioning was proposed in distributed deep neural networks (DDNNs) [20].
DDNN was designed to perform fast and localized inference using shallow portions of a neural network on end devices. Using an exit point after device inference, an output is classified locally. If the classification cannot be made due to low confidence, the task is escalated to a higher exit point (e.g. the edge exit) in the hierarchy until the last exit (the cloud exit). With multiple exit points, DDNNs can significantly reduce communication costs.
TeamNet [6] takes a different approach for computation partitioning. Rather than dividing a pre-trained neural network structurally, it explores knowledge specialization and trains multiple small NNs through competitive and selective learning. During inference, the NNs are executed in parallel on cooperative end devices. By decision-level fusion, a master node (either one of the end devices or an edge/cloud node) outputs the final inference results. Since computation partitioning in TeamNet is done at model level, it is also considered a model partitioning approach.
CacheNet bears similarity with TeamNet in training multiple shallower models to represent the knowledge of a single deep model. However, unlike TeamNet that requires concurrent execution of the shallower models, CacheNet utilizes a "selector" to determine the suitable shallow model based on input data. In CacheNet, when a cache hit occurs, the inference is performed on the end device only. The overall inference time is reduced by the indexability of specialized submodels and running the suitable submodel locally most of the time.
Computation Reduction
Exploiting the existence of the temporal locality in input data, several works reduce DNN inference time by reusing all or part of previous computation results.
Glimpse is a continuous, real-time object recognition system for camera-equipped mobile devices [3]. In Glimpse, object recognition tasks are executed on local devices when the communication latency between the server and mobile device is higher than a frame-time. In addition to using a reduced model for faster local inference, Glimpse uses an active cache of video frames on the mobile device. A subset of the frames in the active cache is used to track objects on the mobile, using (stale) hints about objects that arrive from the server from time to time. In [21], Xu et al. proposed DeepCache, a principled cache design for deep learning inference in continuous mobile vision. It breaks down an input video frame into smaller blocks and discovers similar blocks between consecutive frames using diamond search [21]. Computation on reusable regions (e.g., feature maps) can thus be cached and propagated through subsequent layers without further processing. In [1], to reduce energy drain while maintaining good object tracking precision, the authors develop a software framework called MARLIN. MARLIN only uses a DNN as needed, to detect new objects or recapture objects that significantly change in appearance. It employs lightweight methods in between DNN executions to track the detected objects with high fidelity. Alternatively, we can view MARLIN as reusing the detection and classification results by associating detected objects across multiple frames.
(Figure caption: Owing to the temporal locality that exists in the video, the smaller specialized neural network will work well on consecutive frames over a short period. An abrupt change of frame induces higher entropy and triggers cache replacement.)
In [8], Guo et al. reuse previously computed outputs by harnessing the "equivalence" between different input values. Content lookup and high-quality reuse are achieved by the adoption of adaptive locality sensitive hashing (A-LSH) and homogenized k-nearest neighbors (H-kNN). Harnessing reuse opportunities translates to reduced computation latency and energy consumption.
All aforementioned approaches are orthogonal to CacheNet. In DeepCache and MARLIN, a full-fledged deep model is still needed on an end device and thus the worst-case execution time is not reduced. This is in contrast with CacheNet, which only runs reduced submodels locally.
SYSTEM DESIGN
CacheNet is a distributed inference framework on the edge. Its training phase happens in the cloud, and inference is a collaboration between the edge server and the end device. The intuition behind CacheNet is to divide a neural network's knowledge into multiple specialized partitions (neural networks). These specialized partitions are generally a few times smaller than the original neural network, and only the selected specialized partition is transferred to the end device for inference. From the end device's perspective, it caches only a much smaller, specialized partition of the knowledge, and thus its inference is correspondingly faster than with the original model.
The challenges of partitioning are twofold: 1) each partition must be sufficiently specialized, and the combination (collaboration) of all partitions must behave roughly equivalently to the original neural network; 2) there must be a selector that picks the right partition given a specific hint at a time. The first challenge was mostly solved by TeamNet [6], while the second has not been addressed by prior approaches.
In order to solve the second challenge, it is necessary to formalize the hint as a specific representation. Inspired by coding theory, a code vector is a good representation as long as the mutual information between the code vector and the input image is maximized during training. Although we have the hint representation, it is still difficult to associate the representation with a specific portion of the knowledge. To do so, we introduce a generator that generates the neural network's parameters according to the given code representation.
Thus, in training (Figure 1a), we need to 1) maximize the mutual information between the code representation and the input image; 2) associate the code representation with the corresponding specialized neural network partition; and 3) train each partition with respect to its output entropy, which has been shown to be practical in TeamNet [6]. CacheNet's system design is therefore carried out jointly with respect to the above objectives.
During inference (Figure 1b), CacheNet infers the code representation from a particular input image, and the code representation is then used as a hint to tell which specialized partition should be cached on the end device. Without the need to transfer the input frames to the edge server every time, inference latency can be shortened. Accuracy is generally not sacrificed, because temporal locality exists across consecutive frames most of the time.
As long as there is no abrupt change of scene, a specialized partition should work well; otherwise (e.g., for edited clips from multiple cameras, or a fast-moving object/camera [14]), a cache replacement should be triggered, since a partition only holds a subset of the knowledge.
TRAINING CACHENET
As illustrated in Figure 1a, to train CacheNet submodels, we need to first divide the input data into partitions 1 . The index associated with a partition is taken as an input to a neural network generator to produce the corresponding submodel for the partition. The encoder that maps input data to partition indices and the submodels will be optimized jointly. Next, we discuss the steps in detail.
Stacked Information Maximizing Variational Autoencoder (S-InfoVAE)
The purpose of this step is to map input data into a low-dimension space for further partitioning. The low-dimension representation should preserve the proximity among data and allow "reconstruction" of the original data.
The variational Bayesian autoencoder was proposed by Kingma and Welling [5]. The basic idea is to find a lower-dimension latent variable underlying the corresponding input distribution. Let $z$ denote the latent variable and $x$ represent the input variable. Consider a dataset $D = \{X, Y\}$, where $X$ is drawn independently from an input probability distribution $p_D(x)$. Suppose that $p_\xi(z)$ (the prior distribution of $z$) and the conditional probability distribution $p_\xi(x|z)$ are both parameterized by a neural network with parameters $\xi$. One can find the optimal parameters $\xi$ by maximizing the log-likelihood as:
$$\mathbb{E}_{p_D(x)}[\log p_\xi(x)] = \mathbb{E}_{p_D(x)} \log \mathbb{E}_{p_\xi(z)}[p_\xi(x|z)]. \quad (1)$$
However, the integral of the marginal likelihood $p_\xi(x)$ is generally intractable even for a moderately complex neural network with a single non-linear hidden layer. A possible approach [5] is to rewrite $\log p_\xi(x)$:
$$\log p_\xi(x) = D_{KL}(q_\psi(z|x)\,\|\,p_\xi(z|x)) + L(\xi, \psi; x), \quad (2)$$
where
$$L(\xi, \psi; x) = -D_{KL}(q_\psi(z|x)\,\|\,p_\xi(z)) + \mathbb{E}_{q_\psi(z|x)} \log p_\xi(x|z). \quad (3)$$
Since the Kullback-Leibler divergence is always non-negative, $L(\xi, \psi; x)$ is a lower bound of $\log p_\xi(x)$, namely,
$$L(\xi, \psi; x) \le \log p_\xi(x). \quad (4)$$
1. The partitions are overlapping as will be discussed in Section 4.2.
By maximizing the lower bound $L(\xi, \psi; x)$, the log-likelihood $\log p_\xi(x)$ is maximized as well. However, since the latent variable $z$ is of lower dimension than the input variable $x$, any optimization against $x$ may be magnified compared to $z$. To counteract this imbalance, Zhao et al. [23] propose to put more weight on $z$. Let $L(\xi, \psi)$ be the expectation of $L(\xi, \psi; x)$ with respect to the input distribution $p_D(x)$. We then have,
$$L(\xi, \psi) = \mathbb{E}_{p_D(x)} L(\xi, \psi; x) = -D_{KL}(q_\psi(x, z)\,\|\,p_\xi(x, z)) = -D_{KL}(q_\psi(z)\,\|\,p_\xi(z)) - \mathbb{E}_{p_\xi(z)}\left[D_{KL}(q_\psi(x|z)\,\|\,p_\xi(x|z))\right]. \quad (5)$$
To put more weight on $z$, one needs to add i) a scaling parameter to the Kullback-Leibler divergence between $q_\psi(z)$ and $p_\xi(z)$, and ii) a term of mutual information between $x$ and $z$ [23]:
$$L^*(\xi, \psi) = -\lambda D_{KL}(q_\psi(z)\,\|\,p_\xi(z)) - \mathbb{E}_{p_\xi(z)}\left[D_{KL}(q_\psi(x|z)\,\|\,p_\xi(x|z))\right] + \alpha I_{q_\psi(x,z)}(x; z). \quad (6)$$
In practice, $L^*(\xi, \psi)$ can be rewritten into (7) for more effective optimization [23]:
$$L^*(\xi, \psi) = \mathbb{E}_{p_D(x)} \mathbb{E}_{q_\psi(z|x)}[\log p_\xi(x|z)] - (1 - \alpha)\,\mathbb{E}_{p_D(x)} D_{KL}(q_\psi(z|x)\,\|\,p_\xi(z)) - (\alpha + \lambda - 1)\,D_{MMD}(q_\psi(z)\,\|\,p_\xi(z)), \quad (7)$$
where $D_{MMD}(q_\psi(z)\,\|\,p_\xi(z))$ is the maximum-mean discrepancy between $q_\psi(z)$ and $p_\xi(z)$. Experiments show that when the latent variable $z$ is of far lower dimension than the input variable $x$, the lower bound $L^*(\xi, \psi)$ cannot properly converge. To deal with this problem, we propose S-InfoVAE, which keeps $z$ at a relatively high dimension and introduces a second latent variable $\tilde{z}$ of dimension two. The corresponding parameters (or equivalently the neural networks) of the two latent variables are optimized stage-wise. Formally, the second optimization objective is defined as follows:
$$L^*(\tilde{\xi}, \tilde{\psi}) = \mathbb{E}_{p_\psi(z)} L(\tilde{\xi}, \tilde{\psi}; z). \quad (8)$$
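A minimal sketch of a training objective following Equation (7) is given below; it is our illustration, with mean-squared error as a surrogate for the reconstruction term, an RBF-kernel MMD estimator, and α, λ values that are assumptions rather than the paper's settings.

```python
import torch
import torch.nn.functional as F

def rbf_mmd(x, y, sigma=1.0):
    # RBF-kernel MMD estimate between two sample sets (illustrative choice of kernel).
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

def info_vae_loss(x, x_recon, mu, log_var, alpha=0.0, lam=10.0):
    """Negative of the objective in Equation (7), to be minimized.

    `mu`, `log_var` parameterize the Gaussian q_psi(z|x); the prior p_xi(z) is
    taken as a standard normal. The (1 - alpha) and (alpha + lambda - 1)
    weightings follow the text; alpha and lam values are assumptions.
    """
    recon = F.mse_loss(x_recon, x, reduction="mean")             # surrogate for -E log p_xi(x|z)
    kl = -0.5 * torch.mean(1 + log_var - mu.pow(2) - log_var.exp())
    z_post = mu + torch.randn_like(mu) * (0.5 * log_var).exp()   # sample from q_psi(z|x)
    z_prior = torch.randn_like(z_post)                           # sample from p_xi(z)
    mmd = rbf_mmd(z_post, z_prior)
    return recon + (1 - alpha) * kl + (alpha + lam - 1) * mmd
```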
Indexability of Low-dimension Representation
To divide data into overlapping partitions, sophisticated indexes are needed. Let K be the total number of submodels, an input parameter of CacheNet. Each input sample in D is associated with one or more indices chosen from 1 to K and will be used to train the corresponding submodel(s). By allowing multiple indices per data sample or equivalently shared training data, we facilitate knowledge sharing across submodels. In this step, we determine the indices of input data solely based on the low-dimension representations from the S-InfoVAE. In subsequent sections, we will also incorporate feedback from the resulting submodels in the form of uncertainty.
Recall that the $\tilde{z}$'s are 2D vectors. To calculate the angular distance between the vector $\tilde{z} = [\tilde{z}_1\ \tilde{z}_2]$ and the x-axis, the arctan trigonometric function is applied:
$$\theta = \begin{cases} \arctan\frac{\tilde{z}_2}{\tilde{z}_1} & \tilde{z}_1 > 0 \\ \arctan\frac{\tilde{z}_2}{\tilde{z}_1} + \pi & \tilde{z}_1 < 0,\ \tilde{z}_2 \ge 0 \\ \arctan\frac{\tilde{z}_2}{\tilde{z}_1} - \pi & \tilde{z}_1 < 0,\ \tilde{z}_2 < 0 \\ \frac{\pi}{2} & \tilde{z}_1 = 0,\ \tilde{z}_2 > 0 \\ -\frac{\pi}{2} & \tilde{z}_1 = 0,\ \tilde{z}_2 < 0 \\ 0 & \tilde{z}_1 = 0,\ \tilde{z}_2 = 0. \end{cases} \quad (9)$$
For better convergence, a small noise term $\epsilon$ is added to $\theta$. To keep the resulting angles between 0 and $2\pi$, a modulo function is applied as follows:
$$\tilde{\theta} = (\theta + \epsilon) \bmod 2\pi. \quad (10)$$
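Equation (9) coincides with the two-argument arctangent; the NumPy sketch below (our illustration) computes θ̃ with the small noise term and the modulo of Equation (10). The noise scale is an assumption.

```python
import numpy as np

def angular_code(z_tilde, noise_scale=0.01, rng=None):
    """Map 2-D latent vectors z_tilde to angles in [0, 2*pi).

    Equation (9) is exactly atan2(z2, z1); Equation (10) adds a small noise term
    and wraps the result with a modulo. The noise scale here is an assumption.
    """
    rng = rng if rng is not None else np.random.default_rng()
    theta = np.arctan2(z_tilde[..., 1], z_tilde[..., 0])   # in (-pi, pi]
    eps = rng.normal(0.0, noise_scale, size=theta.shape)
    return np.mod(theta + eps, 2 * np.pi)
```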
For $K$ partitions, where each partition roughly occupies a region of $\frac{2\pi}{K}$, the midpoint of the $k$-th partition is given by $\frac{2\pi(k - \frac{1}{2})}{K}$, for $k = 1, \ldots, K$. Let $\zeta$ be a vector of all such midpoints, namely:
$$\zeta = [\zeta_1 \ldots \zeta_K], \qquad \zeta_k = \frac{2\pi\left(k - \frac{1}{2}\right)}{K}. \quad (11)$$
We wish to assign input samples to partitions based on their closeness to the K midpoints in polar coordinates. One straightforward approach is a 1-nearest-neighbor search, namely, finding the k that minimizes min(|θ̃ − ζ_k|, 2π − |θ̃ − ζ_k|). Doing so results in a one-hot vector with one for the k-th element and zeros for all other elements. Instead, we choose to define a soft code c as,
c = \sum_{n=-1}^{1} \exp\left( -\frac{(\zeta - \tilde{\theta} + 2\pi n)^2}{2\sigma^2} \right),   (12)
where σ is a parameter that controls the speed of decay as θ̃ deviates from the midpoints. Clearly, each element of c is between 0 and 1, and the maximum value occurs at
k = \arg\min_k \ \min\left( |\tilde{\theta} - \zeta_k|,\ 2\pi - |\tilde{\theta} - \zeta_k| \right).
With the soft code c of some input x and a threshold τ, we can determine which partition(s) it belongs to as {k | c_k ≥ τ}. Plugging in (11) and (12), we have c_k ≥ τ if the following condition holds,
\frac{2\pi\left(k - \frac{1}{2}\right)}{K} - \sigma\sqrt{-2\log\tau} \ \le\ \tilde{\theta} \ \le\ \frac{2\pi\left(k - \frac{1}{2}\right)}{K} + \sigma\sqrt{-2\log\tau}.
In other words, mapping to soft codes, with a suitable choice of τ and σ, has the effect of dividing the polar coordinate space into K overlapping sectors of width 2σ√(−2 log τ). An example with four partitions is given in Figure 2. When the z̃ of an input x falls into the overlapping area of sectors i and j, we view it as contributing to the training of submodels i and j. Let γ be the overlap ratio (normalized by 2π). σ can thus be determined by,
\sigma = \sqrt{ -\frac{\pi^2 (1 + \gamma)^2}{2 K^2 \log \tau} }.   (13)
In Figure 2, γ is set to 30% and τ to 0.3. When θ̃ equals π/3, which is outside of the overlapping region (Figure 2a), the data point only contributes to the training of one submodel. When θ̃ equals 4π/9, which lies between the two midpoints π/4 and 3π/4, the data point contributes to the training of the two corresponding submodels.
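A small numerical sketch of Eqs. (11)-(13) with the configuration of this example (K = 4, γ = 0.3, τ = 0.3); the helper name is illustrative:

```python
# Sketch of Eqs. (11)-(13): soft code over K sector midpoints and the
# resulting (possibly overlapping) partition assignment.
import numpy as np

def soft_code(theta, K=4, tau=0.3, gamma=0.3):
    zeta = 2 * np.pi * (np.arange(1, K + 1) - 0.5) / K                              # Eq. (11)
    sigma = np.sqrt(-np.pi ** 2 * (1 + gamma) ** 2 / (2 * K ** 2 * np.log(tau)))    # Eq. (13)
    # Eq. (12): wrap-around Gaussian bumps centred at the midpoints
    c = sum(np.exp(-(zeta - theta + 2 * np.pi * n) ** 2 / (2 * sigma ** 2)) for n in (-1, 0, 1))
    partitions = np.flatnonzero(c >= tau)                                           # {k | c_k >= tau}, 0-indexed
    return c, partitions

c, parts = soft_code(theta=4 * np.pi / 9)   # the example angle from the text
print(parts)                                # contains the first two sectors, as in Figure 2b
```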
Consideration of Model Uncertainty
The soft code c utilizes the angular proximity of input data in a 2D representation. However, partitioning based on the soft code alone does not always imply the trained model is more specialized. The predictive uncertainty of a trained model with respect to the input data is also indicative of how much the model has "specialized" on the data. Intuitively, if a model is specialized on one partition of the input space, it should have lower predictive uncertainty on the data in that partition, but higher uncertainty on other data. In [6], we found that the entropy computed from the softmax output of a neural network model is a good surrogate for the uncertainty of the model on the data. Formally, we denote by H(ŷ|x, φ_k) the entropy of the k-th submodel parameterized by φ_k with respect to the input x,
H(\hat{y}|x, \phi_k) = -\sum_{c} p(\hat{y} = c|x, \phi_k) \log p(\hat{y} = c|x, \phi_k),   (14)
where p(ŷ = c|x, φ k ) is the predictive probability of output c = 1, 2, ..., C for input x from submodel k.
To encourage the assignment of x to the submodel that has the lowest predictive uncertainty, we introduce a K-dimension vector c̄ as follows:
\bar{c} = [\bar{c}_1 \ \ldots \ \bar{c}_K], \quad \bar{c}_i = \begin{cases} \tau & i = \arg\min_k H(\hat{y}_k|x, \phi_k) \\ 0 & \text{otherwise.} \end{cases}   (15)
Clearly, c̄ is a one-hot vector scaled by τ.
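A minimal sketch of Eqs. (14)-(15) on toy softmax outputs (the probabilities below are made up for illustration):

```python
# Sketch of Eqs. (14)-(15): predictive entropy of every submodel on one input
# and the resulting one-hot feedback code scaled by tau.
import numpy as np

def entropy(probs, eps=1e-12):
    return -np.sum(probs * np.log(probs + eps))             # Eq. (14)

def uncertainty_code(submodel_probs, tau=0.3):
    # submodel_probs: K x C array of softmax outputs, one row per submodel
    ent = np.array([entropy(p) for p in submodel_probs])
    c_bar = np.zeros(len(submodel_probs))
    c_bar[np.argmin(ent)] = tau                              # Eq. (15)
    return c_bar

probs = np.array([[0.70, 0.20, 0.10],      # most confident submodel
                  [0.40, 0.30, 0.30],
                  [0.34, 0.33, 0.33]])
print(uncertainty_code(probs))             # [0.3, 0.0, 0.0]
```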
Partition of Input Data
So far, we have obtained two K-dimension codes c̄ and c for each input data sample x. To decide the final partition of the input data, we take both into account. This can be done by a simple linear combination:
\bar{\bar{c}} = \alpha \bar{c} + (1 - \alpha) c.   (16)
In the experiments, we set α = 1/2. Let P(x) = {k | c̿_k ≥ τ/2} denote the indices of the partitions (submodels) that input x contributes to. Clearly, P(x) cannot be an empty set, since the corresponding c̄ contains one element equal to τ. If the cardinality of P(x) is greater than one, x will be used to train multiple submodels.
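Continuing the toy example, Eq. (16) and the partition set P(x) can be computed as follows (the two input codes are illustrative values):

```python
# Sketch of Eq. (16) and the partition set P(x): combine the soft code from
# S-InfoVAE with the uncertainty code, then keep indices at or above tau/2.
import numpy as np

def partition_indices(c, c_bar, tau=0.3, alpha=0.5):
    c_final = alpha * c_bar + (1 - alpha) * c          # Eq. (16)
    return np.flatnonzero(c_final >= tau / 2)          # P(x) = {k | c_k >= tau/2}

c     = np.array([0.65, 0.35, 0.01, 0.00])   # soft code, e.g. from Eq. (12)
c_bar = np.array([0.00, 0.30, 0.00, 0.00])   # uncertainty code, Eq. (15)
print(partition_indices(c, c_bar))           # [0 1]: x trains submodels 0 and 1
```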
Neural Network Generator
The architecture of the generator network is illustrated in Figure 3. A neural network generator G takes an element k in P(x) (being converted to a one-hot vector) as input and generates the parameters φ k of the kth submodel. CacheNet is agnostic to the target neural network architecture, which is decided by the target application. For example, for image classification, Shake-Shake [7] has been shown to perform well across several datasets. Given K, we scale down the target neural network architecture to have reduced capacity.
Suppose ŷ_k is the prediction of the k-th submodel for x, denoted ŷ_k = F(x; φ_k). To avoid overfitting, we allow parameter sharing across the submodels. The proportion of shared parameters and the depth and width of the shared networks are hyper-parameters determined by the neural network structure of the submodels. For an input sample x with label y, we first compute P(x). The cross-entropy loss for classification is given by,
J_F(x, y) = \sum_{k \in \mathcal{P}(x)} H(\hat{y}_k, y)   (17)
Minimizing E p D (x) J F (x, y) leads to a more accurate prediction with respect to the dataset.
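A minimal sketch of Eq. (17), assuming the submodels are provided as callables returning logits:

```python
# Sketch of Eq. (17): the classification loss sums the cross-entropy of every
# submodel that the input is assigned to.
import torch
import torch.nn.functional as F

def multi_submodel_loss(x, y, submodels, assigned):
    """assigned: iterable of submodel indices P(x) for this batch."""
    loss = x.new_zeros(())
    for k in assigned:
        logits = submodels[k](x)
        loss = loss + F.cross_entropy(logits, y)
    return loss
```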
Training Algorithm
In CacheNet, there are three networks that need to be trained, namely, the stacked encoder, the stacked decoder and the generator network. Since the output of the stacked encoder contributes to the input of the generator network, they need to be trained jointly. The lower-dimension representation z̃ is the most informative of a particular input x if the two lower bounds L*(ξ, ψ) and L̃*(ξ̃, ψ̃) are maximized, and a submodel's predictions are the most accurate if E_{p_D(x)}[J_F(x, y)] is minimized. Thus, the minimization objective J should be E_{p_D(x)}[J_F(x, y)] added to the negation of L*(ξ, ψ) and L̃*(ξ̃, ψ̃):
J = \mathbb{E}_{p_D(x)}[J_F(x, y)] - \mathcal{L}^*(\xi, \psi) - \tilde{\mathcal{L}}^*(\tilde{\xi}, \tilde{\psi}).   (18)
For better convergence, E_{p_D(x)}[J_F(x, y)], L*(ξ, ψ), and L̃*(ξ̃, ψ̃) are optimized stage-wise and batch-wise. Let J^(i) be J with respect to a batch (X^(i), Y^(i)) drawn from the dataset D. Suppose the generator G is parameterized by χ, and let κ be the set {ξ, ψ, ξ̃, ψ̃, χ}. The training algorithm iteratively applies gradient updates to κ (or χ) with respect to the loss J^(i) and descends to a minimum of J (as shown in Algorithm 1).
Algorithm 1 Training CacheNet
Let η be the learning rate; let ν be the epoch after which gradient updates to ξ, ψ, ξ̃, ψ̃ stop.
1: procedure TRAIN(η, ν)
2:   while J^(i) is decreasing do
3:     draw the next batch (X^(i), Y^(i)) from D
4:     if #epoch < ν then
5:       κ ← κ − η ∇_κ J^(i)
6:     else
7:       χ ← χ − η ∇_χ J^(i)
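The stage-wise updates of Algorithm 1 can be sketched as follows; the loss function, data loader and parameter groups are placeholders for whatever implements J^(i), and a fixed epoch budget stands in for the "while J^(i) is decreasing" stopping rule:

```python
# Sketch of Algorithm 1: before epoch `nu`, all of kappa = {xi, psi, xi~, psi~, chi}
# is updated; afterwards the S-InfoVAE parameters are frozen and only chi keeps training.
import torch

def train_cachenet(loss_fn, loader, vae_params, gen_params, eta=1e-3, nu=50, epochs=100):
    opt_all = torch.optim.SGD(list(vae_params) + list(gen_params), lr=eta)
    opt_gen = torch.optim.SGD(list(gen_params), lr=eta)
    for epoch in range(epochs):
        opt = opt_all if epoch < nu else opt_gen
        for x_batch, y_batch in loader:          # draw (X^(i), Y^(i)) from D
            loss = loss_fn(x_batch, y_batch)     # J^(i) of Eq. (18)
            opt.zero_grad()
            loss.backward()
            opt.step()
```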
CACHENET INFERENCE
With CacheNet, inference on end devices is accelerated by caching submodels of lower computational complexity. Depending on storage availability, one or multiple submodels can be stored on an end device. At any time, only one submodel is active and is used to make predictions. Given an input data sample x, the active submodel k outputs ŷ, the predicted label of x, and the predictive entropy H(ŷ|x, φ_k). If the entropy is below a certain threshold, ŷ is returned. Otherwise, two situations may arise: i) x is better handled by another cached submodel, or ii) x is better handled by a submodel not in the cache. The latter case is called a cache miss. As in a memory hierarchy, CacheNet handles cache misses by replacing a cached "item" (model). However, unique to CacheNet, the newly cached "item" is not the input data but a suitable model.
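As a rough illustration of this decision rule (not the released implementation), the on-device path might look as follows, with `offload_to_edge` standing in for the edge-server round trip:

```python
# Sketch of the on-device inference path: serve locally when the cached submodel
# is confident; otherwise defer to the edge server, which may also pick a better
# submodel and trigger a cache replacement.
import numpy as np

def infer(x, active_submodel, threshold, offload_to_edge):
    probs = active_submodel(x)                            # softmax output
    ent = -np.sum(probs * np.log(probs + 1e-12))          # predictive entropy, Eq. (14)
    if ent < threshold:                                   # confident: handle locally
        return int(np.argmax(probs)), "local"
    return offload_to_edge(x), "offloaded"                # edge infers / swaps the model
```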
Submodel Selection
In Section 4, a K-dimension code c is computed for each input data sample using S-InfoVAE and the subsequent mapping in polar coordinates. In the training stage, this code contributes to the input of the generator network that generates the parameters of the respective submodels. In the inference stage, c can be used to select the submodel that makes the prediction for an input data sample. In particular, the joint optimization of S-InfoVAE, the generator network and the submodels aligns the output of S-InfoVAE with the submodel that has the lowest predictive uncertainty. Thus, we can simply select the submodel whose index corresponds to the largest element in c. Note that in the inference stage we do not need to calculate the predictive uncertainty of each submodel; only one submodel is applied. This is one of the key differences between CacheNet and the work in [6]. S-InfoVAE can be executed either on the end device or on the edge server. In the former case, extra storage and computation overhead are introduced on the device; in the latter case, submodel storage and selection are delegated to the edge server.
Cache Replacement
When the predictive entropy of the active submodel is not below the preconfigured threshold, the input data x is sent to the edge server, which performs inference on behalf of the end device. Additionally, by submodel selection, the edge server determines a suitable model for x. A cache miss occurs on the end device, and the newly selected submodel is downloaded to the device to replace an existing model. Here, we adopt the Least Recently Used (LRU) policy and replace the model that was least recently used. By virtue of LRU, such a policy does not suffer from Bélády's anomaly. In other words, as the cache size increases, the cache miss rate does not increase.
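A minimal LRU cache for submodels, as assumed by this policy, can be sketched with an ordered dictionary; the `params` payload and capacity are placeholders:

```python
# Sketch of LRU submodel caching on the end device: on a miss, the edge server
# supplies the newly selected submodel and the least recently used one is evicted.
from collections import OrderedDict

class SubmodelCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.models = OrderedDict()          # submodel index -> parameters

    def get(self, k):
        if k in self.models:                 # hit: mark as most recently used
            self.models.move_to_end(k)
            return self.models[k]
        return None                          # miss: caller fetches from the edge

    def put(self, k, params):
        self.models[k] = params
        self.models.move_to_end(k)
        if len(self.models) > self.capacity:
            self.models.popitem(last=False)  # evict the least recently used
```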
EVALUATION
In this section, we evaluate CacheNet with two different real-world datasets (the CIFAR-10 [13] and the Frontal View Gait (FVG) dataset [22]), and test CacheNet's performance with respectively two different neural network models (Shake-Shake [7] and ResNet [9]).
Datasets
CIFAR-10: CIFAR-10 [13] is a benchmark dataset for image classification comprising 60,000 32×32 color images in 10 classes (such as automobile, bird and horse). Although CIFAR-10 is an image classification dataset rather than a video dataset, image classification remains a valid scenario when it sits in a video processing pipeline (e.g., where the background has already been removed from the video). In this case, temporal locality still applies, while consecutive images are less redundant owing to the earlier steps in the pipeline. For example, a horse in the video (possibly shot from different angles and at different scales) is still likely to appear multiple times in the sequence even after the background has been removed (e.g., by object detection).
For a fair evaluation, test images must not be seen during training. Thus, we set aside 10,000 images for testing. To simulate temporal locality in a video pipeline, the synthesized test image sequence is composed of a sample of the 10,000 images such that images with the same label are concatenated together.
To reduce overfitting, data augmentation techniques are used, including: 1) random cropping and 2) random flipping. Apart from data augmentation, Shake-Shake regularization has been applied to reduce overfitting [7], and batch normalization to reduce internal covariate shift [11].
FVG: FVG is a person re-identification dataset, first introduced in [22] as a collection of frontal walking videos from 226 subjects. In total it contains 2,856 videos at 15 frames per second with a resolution of 1920 × 1080.
In contrast to other person re-identification datasets in surveillance settings, FVG is the first to focus on the frontal view. This makes it useful for two reasons: (i) It contains temporal locality in the form of a fixed background and the same subject walking towards the camera, which can be leveraged for caching. (ii) Having a frontal view means that it contains minimal gait cues.
To reduce the chance of overfitting and improve generalization ability, we use data augmentation techniques [19] on this dataset as well. We first oversample the images by interpolating between existing frames. This technique preserves the extrinsic distribution while allowing us to experiment with cache performance by varying the degree of temporal locality. Additionally, in the original dataset the average frame rate of each video is 15 frames per second, only half of the frame rate of an HD video (generally 30-60 frames per second). Since each video sample is of the subject walking straight towards the camera from a distance, it contains intrinsic depth information that can be utilized to synthesize intermediate frames. As such, we use DAIN [2], a state-of-the-art approach that leverages the depth information to interpolate between frames.
Experimental Setup
CacheNet's performance is evaluated on two datasets (CIFAR-10 and FVG), three end devices (Jetson TX2, Jetson Nano, and Raspberry Pi 4) and two deep learning frameworks (NCNN and TensorFlow Lite). There are two baselines to compare with: a) running the full model (Shake-Shake-26 or ResNet-50) on an end device (Device), and b) offloading the full model to an edge server (Edge). Different thresholds are evaluated to better trade off hit rate against accuracy: for CIFAR-10, they are 0.5, 0.6, 0.7, 0.75, and 0.8; for FVG, they are 1.5, 2.0, 2.3, 2.5, and 2.7. (The larger thresholds for FVG are due to more classes, i.e., more neurons at the output layer.) Furthermore, on the FVG dataset we evaluate two video frame rates at inference, 15 FPS and 30 FPS, both trained at 60 FPS (obtained via data augmentation). The number of submodels K is set to 4 in the experiment. CacheNet is trained with TensorFlow on 4 NVIDIA 1080 Ti graphics cards. For CIFAR-10, CacheNet partitions Shake-Shake-26 (with 26 layers) into 4 Shake-Shake-8 (with 8 layers) submodels for caching; for FVG, CacheNet partitions ResNet-50 into 4 ResNet-20 submodels (with fewer channels per layer).
In the experiment, CacheNet's inference is distributed between the edge server and the end device. One submodel is cached and runs on the end device, while submodel storage and selection are delegated to the edge server. End devices are evaluated with limited storage to mimic devices such as security cameras. Only one Intel Xeon CPU core is enabled on the edge server to approximate the generally limited compute power of WiFi access points (e.g., a 500 MHz MIPS processor on the Arlo SmartHub). The edge server has ample storage, comparable to that of WiFi access points (e.g., a 128 GB SD card on the eufy HomeBase or a 2 TB USB hard drive on the Arlo SmartHub). End devices are connected to the edge server through a WiFi router, via WiFi 5G (802.11ac) and an Ethernet cable, respectively.
The TensorFlow submodels from training were converted to NCNN and TensorFlow Lite submodels and stored on the edge server. Whenever a submodel is needed, the end device initiates an HTTP/1.1 request to the edge server; the chosen submodel is encoded in an HTTP/1.1 protobuf message and sent back to the end device. OpenCV is used in the experiment to read a test image sequence (video) into memory and convert the frames into tensors.
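As a hedged sketch of this exchange (the endpoint path and payload layout are illustrative, not taken from the paper), a cache miss could be served as follows:

```python
# Sketch of the cache-miss path: request the selected submodel from the edge
# server over HTTP and load it into a TensorFlow Lite interpreter.
import requests
import tensorflow as tf

def fetch_and_load(edge_url, submodel_index):
    resp = requests.get(f"{edge_url}/submodel/{submodel_index}", timeout=5)
    resp.raise_for_status()
    interpreter = tf.lite.Interpreter(model_content=resp.content)
    interpreter.allocate_tensors()
    return interpreter
```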
Results
Specialization: Specialization is crucial for caching because a non-specialized partition cannot match the full model's performance even on a smaller subset of the input. There are two aspects we investigate: (a) whether similar input images are mapped to the same partition; (b) whether input images are partitioned roughly evenly, so that the capacities of all submodels are fully utilized, considering that both CIFAR-10 and FVG are approximately balanced datasets. Figures 4 and 5 illustrate the number of input images per class mapped (by S-InfoVAE) to each partition. They address both concerns: (a) a partition roughly covers most of the similar input images from the same class, e.g., for CIFAR-10, partition A is more specialized in trucks and automobiles and partition B knows airplanes and ships better; for FVG, partition A is more certain of person identifiers (PIDs) 211, 019, 011, 016, 006, and 005, and partition B is specialized in PIDs 013, 008, 003, 015, and 215. (b) For both CIFAR-10 and FVG, the areas that the partitions occupy in Figures 4 and 5 are roughly even, which implies that the total numbers of (image) instances they span are approximately the same.
Convergence: Not all neural networks converge, so whether CacheNet is useful depends on whether it converges on the particular dataset. For CIFAR-10 and FVG, we can see (Figure 6) that the losses both start high and converge closer and closer to zero. Since FVG is a smaller dataset than CIFAR-10, CacheNet converges faster (in fewer iterations) on FVG than on CIFAR-10.
Cache replacement:
As discussed in Section 5.2, if the predictive entropy is below the preconfigured threshold, inference is performed locally; otherwise, it is done remotely on the edge server. Figures 7a and 7b show that, most of the time, FPS increases as the threshold increases for both CIFAR-10 and FVG. The reason is that the hit rate is generally higher when the threshold is higher: fewer cache replacements are needed and more images are processed locally, which speeds up inference. On the other hand, Figures 7a and 7b also show that a higher hit rate generally comes at the cost of lower accuracy, because a higher threshold lets predictions with higher entropy (uncertainty) count as valid, and higher-entropy predictions are of lower quality and decrease the overall accuracy. In practice, there is a trade-off between hit rate and accuracy. More details are given in Tables 1-6. CacheNet generally works better on end devices with more computing power such as the Jetson TX2 and Jetson Nano. Offloading to the edge server (Edge) relieves the end device's burden, so its CPU usage is the lowest among the three, but it also implies that the computing power on the end device is not fully utilized. Memory usage follows a similar pattern to CPU usage. If we divide the elapsed time into the time spent on the end device and the time spent on the edge server (including upload and download), we observe that CacheNet distributes the total computation time between the end device and the edge server, while the other two baselines do not take advantage of distributed computing: they either run locally (Device) or compute on the edge server most of the time (Edge).
Comparison to baselines: A comparison between CacheNet and the other two baselines (Device and Edge) is shown in Figures 8a and 8b. Medians over all scenarios are taken and standard deviations are plotted as error bars. For CacheNet, the preconfigured thresholds 0.75 and 2.5 are chosen for CIFAR-10 and FVG, respectively, as the best trade-off between hit rate and accuracy. As visualized in those figures, CacheNet is much faster than the other two baselines: for CIFAR-10, 3.2X of Device and 1.6X of Edge; for FVG, 2.5X of Device and 1.7X of Edge. At the same time, the accuracy of CacheNet is comparable with that of the full model, with only a slight drop on CIFAR-10 and a small increase on FVG.
Comparison across frameworks: NCNN and TensorFlow Lite are both lightweight deep learning frameworks tailored for embedded devices with limited compute power, memory and storage. A comparison between TensorFlow Lite and NCNN is given in Tables 1-6. CacheNet with NCNN and with TensorFlow Lite both outperform the baselines. NCNN is slightly more efficient than TensorFlow Lite for both CIFAR-10 and FVG, while TensorFlow Lite consumes far less memory than NCNN.
Comparison across devices: From Figure 8, we observe that CacheNet performs better on end devices with higher compute power such as the Jetson TX2 and Jetson Nano. The Raspberry Pi incurs more time on submodel inference, which leads to lower FPS. Detailed numerical comparisons can be found in Tables 1-6.
CONCLUSION
In this paper, we proposed CacheNet, a neural network model caching mechanism for edge computing. In CacheNet, an edge (cloud) server is responsible for the storage and selection of neural network partitions, while an end device with a cached partition performs inference most of the time. Three key features enable CacheNet to achieve short end-to-end latency without much compromise in prediction accuracy: 1) caching avoids the communication latency between an end device and the edge (cloud) server whenever there is a cache hit; 2) specialized cached partitions do not sacrifice prediction accuracy if properly trained and selected; 3) the computation and storage complexities of cached model partitions are smaller than those of a full model.
In future work, we plan to experiment with more datasets and neural network models using CacheNet. The two-level caching idea can be further extended to a hierarchy of caches, e.g., distributed among end devices, edge nodes and cloud servers. Another line of research is to apply neural architecture search to CacheNet to improve its adaptability to different types of neural networks.
APPENDIX A ABSENCE OF BÉLÁDY'S ANOMALY
Bélády's anomaly is the phenomenon that a larger cache incurs more cache misses than a smaller one. In CacheNet, there are two possible ways to take advantage of a larger cache size: 1) each individual cached submodel has a larger capacity (i.e., is deeper); 2) more submodels are cached on an end device. If neither results in more cache misses, we can conclude that Bélády's anomaly does not occur in CacheNet.
A.1 Larger Capacity
A submodel with a larger capacity is defined as follows. Consider any sequence X = x_1, x_2, ..., x_N of images, audio clips, etc. Let Φ = φ^(1), φ^(2), ..., φ^(Q) be a sequence of submodel instances for caching such that 1) their depths satisfy d^(1) < d^(2) < ... < d^(Q), and 2) every layer in φ^(1) is contained in φ^(2), ..., and every layer in φ^(Q−1) is contained in φ^(Q). According to the capacity theorem [4], submodel instance φ^(1) expresses fewer functions than φ^(2), ..., and φ^(Q−1) fewer functions than φ^(Q).
Let H(ŷ|x_i, φ^(j)) be the predictive entropy given any input x_i, i = 1, 2, ..., N and any submodel instance φ^(j), j = 1, 2, ..., Q. For a predefined threshold T, if H(ŷ|x_i, φ^(j)) < T, we say it is a cache hit; otherwise it is a cache miss.
Theorem 1. Let M(X, φ^(j)) be the number of misses (faults) given the input sequence X and the submodel instance φ^(j), j = 1, 2, ..., Q. Then M(X, φ^(1)) ≥ M(X, φ^(2)) ≥ ... ≥ M(X, φ^(Q)).
Proof. We prove this theorem by induction.
1) Base case: if X = x 1 , both φ (j) and φ (j+1) incurs a cache miss on x 1 , thus, M (X, φ (j) ) = M (X, φ (j+1) ) 2) Induction hypothesis: we need to show if X = x 1 , . . . , x i , M (X, φ (j) ) ≥ M (X, φ (j+1) ) for an arbitrary j, when X = x 1 , . . . , x i+1 , M (X, φ (j) ) ≥ M (X, φ (j+1) ) also holds. a) If the newly input x i+1 incurs a cache hit on the submodel instance φ (j) , there should be also a cache hit on φ (j+1) . This claim relies on the capacity theorem [4] that the submodel instance φ (j+1) has more functional expressibility than φ (j) . By definition, the submodel instance φ (j) can be embedded in φ (j+1) . The submodel instance φ (j+1) 's additional layers can be made as an identity for x 1 , . . . , x i+1 's intermediate outputs. Thus, the claim holds. b) If the new input x i+1 incurs a cache miss on φ (j) , there may be a cache hit or cache miss on φ (j+1) . Since the submodel instance φ (j) is embedded in φ (j+1) , and φ (j+1) 's additional layers are made as an identity for x 1 , . . . , x i 's intermediate outputs. The additional layers of φ (j+1) may have the additional capacity to represent x i+1 's function.
In either case, M (X, φ (j) ) ≥ M (X, φ (j+1) ) for an arbitrary j. The induction hypothesis holds.
A.2 More Submodels
When there are multiple submodels to cache on an end device, a cache miss happens if the predictive entropy of the current submodel is not below the threshold T and the suitable submodel (which is decided by the S-InfoVAE) is not currently stored on the end device.
Theorem 2. Let k (1 ≤ k ≤ K) be the number of submodels cached on an end device. Let M̃(X, k) be the number of misses (faults) given the input sequence X. Then, under the LRU cache replacement policy, M̃(X, 1) ≥ M̃(X, 2) ≥ ... ≥ M̃(X, K).
Proof. We can prove this theorem by induction.
1) Base case: if X = x 1 , both k and k +1 cached submodels incur a cache miss on x 1 , thus,M (X, k) =M (X, k + 1)
2) Induction hypothesis: we need to show if X = x 1 , . . . , x i ,M (X, k) ≥M (X, k + 1) for an arbitrary k, when X = x 1 , . . . , x i+1 ,M (X, k) ≥M (X, k + 1) also holds. a) If the newly input x i+1 incurs a cache hit on k cached submodels, there should be also a cache hit on k + 1 cached submodels, because the k cached submodels are always embedded in the k + 1 submodels under the least recently used (LRU) policy. b) If the newly input x i+1 incurs a cache miss on k cached submodels, there may be a cache hit or cache miss on k + 1 cached submodels, because the k submodels are embedded in the k + 1 submodels, the one more submodel in the cached k + 1 submodels may cause the hit or not depending on whether it matches the index given by S-InfoVAE.
No matter in either case,M (X, k) ≥M (X, k + 1) for an arbitrary k. The induction hypothesis holds. Thus, the theorem holds.
We conclude that when individual submodels have larger capacity or more submodels can be cached on an end device, CacheNet always has the same or a higher hit rate. In other words, it does not suffer from Bélády's anomaly.
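A toy replay experiment, consistent with Theorem 2 but not part of the paper's evaluation, illustrates the claim: for a fixed request sequence, LRU miss counts do not grow with cache size.

```python
# Toy check of Theorem 2: replay one sequence of requested submodel indices
# through LRU caches of increasing size and confirm the miss count never grows.
from collections import OrderedDict
import random

def lru_misses(requests_seq, capacity):
    cache, misses = OrderedDict(), 0
    for k in requests_seq:
        if k in cache:
            cache.move_to_end(k)
        else:
            misses += 1
            cache[k] = True
            if len(cache) > capacity:
                cache.popitem(last=False)
    return misses

random.seed(0)
seq = [random.randint(0, 3) for _ in range(1000)]      # indices chosen by S-InfoVAE
print([lru_misses(seq, c) for c in range(1, 5)])       # non-increasing with capacity
```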
• Y. Fang, S. Manzuri Shalmani, and R. Zheng are with the Department of Computing and Software, McMaster University, Hamilton, ON, Canada. E-mail: {fangy5,manzuris,rzheng}@mcmaster.ca. R. Zheng is a visiting professor at Harbin Institute of Technology (Shenzhen), China between 2019 and 2020.
Fig. 1: (a) CacheNet first partitions a neural network into multiple smaller specialized neural networks in the cloud. (b) ...
Fig. 2: The red straight line denotes the angle θ̃, with the red curve indicating the amount of decay from the maximum 1 to the minimum 0 while moving away from θ̃. The red cross marker demonstrates a value on the midpoint; the brighter area indicates values above the selection threshold, while the darker area indicates values below it.
Fig. 3: The generator G takes a one-hot vector δ_i as input and generates the parameters of the i-th partition. Values (either 0 or 1) of each dimension in δ_i are used to deactivate or activate a corresponding branch.
Fig. 4: For CIFAR-10, partition A is more specialized in trucks and automobiles; partition B can predict airplanes and ships better; partition C is more certain of the horse, dog, and cat classes; partition D knows more about frogs and deer.
Fig. 5: For FVG, partition A is more certain of person identifiers (PIDs) 211, 019, 011, 016, 006, and 005; partition B is specialized in PIDs 013, 008, 003, 015, and 215; partition C knows more about PIDs 010, 191, 009, 012, and 018; partition D is more certain of PIDs 004, 002, 204, 017, and 001.
Fig. 7: FPS and hit rate increase most of the time as the preconfigured threshold increases. Accuracy generally decreases because less certain predictions are considered valid. When multiple submodels outperform the full model (on the FVG dataset), a small peak is observed before the accuracy declines.
Fig. 8: Medians are taken and standard deviations are plotted as error bars. CacheNet is faster than the other two baselines, while its accuracy is comparable to the full model.
(Figure 1 diagram labels: stacked encoder, stacked decoder, generator, lower-dimension representation, feedback code, entropy, argmin, cloud — (a) Train; submodel selection, trusted?, entropy, cache replacement, edge server, end device — (b) inference.)
TABLE 1: Experimental Results with CIFAR-10 on Jetson TX2, Jetson Nano, and Raspberry Pi 4 - NCNN

              |       Jetson TX2        |       Jetson Nano       |      Raspberry Pi 4
              | Device   Edge  CacheNet | Device   Edge  CacheNet | Device   Edge  CacheNet
FPS           |   2.85   4.89     8.02  |   4.25   3.83     9.53  |   1.57   5.60     4.77
Accuracy (%)  |  95.47  95.47    93.20  |  95.47  95.47    93.20  |  95.47  95.47    93.20
CPU (%)       |  84.53   4.25    60.26  |  96.83   5.96    57.23  |  98.65   1.21    63.75
Memory (Mb)   | 610.71   1.86   198.76  | 863.14   1.94   241.80  | 875.53   0.91   201.75
Time (s)      | 124.06  72.16    44.03  |  83.06  92.20    37.04  | 224.15  63.02    74.03
Device (s)    | 124.06   0.80    26.25  |  83.06   0.66    17.89  | 224.15   0.63    42.28
Edge (s)      |   0.00  71.37    17.79  |   0.00  91.54    19.14  |   0.00  62.39    31.75
TABLE 2: Experimental Results with CIFAR-10 on Jetson TX2, Jetson Nano, and Raspberry Pi 4 - TensorFlow Lite

              |       Jetson TX2        |       Jetson Nano       |      Raspberry Pi 4
              | Device   Edge  CacheNet | Device   Edge  CacheNet | Device   Edge  CacheNet
FPS           |   2.59   4.71     7.83  |   2.19   3.74     7.31  |   0.90   5.44     4.24
Accuracy (%)  |  95.47  95.47    93.20  |  95.47  95.47    93.20  |  95.47  95.47    93.20
CPU (%)       |  77.26   4.40    52.37  |  79.92   5.90    56.82  |  74.29   1.66    53.45
Memory (Mb)   | 213.42  29.93   113.96  | 226.37 108.73   133.23  | 210.12 106.98    99.98
Time (s)      | 136.05  74.91    45.08  | 161.04  94.29    48.30  | 390.31  64.90    83.25
Device (s)    | 136.05   0.56    29.00  | 161.04   0.83    34.16  | 390.31   0.55    60.36
Edge (s)      |   0.00  74.35    16.07  |   0.00  93.46    14.13  |   0.00  64.35    22.89
TABLE 3: Experimental Results with FVG (15 FPS) on Jetson TX2, Jetson Nano, and Raspberry Pi 4 - NCNN

              |       Jetson TX2        |       Jetson Nano       |      Raspberry Pi 4
              | Device   Edge  CacheNet | Device   Edge  CacheNet | Device   Edge  CacheNet
FPS           |  11.36  10.40    20.80  |  10.41  10.40    22.70  |   5.10  11.36    14.70
Accuracy (%)  |  97.20  97.20    98.40  |  97.20  97.20    98.40  |  97.20  97.20    98.40
CPU (%)       |  96.05   8.27    22.35  |  95.22  11.54    23.37  |  96.32   4.23    24.37
Memory (Mb)   | 312.35   7.19    10.55  | 436.91   7.75    11.57  | 454.79   6.72     8.04
Time (s)      |  22.01  24.05    12.02  |  24.02  24.04    11.01  |  49.04  22.01    17.01
Device (s)    |  22.01   0.72     2.31  |  24.02   0.87     1.70  |  49.04   0.48     4.62
Edge (s)      |   0.00  23.32     9.71  |   0.00  23.17     9.31  |   0.00  21.53    12.38
TABLE 4: Experimental Results with FVG (15 FPS) on Jetson TX2, Jetson Nano, and Raspberry Pi 4 - TensorFlow Lite

              |       Jetson TX2        |       Jetson Nano       |      Raspberry Pi 4
              | Device   Edge  CacheNet | Device   Edge  CacheNet | Device   Edge  CacheNet
FPS           |   7.65  10.72    20.65  |   7.01  10.65    21.47  |   3.75  10.71    16.02
Accuracy (%)  |  97.20  97.20    98.40  |  97.20  97.20    98.40  |  97.20  97.20    98.40
CPU (%)       |  69.51   8.63    18.31  |  72.55  10.77    21.11  |  62.75   4.89    18.32
Memory (Mb)   | 194.68  10.61    16.16  | 181.79  99.08    16.95  | 190.74  96.64    10.42
Time (s)      |  32.70  23.33    12.11  |  35.67  23.48    11.64  |  66.65  23.33    15.60
Device (s)    |  32.70   0.49     2.42  |  35.67   0.74     2.87  |  66.65   0.80     4.59
Edge (s)      |   0.00  22.84     9.68  |   0.00  22.74     8.78  |   0.00  22.53    11.01
TABLE 5: Experimental Results with FVG (30 FPS) on Jetson TX2, Jetson Nano, and Raspberry Pi 4 - NCNN

              |       Jetson TX2        |       Jetson Nano       |      Raspberry Pi 4
              | Device   Edge  CacheNet | Device   Edge  CacheNet | Device   Edge  CacheNet
FPS           |  11.62  11.09    17.84  |  10.86  11.88    19.98  |   5.05  11.62    13.89
Accuracy (%)  |  96.40  96.40    97.20  |  96.40  96.40    97.20  |  96.40  96.40    97.20
CPU (%)       |  96.94   8.46    22.78  |  96.91  11.84    24.05  |  98.10   4.43    25.30
Memory (Mb)   | 310.16  12.76     9.00  | 455.06  12.79    11.52  | 454.23  11.48     9.35
Time (s)      |  43.02  45.08    28.03  |  46.03  42.07    25.03  |  99.04  43.01    36.01
Device (s)    |  43.02   0.87     3.77  |  46.03   0.65     3.21  |  99.04   0.22     8.52
Edge (s)      |   0.00  44.21    24.26  |   0.00  41.42    21.82  |   0.00  42.79    27.49
TABLE 6: Experimental Results with FVG (30 FPS) on Jetson TX2, Jetson Nano, and Raspberry Pi 4 - TensorFlow Lite

              |       Jetson TX2        |       Jetson Nano       |      Raspberry Pi 4
              | Device   Edge  CacheNet | Device   Edge  CacheNet | Device   Edge  CacheNet
FPS           |   7.82  10.49    17.76  |   7.15  11.08    19.43  |   3.43  11.53    13.49
Accuracy (%)  |  96.40  96.40    97.20  |  96.40  96.40    97.20  |  96.40  96.40    97.20
CPU (%)       |  69.98   8.60    19.89  |  73.13  11.52    21.24  |  63.68   4.52    17.17
Memory (Mb)   | 197.03  10.34    16.64  | 189.10  99.05    16.09  | 191.69   6.79    11.23
Time (s)      |  63.90  47.67    28.15  |  69.89  45.13    25.73  | 145.61  43.35    37.06
Device (s)    |  63.90   0.69     4.97  |  69.89   0.99     5.57  | 145.61   0.19     9.29
Edge (s)      |   0.00  46.98    23.18  |   0.00  44.14    20.16  |   0.00  43.16    27.78
ACKNOWLEDGMENTS
This work is in part supported by the Discovery Grant and Collaborative Research Development Grant from the Natural Sciences and Engineering Research Council, Canada. The authors would like to thank the McMaster Faculty of Engineering SummerTech Entrepreneur Fellowship for offering financial support in purchasing experimental equipment including a Jetson Nano, a Raspberry Pi 4, and a TP-LINK Archer C3200 router.
REFERENCES
[1] Kittipat Apicharttrisorn, Xukan Ran, Jiasi Chen, Srikanth V Krishnamurthy, and Amit K Roy-Chowdhury. Frugal following: Power thrifty object detection and tracking for mobile augmented reality. In Proceedings of the 17th Conference on Embedded Networked Sensor Systems, pages 96-109, 2019.
[2] Wenbo Bao, Wei-Sheng Lai, Chao Ma, Xiaoyun Zhang, Zhiyong Gao, and Ming-Hsuan Yang. Depth-aware video frame interpolation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3703-3712, 2019.
[3] Tiffany Yu-Han Chen, Lenin Ravindranath, Shuo Deng, Paramvir Bahl, and Hari Balakrishnan. Glimpse: Continuous, real-time object recognition on mobile devices. In Proceedings of the 13th ACM Conference on Embedded Networked Sensor Systems, pages 155-168. ACM, 2015.
[4] Nadav Cohen, Or Sharir, and Amnon Shashua. On the expressive power of deep learning: A tensor analysis. In Conference on Learning Theory, pages 698-728, 2016.
[5] Diederik P. Kingma, Max Welling, et al. Auto-encoding variational bayes. In Proceedings of the International Conference on Learning Representations (ICLR), 2014.
[6] Yihao Fang, Ziyi Jin, and Rong Zheng. Teamnet: A collaborative inference framework on the edge. In 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS), pages 1487-1496. IEEE, 2019.
[7] Xavier Gastaldi. Shake-shake regularization. arXiv preprint arXiv:1705.07485, 2017.
[8] Peizhen Guo, Bo Hu, Rui Li, and Wenjun Hu. Foggycache: Cross-device approximate computation reuse. In Proceedings of the 24th Annual International Conference on Mobile Computing and Networking, pages 19-34, 2018.
[9] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
[10] Loc N Huynh, Youngki Lee, and Rajesh Krishna Balan. Deepmon: Mobile gpu-based deep learning framework for continuous vision applications. In Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services, pages 82-95. ACM, 2017.
[11] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[12] Yiping Kang, Johann Hauswald, Cao Gao, Austin Rovinski, Trevor Mudge, Jason Mars, and Lingjia Tang. Neurosurgeon: Collaborative intelligence between the cloud and mobile edge. In Proceedings of the Twenty-Second International Conference on Architectural Support for Programming Languages and Operating Systems, pages 615-629. ACM, 2017.
[13] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
[14] Junseok Kwon and Kyoung Mu Lee. Tracking of abrupt motion using wang-landau monte carlo estimation. In European Conference on Computer Vision, pages 387-400. Springer, 2008.
[15] Darryl Lin, Sachin Talathi, and Sreekanth Annapureddy. Fixed point quantization of deep convolutional networks. In International Conference on Machine Learning, pages 2849-2858, 2016.
[16] Akhil Mathur, Nicholas D Lane, Sourav Bhattacharya, Aidan Boran, Claudio Forlivesi, and Fahim Kawsar. Deepeye: Resource efficient local execution of multiple deep vision models using wearable commodity hardware. In Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services, pages 68-81. ACM, 2017.
[17] Onur Mutlu and Lavanya Subramanian. Research problems and opportunities in memory systems. Supercomputing Frontiers and Innovations, 1(3):19-55, 2015.
[18] Tara N Sainath, Brian Kingsbury, Vikas Sindhwani, Ebru Arisoy, and Bhuvana Ramabhadran. Low-rank matrix factorization for deep neural network training with high-dimensional output targets. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 6655-6659. IEEE, 2013.
[19] Connor Shorten and Taghi M Khoshgoftaar. A survey on image data augmentation for deep learning. Journal of Big Data, 6(1):60, 2019.
[20] Surat Teerapittayanon, Bradley McDanel, and HT Kung. Distributed deep neural networks over the cloud, the edge and end devices. In Distributed Computing Systems (ICDCS), 2017 IEEE 37th International Conference on, pages 328-339. IEEE, 2017.
[21] Mengwei Xu, Mengze Zhu, Yunxin Liu, Felix Xiaozhu Lin, and Xuanzhe Liu. Deepcache: Principled cache for mobile deep vision. In Proceedings of the 24th Annual International Conference on Mobile Computing and Networking, pages 129-144. ACM, 2018.
[22] Ziyuan Zhang, Luan Tran, Xi Yin, Yousef Atoum, Jian Wan, Nanxin Wang, and Xiaoming Liu. Gait recognition via disentangled representation learning. In Proceedings of IEEE Computer Vision and Pattern Recognition, Long Beach, CA, June 2019.
[23] Shengjia Zhao, Jiaming Song, and Stefano Ermon. Infovae: Balancing learning and inference in variational autoencoders. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 5885-5892, 2019.
|
[
"Graph Information Bottleneck",
"Graph Information Bottleneck"
]
| [
"Tailin Wu [email protected] \nDepartment of Computer Science\nStanford University\n\n",
"Hongyu Ren [email protected] \nDepartment of Computer Science\nStanford University\n\n",
"Pan Li [email protected] \nDepartment of Computer Science\nStanford University\n\n",
"Jure Leskovec \nDepartment of Computer Science\nStanford University\n\n"
]
| [
"Department of Computer Science\nStanford University\n",
"Department of Computer Science\nStanford University\n",
"Department of Computer Science\nStanford University\n",
"Department of Computer Science\nStanford University\n"
]
| []
| Representation learning of graph-structured data is challenging because both graph structure and node features carry important information. Graph Neural Networks (GNNs) provide an expressive way to fuse information from network structure and node features. However, GNNs are prone to adversarial attacks. Here we introduce Graph Information Bottleneck (GIB), an information-theoretic principle that optimally balances expressiveness and robustness of the learned representation of graph-structured data. Inheriting from the general Information Bottleneck (IB), GIB aims to learn the minimal sufficient representation for a given task by maximizing the mutual information between the representation and the target, and simultaneously constraining the mutual information between the representation and the input data. Different from the general IB, GIB regularizes the structural as well as the feature information. We design two sampling algorithms for structural regularization and instantiate the GIB principle with two new models: GIB-Cat and GIB-Bern, and demonstrate the benefits by evaluating the resilience to adversarial attacks. We show that our proposed models are more robust than state-of-theart graph defense models. GIB-based models empirically achieve up to 31% improvement with adversarial perturbation of the graph structure as well as node features. | null | [
"https://arxiv.org/pdf/2010.12811v1.pdf"
]
| 225,066,684 | 2010.12811 | 2fce1ef37391cd685fc5459e1cbfcb8490b85242 |
Graph Information Bottleneck
Tailin Wu [email protected]
Department of Computer Science
Stanford University
Hongyu Ren [email protected]
Department of Computer Science
Stanford University
Pan Li [email protected]
Department of Computer Science
Stanford University
Jure Leskovec
Department of Computer Science
Stanford University
Graph Information Bottleneck
Representation learning of graph-structured data is challenging because both graph structure and node features carry important information. Graph Neural Networks (GNNs) provide an expressive way to fuse information from network structure and node features. However, GNNs are prone to adversarial attacks. Here we introduce Graph Information Bottleneck (GIB), an information-theoretic principle that optimally balances expressiveness and robustness of the learned representation of graph-structured data. Inheriting from the general Information Bottleneck (IB), GIB aims to learn the minimal sufficient representation for a given task by maximizing the mutual information between the representation and the target, and simultaneously constraining the mutual information between the representation and the input data. Different from the general IB, GIB regularizes the structural as well as the feature information. We design two sampling algorithms for structural regularization and instantiate the GIB principle with two new models: GIB-Cat and GIB-Bern, and demonstrate the benefits by evaluating the resilience to adversarial attacks. We show that our proposed models are more robust than state-of-theart graph defense models. GIB-based models empirically achieve up to 31% improvement with adversarial perturbation of the graph structure as well as node features.
information within the input data D = (A, X) to predict the target Y . D includes information from both the graph structure A and node features X. When Z contains irrelevant information from either of these two sides, it overfits the data and is prone to adversarial attacks and model hyperparameter change. Ω defines the search space of the optimal model P(Z|D). I(·; ·) denotes the mutual information [17].
target (minimal). Based on this learning paradigm, the learned model naturally avoids overfitting and becomes more robust to adversarial attacks.
However, extending the IB principle to representation learning on graph-structured data presents two unique challenges. First, previous models that leverage IB assume that the training examples in the dataset are independent and identically distributed (i.i.d.). For graph-structured data, this assumption no longer holds and makes model training in the IB principle hard. Moreover, the structural information is indispensable to represent graph-structured data, but such information is discrete and thus hard to optimize over. How to properly model and extract minimal sufficient information from the graph structure introduces another challenge that has not been yet investigated when designing IB-based models.
We introduce Graph Information Bottleneck (GIB), an information-theoretic principle inherited from IB, adapted for representation learning on graph-structured data. GIB extracts information from both the graph structure and node features and further encourages the information in learned representation to be both minimal and sufficient (Fig. 1). To overcome the challenge induced by non-i.i.d. data, we further leverage local-dependence assumption of graph-structure data to define a more tractable search space Ω of the optimal P(Z|D) that follows a Markov chain to hierarchically extract information from both features and structure. To our knowledge, our work provides the first information-theoretic principle for supervised representation learning on graph-structured data.
We also derive variational bounds for GIB, making GIB tractable and amenable for the design and optimization of GNNs. Specifically, we propose a variational upper bound for constraining the information from the node features and graph structure, and a variational lower bound for maximizing the information in the representation to predict the target.
We demonstrate the GIB principle by applying it to the Graph Attention Networks (GAT) [5], where we leverage the attention weights of GAT to sample the graph structure in order to alleviate the difficulty of optimizing and modeling the discrete graph structure. We also design two sampling algorithms based on the categorical distribution and Bernoulli distribution, and propose two models GIB-Cat and GIB-Bern. We show that both models consistently improve robustness w.r.t. standard baseline models, and outperform other state-of-the-art defense models. GIB-Cat and GIB-Bern improve the classification accuracy by up to 31.3% and 34.0% under adversarial perturbation, respectively. Project website and code can be found at http://snap.stanford.edu/gib/.
Preliminaries and Notation
Graph Representation Learning. Consider an undirected attributed graph G = (V, E, X) with n nodes, where V = [n] = {1, 2, ...n} is the node set, E ⊆ V × V is the edge set and X ∈ R n×f includes the node attributes. Let A ∈ R n×n denote the adjacency matrix of G, i.e., A uv = 1 if (u, v) ∈ E or 0 otherwise. Also, let d(u, v) denote the shortest path distance between two nodes u, v (∈ V ) over A. Hence our input data can be overall represented as D = (A, X).
In this work, we focus on node-level tasks where nodes are associated with some labels Y ∈ [K]^n. Our task is to extract node-level representations Z_X ∈ R^{n×f} from D such that Z_X can be further used to predict Y. We also use the subscript with a certain node v ∈ V to denote the affiliation with node v; for example, the node representation of v is denoted by Z_{X,v} and its label by Y_v.
Fig. 2 (caption): ... space Ω of our GIB principle, of which each step uses a local-dependence assumption to extract information from both features and structure. The correlation between node representations is established in a hierarchical way: suppose local dependence appears within 2 hops given the structure A. (b) In the graph, given the representations Z_X^{(l)} of the blue nodes and A, which conveys the structural information that the blue nodes lie within 2 hops of the black node, the representations Z_X^{(l+1)} are independent between the black node and the white nodes. However, the correlation between them may be established in Z_X^{(l+2)}.
Notation. We do not distinguish the notation of random variables and of their particular realizations if there is no risk of confusion. For any set of random variables H, we use P(H), Q(H), ... to denote joint probabilistic distribution functions (PDFs) of the random variables in H under different models. P(·) corresponds to the induced PDF of the proposed model while Q(H) and Q i (H), i ∈ N correspond to some other distributions, typically variational distributions. For discrete random variables, we use generalized PDFs that may contain the Dirac delta functions [20]. In this work, if not specified, E[H] means the expectation over all the random variables in H w.r.t. P(H). Otherwise, we use E Q(H) [H] to specify the expectation w.r.t. other distributions denoted by Q(H). We also use X 1 ⊥ X 2 |X 3 to denote that X 1 and X 2 are conditionally independent given X 3 . Let Cat(φ), Bernoulli(φ) denote the categorical distribution and Bernoulli distribution respectively with parameter φ (∈ R 1×C ≥0 ). For the categorical distribution, φ corresponds to the probabilities over different categories and thus φ 1 = 1. For the Bernoulli distribution, we generalize it to high dimensions and assume we have C independent components and each element of φ is between 0 and 1. Let Gaussian(µ, σ 2 ) denote the Gaussian distribution with mean µ and variance σ 2 . µ and σ 2 could be vectors with the same dimension, in which case the Gaussian distribution is with the mean vector µ and covariance matrix Σ = diag(σ 2 ). Let Φ(· : µ, σ 2 ) denote its PDF. We use [i 1 : i 2 ] to slice a tensor w.r.t. indices from i 1 to i 2 − 1 of its last dimension.
Graph Information Bottleneck
Deriving the Graph Information Bottleneck Principle
In general, the graph information bottleneck (GIB) principle, inheriting from the principle of information bottleneck (IB), requires the node representation Z X to minimize the information from the graph-structured data D (compression) and maximize the information to Y (prediction). However, optimization for the most general GIB is challenging because of the correlation between data points. The i.i.d. assumption of data points is typically used to derive variational bounds and make accurate estimation of those bounds to learn IB-based models [21,22]. However, for the graph-structured data D, this is impossible as node features, i.e., different rows of X, may be correlated due to the underlying graph structure A. To fully capture such correlation, we are not allowed to split the whole graph-structured data D w.r.t. each node. In practice, we typically have only a large network, which indicates that only one single realization of P(D) is available. Hence, approximating the optimal Z X in the general formulation GIB seems impossible without making additional assumptions.
Here, we rely on a widely accepted local-dependence assumption for graph-structured data: Given the data related to the neighbors within a certain number of hops of a node v, the data in the rest of the graph will be independent of v. We use this assumption to constrain the space Ω of optimal representations, which leads to a more tractable GIB principle. That is, we assume that the optimal representation follows the Markovian dependence shown in Fig. 2. Specifically, P(Z X |D) iterates node representations to hierarchically model the correlation. In each iteration l, the local-dependence assumption is used: The representation of each node will be refined by incorporating its neighbors
w.r.t. a graph structure Z_A^{(l)}. Here, {Z_A^{(l)}}_{1≤l≤L} is obtained by locally adjusting the original graph structure A and essentially controlling the information flow from A. Finally, we will make predictions based on Z_X^{(L)}. Based on this formulation, the objective reduces to the following optimization:
\min_{P(Z_X^{(L)}|D) \in \Omega} \ \mathrm{GIB}_\beta(D, Y; Z_X^{(L)}) \triangleq \left[ -I(Y; Z_X^{(L)}) + \beta I(D; Z_X^{(L)}) \right]   (1)
where Ω characterizes the space of the conditional distribution of Z_X^{(L)} given the data D by following the probabilistic dependence shown in Fig. 2. In this formulation, we just need to optimize two series of distributions, P(Z_X^{(l)}|Z_X^{(l−1)}, Z_A^{(l)}) and P(Z_A^{(l)}|Z_X^{(l−1)}, A), l ∈ [L], which have local dependence between nodes and thus are much easier to parameterize and optimize.
Variational Bounds. Even using the reduced GIB principle with some proper parameterization of P(Z_X), the objective is still intractable. Hence, we need to introduce variational bounds on these two terms, which leads to the final objective to optimize. Note that variational methods are frequently used in model optimization under the traditional IB principle [21]. However, we should be careful to derive these bounds as the data points now are correlated. We introduce a lower bound of I(Y; Z_X^{(L)}), which is reproduced from [22,23], and an upper bound of I(D; Z_X^{(L)}).
Proposition 3.1 (Lower bound of I(Y; Z_X^{(L)})). For any distributions Q_1(Y_v|Z_{X,v}^{(L)}) for v ∈ V and Q_2(Y), we have
I(Y; Z_X^{(L)}) \ge 1 + \mathbb{E}\left[ \log \frac{\prod_{v \in V} Q_1(Y_v|Z_{X,v}^{(L)})}{Q_2(Y)} \right] - \mathbb{E}_{P(Y)P(Z_X^{(L)})}\left[ \frac{\prod_{v \in V} Q_1(Y_v|Z_{X,v}^{(L)})}{Q_2(Y)} \right].   (2)
Proposition 3.2 (Upper bound of I(D; Z_X^{(L)})). For any index sets S_X, S_A ⊆ [L] such that Z_X^{(L)} is conditionally independent of D given {Z_X^{(l)}}_{l∈S_X} ∪ {Z_A^{(l)}}_{l∈S_A}, and for any distributions Q(Z_X^{(l)}) and Q(Z_A^{(l)}),
I(D; \{Z_X^{(l)}\}_{l \in S_X} \cup \{Z_A^{(l)}\}_{l \in S_A}) \le \sum_{l \in S_A} \mathrm{AIB}^{(l)} + \sum_{l \in S_X} \mathrm{XIB}^{(l)}, \quad \text{where}   (3)
\mathrm{AIB}^{(l)} = \mathbb{E}\left[ \log \frac{P(Z_A^{(l)}|A, Z_X^{(l-1)})}{Q(Z_A^{(l)})} \right], \quad \mathrm{XIB}^{(l)} = \mathbb{E}\left[ \log \frac{P(Z_X^{(l)}|Z_X^{(l-1)}, Z_A^{(l)})}{Q(Z_X^{(l)})} \right].   (4)
The proofs are given in Appendix B and C. Proposition 3.2 indicates that we need to select a group of random variables with index sets S_X and S_A to guarantee the conditional independence between D and Z_X^{(L)}. Note that S_X and S_A that satisfy this condition have the following properties: (1) S_X ≠ ∅; and (2) supposing the greatest index in S_X is l, S_A should contain all integers in [l + 1, L].
To use GIB, we need to model P(Z_A^{(l)}|Z_X^{(l−1)}, A) and P(Z_X^{(l)}|Z_X^{(l−1)}, Z_A^{(l)}); any model that properly parameterizes these two distributions can use GIB as the objective in training. In the next subsection, we will introduce two instantiations of GIB, which are inspired by GAT [5].
Instantiating the GIB Principle
The GIB principle can be applied to many GNN models. As an example, we apply it to the Graph Attention Network model [5] and present GIB-Cat and GIB-Bern. Algorithm 1 illustrates the base framework of both models, with the two neighbor-sampling methods shown in Algorithms 2 and 3. In each layer, GIB-Cat and GIB-Bern first refine the graph structure using the attention weights to obtain Z_A^{(l)}. Concretely, we design two algorithms for neighbor sampling, which respectively use the categorical distribution and the Bernoulli distribution. For the categorical version, we view the attention weights as the parameters of categorical distributions used to sample the refined graph structure and extract structural information. We sample k neighbors with replacement from the pool of nodes V_vt for each node v, where V_vt includes the nodes whose shortest-path distance to v over A is t. We use T as an upper limit on t to encode the local-dependence assumption of the GIB principle, which also benefits the scalability of the model. For the Bernoulli version, we model each pair of a node v and its neighbors independently with a Bernoulli distribution parameterized by the attention weights. Note that here we do not normalize with the softmax function as in the categorical version; instead, we use the sigmoid function to squash the weights between 0 and 1. In this case we do not need to specify the number of neighbors each node samples (k in the categorical version).
Step 4 is sum-pooling over the sampled neighbors, and the output is used to compute the parameters of a Gaussian distribution from which the refined node representations are sampled. Note that we may also use a mechanism similar to multi-head attention [5]: we split Z̃_X^(l−1) into different channels w.r.t. its last dimension, perform Steps 2–7 independently for each channel, and then concatenate the outputs of the channels to obtain the new Z_X^(l). Moreover, when training the model, we adopt the reparameterization trick for Steps 3 and 7: Step 3 uses Gumbel-softmax [24, 25], while Step 7 uses Z_{X,v}^(l) = μ_v^(l) + σ_v^(l) ⊙ z, where z ∼ Gaussian(0, I), z ∈ R^{1×f}, and ⊙ is the element-wise product.
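To make Step 7 concrete, the following is a minimal PyTorch-style sketch of the Gaussian reparameterization described above; the function and variable names are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def reparameterize_node_features(z_pooled, f):
    """Steps 5-7 of Algorithm 1: split pooled neighbor features into a mean and a
    variance, then sample node representations with the reparameterization trick.

    z_pooled: pooled neighbor features of shape (n_nodes, 2 * f)
    f:        latent feature dimension
    """
    mu = z_pooled[:, :f]                        # Step 5: first f channels give the mean
    sigma2 = F.softplus(z_pooled[:, f:2 * f])   # Step 6: softplus keeps the variance positive
    z = torch.randn_like(mu)                    # z ~ Gaussian(0, I)
    return mu + sigma2.sqrt() * z               # Step 7: mu + sigma (element-wise) z
```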
Algorithm 1: Framework of GIB-Cat and GIB-Bern

Input: the dataset D = (X, A); T: an integer limit to impose local dependence; k: the number of neighbors to be sampled; τ: an element-wise nonlinear rectifier.
Initialize: Z_X^(0) ← X; for all v ∈ V, t ∈ [T], construct sets V_vt ← {u ∈ V | d(u, v) = t}; weights a ∈ R^{T×4f}, W^(1) ∈ R^{f×2f}, W^(l) ∈ R^{f×2f} for l ∈ [2, L], W_out ∈ R^{f×K}.
Output: Z_X^(L), Ŷ_v = softmax(Z_{X,v}^(L) W_out)

1. For layers l = 1, ..., L and for v ∈ V, do:
2.   Z̃_{X,v}^(l−1) ← τ(Z_{X,v}^(l−1)) W^(l)
3.   Z_{A,v}^(l) ← NeighborSample(Z̃_X^(l−1), T, V_vt, a)
4.   Z̄_{X,v}^(l) ← Σ_{u ∈ Z_{A,v}^(l)} Z̃_{X,u}^(l−1)
5.   μ_v^(l) ← Z̄_{X,v}^(l)[0:f]
6.   σ_v^{2(l)} ← softplus(Z̄_{X,v}^(l)[f:2f])
7.   Z_{X,v}^(l) ∼ Gaussian(μ_v^(l), σ_v^{2(l)})
Properties. Different from traditional GNNs, GIB-Cat and GIB-Bern depend only loosely on the graph structure, since A is used solely to decide the pool of potential neighbors for each node and message passing is performed based on Z_A. This property makes our models extremely robust to structural perturbations/attacks, to which traditional GNNs are sensitive [15, 16]. Both models also remain robust to feature perturbations, similarly to other IB-based DNN models [21, 26]. Moreover, the proposed models are invariant to node permutations: for any permutation matrix Π ∈ R^{n×n}, permuting A → A_Π = ΠAΠ^T and X → X_Π = ΠX, the obtained node representations Z_{X,Π}^(L) and ΠZ_X^(L) share the same distribution (proof in Appendix E). Permutation invariance is known to be important for structural representation learning [13].
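As a quick operational reading of this property, the hypothetical snippet below permutes a toy input and checks that the (stochastic) outputs of an arbitrary node-representation model `model(X, A)` are permuted accordingly; the model interface, sample count, and tolerance are assumptions for illustration.

```python
import torch

def check_permutation_invariance(model, X, A, n_samples=200, atol=0.05):
    """Empirical sanity check of Z_{X,Pi} =_d Pi Z_X: permute the node ordering and
    compare Monte-Carlo means of the stochastic outputs (the property only holds in
    distribution, so sample means are compared up to a tolerance)."""
    perm = torch.randperm(X.size(0))
    P = torch.eye(X.size(0))[perm]           # permutation matrix: (P @ X)[i] = X[perm[i]]
    mean = torch.stack([model(X, A) for _ in range(n_samples)]).mean(0)
    mean_perm = torch.stack([model(P @ X, P @ A @ P.T) for _ in range(n_samples)]).mean(0)
    return torch.allclose(mean_perm, mean[perm], atol=atol)
```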
Algorithm 2: NeighborSample (categorical)
Input: Z̃_X^(l−1), T, V_vt, a, as defined in Alg. 1; Output: Z_{A,v}^(l)
1. For t ∈ [T], do:
2.   φ_vt^(l) ← softmax({(Z̃_{X,v}^(l−1) ⊕ Z̃_{X,u}^(l−1)) a^T}_{u ∈ V_vt})
3. Z_{A,v}^(l) ← ∪_{t=1}^{T} {u ∈ V_vt | u ∼iid Cat(φ_vt^(l)), k times}

Algorithm 3: NeighborSample (Bernoulli)
Input: Z̃_X^(l−1), T, V_vt, a, as defined in Alg. 1; Output: Z_{A,v}^(l)
1. For t ∈ [T], do:
2.   φ_vt^(l) ← sigmoid({(Z̃_{X,v}^(l−1) ⊕ Z̃_{X,u}^(l−1)) a^T}_{u ∈ V_vt})
3. Z_{A,v}^(l) ← ∪_{t=1}^{T} {u ∈ V_vt | u ∼iid Bernoulli(φ_vt^(l))}
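To illustrate the Bernoulli variant (Algorithm 3), here is a hedged single-node, single-hop sketch in PyTorch; the pair scoring is simplified to a dot product with one attention vector `a`, and all names are assumptions rather than the authors' implementation.

```python
import torch

def neighbor_sample_bernoulli(z_tilde, v, candidates, a):
    """One hop of the Bernoulli NeighborSample for a single center node v.

    z_tilde:    transformed node features (tilde Z_X), shape (n_nodes, f)
    v:          index of the center node
    candidates: LongTensor with the candidate set V_vt (nodes at hop distance t from v)
    a:          attention weight vector, shape (2 * f,)
    """
    pair = torch.cat([z_tilde[v].expand(len(candidates), -1),
                      z_tilde[candidates]], dim=-1)   # concatenated pair features, (|V_vt|, 2f)
    phi = torch.sigmoid(pair @ a)                     # Bernoulli parameters (no softmax here)
    keep = torch.bernoulli(phi).bool()                # each candidate kept independently
    return candidates[keep]                           # the sampled refined neighborhood
```

During training, the hard `torch.bernoulli` draw would be replaced by the concrete (Gumbel) relaxation mentioned above so that gradients can reach the attention weights.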
Objective for training. To optimize the parameters of the model, we need to specify the bounds for I(D; Z_X^(L)) as in Eq. (3) and for I(Y; Z_X^(L)) as in Eq. (2), and then compute the bound of the GIB objective in Eq. (1). To characterize AIB^(l) in Eq. (3), we assume Q(Z_A^(l)) is a non-informative distribution [24, 25]. Specifically, we use the uniform distribution for the categorical version:
$$Z_A \sim \mathbb{Q}(Z_A), \quad Z_{A,v} = \bigcup_{t=1}^{T}\{u \in V_{vt} \,|\, u \overset{iid}{\sim} \mathrm{Cat}(\tfrac{1}{|V_{vt}|})\} \quad \text{and} \quad Z_{A,v} \perp Z_{A,u} \ \text{if}\ v \neq u;$$
and we adopt a non-informative prior for the Bernoulli version:
$$Z_{A,v} = \bigcup_{t=1}^{T}\{u \in V_{vt} \,|\, u \overset{iid}{\sim} \mathrm{Bernoulli}(\alpha)\},$$
where α ∈ (0, 1) is a hyperparameter.
The difference is that, unlike the categorical distribution, we have an additional degree of freedom provided by α. After the model computes φ_vt^(l) in the NeighborSample step, we obtain an empirical estimation of AIB^(l):
$$\mathrm{AIB}^{(l)} = \mathbb{E}_{\mathbb{P}(Z_A^{(l)}|A, Z_X^{(l-1)})}\Big[\log\frac{\mathbb{P}(Z_A^{(l)}|A, Z_X^{(l-1)})}{\mathbb{Q}(Z_A^{(l)})}\Big],$$
which is instantiated as follows for the two versions:
$$\mathrm{AIB}_C^{(l)} = \sum_{v\in V,\, t\in[T]} \mathrm{KL}\big(\mathrm{Cat}(\phi_{vt}^{(l)})\,\|\,\mathrm{Cat}(\tfrac{1}{|V_{vt}|})\big), \qquad \mathrm{AIB}_B^{(l)} = \sum_{v\in V,\, t\in[T]} \mathrm{KL}\big(\mathrm{Bernoulli}(\phi_{vt}^{(l)})\,\|\,\mathrm{Bernoulli}(\alpha)\big).$$
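Both empirical AIB terms are sums of closed-form KL divergences between the learned sampling distributions and their non-informative priors; the hedged sketch below computes them from the attention parameters φ (names and tensor layout are illustrative assumptions).

```python
import math
import torch

def aib_bernoulli(phi, alpha=0.5, eps=1e-8):
    """Sum of KL( Bernoulli(phi) || Bernoulli(alpha) ) over all entries of phi."""
    return (phi * torch.log((phi + eps) / alpha)
            + (1 - phi) * torch.log((1 - phi + eps) / (1 - alpha))).sum()

def aib_categorical(phi, eps=1e-8):
    """KL( Cat(phi) || Cat(1/m) ) for one candidate set of size m = phi.numel();
    summing this quantity over nodes v and hops t gives the categorical AIB term."""
    m = phi.numel()
    return (phi * (torch.log(phi + eps) + math.log(m))).sum()
```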
To estimate XIB^(l), we set Q(Z_X^(l)) as a mixture of Gaussians with learnable parameters [27]. Specifically, for any node v, with Z_X ∼ Q(Z_X) we set Z_{X,v} ∼ Σ_{i=1}^{m} w_i Gaussian(μ_{0,i}, σ_{0,i}^2), where w_i, μ_{0,i}, σ_{0,i} are learnable parameters shared by all nodes and Z_{X,v} ⊥ Z_{X,u} if v ≠ u. We estimate XIB^(l) by using the sampled Z_X^(l):
$$\mathrm{XIB}^{(l)} = \log\frac{\mathbb{P}(Z_X^{(l)}|Z_X^{(l-1)}, Z_A^{(l)})}{\mathbb{Q}(Z_X^{(l)})} = \sum_{v\in V}\Big[\log\Phi(Z_{X,v}^{(l)}; \mu_v, \sigma_v^2) - \log\Big(\sum_{i=1}^{m} w_i\,\Phi(Z_{X,v}^{(l)}; \mu_{0,i}, \sigma_{0,i}^2)\Big)\Big],$$
where Φ(·; μ, σ²) denotes the Gaussian density.
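A hedged sketch of this Monte-Carlo estimate in PyTorch follows; shapes and parameter names are assumptions, and the mixture weights are assumed already normalized.

```python
import torch
from torch.distributions import Normal

def xib_estimate(z, mu, sigma, mix_w, mix_mu, mix_sigma):
    """One-sample estimate of XIB: sum_v [ log P(z_v | mu_v, sigma_v) - log Q(z_v) ],
    with Q a learnable mixture of diagonal Gaussians shared across nodes.

    z, mu, sigma:       sampled representations and their per-node parameters, shape (n, f)
    mix_w:              mixture weights, shape (m,)
    mix_mu, mix_sigma:  mixture component parameters, shape (m, f)
    """
    log_p = Normal(mu, sigma).log_prob(z).sum(-1)                      # per-node log P(Z | .)
    comp = Normal(mix_mu, mix_sigma).log_prob(z.unsqueeze(1)).sum(-1)  # (n, m) component log-densities
    log_q = torch.logsumexp(comp + mix_w.log(), dim=1)                 # log of the Gaussian mixture
    return (log_p - log_q).sum()
```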
Therefore, in practice, we may select proper sets of indices S_X, S_A that satisfy the condition in Proposition 3.2 and use the substitution
$$I(D; Z_X^{(L)}) \rightarrow \sum_{l\in S_A}\mathrm{AIB}^{(l)} + \sum_{l\in S_X}\mathrm{XIB}^{(l)}. \qquad (5)$$
To characterize Eq. (2), we may simply set Q_2(Y) = P(Y) and Q_1(Y_v | Z_{X,v}^(L)) = Cat(Z_{X,v}^(L) W_out). Then, the RHS of Eq. (2) reduces to the cross-entropy loss by ignoring constants, i.e.,
$$I(Y; Z_X^{(L)}) \rightarrow -\sum_{v\in V}\text{Cross-Entropy}(Z_{X,v}^{(L)} W_{out};\, Y_v). \qquad (6)$$
Other choices of Q_2(Y) may also be adopted and yield the contrastive loss [22, 28] (Appendix D). However, in our case, we use the simplest setting to illustrate the benefit of the GIB principle. Plugging Eq. (5) and Eq. (6) into Eq. (1), we obtain the objective to train our models.
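For completeness, the assembled training loss is just a weighted sum of the two surrogates; the sketch below assumes the per-layer AIB/XIB scalars have been computed elsewhere (e.g. by helpers like those above) and is not the authors' code.

```python
import torch.nn.functional as F

def gib_objective(logits, labels, train_mask, aib_terms, xib_terms, beta):
    """GIB objective of Eq. (1) after the variational substitutions (5) and (6).

    logits:     (n, K) class scores Z_X^{(L)} W_out
    aib_terms:  list of AIB^{(l)} scalars for l in S_A
    xib_terms:  list of XIB^{(l)} scalars for l in S_X
    beta:       trade-off coefficient between prediction and compression
    """
    ce = F.cross_entropy(logits[train_mask], labels[train_mask])   # surrogate for -I(Y; Z)
    ixz = sum(aib_terms) + sum(xib_terms)                          # surrogate for I(D; Z)
    return ce + beta * ixz
```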
Other Formalizations of the GIB Principle. There are also other alternative formalizations of the GIB principle, especially when modeling P(Z_A^(l) | Z_X^(l−1), A). Generally speaking, any node-pair representations, such as messages over edges in MPNN [29], can be leveraged to sample structures. Applying the GIB principle to other architectures is a promising direction for future investigation.
Related Work
GNNs learn node-level representations through message passing and aggregation from neighbors [1, 3, 29-31]. Several previous works further incorporate the attention mechanism to adaptively learn the correlation between a node and its neighbors [5, 32]. Recent literature shows that representations learned by GNNs are far from robust and can be easily attacked by malicious manipulation of either features or structure [15, 16]. Accordingly, several defense models have been proposed to increase robustness by injecting random noise into the representations [33], removing suspicious and uninformative edges [34], low-rank approximation of the adjacency matrix [35], or an additional hinge loss for certified robustness [36]. In contrast, even though not specifically designed against adversarial attacks, our model learns robust representations via the GIB principle that naturally defend against attacks. Moreover, none of these defense models has theoretical foundations except [36], which uses tools of robust optimization instead of information theory.
Recently, several works have applied a contrastive loss [28] as a regularizer for GNNs. The idea is to increase the score for positive samples while decreasing the score for negative samples. This can be further formulated as a mutual information maximization term that aims to maximize the mutual information between representations of nodes and their neighboring patches [37], between representations of sub-structures and the hidden feature vectors [38], or between representations of graphs and their sub-structures [39]. In contrast, our model focuses on the compression of node features and graph structure while at the same time improving prediction, which is orthogonal to these previous works on unsupervised representation learning with information maximization.
Another line of related work is representation learning with the IB principle. DVIB [21] first applies IB [18] to deep neural networks and shows increased robustness of the learned representations. Other methods apply IB to various domains [40, 41]. The difference is that we develop information-theoretic modeling of features, structure, and their fusion on graph-structured data. Furthermore, several works on GNNs [37-39] leverage information maximization [42] for unsupervised learning. However, we focus on learning robust representations by controlling the information in the supervised learning setting.
Experiments
The goal of our experiments is to test whether GNNs trained with the GIB objective are more robust and reliable. Specifically, we consider the following two questions: (1) Boosted by GIB, do GIB-Cat and GIB-Bern learn more robust representations than GAT to defend against attacks? (2) How does each component of GIB contribute to such robustness, especially in controlling the information from each of the two sides, the structure and the node features?
We compare GIB-Cat and GIB-Bern with baselines including GCN [3] and GAT [5]; GAT is the most relevant baseline, as GIB-Cat and GIB-Bern impose the GIB principle on top of GAT. In addition, we consider two state-of-the-art graph defense models specifically designed against adversarial attacks: GCNJaccard [34], which pre-processes the graph by deleting edges between nodes with low feature similarity, and Robust GCN (RGCN) [33], which uses Gaussian reparameterization for node features and variance-based attention. Note that RGCN essentially includes the term XIB (Eq. (3)) to control the information of node features, while it does not have the term AIB (Eq. (3)) to control the structural information. For GCNJaccard and RGCN, we perform extensive hyperparameter search as detailed in Appendix G.3. For GIB-Cat and GIB-Bern, we keep the same architectural components as GAT, and for the additional hyperparameters k and T (Algorithms 1, 2 and 3), we search k ∈ {2, 3} and T ∈ {1, 2} for each experimental setting and report the better performance. Please see Appendix G for more details.
We use three citation benchmark datasets, Cora, Pubmed and Citeseer [43], in our evaluation. In all experiments, we follow the standard transductive node classification setting and the standard train-validation-test split as in GAT [5]. The summary statistics of the datasets and their splits are shown in Table 4 in Appendix F. For all experiments, we perform the experiments over 5 random initializations and report the average performance. We always use F1-micro as the validation metric during training.
Robustness Against Adversarial Attacks
In this experiment, we compare the robustness of different models against adversarial attacks. We use Nettack [15], a strong targeted attack technique on graphs that attacks a target node by flipping edges or node features. We evaluate the models in both evasive and poisoning settings, i.e. the attack happens after or before the model is trained, respectively. We follow the setting of Nettack [15]: for each dataset, we select (i) the 10 nodes with the highest margin of classification, i.e. they are clearly correctly classified, (ii) the 10 nodes with the lowest margin but still correctly classified, and (iii) 20 more random nodes; for each target node, we train a different model for evaluation. We report the classification accuracy on these 40 targeted nodes. We enumerate the number of perturbations from 1 to 4, where each perturbation denotes a flip of a node feature or an addition or deletion of an edge. Since Nettack can only operate on Boolean features, we binarize the node features before training. Table 1 shows the results. We see that compared with GAT, GIB-Cat improves the classification accuracy by an average of 8.9% and 14.4% on Cora and Pubmed, respectively, and GIB-Bern improves the classification accuracy by an average of 8.4% and 14.6% on Cora and Pubmed, respectively, which demonstrates the effectiveness of the GIB principle in improving the robustness of GNNs. Remarkably, when the number of perturbations is 1, GIB-Cat and GIB-Bern boost accuracy over GAT (as well as the other models) by 31.3% and 34.0% on Pubmed, respectively. GIB-Cat also outperforms GCNJaccard and RGCN by an average of 10.3% and 12.3% on Cora (for GIB-Bern, 9.8% and 11.7%), and by an average of 15.0% and 14.6% on Pubmed (for GIB-Bern, 15.2% and 14.8%), although GIB-Cat and GIB-Bern are not intentionally designed to defend against attacks. For Citeseer, the performance of GIB-Cat and GIB-Bern is worse than that of GCNJaccard in the poisoning setting. This is because Citeseer has many more nodes with very low degree, often even lower than the number of specified perturbations, as shown in Table 13 in Appendix J. In this case, the most effective attack is to connect the target node to a node from a different class with very different features, which exactly matches the assumption used by GCNJaccard [34]. GCNJaccard proceeds to delete edges with dissimilar node features, resulting in the best performance on Citeseer. However, GIB does not depend on such a restrictive assumption. More detailed analysis is provided in Appendix J.
Ablation study. To see how different components of GIB contribute to the performance, we perform an ablation study on Cora, as shown in Table 2. Here, we use AIB-Cat and AIB-Bern to denote the models that only sample structures with AIB (Eq. (5)) in the objective (whose NeighborSample() function is identical to that of GIB-Cat and GIB-Bern, respectively), and use XIB to denote the model that only samples node representations with XIB (Eq. (5)) in the objective. We see that AIB (the structural term) contributes significantly to the improvement of GIB-Cat and GIB-Bern; on average, AIB-Cat (AIB-Bern) only underperforms GIB-Cat (GIB-Bern) by 0.9% (0.4%). This performance gain is due to the attacking style of Nettack, as the most effective attack is typically via structural perturbation [15], which is also confirmed in Appendix J. Therefore, we next further investigate the case where only perturbation of node features is available.
Only Feature Attacks
To further check the effectiveness of the IB term for node features, we inject random perturbations into the node features. Specifically, after the models are trained, we add independent Gaussian noise to each dimension of the node features for all nodes with increasing amplitude. We use the mean of the maximum value of each node's features as the reference amplitude r, and for each feature dimension of each node we add Gaussian noise λ · r · ε, where ε ∼ N(0, 1) and λ is the feature noise ratio. We test the models' performance with λ ∈ {0.5, 1, 1.5}. Table 3 shows the results. Across different feature noise ratios, both GIB-Cat and GIB-Bern consistently outperform the models without IB, especially when the feature noise ratio is large (λ = 1.5), and the AIB models with only the structural IB term perform slightly worse than or on par with the GIB models. This shows that GIB makes the model more robust when the feature attack becomes the main source of perturbation.
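The additive feature perturbation is easy to reproduce; below is a minimal sketch following the recipe in the text (λ ∈ {0.5, 1, 1.5}), with illustrative names.

```python
import torch

def add_feature_noise(X, noise_ratio):
    """Additive feature attack: Gaussian noise with amplitude noise_ratio * r,
    where r is the mean over nodes of each node's maximum feature value."""
    r = X.max(dim=1).values.mean()
    return X + noise_ratio * r * torch.randn_like(X)
```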
Conclusion and Discussion
In this work, we have introduced Graph Information Bottleneck (GIB), an information-theoretic principle for learning representations that capture minimal sufficient information from graph-structured data. We have also demonstrated the efficacy of GIB by evaluating the robustness of the GAT model trained under the GIB principle on adversarial attacks. Our general framework leaves many interesting questions for future investigation. For example, are there any other better instantiations of GIB, especially in capturing discrete structural information? If incorporated with a node for global aggregation, can GIB break the limitation of the local-dependence assumption? May GIB be applied to other graph-related tasks including link prediction and graph classification?
Broader Impact
Who may benefit from this research: Graphs have been used to represent a vast amount of real-world data from social science [44], biology [45], geographical mapping [46], finance [47] and recommender systems [48], because of their flexibility in modeling both the relations among the data (structure) and the content of the data (features). Graph neural networks (GNNs), which naturally entangle both aspects of the data in an expressive way, have attracted unprecedented attention from both academia and industry across a wide range of disciplines. However, GNNs share a common issue with other techniques based on neural networks: they are very sensitive to data noise and are fragile to model attacks. This drawback raises potential safety problems when deploying GNNs in practical systems or using them to process data in disciplines that heavily emphasize unbiased analysis. The Graph Information Bottleneck (GIB) principle proposed in this work paves a principled way to alleviate the above problem by increasing the robustness of GNN models. Our work further alleviates concerns about the usage of GNN techniques in practical systems, such as recommender systems and social media, or for analyzing data in other disciplines, including physics, biology and social science. Ultimately, our work increases the interaction between AI and machine learning techniques and other aspects of our society, and could achieve far-reaching impact.
Who may be put at disadvantage from this research: Not applicable.
What are the consequences of failure of the system: Not applicable.
Does the task/method leverage biases in the data: The proposed GIB principle and the GIB-GAT model as an instantiation of GIB leverage the node features and structural information which in general are not believed to include undesirable biases. The datasets to evaluate our approaches are among the most widely-used benchmarks, which in general are not believed to include undesirable biases as well.
A Preliminaries for Information Bottleneck

Here we briefly review the Information Bottleneck (IB) principle and its application to representation learning. Given the input data D and target Y, and a stochastic encoding Z of D by P(Z|D) that satisfies the Markov chain Z − D − Y, IB has the following objective:
$$\min_{\mathbb{P}(Z|D)}\ -I(Y; Z) + \beta I(D; Z). \qquad (7)$$
It also has an equivalent constrained form:
$$\max_{\mathbb{P}(Z|D):\, I(D;Z)\le I_c}\ I(Y; Z). \qquad (8)$$
Intuitively, Eq. (7) or (8) encourages the representation Z to maximally capture the information in Y, while controlling the complexity of the representation in terms of I(D; Z). When increasing β from 0 to some large value, we are essentially using a straight line with slope β to sweep out the Pareto frontier of I(Y; Z) vs. I(D; Z) as given by Eq. (8). Using the information diagram (Fig. 3), where we represent the information of D and Y as circles and their shared part as the overlapping region of the circles, IB encourages Z to cover as much of I(D; Y) as possible, and to cover as little of H(D|Y) (the irrelevant part of the information) as possible. An optimal representation is defined as the minimal sufficient representation [49] that only covers I(D; Y). In practice, due to the expressiveness of the models and different choices of β in Eq. (7), this optimum can hardly be reached and may only be approached. It is an interesting future direction to study, when sweeping β, how near one gets to the optimal representation on the diagram of I(Y; Z) vs. I(D; Z).
B Proof for Proposition 3.1
We restate Proposition 3.1: for any distributions Q_1(Y_v | Z_{X,v}^(L)) for v ∈ V and Q_2(Y), we have
$$I(Y; Z_X^{(L)}) \ge 1 + \mathbb{E}\Big[\log \prod_{v\in V}\frac{\mathbb{Q}_1(Y_v|Z_{X,v}^{(L)})}{\mathbb{Q}_2(Y)}\Big] - \mathbb{E}_{\mathbb{P}(Y)\mathbb{P}(Z_X^{(L)})}\Big[\prod_{v\in V}\frac{\mathbb{Q}_1(Y_v|Z_{X,v}^{(L)})}{\mathbb{Q}_2(Y)}\Big]. \qquad (9)$$

Proof. We use the Nguyen, Wainwright & Jordan bound I_NWJ [22, 23]:

Lemma B.1. [22, 23] For any two random variables X_1, X_2 and any function g with g(X_1, X_2) ∈ R, we have
$$I(X_1; X_2) \ge \mathbb{E}[g(X_1, X_2)] - \mathbb{E}_{\mathbb{P}(X_1)\mathbb{P}(X_2)}[\exp(g(X_1, X_2) - 1)].$$
We apply the above lemma to (Y, Z_X^(L)) and plug in g(Y, Z_X^(L)) = 1 + log Π_{v∈V} Q_1(Y_v | Z_{X,v}^(L)) / Q_2(Y), which directly yields Eq. (9).

C Proof for Proposition 3.2

We restate Proposition 3.2:
$$I(D; Z_X^{(L)}) \le I\big(D; \{Z_X^{(l)}\}_{l\in S_X}\cup\{Z_A^{(l)}\}_{l\in S_A}\big) \le \sum_{l\in S_X}\mathrm{XIB}^{(l)} + \sum_{l\in S_A}\mathrm{AIB}^{(l)}, \quad \text{where} \qquad (10)$$
$$\mathrm{AIB}^{(l)} = \mathbb{E}\Big[\log\frac{\mathbb{P}(Z_A^{(l)}|A, Z_X^{(l-1)})}{\mathbb{Q}(Z_A^{(l)})}\Big], \qquad \mathrm{XIB}^{(l)} = \mathbb{E}\Big[\log\frac{\mathbb{P}(Z_X^{(l)}|Z_X^{(l-1)}, Z_A^{(l)})}{\mathbb{Q}(Z_X^{(l)})}\Big]. \qquad (11)$$
Proof. The first inequality, I(D; Z_X^(L)) ≤ I(D; {Z_X^(l)}_{l∈S_X} ∪ {Z_A^(l)}_{l∈S_A}), directly results from the data processing inequality [17] and the Markov property D ⊥ Z_X^(L) | {Z_X^(l)}_{l∈S_X} ∪ {Z_A^(l)}_{l∈S_A}.

To prove the second inequality, we define an order "≺" of the random variables in {Z_X^(l)}_{l∈S_X} ∪ {Z_A^(l)}_{l∈S_A} such that 1) for two different integers l < l', Z_X^(l), Z_A^(l) ≺ Z_X^(l'), Z_A^(l'); and 2) for one integer l, Z_A^(l) ≺ Z_X^(l). Based on this order, define a sequence of sets
$$H_A^{(l)} = \{Z_X^{(l_1)}, Z_A^{(l_2)} \,|\, l_1 < l,\ l_2 < l,\ l_1\in S_X,\ l_2\in S_A\}, \quad H_X^{(l)} = \{Z_X^{(l_1)}, Z_A^{(l_2)} \,|\, l_1 < l,\ l_2 \le l,\ l_1\in S_X,\ l_2\in S_A\}.$$
We may decompose I(D; {Z_X^(l)}_{l∈S_X} ∪ {Z_A^(l)}_{l∈S_A}) with respect to this order:
$$I\big(D; \{Z_X^{(l)}\}_{l\in S_X}\cup\{Z_A^{(l)}\}_{l\in S_A}\big) = \sum_{l\in S_A} I(D; Z_A^{(l)} | H_A^{(l)}) + \sum_{l\in S_X} I(D; Z_X^{(l)} | H_X^{(l)}).$$
For each term of the second sum,
$$I(D; Z_X^{(l)} | H_X^{(l)}) \overset{1)}{\le} I(D, Z_X^{(l-1)}, Z_A^{(l)}; Z_X^{(l)} | H_X^{(l)}) \overset{2)}{=} I(Z_X^{(l-1)}, Z_A^{(l)}; Z_X^{(l)} | H_X^{(l)}) + I(D; Z_X^{(l)} | H_X^{(l)}, Z_X^{(l-1)}, Z_A^{(l)})$$
$$\overset{3)}{=} I(Z_X^{(l-1)}, Z_A^{(l)}; Z_X^{(l)} | H_X^{(l)}) + 0 \overset{4)}{\le} I(Z_X^{(l-1)}, Z_A^{(l)}; Z_X^{(l)}) \overset{5)}{=} \mathrm{XIB}^{(l)} - \mathrm{KL}\big(\mathbb{P}(Z_X^{(l)})\,\|\,\mathbb{Q}(Z_X^{(l)})\big) \le \mathrm{XIB}^{(l)},$$
and an analogous chain bounds each term I(D; Z_A^(l) | H_A^(l)) of the first sum by AIB^(l). Here, 1) and 2) use basic properties of mutual information, 3) uses X ⊥ Z_A^(l) | {A, Z_X^(l−1)} and D ⊥ Z_X^(l) | {Z_A^(l), Z_X^(l−1)}, 4) uses H_A^(l) ⊥ Z_A^(l) | {Z_X^(l−1), A} and H_X^(l) ⊥ Z_X^(l) | {Z_X^(l−1), Z_A^(l)}, and 5) uses the definitions of AIB^(l) and XIB^(l).
D The Contrastive Loss Derived from the Variational Bound Eq. (2)
To characterize Eq. (2), we may also use a contrastive loss [22, 28], which empirically may sometimes improve the robustness of the model. Concretely, we keep Q_1(Y_v | Z_{X,v}^(L)) the same as that used to derive Eq. (6), i.e., Q_1(Y_v | Z_{X,v}^(L)) = Cat(Z_{X,v}^(L) W_out), and set
$$\mathbb{Q}_2(Y) = \mathbb{E}_{\mathbb{P}(Z_X^{(L)})\mathbb{P}(Z_X'^{(L)})}\Big[\prod_{v\in V}\tfrac{1}{2}\big(\mathbb{Q}_1(Y_v|Z_{X,v}^{(L)}) + \mathbb{Q}_1(Y_v|Z_{X,v}'^{(L)})\big)\Big].$$
Here, P(Z_X'^(L)) refers to the distribution of the last-layer node representations after we replace A with a random graph structure A' ∈ R^{n×n}, where A' is uniformly sampled under the constraint that it has the same number of edges as A. When using this contrastive loss, we simply use the estimation of Q_2(Y) based on the sampled Z_{X,v}^(L) and Z_{X,v}'^(L). Moreover, the last term of Eq. (2) is empirically close to 1, so we ignore it and other constants in Eq. (2). Overall, we have the substitution for the contrastive loss
$$I(Y; Z_X^{(L)}) \rightarrow \sum_{v\in V}\Big[\log h(Y_v; Z_{X,v}^{(L)}) - \log\big(h(Y_v; Z_{X,v}^{(L)}) + h(Y_v; Z_{X,v}'^{(L)})\big)\Big], \qquad (12)$$
where
$$h(Y_v; Z_{X,v}) = \frac{\exp(Z_{X,v} W_{out}[Y_v])}{\sum_{i=1}^{K}\exp(Z_{X,v} W_{out}[i])}.$$
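Once the class-probability ratio h is available from a softmax, Eq. (12) is short to compute; the hedged sketch below assumes logits computed on the true structure A and on a randomly rewired structure of equal edge count (names are illustrative).

```python
import torch

def contrastive_surrogate(logits_true, logits_rand, labels):
    """Contrastive substitution of Eq. (12) for I(Y; Z_X^{(L)}).

    logits_true: (n, K) scores Z_X^{(L)} W_out computed with the real structure A
    logits_rand: (n, K) scores computed with a random structure of the same edge count
    """
    h_true = torch.softmax(logits_true, dim=1).gather(1, labels.view(-1, 1)).squeeze(1)
    h_rand = torch.softmax(logits_rand, dim=1).gather(1, labels.view(-1, 1)).squeeze(1)
    return (torch.log(h_true) - torch.log(h_true + h_rand)).sum()
```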
E Permutation Invariance of GIB-Cat and GIB-Bern
Let Π ∈ R^{n×n} denote a permutation matrix in which each row and each column contains exactly one 1 and all remaining entries are 0. For any variable in GIB-Cat or GIB-Bern, we use the subscript Π to denote the corresponding variable obtained after permuting the node indices of the input data, i.e., D = (X, A) → Π(D) = (ΠX, ΠAΠ^T). For example, Z_{X,Π}^(l) denotes the node representations after l layers of GIB-Cat or GIB-Bern based on the input data Π(D). Moreover, the matrix Π also defines a bijective mapping π: V → V, where π(v) = u iff Π_{uv} = 1. We also use "=_d" to denote that two random variables share the same distribution.

We now formally restate the permutation-invariance property of GIB-Cat and GIB-Bern: suppose Π ∈ R^{n×n} is any permutation matrix; if the input graph-structured data becomes Π(D) = (ΠX, ΠAΠ^T), the corresponding node representations output by GIB-Cat or GIB-Bern satisfy Z_{X,Π}^(L) =_d ΠZ_X^(L), where Z_X^(L) is the output node representation based on the original input data D = (X, A).

Proof. We use induction. Specifically, we only need to show that, for a certain l ∈ [L], if the node representations satisfy Z_{X,Π}^(l−1) =_d ΠZ_X^(l−1), then Z_{X,Π}^(l) =_d ΠZ_X^(l). Following the steps of Algorithm 1:
• Step 2 implies Z̃_{X,v,Π}^(l−1) =_d Z̃_{X,π(v)}^(l−1), because τ is an element-wise operation.
• Step 3: for both NeighborSample procedures (categorical, Algorithm 2, and Bernoulli, Algorithm 3), φ_{vt,Π}^(l) =_d φ_{π(v)t}^(l). Here, A → ΠAΠ^T implies V_vt → V_{π(v)t}, and we assume that φ_{vt,Π}^(l), φ_{π(v)t}^(l) are represented as vectors in R^{n×1} whose u-th components are 0 if π^{-1}(u) ∉ V_vt. The sampling sub-step then implies Z_{A,v,Π}^(l) =_d π(Z_{A,π(v)}^(l)), where π(S) = {π(v) | v ∈ S} for a set S ⊆ V.
• Step 4 implies Z̄_{X,v,Π}^(l) =_d Z̄_{X,π(v)}^(l).
• Steps 5–6 imply μ_{v,Π}^(l) =_d μ_{π(v)}^(l) and σ_{v,Π}^{2(l)} =_d σ_{π(v)}^{2(l)}.
• Step 7 implies Z_{X,v,Π}^(l) =_d Z_{X,π(v)}^(l),
which indicates Z_{X,Π}^(l) =_d ΠZ_X^(l) and concludes the proof.

F Summary of the Datasets

Table 4 summarizes statistics of the datasets (Cora, Pubmed, Citeseer [43]) we use, as well as the standard train-validation-test split used in the experiments.
G.1 Implementation Details for the GIB-Cat and GIB-Bern
The architecture of GIB-Cat and GIB-Bern follows Alg. 1 (and Alg. 2 and 3 for the respective neighbor sampling). We follow GAT [5]'s default architecture, in which we use 8 attention heads, LeakyReLU as the nonlinear activation τ, and a feature dropout rate of 0.6 between layers. We follow GAT's default learning rate, i.e. 0.01 for Cora and Citeseer, and 5×10^-3 for Pubmed. As stated in the main text, the training objective is Eq. (1), substituting in Eq. (5) and (6). To allow more flexibility (in a similar spirit to β-VAE [41]), we allow the coefficients before AIB and XIB to be different, and denote them as β_1 and β_2. In summary, the objective is written as:
$$\mathcal{L} = \sum_{v\in V}\text{Cross-Entropy}(Z_{X,v}^{(L)} W_{out};\, Y_v) + \beta_1\sum_{l\in S_A}\mathrm{AIB}^{(l)} + \beta_2\sum_{l\in S_X}\mathrm{XIB}^{(l)}. \qquad (13)$$
A summary of the hyperparameter scope is in Table 5. In Tables 6 and 7, we provide the hyperparameters that produce the results in Section 5.1, and in Table 8, we provide the hyperparameters that produce the results in Section 5.2.
G.3 Implementation Details for RGCN and GCNJaccard
We used the implementation in this repository: https://github.com/DSE-MSU/DeepRobust. We perform hyperparameter tuning for both baselines for the adversarial attack experiment in Section 5.1; the best sets of hyperparameters for both models are listed in Tables 9, 10 and 11.
G.4 Additional Details for Adversarial Attack Experiment
We use the implementation of Nettack [15] in the repository https://github.com/DSE-MSU/DeepRobust with default settings. As stated in the main text, for each dataset we select 40 nodes in the test set to attack, with 10 having the highest margin of classification, 10 having the lowest margin of classification (but still correctly classified), and 20 random nodes. For each target node, we independently train a different model and evaluate its performance on the target node in both the evasive and the poisoning setting. Different from [15], which only keeps the largest connected component of the graph and uses a random split, we still use the full graph and the standard split to keep consistent settings across experiments, which makes the defense even harder than in [15]. For each dataset and each number of perturbations (1, 2, 3, 4), we repeat the above experiment 5 times with random seeds 0, 1, 2, 3, 4, and report the average accuracy on the targeted nodes; therefore, each cell in Table 1 is the mean and std. of the performance of 200 model instances (5 seeds × 40 targeted nodes, each training one model instance). Across the 5 runs of the experiment, the 20 nodes with the highest and lowest margins of classification are kept the same, and the 20 random nodes are sampled randomly and then fixed. We also make sure that for the same seed, different models are evaluated against the same 40 target nodes, to eliminate fluctuations between models due to random sampling.
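The margin-based selection of the 40 target nodes can be sketched as follows; this is a hedged illustration of the protocol, not the Nettack code, and all names are assumptions.

```python
import torch

def select_target_nodes(logits, labels, test_idx, n_extreme=10, n_random=20, seed=0):
    """Pick attack targets: the n_extreme correctly classified test nodes with the highest
    classification margin, the n_extreme with the lowest (but still positive) margin,
    and n_random additional random test nodes."""
    probs = torch.softmax(logits[test_idx], dim=1)
    p_true = probs.gather(1, labels[test_idx].view(-1, 1)).squeeze(1)
    p_other = probs.scatter(1, labels[test_idx].view(-1, 1), 0.0).max(dim=1).values
    margin = p_true - p_other                 # > 0 iff the node is correctly classified
    order = torch.argsort(margin, descending=True)
    order = order[(margin > 0)[order]]        # keep only correctly classified nodes
    high, low = test_idx[order[:n_extreme]], test_idx[order[-n_extreme:]]
    g = torch.Generator().manual_seed(seed)
    rand = test_idx[torch.randperm(len(test_idx), generator=g)[:n_random]]
    return torch.cat([high, low, rand])
```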
G.5 Additional Details for Feature Attack Experiment
As before, for each model to compare, we train 5 instances with seeds 0, 1, 2, 3, 4. After training, for each seed and each specified feature noise ratio λ, we perform 5 random node feature attacks by adding independent Gaussian noise λ · r · ε to each dimension of the node features, where r is the mean of the maximum value of each node's features and ε ∼ N(0, 1). Therefore, each number in Table 3 is the mean and std. of 25 instances (5 seeds × 5 attacks per seed).
H Training time for GIB-Cat and GIB-Bern
The training time of GIB-Cat and GIB-Bern is of the same order as GAT with the same underlying architecture. For example, with 2 layers, GIB-Cat takes 98s (GIB-Bern takes 84s) to train for 2000 epochs on an NVIDIA GeForce RTX 2080 GPU, while GAT takes 51s to train on the same device. The similar order of training time is due to the fact that they have a similar number of parameters and similar complexity. Compared to GAT, GIB-Cat and GIB-Bern introduce minimally more parameters. In this work, on the structural side, we use the attention weights of GAT as the parameters to encode the structural representation, which keeps the same number of parameters as GAT. On the feature side, we set S_X = {L − 1}, which only requires predicting the diagonal variance of the Gaussian in addition to the mean, introducing a small number of additional parameters. Therefore, in total, GIB-Cat and GIB-Bern have similar complexity. The added training time is due to the sampling of edges and node features during training. We expect that when GIB is applied to other GNNs, the augmented model will have similar complexity and training time.
I Additional experiments for Deep Graph Infomax (DGI)
Here we perform an additional experiment for adversarial attacks on Cora using Nettack. The results are in Table 12. We see that both GIB-Cat and GIB-Bern outperform DGI by a large margin.
J More Detailed Analysis of Adversarial Attack in Section 5.1

Table 13 summarizes the statistics of the target nodes and the adversarial perturbations by Nettack for Cora, Pubmed and Citeseer. We have the following observations:
Figure 1: Graph Information Bottleneck is to optimize the representation Z to capture the minimal sufficient information.

Figure 2: Our GIB principle leverages the local-dependence assumption. (a) The Markov chain defines the search space …

Figure 3: Information diagram for the Information Bottleneck (IB). Also plotted are the minimal sufficient information as covered by I(D; Y) and the overfitting part that occupies parts of H(D|Y).
Table 1: Average classification accuracy (%) for the targeted nodes under direct attack. Each number is the average accuracy over the 40 targeted nodes across 5 random initializations of the experiment. Bold font denotes the top two models.

Model | Clean (%) | Evasive (%) under 1, 2, 3, 4 perturbations | Poisoning (%) under 1, 2, 3, 4 perturbations

Cora
GCN | 80.0±7.87 | 51.5±4.87 38.0±6.22 31.0±2.24 26.0±3.79 | 47.5±7.07 39.5±2.74 30.0±5.00 26.5±3.79
GCNJaccard | 75.0±5.00 | 48.5±6.75 36.0±6.51 32.0±3.25 30.0±3.95 | 47.0±7.37 38.0±6.22 33.5±3.79 28.5±3.79
RGCN | 80.0±4.67 | 49.5±6.47 36.0±5.18 30.5±3.25 25.5±2.09 | 46.5±5.75 35.5±3.70 29.0±3.79 25.5±2.73
GAT | 77.8±3.97 | 48.0±8.73 39.5±5.70 36.5±5.48 32.5±5.30 | 50.5±5.70 38.0±5.97 33.5±2.85 26.0±3.79
GIB-Cat | 77.6±2.84 | 63.0±4.81 52.5±3.54 44.5±5.70 36.5±6.75 | 60.0±6.37 50.0±2.50 39.5±5.42 30.0±3.95
GIB-Bern | 78.4±4.07 | 64.0±5.18 51.5±4.54 43.0±3.26 37.5±3.95 | 61.5±4.18 46.0±4.18 36.5±4.18 31.5±2.85

Pubmed
GCN | 82.6±6.98 | 39.5±4.81 32.0±4.81 31.0±5.76 31.0±5.76 | 36.0±4.18 32.5±6.37 31.0±5.76 28.5±5.18
GCNJaccard | 82.0±7.15 | 37.5±5.30 31.5±5.18 30.0±3.95 30.0±3.95 | 36.0±3.79 32.5±4.67 31.0±4.87 28.5±4.18
RGCN | 79.0±5.18 | 39.5±5.70 33.0±4.80 31.5±4.18 30.0±5.00 | 38.5±4.18 31.5±2.85 29.5±3.70 27.0±3.70
GAT | 78.6±6.70 | 41.0±8.40 33.5±4.18 30.5±4.47 31.0±4.18 | 39.5±3.26 31.0±4.18 30.0±3.06 25.5±5.97
GIB-Cat | 85.1±6.90 | 72.0±3.26 51.0±5.18 37.5±5.30 31.5±4.18 | 71.0±4.87 48.0±3.26 37.5±1.77 28.5±2.24
GIB-Bern | 86.2±6.54 | 76.0±3.79 50.5±4.11 37.5±3.06 31.5±1.37 | 72.5±4.68 48.0±2.74 36.0±2.85 26.5±2.85

Citeseer
GCN | 71.8±6.94 | 42.5±7.07 27.5±6.37 18.0±3.26 15.0±2.50 | 29.0±7.20 20.5±1.12 17.5±1.77 13.0±2.09
GCNJaccard | 72.5±9.35 | 41.0±6.75 32.5±3.95 20.5±3.70 13.0±1.11 | 42.5±5.86 30.5±5.12 17.5±1.76 14.0±1.36
RGCN | 73.5±8.40 | 41.5±7.42 24.5±6.47 18.5±6.52 13.0±1.11 | 31.0±5.48 19.5±2.09 13.5±2.85 5.00±1.77
GAT | 72.3±8.38 | 49.0±9.12 33.0±5.97 22.0±4.81 18.0±3.26 | 38.0±5.12 23.5±4.87 16.5±4.54 12.0±2.09
GIB-Cat | 68.6±4.90 | 51.0±4.54 39.0±4.18 32.0±4.81 26.5±4.54 | 30.0±9.19 14.0±5.76 9.50±3.26 6.50±2.24
GIB-Bern | 71.8±5.03 | 49.0±7.42 37.5±7.71 32.5±4.68 23.5±7.42 | 35.0±6.37 19.5±4.81 11.5±3.79 6.00±2.85
Table 2: Average classification accuracy (%) for the ablations of GIB-Cat and GIB-Bern on the Cora dataset.

Model | Clean (%) | Evasive (%) under 1, 2, 3, 4 perturbations | Poisoning (%) under 1, 2, 3, 4 perturbations
XIB | 76.3±2.90 | 57.0±5.42 47.5±7.50 39.5±6.94 33.0±3.71 | 54.5±2.09 41.0±3.79 36.0±5.18 31.0±4.54
AIB-Cat | 78.7±4.95 | 62.5±5.86 51.5±5.18 43.0±3.26 36.0±3.35 | 60.5±3.26 47.5±5.00 36.0±3.35 31.5±6.27
AIB-Bern | 79.9±3.78 | 64.0±4.50 51.5±6.50 42.0±5.40 37.0±5.70 | 58.5±3.80 46.0±4.50 39.0±4.20 30.0±3.10
GIB-Cat | 77.6±2.84 | 63.0±4.81 52.5±3.54 44.5±5.70 36.5±6.75 | 60.0±6.37 50.0±2.50 39.5±5.42 30.0±3.95
GIB-Bern | 78.4±4.07 | 64.0±5.18 51.5±4.54 43.0±3.26 37.5±3.95 | 61.5±4.18 46.0±4.18 36.5±4.18 31.5±2.85
Table 3: Classification F1-micro (%) for the trained models with increasing additive feature noise. Bold font denotes the top 2 models.

Dataset | Model | λ = 0.5 | λ = 1 | λ = 1.5

Cora
GCN | 64.0±2.05 | 41.3±2.05 | 31.4±2.81
GCNJaccard | 61.1±2.18 | 41.2±2.28 | 31.8±2.63
RGCN | 57.7±2.27 | 39.1±1.58 | 29.6±2.47
GAT | 62.5±1.97 | 41.7±2.32 | 29.8±2.98
AIB-Cat | 67.9±2.65 | 49.6±5.35 | 38.4±5.06
AIB-Bern | 68.8±1.85 | 49.0±2.87 | 37.1±4.47
GIB-Cat | 67.1±2.21 | 49.1±3.67 | 37.5±4.76
GIB-Bern | 69.0±1.91 | 51.3±2.62 | 38.9±3.38

Pubmed
GCN | 61.3±1.52 | 50.2±2.08 | 44.3±1.43
GCNJaccard | 62.7±1.25 | 51.9±1.53 | 45.1±2.04
RGCN | 58.4±1.74 | 49.0±1.65 | 43.9±1.29
GAT | 62.7±1.68 | 50.2±2.35 | 43.7±2.43
AIB-Cat | 64.5±2.13 | 50.9±3.83 | 43.0±3.73
AIB-Bern | 61.1±2.70 | 47.8±3.65 | 42.0±4.21
GIB-Cat | 67.1±4.33 | 57.2±5.27 | 51.5±4.84
GIB-Bern | 64.9±2.52 | 54.7±1.83 | 48.2±2.10

Citeseer
GCN | 55.9±1.33 | 40.6±1.83 | 32.8±2.19
GCNJaccard | 56.8±1.49 | 41.3±1.81 | 33.1±2.27
RGCN | 51.4±2.00 | 36.5±2.38 | 29.5±2.17
GAT | 55.8±1.43 | 40.8±1.77 | 33.8±1.93
AIB-Cat | 55.1±1.26 | 43.1±2.46 | 35.6±3.19
AIB-Bern | 55.8±2.01 | 43.3±1.67 | 36.3±2.47
GIB-Cat | 54.9±1.39 | 42.0±1.92 | 34.8±1.75
GIB-Bern | 54.4±5.98 | 50.3±4.33 | 46.1±2.47
Table 4: Summary of the datasets and splits in our experiments.

Statistic | Cora | Pubmed | Citeseer
# Nodes | 2708 | 19717 | 3327
# Edges | 5429 | 44338 | 4732
# Features/Node | 1433 | 500 | 3703
# Classes | 7 | 3 | 6
# Training Nodes | 140 | 60 | 120
# Validation Nodes | 500 | 500 | 500
# Test Nodes | 1000 | 1000 | 1000
G Implementation Details for the GIB-Cat, GIB-Bern and Other Compared Models

For all experiments and all models, the best models are selected according to the classification accuracy on the validation set. All models are trained for a total of 2000 epochs. For all experiments, we run 5 random seeds (0, 1, 2, 3, 4) and report the average performance and standard deviation. The models are all trained on NVIDIA GeForce RTX 2080 GPUs, together with Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz CPUs. We use PyTorch [50] and PyTorch Geometric [51] for constructing the GNNs and for evaluation. Project website and code can be found at http://snap.stanford.edu/gib/. In Sections G.1, G.2 and G.3, we detail the hyperparameter settings for Section 5.1, and in Sections G.4 and G.5, we provide additional details for the experiments.
In this work, we set the index sets S_A = [L] = {1, 2, ..., L} and S_X = {L − 1}, which satisfies Proposition 3.2. For XIB, we use a mixture of Gaussians as the variational marginal distribution Q(Z_X). For the mixture of Gaussians, we use m = 100 components with learnable weights, where each component is a diagonal Gaussian with learnable mean and standard deviation. This flexible variational marginal allows us to approximate the true marginal distribution P(Z_X). For the reparameterization in AIB, we use Gumbel-softmax [24, 25] with temperature τ. For GIB-Cat, the number of neighbors k to be sampled is a hyperparameter. For GIB-Bern, we use Bernoulli(α) as the non-informative prior, where we fix α = 0.5. To facilitate learning at the beginning, for the first 25% of the epochs we do not impose AIB or XIB, then gradually anneal up both β_1 and β_2 during the 25%-50% portion of training, and keep them at their final values afterwards. For the experiments in Sections 5.1 and 5.2, we perform a hyperparameter search over β_1 ∈ {0.1, 0.01, 0.001}, β_2 ∈ {0.01, 0.1}, T ∈ {1, 2}, τ ∈ {0.05, 0.1, 1}, k ∈ {2, 3} for each dataset, and report the configuration with higher validation F1-micro.
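The annealing of β_1 and β_2 described above can be implemented as a simple piecewise-linear ramp; the function below is an illustrative sketch of such a schedule, not the authors' exact code.

```python
def beta_schedule(epoch, total_epochs, beta_final):
    """Warm-up schedule: no AIB/XIB pressure for the first 25% of training,
    a linear ramp between 25% and 50% of the epochs, then the final value."""
    start, end = 0.25 * total_epochs, 0.50 * total_epochs
    if epoch < start:
        return 0.0
    if epoch < end:
        return beta_final * (epoch - start) / (end - start)
    return beta_final
```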
Table 5: Hyperparameter scope for Sections 5.1 and 5.2 for GIB-Cat and GIB-Bern.

Hyperparameter | Value / search space | Type
S_A | [L] | Fixed*
S_X | {L − 1} | Fixed
Number m of mixture components for Q(Z_X) | 100 | Fixed
β_1 | {0.1, 0.01, 0.001} | Choice†
β_2 | {0.1, 0.01} | Choice
τ | {0.05, 0.1, 1} | Choice
k | {2, 3} | Choice
T | {1, 2} | Choice

* Fixed: a constant value. † Choice: chosen from a set of discrete values.
Table 6: Hyperparameters for the adversarial attack experiment for GIB-Cat and GIB-Bern.

Dataset | Model | β_1 | β_2 | τ | k | T
Cora | GIB-Cat | 0.001 | 0.01 | 1 | 3 | 2
Cora | GIB-Bern | 0.001 | 0.01 | 0.1 | - | 2
Pubmed | GIB-Cat | 0.001 | 0.01 | 1 | 3 | 2
Pubmed | GIB-Bern | 0.001 | 0.01 | 0.1 | - | 2
Citeseer | GIB-Cat | 0.001 | 0.01 | 0.1 | 2 | 2
Citeseer | GIB-Bern | 0.001 | 0.01 | 0.05 | - | 2
Table 7: Hyperparameters for the adversarial attack experiment for the ablations of GIB-Cat and GIB-Bern.

Model | β_1 | β_2 | τ | k | T
AIB-Cat | - | 0.01 | 1 | 3 | 2
AIB-Bern | - | 0.01 | 0.1 | - | 2
XIB | 0.001 | - | - | - | 2
Table 8: Hyperparameters for the feature attack experiment (Section 5.2) for GIB-Cat and GIB-Bern.

Dataset | Model | β_1 | β_2 | τ | k | T
Cora | GIB-Cat | 0.01 | 0.01 | 0.1 | 2 | 2
Cora | AIB-Cat | - | 0.01 | 0.1 | 2 | 2
Cora | GIB-Bern | 0.001 | 0.01 | 0.05 | - | 2
Cora | AIB-Bern | - | 0.01 | 0.05 | - | 2
Pubmed | GIB-Cat | 0.001 | 0.01 | 1 | 3 | 2
Pubmed | AIB-Cat | - | 0.01 | 1 | 3 | 2
Pubmed | GIB-Bern | 0.01 | 0.01 | 0.05 | - | 1
Pubmed | AIB-Bern | - | 0.01 | 0.05 | - | 1
Citeseer | GIB-Cat | 0.001 | 0.01 | 0.1 | 2 | 2
Citeseer | AIB-Cat | - | 0.01 | 0.1 | 2 | 2
Citeseer | GIB-Bern | 0.1 | 0.01 | 0.05 | - | 2
Citeseer | AIB-Bern | - | 0.01 | 0.05 | - | 2

G.2 Implementation Details for GCN and GAT

We follow the default settings of GCN [3] and GAT [5], as implemented in https://github.com/rusty1s/pytorch_geometric/blob/master/examples/gcn.py and https://github.com/rusty1s/pytorch_geometric/blob/master/examples/gat.py, respectively. Importantly, we keep the dropout on the attention weights as in the original GAT. Whenever possible, we keep the same architecture choices between GAT and GIB-Cat (and GIB-Bern) as detailed in Section G.1, for a fair comparison.
We first tune the latent dimension, learning rate, and weight decay for both models. Specifically, we search within {16, 32, 64, 128} for the latent dimension, {10^-3, 10^-2, 10^-1} for the learning rate, and {10^-4, 5×10^-4, 10^-3} for the weight decay. For GCNJaccard, we additionally fine-tune the threshold hyperparameter, which is used to decide whether two neighboring nodes remain connected; we search the threshold within {0.01, 0.03, 0.05}. For RGCN, we additionally fine-tune β_1 within {10^-4, 5×10^-4, 10^-3} and γ within {0.1, 0.3, 0.5, 0.9}. The best sets of hyperparameters for both models are given in Tables 9, 10 and 11.
Table 9: Hyperparameters of baselines used on the Citeseer dataset.

Hyperparameter | RGCN | GCNJaccard
latent dim | 64 | 16
learning rate | 10^-2 | 10^-2
weight decay | 5 × 10^-4 | 5 × 10^-4
threshold | - | 5 × 10^-2
β_1 | 5 × 10^-4 | -
γ | 0.3 | -

Table 10: Hyperparameters of baselines used on the Cora dataset.

Hyperparameter | RGCN | GCNJaccard
latent dim | 64 | 16
learning rate | 10^-2 | 10^-2
weight decay | 5 × 10^-4 | 5 × 10^-4
threshold | - | 5 × 10^-2
β_1 | 5 × 10^-4 | -
γ | 0.3 | -

Table 11: Hyperparameters of baselines used on the Pubmed dataset.

Hyperparameter | RGCN | GCNJaccard
latent dim | 16 | 16
learning rate | 10^-2 | 10^-2
weight decay | 5 × 10^-4 | 5 × 10^-4
threshold | - | 5 × 10^-2
β_1 | 5 × 10^-4 | -
γ | 0.1 | -
Table 12: Average classification accuracy (%) for the targeted nodes under direct attack on Cora. Each number is the average accuracy over the 40 targeted nodes across 5 random initializations of the experiment. Bold font denotes the top two models.

Model | Clean (%) | Evasive (%) under 1, 2, 3, 4 perturbations | Poisoning (%) under 1, 2, 3, 4 perturbations
DGI | 83.2±4.82 | 54.5±4.81 41.5±2.24 35.5±5.42 31.0±3.79 | 53.5±7.42 38.5±4.18 33.0±5.42 29.0±3.79
GIB-Cat | 77.6±2.84 | 63.0±4.81 52.5±3.54 44.5±5.70 36.5±6.75 | 60.0±6.37 50.0±2.50 39.5±5.42 30.0±3.95
GIB-Bern | 78.4±4.07 | 64.0±5.18 51.5±4.54 43.0±3.26 37.5±3.95 | 61.5±4.18 46.0±4.18 36.5±4.18 31.5±2.85
Table 13: Statistics of the target nodes and the adversarial perturbations by Nettack in Section 5.1.

Statistic | Cora | Pubmed | Citeseer
Fraction of degree-1 nodes among target nodes | 0.215 | 0.425 | 0.500
Acknowledgments and Disclosure of Funding

We thank the anonymous reviewers for providing feedback on our manuscript. Hongyu Ren is supported by the Masason Foundation Fellowship. Jure Leskovec is a Chan Zuckerberg Biohub investigator. We also gratefully acknowledge the support of DARPA under Nos.
Inductive representation learning on large graphs. W Hamilton, Z Ying, J Leskovec, Advances in neural information processing systems. W. Hamilton, Z. Ying, and J. Leskovec, "Inductive representation learning on large graphs," in Advances in neural information processing systems, 2017.
Variational graph auto-encoders. T N Kipf, M Welling, arXiv:1611.07308arXiv preprintT. N. Kipf and M. Welling, "Variational graph auto-encoders," arXiv preprint arXiv:1611.07308, 2016.
Semi-supervised classification with graph convolutional networks. International Conference on Learning Representations. --, "Semi-supervised classification with graph convolutional networks," in International Conference on Learning Representations, 2017.
Optimizing generalized pagerank methods for seedexpansion community detection. P Li, I Chien, O Milenkovic, Advances in Neural Information Processing Systems. P. Li, I. Chien, and O. Milenkovic, "Optimizing generalized pagerank methods for seed- expansion community detection," in Advances in Neural Information Processing Systems, 2019.
Graph attention networks. P Veličković, G Cucurull, A Casanova, A Romero, P Liò, Y Bengio, International Conference on Learning Representations. P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio, "Graph attention networks," in International Conference on Learning Representations, 2018.
FastGCN: Fast learning with graph convolutional networks via importance sampling. J Chen, T Ma, C Xiao, International Conference on Learning Representations. J. Chen, T. Ma, and C. Xiao, "FastGCN: Fast learning with graph convolutional networks via importance sampling," in International Conference on Learning Representations, 2018.
Predict then propagate: Graph neural networks meet personalized pagerank. J Klicpera, A Bojchevski, S Günnemann, International Conference on Learning Representations. J. Klicpera, A. Bojchevski, and S. Günnemann, "Predict then propagate: Graph neural networks meet personalized pagerank," in International Conference on Learning Representations, 2019.
How powerful are graph neural networks?. K Xu, W Hu, J Leskovec, S Jegelka, in International Conference on Learning Representations. K. Xu, W. Hu, J. Leskovec, and S. Jegelka, "How powerful are graph neural networks?" in International Conference on Learning Representations, 2019.
Position-aware graph neural networks. J You, R Ying, J Leskovec, International Conference on Machine Learning. J. You, R. Ying, and J. Leskovec, "Position-aware graph neural networks," in International Conference on Machine Learning, 2019.
Geom-gcn: Geometric graph convolutional networks. H Pei, B Wei, K C Chang, Y Lei, B Yang, International Conference on Learning Representations. H. Pei, B. Wei, K. C.-C. Chang, Y. Lei, and B. Yang, "Geom-gcn: Geometric graph convolutional networks," in International Conference on Learning Representations, 2020.
Provably powerful graph networks. H Maron, H Ben-Hamu, H Serviansky, Y Lipman, Advances in Neural Information Processing Systems. H. Maron, H. Ben-Hamu, H. Serviansky, and Y. Lipman, "Provably powerful graph networks," in Advances in Neural Information Processing Systems, 2019.
Relational pooling for graph representations. R Murphy, B Srinivasan, V Rao, B Riberio, International Conference on Machine Learning. R. Murphy, B. Srinivasan, V. Rao, and B. Riberio, "Relational pooling for graph representations," in International Conference on Machine Learning, 2019.
On the equivalence between graph isomorphism testing and function approximation with gnns. Z Chen, S Villar, L Chen, J Bruna, Advances in Neural Information Processing Systems. Z. Chen, S. Villar, L. Chen, and J. Bruna, "On the equivalence between graph isomorphism testing and function approximation with gnns," in Advances in Neural Information Processing Systems, 2019.
Measuring and improving the use of graph information in graph neural networks. Y Hou, J Zhang, J Cheng, K Ma, R T B Ma, H Chen, M.-C Yang, International Conference on Learning Representations. Y. Hou, J. Zhang, J. Cheng, K. Ma, R. T. B. Ma, H. Chen, and M.-C. Yang, "Measuring and improving the use of graph information in graph neural networks," in International Conference on Learning Representations, 2020.
Adversarial attacks on neural networks for graph data. D Zügner, A Akbarnejad, S Günnemann, Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data MiningD. Zügner, A. Akbarnejad, and S. Günnemann, "Adversarial attacks on neural networks for graph data," in Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2018.
Adversarial attack on graph structured data. H Dai, H Li, T Tian, X Huang, L Wang, J Zhu, L Song, arXiv:1806.02371arXiv preprintH. Dai, H. Li, T. Tian, X. Huang, L. Wang, J. Zhu, and L. Song, "Adversarial attack on graph structured data," arXiv preprint arXiv:1806.02371, 2018.
Elements of information theory. T M Cover, J A Thomas, John Wiley & SonsT. M. Cover and J. A. Thomas, Elements of information theory. John Wiley & Sons, 2012.
N Tishby, F C Pereira, W Bialek, physics/0004057The information bottleneck method. arXiv preprintN. Tishby, F. C. Pereira, and W. Bialek, "The information bottleneck method," arXiv preprint physics/0004057, 2000.
Deep learning and the information bottleneck principle. N Tishby, N Zaslavsky, 2015 IEEE Information Theory Workshop (ITW). IEEEN. Tishby and N. Zaslavsky, "Deep learning and the information bottleneck principle," in 2015 IEEE Information Theory Workshop (ITW). IEEE, 2015.
The principles of quantum mechanics. P A M Dirac, Oxford university pressP. A. M. Dirac, The principles of quantum mechanics. Oxford university press, 1981, no. 27.
A A Alemi, I Fischer, J V Dillon, K Murphy, arXiv:1612.00410Deep variational information bottleneck. arXiv preprintA. A. Alemi, I. Fischer, J. V. Dillon, and K. Murphy, "Deep variational information bottleneck," arXiv preprint arXiv:1612.00410, 2016.
On variational bounds of mutual information. B Poole, S Ozair, A Van Den, A Oord, G Alemi, Tucker, International Conference on Machine Learning. B. Poole, S. Ozair, A. Van Den Oord, A. Alemi, and G. Tucker, "On variational bounds of mutual information," in International Conference on Machine Learning, 2019.
Estimating divergence functionals and the likelihood ratio by convex risk minimization. X Nguyen, M J Wainwright, M I Jordan, IEEE Transactions on Information Theory. X. Nguyen, M. J. Wainwright, and M. I. Jordan, "Estimating divergence functionals and the likelihood ratio by convex risk minimization," IEEE Transactions on Information Theory, 2010.
Categorical reparameterization with gumbel-softmax. E Jang, S Gu, B Poole, International Conference on Learning Representations. E. Jang, S. Gu, and B. Poole, "Categorical reparameterization with gumbel-softmax," in International Conference on Learning Representations, 2017.
The concrete distribution: A continuous relaxation of discrete random variables. C J Maddison, A Mnih, Y W Teh, International Conference on Learning Representations. C. J. Maddison, A. Mnih, and Y. W. Teh, "The concrete distribution: A continuous relaxation of discrete random variables," in International Conference on Learning Representations, 2017.
Ceb improves model robustness. I Fischer, A A , arXiv:2002.05380arXiv preprintI. Fischer and A. A. Alemi, "Ceb improves model robustness," arXiv preprint arXiv:2002.05380, 2020.
Deep unsupervised clustering with gaussian mixture variational autoencoders. N Dilokthanakul, P A Mediano, M Garnelo, M C Lee, H Salimbeni, K Arulkumaran, M Shanahan, arXiv:1611.02648arXiv preprintN. Dilokthanakul, P. A. Mediano, M. Garnelo, M. C. Lee, H. Salimbeni, K. Arulkumaran, and M. Shanahan, "Deep unsupervised clustering with gaussian mixture variational autoencoders," arXiv preprint arXiv:1611.02648, 2016.
Representation learning with contrastive predictive coding. A V Oord, Y Li, O Vinyals, arXiv:1807.03748arXiv preprintA. v. d. Oord, Y. Li, and O. Vinyals, "Representation learning with contrastive predictive coding," arXiv preprint arXiv:1807.03748, 2018.
Neural message passing for quantum chemistry. J Gilmer, S S Schoenholz, P F Riley, O Vinyals, G E Dahl, Proceedings of the 34th International Conference on Machine Learning. the 34th International Conference on Machine Learning70J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl, "Neural message passing for quantum chemistry," in Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR. org, 2017.
Adaptive graph convolutional neural networks. R Li, S Wang, F Zhu, J Huang, Thirty-second AAAI conference on artificial intelligence. R. Li, S. Wang, F. Zhu, and J. Huang, "Adaptive graph convolutional neural networks," in Thirty-second AAAI conference on artificial intelligence, 2018.
Representation learning on graphs with jumping knowledge networks. K Xu, C Li, Y Tian, T Sonobe, K Kawarabayashi, S Jegelka, arXiv:1806.03536arXiv preprintK. Xu, C. Li, Y. Tian, T. Sonobe, K.-i. Kawarabayashi, and S. Jegelka, "Representation learning on graphs with jumping knowledge networks," arXiv preprint arXiv:1806.03536, 2018.
Gaan: Gated attention networks for learning on large and spatiotemporal graphs. J Zhang, X Shi, J Xie, H Ma, I King, D.-Y Yeung, arXiv:1803.07294arXiv preprintJ. Zhang, X. Shi, J. Xie, H. Ma, I. King, and D.-Y. Yeung, "Gaan: Gated attention networks for learning on large and spatiotemporal graphs," arXiv preprint arXiv:1803.07294, 2018.
Robust graph convolutional networks against adversarial attacks. D Zhu, Z Zhang, P Cui, W Zhu, Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data MiningD. Zhu, Z. Zhang, P. Cui, and W. Zhu, "Robust graph convolutional networks against adversarial attacks," in Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019.
Adversarial examples for graph data: Deep insights into attack and defense. H Wu, C Wang, Y Tyshetskiy, A Docherty, K Lu, L Zhu, International Joint Conference on Artificial Intelligence, IJCAI. H. Wu, C. Wang, Y. Tyshetskiy, A. Docherty, K. Lu, and L. Zhu, "Adversarial examples for graph data: Deep insights into attack and defense," in International Joint Conference on Artificial Intelligence, IJCAI, 2019.
All you need is low (rank) defending against adversarial attacks on graphs. N Entezari, S A Al-Sayouri, A Darvishzadeh, E E Papalexakis, Proceedings of the 13th International Conference on Web Search and Data Mining. the 13th International Conference on Web Search and Data MiningN. Entezari, S. A. Al-Sayouri, A. Darvishzadeh, and E. E. Papalexakis, "All you need is low (rank) defending against adversarial attacks on graphs," in Proceedings of the 13th International Conference on Web Search and Data Mining, 2020.
Certifiable robustness and robust training for graph convolutional networks. D Zügner, S Günnemann, Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data MiningD. Zügner and S. Günnemann, "Certifiable robustness and robust training for graph convo- lutional networks," in Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019.
Deep graph infomax. P Veličković, W Fedus, W L Hamilton, P Liò, Y Bengio, R D Hjelm, arXiv:1809.10341arXiv preprintP. Veličković, W. Fedus, W. L. Hamilton, P. Liò, Y. Bengio, and R. D. Hjelm, "Deep graph infomax," arXiv preprint arXiv:1809.10341, 2018.
Graph representation learning via graphical mutual information maximization. Z Peng, W Huang, M Luo, Q Zheng, Y Rong, T Xu, J Huang, Proceedings of The Web Conference 2020. The Web Conference 2020Z. Peng, W. Huang, M. Luo, Q. Zheng, Y. Rong, T. Xu, and J. Huang, "Graph representation learning via graphical mutual information maximization," in Proceedings of The Web Conference 2020, 2020.
Infograph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization. F.-Y Sun, J Hoffmann, J Tang, arXiv:1908.01000arXiv preprintF.-Y. Sun, J. Hoffmann, and J. Tang, "Infograph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization," arXiv preprint arXiv:1908.01000, 2019.
Variational discriminator bottleneck: Improving imitation learning, inverse rl, and gans by constraining information flow. X B Peng, A Kanazawa, S Toyer, P Abbeel, S Levine, arXiv:1810.00821arXiv preprintX. B. Peng, A. Kanazawa, S. Toyer, P. Abbeel, and S. Levine, "Variational discriminator bottleneck: Improving imitation learning, inverse rl, and gans by constraining information flow," arXiv preprint arXiv:1810.00821, 2018.
I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner, "beta-vae: Learning basic visual concepts with a constrained variational framework," in International Conference on Learning Representations, 2017.
R. D. Hjelm, A. Fedorov, S. Lavoie-Marchildon, K. Grewal, P. Bachman, A. Trischler, and Y. Bengio, "Learning deep representations by mutual information estimation and maximization," in International Conference on Learning Representations, 2019.
P. Sen, G. Namata, M. Bilgic, L. Getoor, B. Galligher, and T. Eliassi-Rad, "Collective classification in network data," AI Magazine, 2008.
E. Cho, S. A. Myers, and J. Leskovec, "Friendship and mobility: user movement in location-based social networks," in Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2011.
O. Mason and M. Verwoerd, "Graph theory and networks in biology," IET Systems Biology, 2007.
M. Barthélemy, "Spatial networks," Physics Reports, 2011.
I. Kaastra and M. Boyd, "Designing a neural network for forecasting financial and economic time series," Neurocomputing, 1996.
R. Ying, R. He, K. Chen, P. Eksombatchai, W. L. Hamilton, and J. Leskovec, "Graph convolutional neural networks for web-scale recommender systems," in Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2018.
I. Fischer, "The conditional entropy bottleneck," arXiv preprint arXiv:2002.05379, 2020.
A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, "PyTorch: An imperative style, high-performance deep learning library," in Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, Eds. Curran Associates, Inc., 2019.
M. Fey and J. E. Lenssen, "Fast graph representation learning with PyTorch Geometric," in ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019.
• Compared to Cora and Pubmed, Citeseer has many more nodes with degrees less than 1, 2, 3, or 4. This explains why, in general, the five models have worse performance on Citeseer than on Cora and Pubmed.
• Almost all attacks (≥ 99.1%) are structural attacks.
• Within structural attacks, most of them (≥ 83.4%) are via adding edges, with Citeseer having the largest fraction.
• For the added edges, almost all of them (≥ 98.5%) have different classes for the end nodes.
From the above summary, we see that the target nodes in the Citeseer dataset generally have the fewest degrees, and are therefore most prone to added-edge structural attacks that connect nodes with different classes. This exactly satisfies the assumption of GCNJaccard [34].
GCNJaccard proceeds by deleting edges with low feature similarity, so those added edges are not likely to enter into the model training during poisoning attacks. This is probably the reason why, in Nettack poisoning mode on Citeseer, GCNJaccard has the best performance.
| [
"https://github.com/DSE-MSU/DeepRobust.",
"https://github.com/DSE-MSU/"
]
|
[
"A coarse-grained model with implicit salt for RNAs: predicting 3D structure , stability and salt effect Short version of the title: A coarse-grained model for RNAs",
"A coarse-grained model with implicit salt for RNAs: predicting 3D structure , stability and salt effect Short version of the title: A coarse-grained model for RNAs"
]
| [
"Ya -Zhou Shi \nDepartment of Physics\nSchool of Physics and Technology\nKey Laboratory of Artificial Micro-and Nano-structures of Ministry of Education\nWuhan University\n430072WuhanChina\n",
"Feng-Hua Wang \nDepartment of Physics\nSchool of Physics and Technology\nKey Laboratory of Artificial Micro-and Nano-structures of Ministry of Education\nWuhan University\n430072WuhanChina\n",
"Yuan-Yan Wu \nDepartment of Physics\nSchool of Physics and Technology\nKey Laboratory of Artificial Micro-and Nano-structures of Ministry of Education\nWuhan University\n430072WuhanChina\n",
"Zhi-Jie Tan [email protected] \nDepartment of Physics\nSchool of Physics and Technology\nKey Laboratory of Artificial Micro-and Nano-structures of Ministry of Education\nWuhan University\n430072WuhanChina\n"
]
| [
"Department of Physics\nSchool of Physics and Technology\nKey Laboratory of Artificial Micro-and Nano-structures of Ministry of Education\nWuhan University\n430072WuhanChina",
"Department of Physics\nSchool of Physics and Technology\nKey Laboratory of Artificial Micro-and Nano-structures of Ministry of Education\nWuhan University\n430072WuhanChina",
"Department of Physics\nSchool of Physics and Technology\nKey Laboratory of Artificial Micro-and Nano-structures of Ministry of Education\nWuhan University\n430072WuhanChina",
"Department of Physics\nSchool of Physics and Technology\nKey Laboratory of Artificial Micro-and Nano-structures of Ministry of Education\nWuhan University\n430072WuhanChina"
]
| []
| To bridge the gap between the sequences and 3-dimensional (3D) structures of RNAs, some computational models have been proposed for predicting RNA 3D structures. However, the existing models seldom consider conditions departing from room/body temperature and high salt (1 M NaCl), and thus can generally hardly predict the thermodynamics and salt effect. In this study, we propose a coarse-grained model with implicit salt for RNAs to predict 3D structures, stability and salt effect. Combined with a Monte Carlo simulated annealing algorithm and a coarse-grained force field, the model folds 46 tested RNAs (≤ 45 nt) including pseudoknots into their native-like structures from their sequences, with an overall mean RMSD of 3.5 Å and an overall minimum RMSD of 1.9 Å from the experimental structures. For 30 RNA hairpins, the present model also gives reliable predictions for the stability and salt effect, with a mean deviation of ~1.0℃ in melting temperatures as compared with the extensive experimental data. In addition, the model can provide the ensemble of possible 3D structures for a short RNA at a given temperature/salt condition. | 10.1063/1.4894752 | [
"https://arxiv.org/pdf/1409.0305v1.pdf"
]
| 835,302 | 1409.0305 | ccc1648d7e61249a5c081647061dacd40659ad8d |
A coarse-grained model with implicit salt for RNAs: predicting 3D structure , stability and salt effect Short version of the title: A coarse-grained model for RNAs
Ya -Zhou Shi
Department of Physics
School of Physics and Technology
Key Laboratory of Artificial Micro-and Nano-structures of Ministry of Education
Wuhan University
430072WuhanChina
Feng-Hua Wang
Department of Physics
School of Physics and Technology
Key Laboratory of Artificial Micro-and Nano-structures of Ministry of Education
Wuhan University
430072WuhanChina
Yuan-Yan Wu
Department of Physics
School of Physics and Technology
Key Laboratory of Artificial Micro-and Nano-structures of Ministry of Education
Wuhan University
430072WuhanChina
Zhi-Jie Tan [email protected]
Department of Physics
School of Physics and Technology
Key Laboratory of Artificial Micro-and Nano-structures of Ministry of Education
Wuhan University
430072WuhanChina
A coarse-grained model with implicit salt for RNAs: predicting 3D structure , stability and salt effect Short version of the title: A coarse-grained model for RNAs
1 The authors contributed equally to the work. * To whom correspondence should be addressed. 3RNA3D structure predictionstabilitysalt effectMonte Carlo †
To bridge the gap between the sequences and 3-dimensional (3D) structures of RNAs, some computational models have been proposed for predicting RNA 3D structures. However, the existing models seldom consider conditions departing from room/body temperature and high salt (1 M NaCl), and thus can generally hardly predict the thermodynamics and salt effect. In this study, we propose a coarse-grained model with implicit salt for RNAs to predict 3D structures, stability and salt effect. Combined with a Monte Carlo simulated annealing algorithm and a coarse-grained force field, the model folds 46 tested RNAs (≤ 45 nt) including pseudoknots into their native-like structures from their sequences, with an overall mean RMSD of 3.5 Å and an overall minimum RMSD of 1.9 Å from the experimental structures. For 30 RNA hairpins, the present model also gives reliable predictions for the stability and salt effect, with a mean deviation of ~1.0℃ in melting temperatures as compared with the extensive experimental data. In addition, the model can provide the ensemble of possible 3D structures for a short RNA at a given temperature/salt condition.
I. INTRODUCTION
The central dogma of molecular biology stipulates that RNA plays a pervasive role in gene regulation and expression. 1 Within the past few years, discoveries of various noncoding RNAs have led to new insights into the importance of RNAs in many cell life processes. These diverse RNA molecules include ribozymes, which catalyze cleavage or ligation of the RNA backbone, 2 small interfering RNAs, which induce gene silencing, 3,4 and riboswitches, which control gene expression by directly sensing the levels of specific small-molecule metabolites. 5
To perform the biological functions, RNAs generally adopt compact native tertiary structures.
Although the knowledge of the spatial structures and dynamics of RNAs is a fundamental prerequisite to completely understand their functions, 6-11 obtaining high-resolution RNA structures by experimental methods such as X-ray crystallography and nuclear magnetic resonance spectroscopy is very time-consuming and expensive compared with the determination of RNA sequences. Furthermore, due to the high flexibility and high charge density of the RNA backbone, RNA structures are very sensitive to the solution environment, such as temperature and salt. 6,12-20 Therefore, building three-dimensional (3D) structures of RNAs remains an important challenge, especially at non-physiological solution conditions. 6,21,22 As an alternative, computational modeling becomes very important for predicting RNA 3D structures and thus for understanding their biological functions. [23][24][25][26][27][28][29][30][31][32][33][34] In recent years, some computational models have been developed for predicting RNA 3D structures. The graphics-based approaches, such as MANIP 35, S2S/Assemble 36,37 and RNA2D3D 38, can be used to model small- to large-size structured RNAs from their secondary structures. Although these approaches are quick and intuitive, they are limited by the requirements for the users' interactive operation and expert knowledge. Based on the similarity between the structures of evolutionarily related RNAs, another series of models has been developed, such as ModeRNA 41 and RNABuilder 42. These models can predict the structures of large RNAs, while it may be difficult to find a proper template in databases for a particular RNA. In addition to the above knowledge-based models, [35][36][37][38][39][40][41][42][43][44]51 there are some physics-based models, [45][46][47][48][49][50]52,[55][56][57][58][59][60][61][62][63][64][65][66][67] which simulate the folding process by sampling conformations while minimizing the free energy. The atomistic models such as the MC-Fold/MC-Sym pipeline 45 and FARNA 46,47 can make promising predictions for 3D atomistic structures, while they generally either depend on knowledge of the secondary structure or only treat small RNAs. Since an atomistic structure model generally involves a huge number of degrees of freedom, 68 simplified coarse-grained (CG) models [55][56][57][58][59][60][61][62][63][64][65][66][67][69][70][71][72][73][74][75][76][77] such as NAST 56, iFold 58,59 and Vfold 60,61 have been developed by treating a group of functional atoms as a single bead. The NAST 56 can be used to model large RNA molecules (>100 nt) based on known secondary structures and certain tertiary contacts. The iFold 58,59 can predict 3D structures of small RNAs from sequences with the use of discrete molecular dynamics sampling. The Vfold 60,61 can make reliable predictions on the free energy landscape of RNA pseudoknots. Although some of the CG models can take salt into account, it is still difficult for them to quantitatively predict the stability of various RNAs in salt solutions.
Despite these advances, there are few RNA 3D structure prediction models with the ability to quantitatively predict the thermodynamic properties of RNAs, especially in salt solutions. Recently, a CG model was developed to quantitatively predict the folding thermodynamics of RNA pseudoknots. 75,76 However, it could not give reliable predictions for RNA 3D structure from sequence. Therefore, it is imperative to propose a model to predict 3D structures of RNAs at a given salt/temperature condition.
Here, beyond the existing CG models, we propose a new CG model for short RNAs to predict 3D structures, stability and salt effect using a Monte Carlo (MC) simulated annealing algorithm. In the model, each nucleotide is represented by three beads corresponding to the phosphate, sugar and base, respectively. Knowledge-based potentials are implemented for the bonds, angles, and dihedrals of the CG beads, and the sequence-dependent base-pairing and base-stacking interactions, as well as the electrostatic interaction, are also included. The CG nature of the model, together with the high efficiency of the MC algorithm, enables us to predict the 3D structures and stability of short RNAs at different salt conditions. The model works well for extensive short RNAs, and it could be extended to treat larger RNAs in solutions containing Mg 2+, which has been shown to play a critical role in RNA folding and function.
II. MODEL AND METHODS
A. CG structural model
To reduce the complexity of RNA molecules, for each nucleotide we adopt a reduced representation with three beads which correspond to the phosphate, sugar and base, respectively. As shown in Fig. 1a, the three beads are placed on existing atoms: the phosphate (P) bead and the sugar (C) bead are placed at the P and C4' atom positions, and the purine and pyrimidine base beads are placed at the N9 and N1 atom positions, respectively. 61 The P, C (C4') and N (N1 and N9) beads are treated as spheres with van der Waals radii of 1.9 Å, 1.7 Å and 2.2 Å, respectively. 68,78
B. Force field and optimization
Energy functions
The implicit-solvent/salt force field in our model is expressed as a summation of bonded and nonbonded terms by
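A plausible form of Eq. 1, assuming the total energy is simply the sum of the bonded and nonbonded terms defined in the remainder of this subsection (the exact grouping in the original may differ), is
\[
U \;=\; U_{b} + U_{a} + U_{d} + U_{exc} + U_{qq} + U_{bp} + U_{st}. \tag{1}
\]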
The bonded term, namely the first three terms in Eq. 1, is a summation of potentials for the virtual bond length U_b, bond angle U_a, and dihedral U_d, which were initially parameterized by a statistical analysis of the available 3D structures of 157 RNA molecules in the Protein Data Bank (PDB, http://www.rcsb.org/pdb/home/home.do); their PDB codes are listed in Table SI. 79 The functional forms of U_b, U_a, and U_d are given by Eqs. S3-S5 in the supplementary material. 79 Since the structural features of RNAs are different for stems (helical) and single-strands/loops (nonhelical), two sets of parameters are calculated for the potentials of virtual bond length U_b, bond angle U_a and dihedral U_d for stems and single-strands/loops, named Para_helical and Para_nonhelical, respectively. As shown in Figs. S1-S3, the distributions of bond length/angle/dihedral for nonhelical parts are slightly broader than those of helical parts. 79 This is because the bases in the loops of native structures in the PDB are sometimes stacked with their neighbours, and consequently the loops generally have some features of stems (helical parts). The Para_nonhelical are used in the folding process to describe the folding of free RNA chains, and during the structure refinements of the initially folded structures, the Para_helical/Para_nonhelical are used for helical/nonhelical regions, respectively.
The remaining terms of Eq. 1 describe various pairwise nonbonded interactions. The excluded volume energy U_exc between the CG beads is modeled by a purely repulsive Lennard-Jones potential; see Eq. S8. 79 The electrostatic interaction U_qq between phosphate groups is accounted for by the Debye-Hückel potential combined with the concept of counterion condensation,
\[
U_{qq} \;=\; \frac{(Qe)^{2}}{4\pi\varepsilon_{0}\varepsilon(T)} \sum_{i<j}^{N_{p}} \frac{e^{-r_{ij}/l_{D}}}{r_{ij}}, \tag{2}
\]
where r_ij is the distance between two phosphate beads i and j, and N_p is the number of phosphate beads. l_D is the Debye length, which is related to the salt concentration and temperature; see Eq. S10. 79 ε_0 is the permittivity of free space and ε(T) is an effective temperature-dependent dielectric constant (an empirical function of T, with T in ℃). 75,80 Qe in Eq. 2 is the net charge of each phosphate bead, where e is the elementary charge and Q is given by Q = b/l_B. 75,81 Here b is the charge spacing on the RNA backbone and l_B is the Bjerrum length.
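As an illustration only, the following minimal Python sketch evaluates a screened Coulomb term of this Debye-Hückel form for a set of phosphate-bead coordinates; the Debye-length expression and the physical constants are standard values assumed here, not parameters taken from the paper.

```python
import numpy as np

# Physical constants (SI units); standard values, not taken from the paper
E_CHARGE = 1.602176634e-19      # elementary charge, C
EPS0 = 8.8541878128e-12         # vacuum permittivity, F/m
KB = 1.380649e-23               # Boltzmann constant, J/K
NA = 6.02214076e23              # Avogadro's number, 1/mol

def debye_length(c_na_molar, temp_k, eps_r):
    """Debye length (m) for a 1:1 salt of molar concentration c_na_molar."""
    n_salt = c_na_molar * 1e3 * NA                    # salt number density, 1/m^3
    kappa_sq = 2.0 * n_salt * E_CHARGE**2 / (EPS0 * eps_r * KB * temp_k)
    return 1.0 / np.sqrt(kappa_sq)

def u_qq(phosphate_xyz_ang, q_reduced, c_na_molar, temp_k, eps_r):
    """Screened Coulomb energy (kcal/mol) between reduced phosphate charges Q*e."""
    xyz = np.asarray(phosphate_xyz_ang) * 1e-10       # Angstrom -> m
    l_d = debye_length(c_na_molar, temp_k, eps_r)
    energy = 0.0
    n = len(xyz)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(xyz[i] - xyz[j])
            energy += (q_reduced * E_CHARGE)**2 * np.exp(-r / l_d) / (4 * np.pi * EPS0 * eps_r * r)
    return energy * NA / 4184.0                       # J (per molecule) -> kcal/mol
```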
The base-pairing interaction between nucleotides is an important interaction in the folding of RNAs. For simplicity, we use three types of distances to model the orientation-dependent hydrogen-bonding interaction between specific nucleotide pairs (Fig. 1b), including the canonical Watson-Crick base pairs (G-C and A-U) and the wobble base pairs (G-U). For two nucleotides i and j, if the distance r_NiNj between the two base beads N_i and N_j satisfies the pairing criterion a_1 < r_NiNj < a_2, the hydrogen bond is formed and the corresponding base-pairing potential U_bp (Eq. 3) is applied.
Eq. 3 expresses U_bp as a sum over paired nucleotides i and j of terms with strength ε_bp that restrain the N_i-N_j, C_i(j)-N_j(i) and P_i(j)-N_j(i) distances toward their optimal values r_NN, r_CN and r_PN (Fig. 1b).
Here ε_bp (<0) is the interaction strength, which depends on the number of formed hydrogen bonds, and γ describes the ratio of pairing strength between different types of bases. 58,65,66,82 In the base-pairing potential, the distances r_Ci(j)Nj(i) and r_Pi(j)Nj(i) between the sugar bead C_i(j) or phosphate bead P_i(j) and the base bead N_j(i) are used to restrict the orientation between the pairing nucleotides; 58,82 see Fig. 1b.
The base-stacking interaction provides a strong force for stabilizing RNA secondary structure. In the model, if two nearest-neighbour bases i and i+1 are paired with other neighbouring bases j and j-1, respectively, the base stacking is formed, as shown in Fig. 1c. The base-stacking potential U_st is given by Eq. 4.
Eq. 4 expresses U_st as a sum over stacked base pairs of terms with strength G_{i,i+1,j-1,j} that depend on the separations between the neighbouring stacked bases.
Here σ_st is the optimal distance between two neighbouring bases in the known helix structures, as shown in Fig. S4b. 79 G_{i,i+1,j-1,j} is the strength of the base-stacking energy, which was derived from a combined analysis of the available thermodynamic parameters and the MC algorithm.
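The expression for G_{i,i+1,j-1,j} is not reproduced above; a relation consistent with the surrounding description (a reconstruction, so the exact published form may differ) is
\[
G_{i,i+1,j-1,j} \;=\; \Delta H \;-\; T\left(\Delta S - \Delta S_{c}\right).
\]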
Here T is the absolute temperature in Kelvin. ΔH and ΔS are the RNA thermodynamic parameters associated with stacking between two adjacent base pairs and have been experimentally measured by Turner and colleagues. 83,84 Since a part of entropy change due to base-pair stacking is naturally included in the MC sampling process, this part of entropy change ΔS c should be removed from ΔS.
To estimate ΔS_c, we performed a large number of MC simulations for one free base pair of an A-form double-stranded RNA at different locations and calculated the entropy change when it stacks with a neighbouring base pair; more details on the calculation of ΔS_c are given in Eq. S15 and Fig. S5 in the supplementary material. 79
Determination of the parameters of energy functions
For the above-described potentials, the initial parameters are obtained directly from the statistical analysis of the known structures (157 RNAs listed in Table SI). 79 Afterwards, the parameters are optimized through comparisons with experiments and consequent slight adjustments. 85,86 In practice, five RNA hairpins (PDB codes: 1q75, 1i3x, 1bn0, 2kd8, 28sr, listed in Tables SI and SIV) 79 are used for 3D structure comparisons, and three other RNA hairpins (RH1, RH18, RH23, listed in Table I) are used for the comparisons on stability. The optimized parameters are tabulated in Tables SII and SIII. 79 It is noted that the parameters for bond length, bond angle and dihedral do not differ significantly from the initial parameters. The base-pairing strength ε_bp = -3.5 kcal/mol (Table SIII) in Eq. 3 may appear a little large, but it is not so strong in the RNA folding process due to the strict distance constraints for base pairing, and the formation of a helix is mainly determined by the T-dependent base stacking (see Eq. 4). The charge spacing b on the backbone in Eq. 2 is taken as 5.5 Å in the calculation, a slightly smaller value than the distance between two adjacent phosphate groups of single-stranded RNA (~6.0 Å), since the chain is generally bent rather than straight. The optimized parameters from the 5 hairpins for structure and the 3 hairpins for stability are then used to make predictions on the 3D structures of 46 RNAs and on the stability of 30 RNA hairpins.
C. Simulation algorithm
With the above energy functions for the CG beads in RNAs, the present CG model can be employed to fold RNAs into native-like structures with the MC simulated annealing algorithm 87 , which can effectively avoid the trap in local energy minima and has been used to predict the folding of proteins and RNAs 88 .
The MC algorithm is performed as follows. Firstly, an initial random conformation is generated from RNA sequence at initial high temperature. Secondly, at each temperature, two different types of moves for the RNA chain are performed: subtle translation and the pivot move, which has been demonstrated to be rather efficient in sampling conformations of a polymer. 78,89 The change after each move is accepted or rejected according to the standard Metropolis algorithm. 78,89 Finally, after long enough steps for the system to reach equilibrium, the temperature is lowered based on the exponential scheme and the previous process is repeated until the target temperature is reached. 87
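To make the control flow concrete, a minimal Python sketch of such a Metropolis simulated annealing loop is given below; the chain representation, the move set and the energy function are placeholders rather than the actual implementation used in this work.

```python
import math
import random

def metropolis_accept(delta_e, temp, kb=0.0019872):  # kb in kcal/(mol*K)
    """Standard Metropolis acceptance criterion."""
    return delta_e <= 0 or random.random() < math.exp(-delta_e / (kb * temp))

def simulated_annealing(chain, total_energy, propose_move,
                        t_start=500.0, t_target=298.0, cool=0.95, steps_per_t=10000):
    """Anneal a CG chain from t_start down to t_target with an exponential cooling scheme.

    total_energy(chain, temp) and propose_move(chain) are user-supplied placeholders;
    propose_move should return a new trial conformation (pivot move or subtle translation).
    """
    temp = t_start
    energy = total_energy(chain, temp)
    while temp >= t_target:
        for _ in range(steps_per_t):
            trial = propose_move(chain)
            e_trial = total_energy(trial, temp)
            if metropolis_accept(e_trial - energy, temp):
                chain, energy = trial, e_trial
        temp *= cool  # exponential cooling toward the target temperature
    return chain, energy
```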
III. RESULTS AND DISCUSSION
Based on the implicit-solvent/salt force field for CG beads, we employ the model to predict 3D structures, stability and salt effect for various short RNAs. As compared with the experimental structures and the experimental thermodynamics data, the present model can make overall reliable predictions.
A. RNA 3D structure prediction
The selected hairpin (PDB code: 1u2a), with a 6-nt loop and a 7 base-pair (bp) stem, serves as a paradigm. First, a random chain (e.g., structure A in Fig. 2a) is generated from the sequence based on the bonded potentials. Next, starting from this extended random conformation, the MC simulated annealing algorithm is performed from an initial high temperature to the target temperature (e.g., 298 K). In the MC simulated annealing process, the energy of the hairpin decreases with decreasing temperature and finally fluctuates around a certain low value at room temperature, as shown in the top panel of Fig. 2a. Simultaneously, the hairpin folds step by step into native-like structures (e.g., structure C in Fig. 2a) at the target temperature from the initial random chain; see the middle and bottom panels of Fig. 2a. It is necessary to point out that only the Para_nonhelical are employed for the bonded potentials in this process to describe the folding of a free chain.
Refinement of the initially folded 3D structure. After the annealing process, an initial native-like 3D structure of the hairpin is predicted. However, because the Para_nonhelical are insufficient to perfectly depict the more standard geometry of the helical parts, a further structure refinement is implemented. During the structure refinement, the Para_helical (shown in Table SII) 79 are introduced to replace the parameters used before for the base-pairing regions (stems) in the initially folded structure. In addition, the RNA conformational changes are implemented by subtle translation moves of single beads rather than the wide pivot moves used in the above simulated annealing process for RNA chains. As shown in Fig. 2b, the energy of the refined structures of the hairpin (PDB code: 1u2a) is about 5.0 kcal/mol lower than that of the structures before the refinement (Fig. 2b, top panel).
Evaluation of the predicted 3D structures. The predicted 3D structures are evaluated by their root-mean-square deviation (RMSD) 90 calculated over C beads from the corresponding C4' atoms in the native structures in the PDB, and the RMSDs between the predicted structures and the experimental structures can be calculated with the VMD software. 91 Since the present model generally predicts a series of native-like structures, in the following we will use mean and minimum RMSDs to evaluate the reliability of the predictions. The former is the RMSD averaged over all the equilibrium structures and the latter is the RMSD of the equilibrium structure closest to the native one. For the hairpin shown here (PDB code: 1u2a), the mean RMSD and the minimum RMSD between the predicted structures and its native structure are 2.6 Å and 1.5 Å, respectively; see the middle and bottom panels of Fig. 2b.
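For illustration, a minimal NumPy implementation of an RMSD between two bead sets after optimal superposition (Kabsch algorithm) could look as follows; it is a generic sketch, not the VMD routine used here.

```python
import numpy as np

def kabsch_rmsd(pred, ref):
    """RMSD (same units as input) between two (N, 3) coordinate sets after optimal superposition."""
    p = pred - pred.mean(axis=0)          # center both structures on their centroids
    q = ref - ref.mean(axis=0)
    # Optimal rotation via SVD of the covariance matrix (Kabsch algorithm)
    u, s, vt = np.linalg.svd(p.T @ q)
    d = np.sign(np.linalg.det(u @ vt))    # correct for possible reflection
    rot = u @ np.diag([1.0, 1.0, d]) @ vt
    diff = p @ rot - q
    return np.sqrt((diff ** 2).sum() / len(p))
```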
3D structures of 46 RNAs
According to the above process, we predict the 3D structures for the 46 tested short RNAs listed in Table SIV, 79 where mean RMSD and minimum RMSD of each RNA are also listed. All the 46
RNAs fold into their near-native structures with an overall mean RMSD of 3.5 Å and an overall minimum RMSD of 1.9 Å from their native structures; see Table SIV. RNA hairpins. Since an RNA hairpin is the simplest secondary structure of RNAs, we predicted 3D structures for 18 RNA hairpins of different lengths, with an overall mean RMSD of 2.8 Å from the corresponding experimental structures. As shown in Fig. 3a, the present model is very reliable for predicting the structures of stems, while the predicted loops deviate slightly from the experimental ones (Table SIV). 79 Obviously, if one helix region is modeled to be roughly in the right place while another helix region is angled relative to the correct orientation, the conformation could propagate to produce large RMSD values even with modest degrees of angular deflection (Fig. 3c).
Beyond that, as important functional fragments (e.g., as binding sites), internal loops usually have specific structures, which may contain noncanonical motifs/base-pairs. 9
Pseudoknots. An RNA pseudoknot is defined as a structure with base pairing between a loop and other single-stranded regions of an RNA. Pseudoknots play diverse fundamental roles in the control of viral replication, in the structural organization of complex RNAs, and in self-cleaving ribozyme catalysis. 92 A possible reason for the deviations (Fig. 3d) is that the present model ignores some interactions in RNA pseudoknots, such as single-stranded base stacking and some specific hydrogen bonds between bases and the backbone. [93][94][95] In addition, the model predicted a two-step folding for the pseudoknots: a chain would first fold into an intermediate hairpin state and then into the final pseudoknot state, which is in accordance with previous studies 96,97 .
Comparisons with previous models
We test our predictions against two well-established 3D structure prediction models: FARNA 46 and the MC-Fold/MC-Sym pipeline. 45 Beyond 3D structure prediction, the present model can also make reliable predictions on the stability of RNA hairpins and the salt effect.
B. Stability of RNA hairpins
The folding thermodynamics of RNAs are important for unravelling structure-function relationships for RNAs. To obtain accurate thermodynamic parameters of RNAs, there have been many thermal denaturation experiments and theoretical modeling studies for RNA hairpins [98][99][100][101][102][103][104][105][106] or duplexes 80,83 . To further validate the present model, we have performed extensive simulations for 30 RNA hairpins to predict their melting temperatures T_m. Here the ionic condition is also assumed to be 1 M NaCl, where the RNAs are nearly fully neutralized; see Eq. 2. All the sequences of the 30 RNA hairpins tested here are listed in Table I.
To examine the sequence effect, 30 RNA hairpins with extensive sequences have been studied with the present model. The sequences with the corresponding melting temperatures are listed in Table I, which shows that our predictions on T m agree well with the experimental data with the mean deviation of 1.1℃ over the extensive sequences. In addition, Fig. 6c shows the melting curves for the hairpins of RH6, RH18 and RH23, and the agreement between our predictions and available experimental data 98,103 suggests that the present model can reliably predict the denature processes of RNA hairpins. However, for unusually stable RNA hairpins such as hairpins with GA mismatches and tetraloops, the present model cannot give accurate predictions on stability. This may be because that the special hydrogen-bond and base stacking interactions in loops are not accounted for in the present model.
C. Salt effect in RNA hairpin stability
Since RNAs are strongly charged polyanionic chains, there is strong intrachain Coulombic repulsion during RNA folding process. The counterions in solutions are critical to RNA folding because the ions can neutralize backbone charges and consequently favour the folding. [12][13][14]107,108 Although the present model can conveniently involve the explicit salt ions, for simplicity and efficiency, we combine the DH theory with the concept of ion binding of Manning 75
Hairpin denaturing induced by high temperature and low salt
Folding/unfolding of an RNA is closely related to the solution environments such as temperature and salt concentration as well as the forces acting on it. 76,111,112 Although increasing temperature or decreasing salt concentration both can trigger unfolding of RNAs, the mechanisms are not the same. 112 For the hairpin RH24 (Table I) for three different [Na + ]'s. As shown in Fig. 8d, the largest free energy barrier between folded and unfolded states is at the formation of first base pair, and the relative free energy decreases as more base pairs are formed. This is reasonable since the formation of first base pair generally leads to a great loss of chain conformational entropy. Once the first base pair is formed, it will become easier for more base pairs to be formed. 10,11 Fig. 8d also shows that the free energy barrier between folded and unfolded states decreases as [Na + ] increases. Specifically, the free energy barrier is lowered ~1 kcal/mol when [Na + ] increases from 20mM to ~1M, which indicates that the transition is much easier at high salt. Such phenomenon mainly comes from the weaker intrachain electrostatic repulsion at higher [Na + ].
In the present work, we only studied the effect of monovalent salt. Multivalent counterions such 22 as Mg 2+ stabilize RNA tertiary structure more effectively than monovalent ions , [12][13][14][15]113 which is beyond the description of the DH theory used in the present model due to the strong ion -ion correlations. To predict the stability of RNAs in multivalent ion solutions, explicit ions may need to be accounted for in the model and accordingly would increase the computation cost. 78
IV. CONCLUSIONS
In summary, we have developed and employed a new CG model with implicit salt for RNA folding with MC simulated annealing algorithm. The model enables us to predict 3D structures and stability of short RNA hairpins over a broad range of [Na + ]'s. The following are the major conclusions:
(1) The present model can predict the native-like 3D structures for RNA hairpins, with and without bulge/internal loops, and pseudoknots (≤ 45 nt), with an overall mean RMSD of 3.5 Å and an overall minimum RMSD of 1.9 Å from experimental structures. The prediction accuracy of the present model reduces with the increase of length of unpaired regions in RNAs.
(2) The present model can make reliable predictions on the stability of RNA hairpins such as melting temperature T m with the mean deviation of 1.1℃ from experimental data. Meanwhile, it can provide the ensemble of probable 3D structures of RNA hairpins at different temperatures.
(3) The present model can also predict the stability of RNA hairpins with a mean deviation of 0.9℃ over a wide range of [Na + ]'s, as compared with extensive experimental data, and simultaneously provide the ensemble of probable 3D structures at different [Na + ]'s.
Although our model makes overall reliable predictions for the native-like structures and stability of short RNA molecules over a broad range of [Na + ]'s, further improvements need to be made to the model to improve the predictive accuracy and to treat larger RNAs with complex structures. Firstly, the loop/unpaired regions of the structures predicted by the present model deviate slightly from the experimental ones. A possible reason may be the neglect of some important intrachain interactions in the present model, such as self-stacking in single-stranded chains, special hydrogen bonds involving phosphates and sugars, and mismatched base pairs. 114,115 Secondly, the present version of the model cannot effectively make predictions for large RNA molecules, especially those with complex tertiary interactions, since the model does not take into account the non-canonical base pairs and tertiary hydrogen bonds which are often present in large RNA structures. 114,115 Further improvement on this issue should include more of the essential interactions between atoms as well as enhance the sampling efficiency in conformational space 50 by involving specific tertiary contacts 42,65,[114][115][116][117][118][119][120][121] .
Thirdly, the DH approximation used in the present model ignores the effects of ion-ion correlation and ion-binding fluctuation, which can be important for multivalent ions (e.g., Mg 2+ ). [12][13][14][15]80,122 The model can be further extended to involve explicit ions, although the conformational search cost would increase accordingly. 78 In addition, although the predicted CG structures contain the major topological information of RNA molecules, they are limited for practical applications due to the lack of atomistic details. It is necessary to reconstruct the all-atomistic structures based on the CG structures. 59,61,64,123 Nevertheless, the present model could be a basis for a possible model for predicting the 3D structures, stability and salt effect of RNAs with complex structures.
Fig. 1b. The coefficients k_NN, k_CN and k_PN are the corresponding energy strengths of the three base-pairing constraints, and r_NN, r_CN and r_PN are their optimal distances obtained from the known structures; see Fig. S4a. 79 Σ_i(j) in Eq. 3 stands for the summation over i and j. In the model, one nucleotide cannot become paired with more than one nucleotide. Although such a strict constraint for base pairing requires a large |ε_bp| to overcome the entropy change due to base pairing, it is an effective potential to efficiently bring two complementary bases into a preferable helix.
Hairpins with bulge loop. RNA bulge loops, which interrupt one strand of a continuous double helix, occur frequently in the secondary structures of large RNAs, generally as recognition sites. The overall mean RMSD between the predicted structures of the 9 hairpins with bulge loops and the experimental structures is 3.3 Å. The mean RMSDs for the hairpins with bulge loops are slightly larger than those for the hairpins without bulge loops, possibly because bulge loops usually distort the RNA backbone and cause higher flexibility of the RNAs; see Fig. 3b. Hairpins with internal loop. An internal loop, which separates an RNA into two helical regions, generally causes strong distortion and high flexibility of the RNA. For the 15 tested hairpins with internal loops, the overall mean RMSD is 4.0 Å, a larger value than those of hairpins with and without bulge loops; see
Fig. 3 shows the predicted 3D structures (ball-stick) with the mean and minimum RMSDs and the experimental structures for four typical RNAs. The overall comparison of the predicted structures (ball-stick in
The RMSDs of our predicted structures are calculated over C beads from the corresponding C4' atoms in the native structures, and the predicted structures are not further refined by all-atomistic molecular dynamics. Firstly, we make comparisons with the predictions from FARNA, and the RMSDs calculated over C4' atoms from FARNA are taken from Ref. 46. As shown in Fig. 5a, the average prediction accuracy (mean RMSD = 3.82 Å; minimum RMSD = 2.37 Å) of the present model is not worse than that of FARNA (mean RMSD = 3.92 Å; minimum RMSD = 2.2 Å). Secondly, we make further comparisons between the present model and the MC-Fold/MC-Sym pipeline. For all the above tested RNAs, we used the MC-Sym online server (option: model_limit = 1000 or time_limit = 12 h) to predict the best 3D structures with the lowest score using the best 2D structures predicted by MC-Fold (option: consider H-type pseudoknots and return the best 100 structures), and then we calculated the RMSDs of the predicted structures over C4' atoms from the corresponding atoms in their native structures. Fig. 5b and Table SIV 79 show that for the 46 RNAs (≤ 45 nt), the overall mean RMSD of the structures from the present model is 3.5 Å, which is slightly smaller than 3.9 Å, the mean RMSD of the top-1 structures from the MC-Fold/MC-Sym pipeline.
P_bp(t, T) is the average of N_bp(t, T) over the equilibrium MC steps, where t is the MC step, T is the temperature, and N_bp(t, T) is the total number of base pairs at step t and temperature T. Based on the equilibrium value of the number of base pairs at each temperature (e.g., the transverse lines in Fig. 6a), the fraction of denatured base pairs f(T) can be calculated and fitted to a two-state model to obtain the melting temperature T_m, where dT is an adjustable parameter. 83,101 Fig. 6a shows how the number of base pairs changes at different temperatures in the simulation of the hairpin RH24 with a 10-nt loop and a 6-bp stem; see Table I. When the temperature is very high (~120℃), almost all of the base pairs are denatured (bottom panel), and when the temperature is low (~40℃), the hairpin is in a folded state (top panel). Around the melting temperature (~80℃), the folded and unfolded states appear alternately with approximately equal probability (middle panel), which illustrates the bistability in terms of the base pair number near T_m. In addition, all RNA hairpins tested in this work
3D structures of 46 RNAs with length ≤ 45 nt are predicted with the present model and compared with the experimentally measured structures. The RNA structures include hairpins, hairpins with bulge loop, hairpins with internal loop, and pseudoknots, and the PDB codes of all these RNAs are listed in Table SIV. It should be noted that more than one half of the tested RNAs are not included in the dataset (Table SI) used for obtaining the initial parameters of the energy functions, and only 5 RNAs (PDB codes: 1q75, 1i3x, 1bn0, 2kd8, 28sr) are in the dataset used for optimizing the parameters of the energy functions. Here, the solution contains 1 M NaCl and the RNAs are nearly fully neutralized by ions during the structure prediction process; see Eq. 2. In the following, we first select an RNA hairpin as an example to show how it folds in the present algorithm; afterwards, we will show our predictions for the 46 RNAs, including RNA hairpins with bulge loops, hairpins with internal loops and pseudoknots, and the comparisons with experimental structures. Finally, we will compare the present model with two other representative models: FARNA 46 and the MC-Fold/MC-Sym pipeline 45 .
1. Folding process of a paradigm RNA hairpin
Initial 3D structure prediction from a random chain. To show the folding process in the present model, we select an RNA hairpin (PDB code: 1u2a; sequence: 5'-GGUCAGUGUAACAACUGACC-3')
,93 Four pseudoknots have been tested by the present model and the meanRMSDs for these pseudoknots are 4.2 Å, 5.2 Å, 4.2 Å, and 5.4 Å, respectively; see Table SIV. 79
Although the model accurately predicts secondary structures of these pseudoknots, it only
moderately captures their 3D structures with the slight deviation from experimental structures for
pseudoknot loops; shown in
can fluctuate with the less spatial constraints. This property of the duplex was captured in our model, in accordance with the previous experiments83,84 .,81 by a DH
potential between the reduced backbone charges; see Eq. 2. In the following, firstly, we will predict
the structures of HIV-1 TAR RNA at different Na + concentrations ([Na + ]'s) and make the
comparison with the available experiments 109 . Afterwards, we will study the [Na + ]-dependent
stability in comparisons with the available experimental data 102-106 .
1. Na + -dependent conformational change of HIV-1 TAR
The transactivation response element (TAR) RNA from the human immunodeficiency type I
virus (HIV-1) is a hairpin (29-nt) with a 3-nt bulge loop and its 3D structures are strongly dependent
on counterions. 109 To examine the Na + -dependent RNA structure change, Casiano-Negroni et al.
have experimentally derived the Na + -dependent angle between two stems of a tetraloop HIV-1 TAR
variant. 109 To directly compare with the experimental data, we predict the 3D structures of the TAR
variant at different [Na + ]'s and calculate the angles between two stems separated by the bulge. For
each predicted structure, the two stems can be approximately fitted to canonical A-form helices and
the central axises of the helices can be derived with the use of the Program Curves + 110 . Based on
these central axises, the inter-helical angles for all predicted TAR variant structures at different
[Na + ]'s can be calculated. As shown in Fig. 7a, the angles between the two stems predicted by the present
model agree well with the experiment data 109 , especially at low [Na + ]'s. Furthermore, the model also
gives the possible 3D structures of the TAR variant at different [Na + ]'s; see Fig. 7a.
2. Na + -dependent stability of RNA hairpins
Here we employed the present model to examine the [Na + ] effect on the stability of six RNA
hairpins RH23, RH24, RH25, RH27, RH29 and RH30 whose sequences are shown in Table I. 102-106
For each RNA hairpin, we perform the simulations at different temperatures over a broad range of
[Na + ]'s. Based on the data from the simulations, we obtain the melting curves at different [Na + ]'s by
fitting the calculated data to the two-state model; see Fig. S6. 79
As shown in Figs. 7b, c and d, the increase of [Na + ] enhances the RNA hairpin folding stability,
and the predicted melting temperatures agree well with experimental data 102-106 with the mean
deviation of 0.9℃. Thus, the present model can give the quantitative predictions on the melting
temperatures of RNA hairpins over a broad range of [Na + ]'s. Furthermore, the present model can
provide the ensemble of probable 3D structures at different [Na + ]'s, as shown in Fig. 7b. Generally,
the RNA duplexes adjacent to the terminal base pair are not stable, because the terminal base pair of
a duplex could only stack with one base pair rather than stack with two nearest neighbour ones and
, we calculated the statistical distributions of the end-to-end distances at different temperatures ([Na + ] = 1 M; Fig. 8a) and different [Na + ]'s (T = 70℃; Fig. 8b). As shown in Fig. 8b, although the melting transitions induced by high T and low [Na + ] both exhibit a two-state transition, the denatured states at low salt are different from those at high temperature; see the inset figures in Figs. 8a and 8b. The distributions of the end-to-end distance show that the denatured structures at low salt become more extended with decreasing [Na + ], while those at high temperatures appear independent of temperature. This difference comes from the different mechanisms of RNA denaturation induced by high T and low salt. The transition induced by low salt is mainly caused by the intrachain electrostatic repulsion, while that induced by high T mainly results from the conformational entropy of the RNA chain.
4. Free energy barrier at T m versus [Na + ]
In order to illustrate the salt effect on the denaturation transition of RNA hairpins, we calculated the normalized populations P(N_bp) of the base pair number N_bp at T_m and different [Na + ]'s for RH24; see Fig. 8c. The relative free energy at T_m can be calculated as ΔG(N_bp) = −k_B T ln P(N_bp); see Fig. 8d
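A minimal Python sketch of turning such normalized populations P(N_bp) into a relative free-energy profile via ΔG = −k_B T ln P (the relation used above) might look as follows; it is an illustration, not the analysis script used in this work.

```python
import numpy as np

KB_KCAL = 0.0019872  # Boltzmann constant in kcal/(mol*K)

def free_energy_profile(populations, temp_k):
    """Relative free energy (kcal/mol) vs. base-pair number from normalized populations P(N_bp)."""
    p = np.asarray(populations, dtype=float)
    p = p / p.sum()                                   # ensure normalization
    g = -KB_KCAL * temp_k * np.log(np.clip(p, 1e-12, None))
    return g - g.min()                                # shift so the most populated state is at zero
```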
Table I. The melting temperatures T_m of 30 RNA hairpins at 1 M NaCl.
ACKNOWLEDGEMENTS
We are grateful to Shi-Jie Chen, Wenbing Zhang, and Song Cao for valuable discussions.
partially denatured from the pseudoknot of beet western yellow virus and the stability of the hairpin is studied in this work. c T_m is taken from Fig. 7 in Ref. 106.
Crick, F., "Central dogma of molecular biology," Nature 227, 561-563 (1970).
Doherty, E.A., and Doudna, J.A., "Ribozyme structures and mechanisms," Annu. Rev. Biophys. Biomol. Struct. 30, 457-475 (2001).
Hannon, G.J., "RNA interference," Nature 418, 244-251 (2002).
Chen, J., and Zhang, W., "Kinetic analysis of the effects of target structures on siRNA efficiency," J. Chem. Phys. 137, 225102 (2012).
Edwards, T.E., Klein, D.J., and Ferre-d'Amare, A.R., "Riboswitches: small-molecule recognition by gene regulatory RNAs," Curr. Opin. Chem. Biol. 17, 273-279 (2007).
Tinoco, I., Jr., and Bustamante, C., "How RNA folds," J. Mol. Biol. 293, 271-281 (1999).
Li, P.T.X., Vieregg, J., and Tinoco, I., Jr., "How RNA unfolds and refolds," Annu. Rev. Biochem. 77, 77-100 (2008).
Mustoe, A.M., Brooks, C.L., and Al-Hashimi, H.M., "Hierarchy of RNA functional dynamics," Annu. Rev. Biochem. 83, 28.1-28.26 (2014).
Hall, K.B., "Spectroscopic probes of RNA structures and dynamics," Methods Mol. Biol. 875, 67-84 (2012).
Zhang, W., and Chen, S.J., "RNA hairpin-folding kinetics," Proc. Natl. Acad. Sci. USA 99, 1931-1936 (2002).
Zhao, P., Zhang, W., and Chen, S.J., "Predicting secondary structural folding kinetics for nucleic acids," Biophys. J. 98, 1617-1625 (2010).
Woodson, S.A., "Metal ions and RNA folding: a highly charged topic with a dynamic future," Curr. Opin. Struct. Biol. 9, 104-109 (2005).
Draper, D.E., "Folding of RNA tertiary structure: linkages between backbone phosphates, ions, and water," Biopolymers 99, 1105-1113 (2013).
Hall, K.B., "RNA does the folding dance of twist, turn, stack," Proc. Natl. Acad. Sci. USA 110, 16706-16707 (2013).
Lipfert, J., Doniach, S., Das, R., and Herschlag, D., "Understanding nucleic acid-ion interactions," Annu. Rev. Biochem. 83, 19.1-19.29 (2014).
Kundagrami, A., and Muthukumar, M., "Theory of competitive counterion adsorption on flexible polyelectrolytes: Divalent salts," J. Chem. Phys. 128, 244901 (2008).
Muthukumar, M., "Theory of counter-ion condensation on flexible polyelectrolytes: Adsorption mechanism," J. Chem. Phys. 120, 9343 (2004).
Pabit, S.A., Sutton, J.L., Chen, H., and Pollack, L., "Role of ion valence in the submillisecond collapse and folding of a small RNA domain," Biochemistry 52, 1539-1546 (2013).
Tan, Z.J., and Chen, S.J., "Electrostatic correlations and fluctuations for ion binding to a finite length polyelectrolyte," J. Chem. Phys. 122, 44903 (2005).
Tan, Z.J., and Chen, S.J., "Predicting ion binding properties for RNA tertiary structures," Biophys. J. 99, 1565-1576 (2010).
Shapiro, B.A., Yingling, Y.G., Kasprzak, W., and Bindewald, E., "Bridging the gap in RNA structure prediction," Curr. Opin. Struct. Biol. 17, 157-165 (2007).
Chen, S.J., "RNA folding: conformational statistics, folding kinetics, and ion electrostatics," Annu. Rev. Biophys. 37, 197-214 (2008).
Levitt, M., "Detailed molecular model for transfer ribonucleic acid," Nature 224, 759-763 (1969).
Michel, F., and Westhof, E., "Modelling of the three-dimensional architecture of group I catalytic introns based on comparative sequence analysis," J. Mol. Biol. 216, 585-610 (1990).
Harris, M.E., Nolan, J.M., Malhotra, A., Brown, J.W., Harvey, S.C., and Pace, N.R., "Use of photoaffinity crosslinking and molecular modeling to analyze the global architecture of ribonuclease P RNA," EMBO J. 13, 3953-3963 (1994).
Stagg, S.M., Mears, J.A., and Harvey, S.C., "A structural model for the assembly of the 30S subunit of the ribosome," J. Mol. Biol. 328, 49-61 (2003).
Gruebele, M., and Thirumalai, D., "Perspective: Reaches of chemical physics in biology," J. Chem. Phys. 139, 121701 (2013).
Rother, K., Rother, M., Boniecki, M., Puton, T., and Bujnicki, J.M., "RNA and protein 3D structure modeling: similarities and differences," J. Mol. Model. 17, 2325-2336 (2011).
Laing, C., and Schlick, T., "Computational approaches to RNA structure prediction, analysis, and design," Curr. Opin. Struct. Biol. 21, 306-318 (2011).
Laing, C., and Schlick, T., "Computational approaches to 3D modeling of RNA," J. Phys. Condens. Matter 22, 283101 (2010).
Cruz, J.A., Blanchet, M.F., Boniecki, M., Bujnicki, J.M., Chen, S.J., Cao, S., Das, R., Ding, F., Dokholyan, N.V., Flores, S.C., Huang, L., Lavender, C.A., Lisi, V., Major, F., Mikolajczak, K., Patel, D.J., Philips, A., Puton, T., Santalucia, J., Sijenyi, F., Hermann, T., Rother, K., Rother, M., Serganov, A., Skorupski, M., Soltysinski, T., Sripakdeevong, P., Tuszynska, I., Weeks, K.M., Waldsich, C., Wildauer, M., Leontis, N.B., and Westhof, E., "RNA-Puzzles: A CASP-like evaluation of RNA three-dimensional structure prediction," RNA 18, 610-625 (2012).
Hajdin, C.E., Ding, F., Dokholyan, N.V., and Weeks, K.M., "On the significance of an RNA tertiary structure prediction," RNA 16, 1340-1349 (2010).
Shi, Y.Z., Wu, Y.Y., Wang, F.H., and Tan, Z.J., "RNA structure prediction: progress and perspective," Chin. Phys. B 23, 078701 (2014).
Riley, K.J., and Maher, L.J., "p53-RNA interactions: New clues in an old mystery," RNA 13, 1825-1833 (2007).
Massire, C., and Westhof, E., "MANIP: an interactive tool for modelling RNA," J. Mol. Graph. Model. 16, 197-205 (1998).
Jossinet, F., and Westhof, E., "Sequence to Structure (S2S): display, manipulate and interconnect RNA data from sequence to structure," Bioinformatics 21, 3320-3321 (2005).
Jossinet, F., Ludwig, T.E., and Westhof, E., "Assemble: an interactive graphical tool to analyze and build RNA architectures at 2D and 3D levels," Bioinformatics 26, 2057-2059 (2010).
Martinez, H.M., Maizel, J.V., Jr., and Shapiro, B.A., "RNA2D3D: a program for generating, viewing, and comparing 3-dimensional models of RNA," J. Biomol. Struct. Dyn. 25, 669-683 (2008).
Zwieb, C., and Muller, F., "Three-dimensional comparative modeling of RNA," Nucleic Acids Symp. Ser. 36, 69-71 (1997).
Macke, T.J., and Case, D.A., "Modeling unusual nucleic acid structures," ACS Symp. Ser., Am. Chem. Soc., 379-393 (1998).
Rother, M., Rother, K., Puton, T., and Bujnicki, J.M., "ModeRNA: A tool for comparative modeling of RNA 3D structure," Nucleic Acids Res. 39, 4007-4022 (2011).
Flores, S.C., and Altman, R.B., "Turning limited experimental information into 3D models of RNA," RNA 16, 1769-1778 (2010).
Sijenyi, F., Saro, P., Ouyang, Z., Damm-Ganamet, K., Wood, M., Jiang, J., and SantaLucia, J., Jr., "The RNA folding problems: different levels of RNA structure prediction," in RNA 3D Structure Analysis and Prediction, Leontis, N. and Westhof, E. (Eds.), Series "Nucleic Acids and Molecular Biology," Springer (2011).
Popenda, M., Szachniuk, M., Antczak, M., Purzycka, K.J., Lukasiak, P., Bartol, N., Blazewicz, J., and Adamiak, R.W., "Automated 3D structure composition for large RNAs," Nucleic Acids Res. 40, e112 (2012).
Parisien, M., and Major, F., "The MC-Fold and MC-Sym pipeline infers RNA structure from sequence data," Nature 452, 51-55 (2008).
Das, R., and Baker, D., "Automated de novo prediction of native-like RNA tertiary structures," Proc. Natl. Acad. Sci. USA 104, 14664-14669 (2007).
Das, R., Karanicolas, J., and Baker, D., "Atomic accuracy in predicting and designing noncanonical RNA structure," Nat. Meth. 7, 291-294 (2010).
Bida, J.P., and Maher, L.J., III, "Improved prediction of RNA tertiary structure with insights into native state dynamics," RNA 18, 385-393 (2012).
Frellsen, J., Moltke, I., Thiim, M., Mardia, K.V., and Ferkinghoff-Borg, J., "A probabilistic model of RNA conformational space," PLoS Comput. Biol. 5, e1000406 (2009).
Sim, A.Y., Levitt, M., and Minary, P., "Modeling and design by hierarchical natural moves," Proc. Natl. Acad. Sci. USA 109, 2890-2895 (2012).
Zhao, Y., Huang, Y., Gong, Z., Wang, Y., Man, J., and Xiao, Y., "Automated and fast building of three-dimensional RNA structures," Sci. Rep. 2, 734 (2012).
Zhang, J., Bian, Y., Lin, H., and Wang, W., "RNA fragment modeling with a nucleobase discrete-state model," Phys. Rev. E 85, 021909 (2012).
Zhao, Y., Gong, Z., and Xiao, Y., "Improvements of the hierarchical approach for predicting RNA tertiary structure," J. Biomol. Struct. Dyn. 28, 815-826 (2011).
Huang, Y., Liu, S., Guo, D., Li, L., and Xiao, Y., "A novel protocol for three-dimensional structure prediction of RNA-protein complexes," Sci. Rep. 3, 1887 (2013).
Tan, R.K.Z., Petrov, A.S., and Harvey, S.C., "YUP: A molecular simulation program for coarse-grained and multiscaled models," J. Chem. Theor. Comput. 2, 529-540 (2006).
Jonikas, M.A., Radmer, R.J., Laederach, A., Das, R., Pearlman, S., Herschlag, D., and Altman, R.B., "Coarse-grained modeling of large RNA molecules with knowledge-based potentials and structural filters," RNA 15, 189-199 (2009).
Three-dimensional st ructures of R NA obtained by means of knowledge-based interaction potentials. O Taxilaga-Zetina, P Pliego-Past Rana, M D Carbajal-Tinoco, Phys. Rev. E. 8141914Taxilaga-Zetina, O., Pliego-Past rana, P., and Carbajal-Tinoco, M.D., "Three-dimensional st ructures of R NA obtained by means of knowledge-based interaction potentials," Phys. Rev. E 81, 041914 (2010).
Ab initio RNA folding b y discrete molecular dynamics: from structure prediction to folding mechanisms. F Ding, S Sharma, P Chalasani, V V Demidov, N E Broude, N V Dokholyan, RNA. 14Ding, F., Sharma,S., Chalasani, P., Demidov, V.V., Broude, N.E. , and Dokholyan, N.V., " Ab initio RNA folding b y discrete molecular dynamics: from structure prediction to folding mechanisms," RNA 14, 1164-1173 (2008).
iFoldRNA: three-dimensional R NA st ructure predi ction and folding. S Sharm A, F Ding, N V Dokholyan, Bioinformatics. 24Sharm a, S., Ding, F., and Dokholyan, N.V., " iFoldRNA: three-dimensional R NA st ructure predi ction and folding," Bioinformatics 24, 1951-1952 (2008).
Predicting RNA folding thermodynamics with a reduced chain representation model. S Cao, S J Chen, RNA. 11. Cao, S., and Chen, S.J., "Predicting RNA folding thermodynamics with a reduced chain representation model," RNA 11, 1884-1897 (2005).
Physics-bas ed de novo prediction of RNA 3D structures. S Cao, Chen , S J , J. Phys. Chem. B. 115Cao, S., and Chen, S.J., "Physics-bas ed de novo prediction of RNA 3D structures," J. Phys. Chem. B 115, 4216-4226 (2011).
HiRE: A high resolution coarse-grained energy model for R NA. S Pasquali, P Derreum Aux, J. Phys. Chem. B. 114Pasquali, S., and Derreum aux, P., " HiRE: A high resolution coarse-grained energy model for R NA," J. Phys. Chem. B 114, 11957-11966 (2010).
Coarse-grained simulations of RNA and DNA dupl exes. T Cragnolini, P Derreum Aux, S Pasquali, J. Phys. Chem. B. 117Cragnolini, T., Derreum aux, P., and Pasquali, S., "Coarse-grained simulations of RNA and DNA dupl exes," J. Phys. Chem. B 117, 8047-8060 (2013).
Coarse-grained m odel for sim ulation R NA three-dimensional structures. Z Xia, D P Gardner, R R Gutell, P Ren, J. Phys. Chem. B. 114Xia, Z., Gardner, D.P., Gutell, R.R., and Ren, P., " Coarse-grained m odel for sim ulation R NA three-dimensional structures," J. Phys. Chem. B 114, 13497-13506 (2010).
RNA 3D struct ure prediction by using a coars e-grained model and experimental data. Z Xia, D R Bell, Y Shi, P Ren, J. Phys. Chem. B. 117Xia, Z., Bell, D.R., Shi, Y., and Ren, P., "RNA 3D struct ure prediction by using a coars e-grained model and experimental data," J. Phys. Chem. B 117, 3135-3144 (2013).
A nucleotide-level coars e-grained model of RNA. P Sulc, F Romano, T E Ouldridge, J P K Doye, A A Louis, J. Chem. Phys. 140235102Sulc, P., Romano, F., Ouldridge, T.E., Doye, J.P.K., and Louis, A.A., " A nucleotide-level coars e-grained model of RNA," J. Chem. Phys. 140, 235102 (2014).
Coarse-grained structure-bas ed model for RNA -protein compl exes developed b y fluctuation matching. N Hori, S Takada, J. Chem. Theory Comput. 8Hori, N., and Takada, S., " Coarse-grained structure-bas ed model for RNA -protein compl exes developed b y fluctuation matching," J. Chem. Theory Comput. 8, 3384-3394 (2012).
Molecular dynami cs simulation of nuclei c acids: successes, limitations, and promise. T E Cheatham, Iii, M A Young, Biopolymers. 56Cheatham, T.E., III, and Young, M.A., " Molecular dynami cs simulation of nuclei c acids: successes, limitations, and promise," Biopolymers 56, 232-256 (2000).
Coarse-grained R NA nanostructures for mol ecular dynamics simul ations. M Paliy, R Melnik, B A Shapiro, Phys. Biol. 736001Paliy, M., Melnik, R., and Shapiro, B.A., " Coarse-grained R NA nanostructures for mol ecular dynamics simul ations," Phys. Biol. 7, 036001 (2010).
Anisotropic coarse-grained st atistical potentials improve the ability to identify nativelike protein structures. N E Buchete, J E Straub, D , J. Chem. Phys. 1187658Buchete, N.E., Straub, J.E., and Thirumal ai, D., " Anisotropic coarse-grained st atistical potentials improve the ability to identify nativelike protein structures," J. Chem. Phys. 118, 7658 (2003).
Coarse-Grained Model of Coil-to-Helix Kinetics Demonstrates the Importance of Multiple Nucleation Sites in Helix Folding. A E Giessen, J E Straub, J. Chem. Theory Comput. 2. Giessen, A.E., and Straub, J.E., "Coarse-Grained Model of Coil-to-Helix Kinetics Demonstrates the Importance of Multiple Nucleation Sites in Helix Folding," J. Chem. Theory Comput. 2, 674-684 (2006).
Coarse-Grained Simulations of M acromol ecules: From DNA to Nanocom posites. J J De Pablo, Annu. Rev. Phys. Chem. 62555de Pablo, J.J., " Coarse-Grained Simulations of M acromol ecules: From DNA to Nanocom posites," Annu. Rev. Phys. Chem. 62, 555 (2011).
Coarse-Graining M ethods for Comput ational Biology. M G Saunders, G A Voth, Annu. Rev. Biophys. 42Saunders, M.G., and Voth, G.A., " Coarse-Graining M ethods for Comput ational Biology," Annu. Rev. Biophys. 42, 73-93 (2013).
Perspective: Coarse-grained models for biomolecular systems. W G Noid, J. Chem. Phys. 13990901Noid, W.G., " Perspective: Coarse-grained models for biomolecular systems," J. Chem. Phys. 139, 090901 (2013).
Coarse-grained model for predi cting RNA fol ding thermodynami cs. N Denesyuk, D Thirumalai, J. Phys. Chem. B. 117Denesyuk, N., and Thirumalai, D., "Coarse-grained model for predi cting RNA fol ding thermodynami cs," J. Phys. Chem. B 117, 4901-4911 (2013).
Mechanical unfolding of RNA hairpins. C Hyeon, D Thirumalai, Proc. Natl. Acad. Sci. USA. Natl. Acad. Sci. USA102Hyeon, C., and Thirumalai, D., "Mechanical unfolding of RNA hairpins," Proc. Natl. Acad. Sci. USA 102, 6789-6794 (2005).
Discrete RNA Libraries from ps eudo-torsional space. E Humphris-Narayanan, A M Pyle, J. Mol. Biol. 421Humphris-Narayanan, E., and Pyle, A.M., " Discrete RNA Libraries from ps eudo-torsional space," J. Mol. Biol. 421, 6-26 (2012).
Salt contribution to the fl exibility of singl e-st randed nucl eic acid of finit e length. F H Wang, Y Y Wu, Z J Tan, Biopolymers. 99Wang, F.H. Wu, Y.Y., and Tan, Z.J., " Salt contribution to the fl exibility of singl e-st randed nucl eic acid of finit e length," Biopolymers 99, 370-381 (2013).
See supplementary material at [URL] for the detailed description of energy functions and corresponding parameters of the model, the melting curves of three RNAs (RH23, RH24 and RH30) at different [Na+]'s and the description of the 46 RNAs used in this work and predicted results.
Nucleic acid helix stability: effects of s alt conc entration, cation val ence and size, and chain length. Z J Tan, Chen , S J , Biophys. J. 90Tan, Z.J., and Chen, S.J., " Nucleic acid helix stability: effects of s alt conc entration, cation val ence and size, and chain length," Biophys. J. 90, 1175-1190 (2006).
The molecul ar theory of polyelectrolyte solutions with applications to the electrost atic properties of polynucleotides. G S Manning, Q. Rev. Biophys. 11Manning, G.S., "The molecul ar theory of polyelectrolyte solutions with applications to the electrost atic properties of polynucleotides," Q. Rev. Biophys. 11, 179-246 (1978).
Native-like RNA tertiary structures using a sequence-encoded cleavage agent and refinement by discrete molecular dynamics. C M Gherghe, C W Leonard, F Ding, N V Dokholyan, K M Weeks, J. Am. Chem. Soc. 131. Gherghe, C.M., Leonard, C.W., Ding, F., Dokholyan, N.V., and Weeks, K.M., "Native-like RNA tertiary structures using a sequence-encoded cleavage agent and refinement by discrete molecular dynamics," J. Am. Chem. Soc. 131, 2541-2546 (2009).
Thermodynamic param eters for an expanded nearest -neighbor model for form ation of R NA duplexes wit h Watson-Crick base pairs. T Xia, J Santalucia, Jr, M E Burkand, R Kierzek, S J Schroeder, X Jiao, C Cox, D H Turner, Biochemistry. 37Xia, T., SantaLucia, J., Jr., Burkand, M.E., Kierzek, R., Schroeder, S.J., Jiao, X., Cox, C. , and Turner, D.H., "Thermodynamic param eters for an expanded nearest -neighbor model for form ation of R NA duplexes wit h Watson-Crick base pairs," Biochemistry 37, 14719-14735 (1998).
Expended sequence dependence of therm odynami c parameters improves prediction of RNA secondary structure. D H Mathews, J Sabina, M Zuker, D H Turner, J. Mol. Biol. 288Mathews, D.H., Sabina, J., Zuker, M., and Turner, D.H., " Expended sequence dependence of therm odynami c parameters improves prediction of RNA secondary structure," J. Mol. Biol. 288, 911-940 (1999).
Evolutionary algorithm in the optimization of a coarse-grained force field. F Leonarski, F Trovato, V Tozzini, A Les, J Trylska, J. Chem. Theory Comput. 9Leonarski, F., Trovato, F., Tozzini, V., Les, A., and Trylska, J., " Evolutionary algorithm in the optimization of a coarse-grained force field," J. Chem. Theory Comput. 9, 4874-4889 (2013).
Fully differentiable coarse-grained and all -atom knowledge-based potentials for RNA structure evaluation. J Berrauer, X Huang, A Y Sim, M Levitt, RNA. 17Berrauer, J., Huang, X., Sim, A.Y., and Levitt, M., " Fully differentiable coarse-grained and all -atom knowledge-based potentials for RNA structure evaluation," RNA 17, 1066-1075 (2011).
Optimization by simulated annealing. S Kirkpatri Ck, C D Gelatt, Jr, M P Vecchi, Science. 220Kirkpatri ck, S., Gelatt, C.D., Jr., and Vecchi, M.P., " Optimization by simulated annealing," Science 220, 671-680 (1983).
Discription of RNA folding by simulated annealing. M Schmitz, G Steger, J. Mol. Biol. 255Schmitz, M., and Steger, G., " Discription of RNA folding by simulated annealing," J. Mol. Biol. 255, 254-266 (1996).
The pivot algorithm: A highly effici ent Mont e Carlo method for the sel f-avoiding walk. N Madras, A D Sokal, J. Stat. Phys. 50Madras, N., and Sokal, A.D., "The pivot algorithm: A highly effici ent Mont e Carlo method for the sel f-avoiding walk," J. Stat. Phys. 50, 109-186 (1988).
New metrics for com paring and asses sing discrepancies between RNA 3D structures and models. M Parisien, J A Cruz, E Westhof, F Major, RNA. 15Parisien, M., Cruz, J.A., Westhof, E., and Major, F., " New metrics for com paring and asses sing discrepancies between RNA 3D structures and models," RNA 15, 1875-1885 (2009).
VMD: visual molecul ar dynami cs. W Humphrey, A Dalke, K Schulten, J. Mol. Graph. 14Humphrey, W., Dalke, A., and Schulten, K., " VMD: visual molecul ar dynami cs," J. Mol. Graph. 14, 33 -8, 27-8 (1996).
Pseudoknots: RNA structures with diverse functions. D W Staple, S E Butcher, PLoS Biol. 3, e213. Staple, D.W., and Butcher, S.E., "Pseudoknots: RNA structures with diverse functions," PLoS Biol. 3, e213 (2005).
Predicting RNA pseudoknot folding thermodynami cs. S Cao, Chen , S J , Nucl eic Acids R es. 34Cao, S., and Chen, S.J., " Predicting RNA pseudoknot folding thermodynami cs," Nucl eic Acids R es. 34, 2634 -2652 (2006).
Tertiary interactions det ermine the accuracy of RNA folding. S Chauhan, S A Woodson, J. Am. Chem. Soc. 130Chauhan, S., and Woodson, S.A., "Tertiary interactions det ermine the accuracy of RNA folding," J. Am. Chem. Soc. 130, 1296-1303 (2008).
Prediction of geom etri cally feasibl e three-dimensional structures of pseudoknotted RNA through free energy estimation. J Zhang, J Dundas, M Lin, M Chen, W Wang, J Liang, RNA. 15Zhang, J., Dundas, J., Lin, M., Chen, M., Wang, W. , and Liang, J., " Prediction of geom etri cally feasibl e three-dimensional structures of pseudoknotted RNA through free energy estimation," RNA 15, 2248-2263 (2009).
Atomistic analysis of pseudoknotted RNA unfolding. Y Zhang, J Zhang, Wang , W , J. Am. Chem. Soc. 133Zhang, Y., Zhang, J., and Wang, W., " Atomistic analysis of pseudoknotted RNA unfolding," J. Am. Chem. Soc. 133, 6882-6885 (2011).
Kinetic mechanism of conformational switch bet ween bistabl e RNA hai rpins. X Xu, Chen , S J , J. Am. Chem. Soc. 134Xu, X., and Chen, S.J., " Kinetic mechanism of conformational switch bet ween bistabl e RNA hai rpins," J. Am. Chem. Soc. 134, 12499-12507 (2012).
RNA hairpin loop stability depends on closing base pair. M J Serra, M H Lyttle, T J Axenson, C A Schadt, D H Turner, Nucleic Acids Res. 21Serra, M.J., Lyttle, M.H., Axenson, T.J., Schadt, C.A. , and Turner, D.H., " RNA hairpin loop stability depends on closing base pair," Nucleic Acids Res. 21, 3845-3849 (1993).
Improved parameters for the prediction of RNA hairpin stability. M J Serra, W T Barnes, K Betschart, M J Gutierrez, K J Sprouse, C K Riley, L Stewart, R E Temel, Biochemistry. 36Serra, M.J., Barnes, W.T., Betschart, K., Gutierrez, M.J., Sprouse, K.J., Riley, C.K., Stewart, L. , and Temel, R.E., " Improved parameters for the prediction of RNA hairpin stability," Biochemistry 36, 4844-4851 (1997).
Stability of RNA hairpin loops clos ed by AU bas e pairs. C J Vecenie, M J Serra, Biochemist ry. 43Vecenie, C.J., and Serra, M.J., "Stability of RNA hairpin loops clos ed by AU bas e pairs," Biochemist ry 43, 11813-11817 (2004).
Sequence dependence of the stability of RNA hairpi n molecules with six nucleotide loops. C J Vecenie, C V Morrow, A Zyra, M J Serra, Biochemistry. 45Vecenie, C.J., Morrow, C.V., Zyra, A., and Serra, M.J., "Sequence dependence of the stability of RNA hairpi n molecules with six nucleotide loops," Biochemistry 45, 1400-1407 (2006).
Characterization of RNA hairpin loop stability. D R Groebe, O C Uhlenbeck, Nucl eic Acids Res. 16Groebe, D.R., and Uhlenbeck, O.C., " Characterization of RNA hairpin loop stability," Nucl eic Acids Res. 16, 11725-11735 (1988).
Thermodynami c comparison of salt dependenc e of natural RNA hairpins and RNA hairpins with non-nucleotide spacers. D J Williams, K B Hall, Biochemistry. 35Williams, D.J., and Hall, K.B., "Thermodynami c comparison of salt dependenc e of natural RNA hairpins and RNA hairpins with non-nucleotide spacers," Biochemistry 35, 14665-14670 (1996).
Salt dependence of nucleic acid hairpin stability. Z J Tan, Chen , S J , Biophys. J. 96Tan, Z.J., and Chen, S.J., " Salt dependence of nucleic acid hairpin stability," Biophys. J. 96, 738-752 (2008).
Tertiary structure of an RNA pseudoknot is stabilized by " diffuse" Mg2+ ions. A M Soto, V Misra, D E Draper, Biochemistry. 46Soto, A.M., Misra, V., and Draper, D.E., " Tertiary structure of an RNA pseudoknot is stabilized by " diffuse" Mg2+ ions," Biochemistry 46, 2973-2983 (2007).
Energeti cs of a strongly pH dependent RNA terti ary structure in a fram eshi fting pseudoknot. P L Nixon, D P Giedroc, J. Mol. Biol. 296Nixon, P.L., and Giedroc, D.P., " Energeti cs of a strongly pH dependent RNA terti ary structure in a fram eshi fting pseudoknot," J. Mol. Biol. 296, 659-671 (2000).
Salt contribution to RNA tertiary structure folding stability. Z J Tan, Chen , S J , Biophys. J. 101Tan, Z.J., and Chen, S.J., "Salt contribution to RNA tertiary structure folding stability," Biophys. J. 101, 176-187 (2011).
Ion-mediated RNA structural coll apse: effect of spatial confinem ent. Z J Tan, Chen , S J , Biophys. J. 103Tan, Z.J., and Chen, S.J., " Ion-mediated RNA structural coll apse: effect of spatial confinem ent," Biophys. J. 103, 827-836 (2012).
Probing Na + -induced changes in the HIV -1 TAR conform ational dynami cs using NMR residual dipolar couplings: new insights into the rol e of counterions and electrostatic interactions in adaptive recognition. A Casiano-Negroni, X Sun, Al-Hashimi , H M , Biochemistry. 46Casiano-Negroni, A., Sun, X., and Al-Hashimi, H.M., " Probing Na + -induced changes in the HIV -1 TAR conform ational dynami cs using NMR residual dipolar couplings: new insights into the rol e of counterions and electrostatic interactions in adaptive recognition," Biochemistry 46, 6525-6535 (2007).
Conform ational analysis o f nucleic acids revisited: Curves+. R Lavery, M Moakher, J H Maddocks, D Petkeviciut E, K Zakrzewska, Nucleic Acids Res. 37Lavery, R., Moakher, M., Maddocks, J.H., Petkeviciut e, D. , and Zakrzewska, K., " Conform ational analysis o f nucleic acids revisited: Curves+," Nucleic Acids Res. 37, 5917-5929 (2009).
Combining temperature and force to study folding of an RNA hai rpin. W Stephenson, S Keller, R Santiago, J E Albrecht, P N Asare-Okai, S A Tenenbaum, M Zuker, P T X Li, Phys. Chem. Chem. Phys. 16Stephenson, W., Keller, S., Santiago, R., Albrecht, J.E., Asare-Okai, P.N., Tenenbaum, S.A., Zuker, M., and Li, P.T.X., " Combining temperature and force to study folding of an RNA hai rpin," Phys. Chem. Chem. Phys. 16, 906-917 (2014).
Folding of human telom eras e RNA pseudoknot using ion-jump and temperature-quench simulations. S Biyun, S S Cho, D Thirumalai, J. Am. Chem. Soc. 133Biyun, S., Cho, S.S., and Thirumalai, D., "Folding of human telom eras e RNA pseudoknot using ion-jump and temperature-quench simulations," J. Am. Chem. Soc. 133, 20634-20643 (2011).
Importance of diffuse metal ion binding to RNA. Z J Tan, Chen , S J , Met. Ions Life Sci. 9Tan, Z.J., and Chen, S.J., " Importance of diffuse metal ion binding to RNA," Met. Ions Life Sci. 9, 101-124 (2011).
Determining RNA three-dimensional structures using low-res olution dat a. M Parisien, F Major, Struct. Biol. 179Parisien, M., and Major, F., " Determining RNA three-dimensional structures using low-res olution dat a," Struct. Biol. 179, 252-260 (2012).
Structural inference of native and partially fol ded R NA by high -throughput cont act mapping. R Das, M Kudaravalli, M Jonikas, A Laederach, R Fong, J P Schwans, D Baker, J A Piccirilli, R B Altman, D Herschlag, Proc. Natl. Acad. Sci. USA 105. Natl. Acad. Sci. USA 105Das, R., Kudaravalli, M., Jonikas, M., Laederach, A., Fong, R., Schwans, J.P. , Baker, D., Piccirilli, J.A., Altman, R.B., and Herschlag, D., " Structural inference of native and partially fol ded R NA by high -throughput cont act mapping," Proc. Natl. Acad. Sci. USA 105, 4144-4149 (2008).
The molecular interactions that stabilize RNA tertiary structure: RNA moti fs, patterns, and networks. S E Butcher, A M Pyle, Acc. Chem. Res. 44Butcher, S.E., and Pyle, A.M., "The molecular interactions that stabilize RNA tertiary structure: RNA moti fs, patterns, and networks," Acc. Chem. Res. 44, 1302-1311 (2011).
Topological constraints: using R NA s econdary structure to m odel 3D conform ation, folding pathways, and dynami c adapt ation. M H Bailor, A M Mustoe, C L Brooks, Al-Hashimi , H M , Curr. Opin. Struct. Biol. 21Bailor, M.H., Mustoe, A.M., Brooks, C.L., and Al-Hashimi, H.M., " Topological constraints: using R NA s econdary structure to m odel 3D conform ation, folding pathways, and dynami c adapt ation," Curr. Opin. Struct. Biol. 21, 296- 305 (2011).
Automated RNA terti ary st ruct ure prediction from secondary st ruct ure and low-resolution restraints. M J Seetin, D H Mathews, J. Comput. Chem. 32Seetin, M.J., and Mathews, D.H., " Automated RNA terti ary st ruct ure prediction from secondary st ruct ure and low-resolution restraints," J. Comput. Chem. 32, 2232-2244 (2011).
Selective 2'-hydroxyl acylation analyzed by primer ext ension (SHAPE): Quantitative RNA structure analysis at single nucleotide resolution. K A Wilkinson, E J Merino, K M Weeks, Nat. Protoc. 1Wilkinson, K.A., Merino, E.J., and Weeks, K.M., " Selective 2'-hydroxyl acylation analyzed by primer ext ension (SHAPE): Quantitative RNA structure analysis at single nucleotide resolution," Nat. Protoc. 1, 1610-1616 (2006).
Predicting Heli cal Topologies in RNA Junctions as Tree Graphs. C Laing, S Jung, N Kim, S Elmetwaly, M Zahran, T Schlick, PLoS ONE. 871947Laing, C., Jung, S., Kim, N., Elmetwaly, S., Zahran, M. , and Schlick, T., " Predicting Heli cal Topologies in RNA Junctions as Tree Graphs," PLoS ONE 8, e71947 (2013).
Three-dim ensional RNA struct ure refinement by hydroxyl radical probing. F Ding, C A Lavender, K M Weeks, N V Dokholyan, Nat. Methods. 9Ding, F., Lavender, C.A., Weeks, K.M., and Dokholyan, N.V., " Three-dim ensional RNA struct ure refinement by hydroxyl radical probing," Nat. Methods 9, 603-608 (2012).
RNA helix stability in mixed Na+/Mg2+ solution. Z J Tan, Chen , S J , Biophys. J. 92Tan, Z.J., and Chen, S.J., " RNA helix stability in mixed Na+/Mg2+ solution," Biophys. J. 92, 3615-3632 (2007).
Knowledge-bas ed instanti ation of full atomic detail into coarse-grain RNA 3D structural models. M A Jonikas, R J Radmer, R B Altman, Bioinformatics. 25Jonikas, M.A. Radmer, R.J., and Altman, R.B., " Knowledge-bas ed instanti ation of full atomic detail into coarse-grain RNA 3D structural models," Bioinformatics 25, 3259-3266 (2009).
FIGURES AND TABLES

FIGURE 1. (a) Our coarse-grained representation for one fragment of RNA superposed on an all-atom representation. Namely, three beads are located at the atoms of phosphate (P, orange), C4' (C, green), and N1 for pyrimidine or N9 for purine (N, blue), respectively. The structure is shown with PyMol (http://www.pymol.org). (b) The schematic representation for base-pairing, which is restricted by three types of distances: N_iN_j (red), C_i(j)N_j(i) (green) and P_i(j)N_j(i) (orange); here, we use the distance constraints rather than the angular constraints for convenient programming of the model. (c) The schematic representation for base-stacking. Dash-dotted line (blue): …
FIGURE 2. The illustration of the present algorithm for the folding process (a) and structure refinement (b). (a) The time-evolution of the energy (top panel), the number of base pairs (middle panel) and the 3D structures (bottom panel) during the MC simulated annealing simulation of the RNA hairpin (PDB code: 1u2a). (b) The structural refinement of the predicted structure of the hairpin (PDB code: 1u2a) illustrated by the energy of optimized structures (top panel), the RMSDs between optimized structures and the native structure in PDB (middle panel), and the predicted 3D structures (ball-stick; bottom panel) of the hairpin with the minimum RMSD (1.5 Å) and the mean RMSD (2.6 Å) from the native structures (cartoon). The structures are shown with PyMol (http://www.pymol.org).
FIGURE 3. The predicted 3D structures (ball-stick) with the minimum RMSD and the mean RMSD for a hairpin (a), a hairpin with bulge loop (b), a hairpin with internal loop (c) and an RNA pseudoknot (d). The mean (minimum) RMSDs between the predicted structures and their native structures are … (Å), respectively. The RMSDs are calculated over C beads, and the structures are shown with PyMol (http://www.pymol.org).
FIGURE 4. The scatter plots of mean (square) and minimum (circle) RMSDs between the predicted structures and the native structures as functions of RNA size (nt) (a) and of the number (nt) of … (b).

FIGURE 5. (a) Comparison of the RMSDs between the present model and FARNA. The RMSDs of structures including hairpins and pseudoknots predicted by FARNA are calculated over the C4' atom in the backbone and the data are taken from Ref. 46. (b) Comparison of the RMSDs between the present model and the MC-Fold/MC-Sym pipeline (Ref. 45). For each of the 46 tested RNAs, we use the MC-Fold/MC-Sym pipeline online tool (http://www.major.iric.ca/MC-Fold/) to test the accuracy of MC-Fold/MC-Sym and calculate the RMSD for the top 1 predicted structure over the C4' atom in the backbone. The RMSDs in (a) and (b) of structures predicted by the present model are calculated over C beads from the corresponding C4' atoms in native structures.
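The RMSD comparisons summarized in FIGURES 3-5 are root-mean-square deviations over corresponding beads (the C beads of the model against the C4' atoms of the native structures). As a minimal sketch, assuming the two structures have already been optimally superposed (e.g. by a Kabsch fit, which is not shown) and that the coordinates below are placeholders rather than structures from this work, such an RMSD could be computed as follows:

```python
import numpy as np

def rmsd_over_beads(pred_coords, native_coords):
    """Root-mean-square deviation over corresponding beads (e.g. the model's
    C beads against the native C4' atoms), assuming the two structures have
    already been optimally superposed."""
    diff = np.asarray(pred_coords, dtype=float) - np.asarray(native_coords, dtype=float)
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

# Tiny placeholder coordinate sets in Angstrom (not structures from this work):
pred = np.array([[0.0, 0.0, 0.0], [3.9, 0.1, 0.2], [7.8, 0.3, -0.1]])
native = np.array([[0.1, -0.1, 0.0], [4.0, 0.0, 0.0], [7.7, 0.2, 0.1]])
print(f"RMSD = {rmsd_over_beads(pred, native):.2f} A")
```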
FIGURE 6. (a) The time-evolution of the number of base pairs of the hairpin RH24 (shown in Table I) at different temperatures: 40℃ (top panel), 82.5℃ (middle panel) and 120℃ (bottom panel). (b) … as functions of temperature. Symbols: the predicted data; Line: fitted to the predicted data through Eq. 7; Ball-stick: 3D structures at different temperatures shown with PyMol (http://www.pymol.org). (c) The fractions of denatured base pairs for three RNA hairpins (RH6, RH18, and RH23 shown in Table I) as functions of temperature. Symbols: predicted data.
FIGURE 7. (a) The experimental (Ref. 109) and predicted inter-helical bend angle as functions of [Na+] for the tetraloop HIV-1 TAR variant at 25℃ and the corresponding typical 3D structures predicted by the present model. (b) The experimental (calculated from Table II in Ref. 103) and predicted fraction of denatured base pairs as functions of [Na+] for RNA hairpin RH24 (in Table I) at 70℃ and the corresponding typical 3D structures. The 3D structures in (a) and (b) are shown with PyMol (http://www.pymol.org). (c) and (d) The melting temperature T_m as functions of [Na+] for six RNA hairpins (shown in Table I). Symbols: experimental data, (c) ■ RH23 (Ref. 103), • RH24 (Ref. 103), ▲ RH30 (Ref. 105) and (d) ■ RH25 (Ref. 102), • RH27 (Ref. 102), ▲ RH29 (Ref. 102). Bold lines: fitted to the predicted data through Eq. 7; Dashed lines: experimental curves (Ref. 98).
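The melting analysis behind FIGURES 6 and 7 amounts to fitting the fraction of denatured base pairs versus temperature and reading off the melting temperature T_m. Eq. 7 of the text is not reproduced in this excerpt, so the sketch below assumes a generic two-state sigmoid purely for illustration, with made-up "predicted" data points:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_state_fraction(T, Tm, dT):
    """Assumed two-state sigmoid for the fraction of denatured base pairs;
    a stand-in for Eq. 7 (not reproduced in this excerpt), with f(Tm) = 1/2."""
    return 1.0 / (1.0 + np.exp(-(T - Tm) / dT))

# Made-up 'predicted' melting data (temperature in Celsius, fraction denatured):
T = np.array([40., 50., 60., 70., 80., 90., 100., 110., 120.])
f = np.array([0.02, 0.05, 0.12, 0.30, 0.55, 0.78, 0.91, 0.97, 0.99])

(Tm_fit, dT_fit), _ = curve_fit(two_state_fraction, T, f, p0=(80.0, 10.0))
print(f"Fitted melting temperature Tm ~ {Tm_fit:.1f} C")
```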
Parity-violating asymmetries in elastic ep scattering in the chiral quark-soliton model: Comparison with A4, G0, HAPPEX and SAMPLE
arXiv:hep-ph/0601239v1 29 Jan 2006
Antonio Silva
Departamento de Física and Centro de Física Computacional
Universidade de Coimbra
P-3000CoimbraPortugal
Faculdade de Engenharia da
Universidade do Porto
P-4200-465 PortoPortugal
Hyun-Chul Kim
Department of Physics and Nuclear Physics & Radiation Technology Institute (NuRI)
Pusan National University
609-735BusanRepublic of Korea
Diana Urbano
Departamento de Física and Centro de Física Computacional
Universidade de Coimbra
P-3000CoimbraPortugal
Faculdade de Engenharia da
Universidade do Porto
P-4200-465 PortoPortugal
Klaus Goeke
Institut für Theoretische Physik II
Ruhr-Universität Bochum
D-44780BochumGermany
(Dated: August 2005)
PACS numbers: 12.40.-y, 14.20.Dh
* Electronic address: [email protected]
† Electronic address: [email protected]
‡ Electronic address: [email protected]
§ Electronic address: [email protected]
We investigate parity-violating electroweak asymmetries in the elastic scattering of polarized electrons off protons within the framework of the chiral quark-soliton model (χQSM). We use as input the former results of the electromagnetic and strange form factors and newly calculated SU(3) axial-vector form factors, all evaluated with the same set of four parameters adjusted several years ago to general mesonic and baryonic properties. Based on this scheme, which yields positive electric and magnetic strange form factors with a µ_s = (0.08 − 0.13) µ_N, we determine the parity-violating asymmetries of elastic polarized electron-proton scattering. The results are in a good agreement with the data of the A4, HAPPEX, and SAMPLE experiments and reproduce the full Q²-range of the G0-data. We also predict the parity-violating asymmetries for the backward G0 experiment.
1. The complex structure of the nucleon goes well beyond its simplest description as a collection of three valence quarks moving in some potential. The sea of gluons and qq-pairs that arises in quantum chromodynamics is expected to play an important role even at long distance scales. As the lightest explicitly non-valence quark, the strange quark provides an attractive tool to probe the qq-sea, since any strange quark contribution to an observable must be the effect of the sea. Thus the strange quark contribution to the distributions of charge and magnetization in the nucleon has been a very important issue for well over a decade, since it provides a vital clue in understanding the structure of the nucleon. For recent reviews, see, for example, Refs. [1,2,3,4,5]. Recently, the strangeness content of the nucleon has been studied particularly intensively, since parity-violating electron scattering (PVES) has been demonstrated to provide an essential tool for probing the sea of ss pairs in the vector channel [6,7]. In fact, various PVES experiments have already been conducted in order to measure the parity-violating asymmetries (PVAs) from which the strange vector form factors can be extracted [8,9,10,11,12,13,14,15,16]. While PVES experiments have direct access to the PVA with relatively good precision, a certain amount of uncertainty arises in the flavor decomposition of the nucleon vector form factors. As a result, the strange vector form factors extracted so far from the data have rather large errors [8,9,10,11,12,13,14,15].
The chiral quark-soliton model (χQSM) is an effective quark theory of the instanton degrees of freedom of the QCD vacuum. It results in an effective chiral action for valence and sea quarks, both moving in a static self-consistent Goldstone background field [17,18] originating from the spontaneous chiral symmetry breaking of QCD. It has successfully been applied to mass splittings of hyperons, to electromagnetic and axial-vector form factors [17] of the baryon octet and decuplet, and to forward and generalized parton distributions [19,20,21], and has even led to the prediction of the heavily discussed pentaquark baryon Θ+ [22]. The present authors have recently investigated the strange vector form factors in the χQSM [23,24] and presented some aspects of the SAMPLE, HAPPEX, and A4 experiments. The results have shown a good agreement with the available data, though the experimental uncertainties are rather large, as mentioned above. Thus, it is theoretically more challenging to calculate the PVAs directly and to confront them with the more accurate experimental data. Moreover, since the G0 experiment has measured the PVA over a range of momentum transfers 0.12 ≤ Q² ≤ 1.0 GeV² in the forward direction [16], the check of the theory is on much firmer ground.
Actually, the PVA contains a set of six electromagnetic form factors (G^{u,d,s}_{E,M}) and three axial-vector ones (G^{u,d,s}_A). In fact, all these form factors have already been calculated within the SU(3)-χQSM [23,24,25,26] by using the well established parameter set consisting of m_s = 180 MeV and the other three parameters having been adjusted some years ago to the physical values of f_π, m_π and baryonic properties such as, e.g., the charge radius of the proton and the delta-nucleon (∆−N) mass splitting. Apart from reproducing the existing experimental data on the PVAs, we will predict the PVAs of the future G0 experiment at backward angles.
2. The PVA in polarized ep scattering is defined as the difference of the total cross sections for circularly polarized electrons with positive and negative helicities divided by their sum:
$$A_{PV} = \frac{\sigma_+ - \sigma_-}{\sigma_+ + \sigma_-}. \qquad (1)$$
Denoting, at the tree level, the amplitudes for γ and Z exchange by M_γ and M_Z, respectively, the total cross section for a given polarization is proportional to the square of the sum of the amplitudes, which indicates the interference between the electromagnetic and neutral weak amplitudes:
$$\sigma_\pm \sim \left|M_\gamma + M_Z\right|^2_\pm. \qquad (2)$$
The PVA comprises three different terms:
$$A_{PV} = A_V + A_s + A_A, \qquad (3)$$
where
$$A_V = -a\rho'\left[(1 - 4\kappa' \sin^2\theta_W) - \frac{\varepsilon\, G^p_E G^n_E + \tau\, G^p_M G^n_M}{\varepsilon\,(G^p_E)^2 + \tau\,(G^p_M)^2}\right],$$
$$A_s = a\rho'\,\frac{\varepsilon\, G^p_E G^s_E + \tau\, G^p_M G^s_M}{\varepsilon\,(G^p_E)^2 + \tau\,(G^p_M)^2},$$
$$A_A = a\,\frac{(1 - 4\sin^2\theta_W)\,\varepsilon'\, G^p_M G^p_A}{\varepsilon\,(G^p_E)^2 + \tau\,(G^p_M)^2}, \qquad (4)$$
$$a = \frac{G_F Q^2}{4\sqrt{2}\,\pi\,\alpha_{EM}}, \quad \tau = \frac{Q^2}{4M_N^2}, \quad \varepsilon = \left[1 + 2(1+\tau)\tan^2(\theta/2)\right]^{-1}, \quad \varepsilon' = \sqrt{\tau(1+\tau)(1-\varepsilon^2)}.$$
The G^p_{E,M}, G^s_{E,M}, and G^p_A denote, respectively, the electromagnetic form factors of the proton, the strange vector form factors, and the axial-vector form factors. G_F is the Fermi constant as measured from muon decay, α_EM the fine structure constant, and θ_W the electroweak mixing angle, given as sin²θ_W = 0.2312 [27]. Q² stands for the negative square of the four-momentum transfer. The parameters ρ' and κ' are related to electroweak radiative corrections [1,28].
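As a purely illustrative numerical sketch of Eqs. (1)-(4) (the form-factor values below are placeholders, not the χQSM results), the asymmetry can be evaluated as follows:

```python
import numpy as np

# Constants quoted in the text (G_F in GeV^-2, masses in GeV).
G_F = 1.16637e-5          # Fermi constant from muon decay
ALPHA_EM = 1.0 / 137.036  # fine structure constant
SIN2_THETA_W = 0.2312     # electroweak mixing angle
M_N = 0.938               # nucleon mass

def pv_asymmetry(Q2, theta, GEp, GMp, GEn, GMn, GEs, GMs, GAp,
                 rho=1.0, kappa=1.0):
    """A_PV = A_V + A_s + A_A of Eqs. (3)-(4).

    Q2 in GeV^2, theta = lab scattering angle in radians.  rho and kappa
    play the role of the radiative-correction factors rho' and kappa'
    (both 1 when the corrections are switched off).
    """
    a = G_F * Q2 / (4.0 * np.sqrt(2.0) * np.pi * ALPHA_EM)
    tau = Q2 / (4.0 * M_N ** 2)
    eps = 1.0 / (1.0 + 2.0 * (1.0 + tau) * np.tan(theta / 2.0) ** 2)
    eps_prime = np.sqrt(tau * (1.0 + tau) * (1.0 - eps ** 2))
    denom = eps * GEp ** 2 + tau * GMp ** 2

    A_V = -a * rho * ((1.0 - 4.0 * kappa * SIN2_THETA_W)
                      - (eps * GEp * GEn + tau * GMp * GMn) / denom)
    A_s = a * rho * (eps * GEp * GEs + tau * GMp * GMs) / denom
    A_A = a * (1.0 - 4.0 * SIN2_THETA_W) * eps_prime * GMp * GAp / denom
    return A_V + A_s + A_A

# Illustrative call with placeholder form-factor values (not model output):
A = pv_asymmetry(Q2=0.23, theta=np.radians(35.0),
                 GEp=0.57, GMp=1.60, GEn=0.04, GMn=-1.09,
                 GEs=0.02, GMs=0.15, GAp=-0.85)
print(f"A_PV = {A * 1e6:.1f} ppm")
```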
FIG. 2: The parity-violating asymmetries as a function of Q², compared with the SAMPLE measurement [9]. The dotted curve is calculated without the s-quark contribution. The dashed curve is obtained by using the form factors from the χQSM without the electroweak radiative corrections, while the solid one (χQSM) includes them and is our final result.

FIG. 3: The parity-violating asymmetries as a function of Q², compared with the A4 measurement [12]. The dotted curve is calculated without the s-quark contribution. The dashed curve is obtained by using the form factors from the χQSM without the electroweak radiative corrections, while the solid one (χQSM) includes them and is our final result.
Factoring out the quark charges, we can express the electromagnetic and electroweak neutral axial-vector form factors of the proton in terms of the flavor-decomposed electromagnetic form factors:
$$G^p_{E,M} = \frac{2}{3}\,G^u_{E,M} - \frac{1}{3}\left(G^d_{E,M} + G^s_{E,M}\right), \qquad G^{pZ}_A = G^d_A - \left(G^u_A + G^s_A\right). \qquad (5)$$
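A one-line illustration of Eq. (5), again with arbitrary placeholder flavor form factors rather than model output, might look like:

```python
def proton_em(Gu, Gd, Gs):
    """Eq. (5): G^p_{E,M} = (2/3) G^u - (1/3)(G^d + G^s), quark charges factored out."""
    return (2.0 / 3.0) * Gu - (1.0 / 3.0) * (Gd + Gs)

def proton_weak_axial(GuA, GdA, GsA):
    """Eq. (5): G^{pZ}_A = G^d_A - (G^u_A + G^s_A)."""
    return GdA - (GuA + GsA)

# Example with made-up flavor values at some fixed Q^2 (not model output):
GEp_demo = proton_em(Gu=1.2, Gd=0.7, Gs=0.05)
GpZA_demo = proton_weak_axial(GuA=0.85, GdA=-0.40, GsA=-0.05)
```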
FIG. 4: The parity-violating asymmetries as a function of Q², compared with the HAPPEX measurement [11]. The dotted curve is calculated without the s-quark contribution. The dashed curve is obtained by using the form factors from the χQSM without the electroweak radiative corrections, while the solid one (χQSM) includes them and is our final result.
Including the electroweak radiative corrections [1,28], we find that the electroweak axial-vector form factor of the proton can be written as [30]:
$$G^p_A(Q^2) = -(1 + R^1_A)\,G^{(3)}_A(Q^2) + R^0_A + G^s_A, \qquad (6)$$
with the values for the electroweak radiative corrections [28]:
$$R^1_A = -0.41 \pm 0.24, \qquad R^0_A = 0.06 \pm 0.14. \qquad (7)$$
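Folding Eqs. (6)-(7) into the axial-vector input of the asymmetry is straightforward; a small sketch with placeholder values for G^{(3)}_A and G^s_A (not the χQSM curves) reads:

```python
def GpA_corrected(GA3, GAs, R1A=-0.41, R0A=0.06):
    """Eq. (6) with the central values of Eq. (7):
    G^p_A = -(1 + R^1_A) G^(3)_A + R^0_A + G^s_A."""
    return -(1.0 + R1A) * GA3 + R0A + GAs

# Placeholder isovector and strange axial inputs at some Q^2:
GAp_demo = GpA_corrected(GA3=0.63, GAs=-0.04)
```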
Figure 1 depicts the electroweak neutral axial-vector form factors expressed in Eqs. (5,6), which are obtained in the χQSM [26]. We will use G^{pZ}_A in Fig. 1 to yield the PVA. The other six electromagnetic form factors, G^{p,n,s}_{E,M}, can be read off from Refs. [23,24,25].
3.
We discuss now the results of the PVA obtained from the χQSM. In detail, the model has the following parameters: the constituent quark mass M, the current quark mass m_u, the cut-off Λ of the proper-time regularization, and the strange quark mass m_s. However, these parameters are not free but have been fixed to independent observables in a very clear way [17]: for a given M, the Λ and the m_u are adjusted in the mesonic sector to the physical pion mass m_π = 139 MeV and the pion decay constant f_π = 93 MeV. The strange quark mass is selected to be m_s = 180 MeV throughout the present work, with which the mass splittings of hyperons are reproduced very well. The remaining parameter M is varied from 400 MeV to 450 MeV. However, the value of 420 MeV, which for many years is known to produce the best fit to many baryonic observables [17], is chosen for our final result in the baryonic sector. We always assume isospin symmetry. With these parameters at hand, we can proceed to derive the form factors of the proton required for the PVA. On obtaining these form factors, we use the symmetry conserving quantization scheme [29] and take into account the rotational 1/N_c corrections, the explicit SU(3) symmetry breaking in linear order, and the wave function corrections, as discussed in Refs. [17,23] in detail. With this scheme, we have obtained the results [23,24] for the strange vector form factors in good agreement with the data of the A4, SAMPLE and HAPPEX experiments as far as they were available.¹

FIG. 5: The parity-violating asymmetries as a function of Q², compared with the forward G0 measurement [16]. The dotted curve is calculated without the s-quark contribution. The dashed curve is obtained by using the form factors from the χQSM without the electroweak radiative corrections, while the solid one (χQSM) includes them and is our final result.

We present our numerical results in Figs. 2-6 at kinematics relevant to the A4, G0, HAPPEX, and SAMPLE experiments in comparison with the data. The dotted curves depict the PVA without the strange quark contribution; this means we put A_s = 0 in Eq. (3). The dashed ones are obtained by using the form factors from the SU(3)-χQSM without the electroweak radiative corrections, i.e. with ρ' and κ' set equal to zero, while the solid ones (χQSM) are our final theoretical asymmetries including those corrections. One notices that the effect of the electroweak radiative corrections is rather tiny. One also notices that with increasing Q² the PVA without strange contribution deviates more and more from the experiments, which means that with increasing Q² the contribution of the strange quarks gets larger and larger, reaching in the end an amount of up to 40 % in the present model.
As shown in Figs. 2-5, the present results are in a good agreement with the experimental data from A4, HAPPEX, and SAMPLE at small and intermediate Q². However, since the G0 experiments have measured the PVA over the range of momentum transfers 0.12 ≤ Q² ≤ 1.0 GeV², it is more interesting to compare our results with them. In fact, the predicted PVA in the present work describes remarkably well the G0 data over the full range of Q² values. It indicates that the present model produces the correct Q²-dependence of all the form factors relevant for the PVA. Figure 6 depicts the prediction for the backward G0 experiment at θ = 108°, whose data are announced to be available in the near future. Figures 7-9 yield further data which allow a detailed comparison between experiment and theory. Fig. 7 shows the typical combination G^s_E(Q²) + β(Q², θ)G^s_M(Q²) playing a key role in the experiments. In the forward direction A4 has measured two points of this observable at small Q² values, which are both well reproduced by the χQSM calculations. The dotted error band indicates a systematic error of the χQSM, since the soliton is bound to have the same profile function in the up-, down- and strange direction; see Ref. [23] for details. Fig. 8 shows a similar combination for G0, where β is assumed to be equal to η = 0.94 Q². In this plot the experimental data are again reasonably well reproduced by the χQSM.

FIG. 11: The hydrogen and deuterium data for G^s_M and G^e_A(T = 1) from HAPPEX at Q² = 0.1 GeV². The ellipse represents the 1 σ overlap of the two measurements. The theoretical number obtained by the χQSM is indicated with the bar, which reflects the theoretical error. The data-plot is taken from Ref. [31].
Actually one can see in Fig. 10 how the χQSM values for G^s_E and G^s_M fit into the present world data at Q² = 0.1 GeV². The plot is taken from HAPPEX [15] and the ellipse reflects the 95 % confidence level. Apparently there is good agreement between the χQSM and the data. A similar conclusion can be drawn from Fig. 11, in which for G^s_M and G^s_E(T = 1) the χQSM is confronted with the data. Here the ellipse represents the 1-σ overlap of the deuterium and hydrogen measurements. This figure is taken from Beise et al. [31] of the HAPPEX collaboration.
In Fig. 9 the PVAs of the various experiments are presented, focusing on the strange contribution. Following Eq. (1), the quantity plotted is A_phys − A_0 = A_s. The curves are from the χQSM. Actually the calculations yield nearly identical curves for the HAPPEX experiments and the G0 experiment, which cannot be distinguished in Fig. 9. One notes for this sensitive quantity, originating solely from the strange quarks of the Dirac sea, a good agreement between theory and experiment.
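Numerically, this strange contribution can be isolated by evaluating the asymmetry twice, once with and once without the strange form factors. A sketch reusing the pv_asymmetry function (and numpy import) defined above, again with placeholder inputs rather than the model curves, reads:

```python
def strange_asymmetry(Q2, theta, ff, GEs, GMs):
    """A_s = A_phys - A_0: the asymmetry with and without G^s_E, G^s_M."""
    A_phys = pv_asymmetry(Q2, theta, GEs=GEs, GMs=GMs, **ff)
    A_0 = pv_asymmetry(Q2, theta, GEs=0.0, GMs=0.0, **ff)
    return A_phys - A_0

# Placeholder proton/neutron/axial inputs (not model output):
ff = dict(GEp=0.57, GMp=1.60, GEn=0.04, GMn=-1.09, GAp=-0.85)
print(strange_asymmetry(0.23, np.radians(35.0), ff, GEs=0.02, GMs=0.15) * 1e6, "ppm")
```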
5.
In the present work, we have investigated the parity-violating asymmetries in the elastic scattering of polarized electrons off protons within the framework of the chiral quark-soliton model (χQSM). We used as input the electromagnetic and strange vector form factors calculated in the former works [23,24,25], yielding both positive magnetic and electric strange form factors, and the axial-vector form factors [26] from a recent publication. All these form factors, incorporated in the present work, were obtained with one fixed set of four model parameters, which has been adjusted several years ago to basic mesonic and baryonic observables. In fact, the parity-violating asymmetries obtained in the present work are in a remarkable agreement with the experimental data, which implies that the present model (χQSM) produces reasonable form factors of many different quantum numbers. We also predicted in the present work the parity-violating asymmetries for the future G0 experiment at backward angles. Altogether, comparing the results of the χQSM with the overall observables of SAMPLE, HAPPEX, A4 and G0, one observes a remarkable agreement.
¹ The value of the strange electric form factor at Q² = 0.091 GeV² is newly extracted by the HAPPEX experiment [14]: G^s_E = (−0.038 ± 0.042 ± 0.010) n.m., which is consistent with zero. The G0 experiment indicates that G^s_E may be negative in the intermediate region up to Q² ∼ 0.3 GeV². The present model predicts G^s_E ≃ 0.025 at Q² = 0.091 GeV², which is positive and slightly outside the error margins of HAPPEX.

FIG. 6: The parity-violating asymmetries as a function of Q². They are the predictions for the backward G0 experiment (θ = 108°). The dotted curve is calculated without the s-quark contribution. The dashed curve is obtained by using the form factors from the χQSM without the electroweak radiative corrections, while the solid one (χQSM) includes them and is our final result.
FIG. 7: The values of G_E^s(Q^2) + β(Q^2, θ) G_M^s(Q^2) as a function of Q^2. The dotted fields are the χQSM predictions for the A4 experiment at θ = 35 and θ = 145. The theoretical error fields are given by assuming the Yukawa mass of the solitonic profile in the χQSM to coincide with the pion mass or the kaon mass, respectively.
FIG. 8: The values of G_E^s(Q^2) + η G_M^s(Q^2) with η = 0.94 Q^2 as a function of Q^2. They are the predictions for the G0 experiment at θ = 10. The theoretical error field is given by assuming the Yukawa mass of the solitonic profile in the χQSM to coincide with the pion mass or the kaon mass, respectively.
FIG. 9: Difference between the parity-violating asymmetries including strange quark effects (A_phys) and the asymmetry including just u and d quark contributions (A_0). The lines represent the χQSM results for the kinematics (laboratory angles) of the experiments enumerated. The curves for the small angle forward case (G0, HAPPEX: θ ∼ 8°) almost overlap each other and differ slightly from A4, θ = 35° (solid line). SAMPLE is a backward angle experiment, θ = 146°.
FIG. 10: The world data for G_E^s and G_M^s from the A4, HAPPEX, SAMPLE and G0 experiments at Q^2 = 0.1 GeV^2. The plot is taken from HAPPEX [15] and the ellipse reflects the 95 % confidence level. The theoretical number obtained by the χQSM is indicated by a cross which reflects the theoretical errors. The dots indicate the center of the ellipse and the point with vanishing strange form factors.

FIG. 11:

FIG. 1: The electroweak neutral axial-vector form factors G_A^e and G_A^{pZ} as functions of Q^2 calculated in the χQSM.
(Plot panels: parity-violating asymmetry A_PV [ppm] (0 to −80 ppm) as a function of Q^2 [GeV^2] (0 to 1.0 GeV^2); each panel shows the curves labelled "No s quark", "No rad. corr." and "χQSM"; a further panel shows the data points of the HAPPEX-H, HAPPEX-4He, HAPPEX I, G0, SAMPLE and A4 experiments.)
The authors are grateful to Frank Maas for useful comments and discussions. AS acknowledges partial financial support from the Portuguese Praxis XXI/BD/15681/98. The work has also been supported by a Korean-German grant of the Deutsche Forschungsgemeinschaft and KOSEF (F01-2004-000-00102-0). The work is partially supported by the Transregio-Sonderforschungsbereich Bonn-Bochum-Giessen as well as by the Verbundforschung of the Federal Ministry for Education and Research. The work of HCK is also supported by the Korea Research Foundation (Grant No. KRF-2003-070-C00015).
[1] M. J. Musolf, T. W. Donnelly, J. Dubach, S. J. Pollock, S. Kowalski, and E. J. Beise, Phys. Rept. 239, 1 (1994).
[2] K. S. Kumar and P. A. Souder, Prog. Part. Nucl. Phys. 45, S333 (2000).
[3] D. H. Beck and B. R. Holstein, Int. J. Mod. Phys. E 10, 1 (2001).
[4] D. H. Beck and R. D. McKeown, Ann. Rev. Nucl. Part. Sci. 51, 189 (2001).
[5] M. J. Ramsey-Musolf, arXiv:nucl-th/0501023.
[6] R. N. Cahn and F. J. Gilman, Phys. Rev. D 17, 1313 (1978).
[7] D. B. Kaplan and A. Manohar, Nucl. Phys. B 310, 527 (1988).
[8] B. Mueller et al. [SAMPLE Collaboration], Phys. Rev. Lett. 78, 3824 (1997).
[9] D. T. Spayde et al. [SAMPLE Collaboration], Phys. Rev. Lett. 84, 1106 (2000); Phys. Lett. B 583, 79 (2004).
[10] R. Hasty et al. [SAMPLE Collaboration], Science 290, 2117 (2000).
[11] K. A. Aniol et al. [HAPPEX Collaboration], Phys. Lett. B 509, 211 (2001).
[12] F. E. Maas et al. [A4 Collaboration], Eur. Phys. J. A 17, 339 (2003); Phys. Rev. Lett. 93, 022002 (2004); Phys. Rev. Lett. 94, 152001 (2005).
[13] K. A. Aniol et al. [HAPPEX Collaboration], Phys. Rev. C 69, 065501 (2004).
[14] K. A. Aniol et al. [HAPPEX Collaboration], Phys. Rev. Lett. 96, 022003 (2006).
[15] K. A. Aniol et al. [HAPPEX Collaboration], arXiv:nucl-ex/0506011.
[16] D. S. Armstrong et al. [G0 Collaboration], arXiv:nucl-ex/0506021.
[17] C. V. Christov et al., Prog. Part. Nucl. Phys. 37, 91 (1996).
[18] R. Alkofer, H. Reinhardt, and H. Weigel, Phys. Rept. 265, 139 (1996).
[19] D. Diakonov, V. Petrov, P. Pobylitsa, M. V. Polyakov, and C. Weiss, Nucl. Phys. B 480, 341 (1996).
[20] V. Y. Petrov, P. V. Pobylitsa, M. V. Polyakov, I. Bornig, K. Goeke, and C. Weiss, Phys. Rev. D 57, 4325 (1998).
[21] K. Goeke, M. V. Polyakov, and M. Vanderhaeghen, Prog. Part. Nucl. Phys. 47, 401 (2001).
[22] D. Diakonov, V. Petrov, and M. V. Polyakov, Z. Phys. A 359, 305 (1997).
[23] A. Silva, H.-Ch. Kim, and K. Goeke, Phys. Rev. D 65, 014016 (2002); Erratum-ibid. D 66, 039902 (2002).
[24] A. Silva, H.-Ch. Kim, and K. Goeke, Eur. Phys. J. A 22, 481 (2004).
[25] A. Silva, Ph.D. Dissertation (Ruhr-Universität Bochum, unpublished) (2004).
[26] A. Silva, H.-Ch. Kim, D. Urbano, and K. Goeke, Phys. Rev. D 72, 094011 (2005).
[27] S. Eidelman et al. [Particle Data Group], Phys. Lett. B 592, 1 (2004).
[28] S. L. Zhu, S. J. Puglia, B. R. Holstein, and M. J. Ramsey-Musolf, Phys. Rev. D 62, 033008 (2000).
[29] M. Praszalowicz, T. Watabe, and K. Goeke, Nucl. Phys. A 647, 49 (1999).
[30] W. M. Alberico, S. M. Bilenky, and C. Maieron, Phys. Rept. 358, 227 (2002).
[31] E. J. Beise, M. L. Pitt, and D. T. Spayde, Prog. Part. Nucl. Phys. 54, 289 (2005).
| []
|
[
"A new method to invert InSAR data to resolve stress changes on a fracture embedded in a 3D heterogeneous crust",
"A new method to invert InSAR data to resolve stress changes on a fracture embedded in a 3D heterogeneous crust"
]
| [
"Oliver Bodart [email protected] \nUMR 5208\nThe Lyon University\nUniversité Jean Monnet Saint-Étienne\nCNRS\nInstitut Camille Jordan\nF-42023Saint-EtienneFrance\n",
"Valérie Cayol [email protected]. \nLaboratoire Magmas et Volcans, . Université Clermont Auvergne-CNRS-IRD, OPGC\nThe Lyon University\nF-63038Clermont-FerrandFrance\n\nUniversité Jean Monnet\nInstitut Camille Jordan, F42023Saint-EtienneFrance\n",
"Farshid Dabaghi [email protected] "
]
| [
"UMR 5208\nThe Lyon University\nUniversité Jean Monnet Saint-Étienne\nCNRS\nInstitut Camille Jordan\nF-42023Saint-EtienneFrance",
"Laboratoire Magmas et Volcans, . Université Clermont Auvergne-CNRS-IRD, OPGC\nThe Lyon University\nF-63038Clermont-FerrandFrance",
"Université Jean Monnet\nInstitut Camille Jordan, F42023Saint-EtienneFrance"
]
| []
| We present a new method to invert variable stress changes of fractures from InSAR ground displacements. Fractures can be either faults or magma intrusions, embeded in a 3D heterogeneous crust with prominent topographies. The method is based on a fictituous domain approach using a finite element discretization of XFEM type. A cost function involves the misfit between the solution of the physical problem and the observed data together with the smoothing terms. Regularization parameters are determined by using L-curves. The method is then reformulated to be applied to InSAR data (masked and noisy), projected in Earth-Satellite directions. Synthetic tests confirm the efficiency and effectiveness of our method. | null | [
"https://export.arxiv.org/pdf/2212.03198v1.pdf"
]
| 254,275,492 | 2212.03198 | 6150a2582b7cb5a87c0ab4baa1993f59d7417338 |
A new method to invert InSAR data to resolve stress changes on a fracture embedded in a 3D heterogeneous crust
December 7, 2022
Oliver Bodart [email protected]
UMR 5208
The Lyon University
Université Jean Monnet Saint-Étienne
CNRS
Institut Camille Jordan
F-42023Saint-EtienneFrance
Valérie Cayol [email protected].
Laboratoire Magmas et Volcans, . Université Clermont Auvergne-CNRS-IRD, OPGC
The Lyon University
F-63038Clermont-FerrandFrance
Université Jean Monnet
Institut Camille Jordan, F42023Saint-EtienneFrance
Farshid Dabaghi [email protected]
A new method to invert InSAR data to resolve stress changes on a fracture embedded in a 3D heterogeneous crust
We present a new method to invert variable stress changes of fractures from InSAR ground displacements. Fractures can be either faults or magma intrusions, embeded in a 3D heterogeneous crust with prominent topographies. The method is based on a fictituous domain approach using a finite element discretization of XFEM type. A cost function involves the misfit between the solution of the physical problem and the observed data together with the smoothing terms. Regularization parameters are determined by using L-curves. The method is then reformulated to be applied to InSAR data (masked and noisy), projected in Earth-Satellite directions. Synthetic tests confirm the efficiency and effectiveness of our method.
1 Physical and computational model

1.1 Mathematical modelling of the solid
From a mathematical point of view, a volcano is a bounded open set Ω ⊂ R^3 occupied by an elastic solid. This set is assumed to have a smooth boundary ∂Ω which we separate into two (non-empty) parts ∂Ω := Γ_D ∪ Γ_N, with Γ_D ∩ Γ_N = ∅. As depicted on Figure 1, the subset Γ_N is the ground surface of the volcano and is free to move. The subset Γ_D is an artificial boundary introduced in order to work on a finite size object and is assumed to satisfy a zero displacement condition. We assume that the elastic solid occupying Ω is subject to a body force field f. In the sequel, boldface letters will be used to denote (usually 3-dimensional) vector fields. Plain letters will represent scalar quantities.
We denote by x = (x, y, z) the generic point in R^3, and by u = u(x) the displacement field of the solid Ω. The Cauchy stress σ(u) and the strain ε(u) are given by
$$
\sigma(u) = \lambda\, \mathrm{Tr}\big(\varepsilon(u)\big)\, I_{\mathbb{R}^3} + 2\mu\, \varepsilon(u), \qquad \varepsilon(u) = \tfrac{1}{2}\big(\nabla u + \nabla u^{T}\big),
$$
where (λ, µ) are the Lamé coefficients of the material, I_{R^3} the identity matrix, and Tr(·) the matrix trace. We also assume the presence of a crack in the volcano, represented mathematically by a 2-D surface Γ_C ⊂ Ω on which a traction or pressure force will be exerted (see also Figure 1). In a volcanic context, Γ_C might represent a magma-filled crack or a fault. As already said in the introduction, the shape and position of the crack are assumed to be known in this work. The deformation field u of the solid is assumed to satisfy the following elastostatic system:
$$
\begin{aligned}
-\mathrm{div}\,\sigma(u) &= f && \text{in } \Omega, &&(1)\\
u &= 0 && \text{on } \Gamma_D, &&(2)\\
\sigma(u)\cdot n &= 0 && \text{on } \Gamma_N, &&(3)\\
\sigma(u)\cdot n^{\pm} &= t^{\pm} && \text{on } \Gamma_C. &&(4)
\end{aligned}
$$
The first equation is the equilibrium law describing the (linear) elastic behaviour of the material. Condition (2) expresses the fact that the displacement vanishes on the underground boundary of the solid. In equation (3), and in the sequel, n denotes the unit normal vector on the boundary of Ω, oriented externally. This condition describes the free movement of the ground part of the volcano. Finally, equation (4) describes the force acting on the crack Γ_C. The vectors n^+ and n^− = −n^+ are unit normal vectors on the crack Γ_C (see Figure 2). The vector functions t^±(x) denote the force exerted on the crack, and are such that t^+ = −t^−. When this force acts normally to the crack, that is t^±(x) = p(x) n^±, the force is called a pressure; otherwise it is generally called a traction force. This traction force is the actual unknown of our problem. From the mathematical point of view, the traction and its gradient are assumed to be square integrable.
In order to derive a finite element approximation of the state equations (1)-(4), we need a weak formulation of the system. This is done by multiplying equation (1) by a regular vector field v such that v = 0 on Γ_D, and then integrating by parts over Ω. Using the conditions (2)-(4) then yields:
$$
\int_{\Omega} \sigma(u) : \varepsilon(v)\, d\Omega = \int_{\Omega} f \cdot v\, d\Omega + \int_{\Gamma_C} t^{\pm} \cdot v\, d\Gamma_C, \qquad (5)
$$
where : is the double inner product for matrices, i.e. for A = (a_{ij})_{i,j=1,2,3} and B = (b_{ij})_{i,j=1,2,3},
$$
A : B = \sum_{i=1}^{3} \sum_{j=1}^{3} a_{ij} b_{ij}.
$$
Notice that this approach allows us to deal with situations where the elastic material is nonhomogeneous and anisotropic, that is, when the Lamé coefficients are not constant.
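Before turning to the discretization, the constitutive law above can be illustrated with a minimal NumPy sketch. This snippet is not part of the original paper; the displacement gradient and the Lamé coefficients below are arbitrary placeholder values.

```python
import numpy as np

def strain(grad_u):
    """Linearized strain tensor: eps(u) = 0.5 * (grad_u + grad_u^T)."""
    return 0.5 * (grad_u + grad_u.T)

def stress(grad_u, lam, mu):
    """Isotropic linear-elastic Cauchy stress: sigma = lam * tr(eps) * I + 2 * mu * eps."""
    eps = strain(grad_u)
    return lam * np.trace(eps) * np.eye(3) + 2.0 * mu * eps

# Arbitrary illustrative values (not taken from the paper).
grad_u = np.array([[1.0e-4, 2.0e-5, 0.0],
                   [0.0,    5.0e-5, 1.0e-5],
                   [0.0,    0.0,   -3.0e-5]])
lam, mu = 3.0e9, 2.0e9  # assumed Lamé coefficients in Pa
print(stress(grad_u, lam, mu))
```

For a heterogeneous medium, lam and mu would simply vary with position, which is exactly what the finite element assembly described next accommodates.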
1.2 Finite element model
The direct problem (1)-(4) will have to be solved iteratively, and thus efficiently, during the inversion procedure. Therefore we need an efficient algorithm to do so. To this aim, we use a domain decomposition method. More precisely, following [3], the domain Ω is split into two sub-domains such that each point of the domain lies on one side of the crack or on the crack. For this purpose, we use an artificial extension Γ_0 of the crack Γ_C. Assuming that Γ_F = Γ_C ∪ Γ_0 splits the domain into two subdomains Ω^+ and Ω^−, we have Ω = Ω^+ ∪ Γ_F ∪ Ω^−, Γ_N = Γ_N^+ ∪ Γ_N^− and Γ_D = Γ_D^+ ∪ Γ_D^−. We define on Γ_C two opposite unit outward normal vectors n^+ (from Ω^+) and n^− (from Ω^−). The global unknown u is split into u^+ = u|_{Ω^+} and u^− = u|_{Ω^−}. Therefore, equations (1)-(4) can be rewritten with u^+ and u^− as unknowns:
$$
\begin{aligned}
-\mathrm{div}\,\sigma(u^{\pm}) &= f^{\pm} && \text{in } \Omega^{\pm}, &&(6)\\
u^{\pm} &= 0 && \text{on } \Gamma_D \cap \partial\Omega^{\pm}, &&(7)\\
(\sigma(u)\cdot n)^{\pm} &= 0 && \text{on } \Gamma_N \cap \partial\Omega^{\pm}, &&(8)\\
(\sigma(u)\cdot n)^{\pm} &= t^{\pm} && \text{on } \Gamma_C, &&(9)\\
[\![u]\!] &= 0 && \text{on } \Gamma_0, &&(10)\\
[\![\sigma(u)]\!]\cdot n^{+} &= 0 && \text{on } \Gamma_0, &&(11)
\end{aligned}
$$
where [[v]] = v^+ − v^− denotes the jump of a function across Γ_0. The equations and conditions (6)-(9) are a straightforward rewriting of (1)-(4). The last two conditions (10) and (11) are imposed to enforce the continuity of displacement and stress across Γ_0, ensuring that u = (u^−, u^+) solves the original problem (1)-(4). Let us now describe the discretization of this problem via the finite element method. A Lagrange multiplier λ defined on Γ_0 is introduced to enforce the continuity of the displacement across Γ_0. Defining the associated Lagrangian functional and expressing the associated saddle point conditions gives a weak formulation for Problem (6)-(11), whose unknown is of the form X = (u^+, u^−, λ) (see [3] for details).
(Figure 2: sketch of the split domain, with sub-domains Ω^+ and Ω^−, boundary parts Γ_D^± and Γ_N^±, the artificial extension Γ_0 and the crack Γ_C with its normals n^+ and n^−.)
In order to numerically approximate the system, consider a tetrahedral mesh of Ω (as e.g. in Figure 3). Let ϕ_i^± be a finite element basis defined on this mesh. Also defining a mesh of the extended crack, we build finite element bases ψ_i and θ_j on Γ_0 and Γ_C respectively. Identifying the unknowns u^+, u^− and λ with their values at the nodes of the mesh, the discretized form of Problem (6)-(11) then reads
$$
K X = F. \qquad (12)
$$
The matrix K of this system is built from the stiffness matrices A^± and the constraint coupling matrices on Γ_0, denoted by B^±. More precisely, we have
$$
K = \begin{pmatrix} A^{+} & 0 & {B^{+}}^{T} \\ 0 & A^{-} & -{B^{-}}^{T} \\ B^{+} & -B^{-} & 0 \end{pmatrix},
\quad \text{where} \quad
A^{\pm} := \Big[ \int_{\Omega^{\pm}} \sigma(\varphi_i^{\pm}) : \varepsilon(\varphi_j^{\pm})\, d\Omega^{\pm} \Big]_{ij},
\qquad
B^{\pm} := \Big[ \int_{\Gamma_0} \varphi_i^{\pm} \cdot \psi_j\, d\Gamma_0 \Big]_{ij}.
$$
The right-hand side vector F is given by
$$
F = \begin{pmatrix} F^{+} \\ F^{-} \\ 0 \end{pmatrix},
\qquad
F^{\pm} := \Big[ \int_{\Omega^{\pm}} f^{\pm} \cdot \varphi_i^{\pm}\, d\Omega^{\pm} + \int_{\Gamma_C} t^{\pm} \cdot \varphi_i^{\pm}\, d\Gamma_C \Big]_{i},
$$
which boils down to the algebraic formulation
$$
F = \begin{pmatrix} F^{+} \\ F^{-} \\ 0 \end{pmatrix}
= \begin{pmatrix} M_{\Omega}^{+} f^{+} \\ M_{\Omega}^{-} f^{-} \\ 0 \end{pmatrix}
+ \begin{pmatrix} +M_C\, t \\ -M_C\, t \\ 0 \end{pmatrix},
$$
where the mass matrices M_Ω^± and the coupling matrix on Γ_C, denoted by M_C, are given by
$$
\big[ M_{\Omega}^{\pm} \big]_{ij} = \int_{\Omega^{\pm}} \varphi_i^{\pm} \cdot \varphi_j^{\pm}\, d\Omega^{\pm},
\qquad
\big[ M_C \big]_{ij} = \int_{\Gamma_C} \varphi_i^{\pm} \cdot \theta_j\, d\Gamma_C.
$$
Eventually, we can naturally define two matrices L_Ω and L_C such that
$$
F = L_{\Omega}\, f + L_C\, t.
$$
The matrix K is symmetric and positive definite, which allows the use of powerful classical solvers. The implementation of this model uses the finite element library GetFem++ [9]. The library provides all the necessary tools to handle various types of PDEs, links to classical powerful solvers (conjugate gradient, SuperLU, MUMPS), as well as other routines (mesh management, definition of cracks and boundaries via level set functions), which make it quite versatile and powerful. We refer the reader to [3] for a detailed mathematical and computational analysis of this method. See also [1,2] for a mathematical analysis of similar optimal control problems making use of the domain decomposition technique.
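As a concrete but purely illustrative companion to the block system (12), the following SciPy sketch assembles K and F from already-computed sub-matrices and solves for X = (u^+, u^-, λ). It is not the paper's GetFem++ implementation; all matrix and vector names are placeholders for whatever the finite element assembly produces, and a single coupling matrix M_c is used for both sides of the crack for simplicity.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_state(A_p, A_m, B_p, B_m, M_om_p, M_om_m, M_c, f_p, f_m, t):
    """Assemble the saddle-point system (12), K X = F, and solve it."""
    K = sp.bmat([[A_p,  None,  B_p.T],
                 [None, A_m,  -B_m.T],
                 [B_p,  -B_m,  None]], format="csc")
    F = np.concatenate([M_om_p @ f_p + M_c @ t,     # F^+
                        M_om_m @ f_m - M_c @ t,     # F^-
                        np.zeros(B_p.shape[0])])    # continuity constraint block
    return spla.spsolve(K, F)                       # X = (u^+, u^-, lambda)
```

Since K does not depend on the traction t, a single sparse factorization (e.g. scipy.sparse.linalg.splu) could be computed once and reused for every state and adjoint solve performed during the inversion.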
2 Inversion from full ground measurements

2.1 Mathematical formulation of the problem

Let us assume the displacement field of the volcano to be fully measured on the ground Γ_N, and call it u_d. The inversion problem consists in finding a traction vector t such that the solution of Problem (1)-(4) satisfies u = u_d on Γ_N. However, this is known to be an ill-posed problem (e.g. the solution is not unique), and moreover the measurements are perturbed by random uncertainties. Therefore we adopt a least squares approach, combined with a regularization (i.e. smoothing) technique, to build the inversion process.
Consider the following cost function:
$$
J(t) = \frac{1}{2} \int_{\Gamma_N} (u - u_d)^{\top} C^{-1} (u - u_d)\, d\Gamma_N
+ \frac{\alpha_0}{2} \int_{\Gamma_C} |t|^2\, d\Gamma_C
+ \frac{\alpha_1}{2} \int_{\Gamma_C} |\nabla t|^2\, d\Gamma_C, \qquad (13)
$$
where C denotes the covariance operator of the measurement uncertainties (see e.g. [11]) and is assumed to be positive definite, the superscript ⊤ denotes transposition, and finally, the non-negative constants α_0 and α_1 are the Tikhonov regularization (or smoothing) parameters. We aim at minimizing this cost function over the space of feasible traction vectors. This space consists of square integrable vector fields whose gradient is also square integrable.
In [1,2], a simplified version of this problem is studied mathematically: there, the cost function only features the first smoothing term. As will be shown later on, our tests have proved the necessity of also smoothing the gradient in order to obtain a physically admissible solution t in a practical framework. The optimal choice of the values of α_0 and α_1 will be discussed further on.
To derive the optimality conditions for Problem (13), we introduce the following Lagrangian:
$$
\mathcal{L}(u, t, \varphi) = J(t) - \int_{\Omega} \big( \sigma(u) : \varepsilon(\varphi) - f \cdot \varphi \big)\, d\Omega + \int_{\Gamma_C} t^{\pm} \cdot \varphi\, d\Gamma_C,
$$
where φ is the Lagrange multiplier for the constraint (1)-(4) in the weak form (5). When ∇_u L = 0 and ∇_φ L = 0, we have ∇_t L = ∇J. Cancelling ∇_φ L gives problem (1)-(4). Cancelling ∇_u L and performing an integration by parts yields that φ is the solution of
$$
\begin{aligned}
-\mathrm{div}\,\sigma(\varphi) &= 0 && \text{in } \Omega,\\
\varphi &= 0 && \text{on } \Gamma_D,\\
\sigma(\varphi)\cdot n &= C^{-1}(u - u_d) && \text{on } \Gamma_N,
\end{aligned} \qquad (14)
$$
called the adjoint state system. Finally, computing ∇_t L, for u and φ satisfying (1)-(4) and (14), gives:
$$
\nabla J(t) = \alpha_0\, t + \alpha_1 \nabla t + (\varphi \cdot n^{\pm}). \qquad (15)
$$
We refer the reader to [1,2] for further details. After splitting the computational domain Ω, the cost function J in the optimization problem (13) becomes:
$$
\begin{aligned}
J(t) = {}& \frac{1}{2} \int_{\Gamma_N^+} (u^+ - u_d^+)^{\top} C^{-1} (u^+ - u_d^+)\, d\Gamma_N^+
+ \frac{1}{2} \int_{\Gamma_N^-} (u^- - u_d^-)^{\top} C^{-1} (u^- - u_d^-)\, d\Gamma_N^- \\
& + \frac{\alpha_0}{2} \int_{\Gamma_C} |t|^2\, d\Gamma_C
+ \frac{\alpha_1}{2} \int_{\Gamma_C} |\nabla t|^2\, d\Gamma_C. \qquad (16)
\end{aligned}
$$
The previous optimality conditions rewrite naturally from this new formulation.
2.2 The Discrete Problem
The next step is to adapt the cost function, the adjoint state and the gradient of the cost function to the discrete system (12). To this aim, it has to be noticed that all terms in (16) are symmetric. Discretizing the first two terms will involve the introduction of mass matrices on Γ_N^±, and one needs to preserve the symmetry of the discrete form. Since the covariance matrix C is positive definite, so is its inverse, and we can write
$$
C = Q D Q^{-1}, \qquad C^{-1} = Q D^{-1} Q^{-1},
$$
where Q is the matrix whose columns are the eigenvectors of C and D is the diagonal matrix of the eigenvalues of C. This diagonalized form allows us to write
$$
C^{-1} = C^{-\frac{1}{2}}\, C^{-\frac{1}{2}}, \qquad \text{where } C^{-\frac{1}{2}} = Q D^{-\frac{1}{2}} Q^{-1}.
$$
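For illustration only (this snippet is not from the paper), the symmetric factor C^{-1/2} can be computed with NumPy from the eigendecomposition of a symmetric positive-definite covariance matrix; the 2×2 covariance below is an arbitrary placeholder.

```python
import numpy as np

def inv_sqrt_spd(C):
    """Return C^{-1/2} for a symmetric positive-definite matrix C."""
    eigvals, Q = np.linalg.eigh(C)            # C = Q diag(eigvals) Q^T
    return Q @ np.diag(eigvals ** -0.5) @ Q.T

C = np.array([[2.0, 0.3],
              [0.3, 1.0]])                    # placeholder covariance
C_inv_sqrt = inv_sqrt_spd(C)
assert np.allclose(C_inv_sqrt @ C_inv_sqrt, np.linalg.inv(C))
```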
In view of this, we rewrite the continuous cost function J as
$$
\begin{aligned}
J(t) = {}& \frac{1}{2} \int_{\Gamma_N^+} (u^+ - u_d^+)^{\top} C^{-\frac{1}{2}} C^{-\frac{1}{2}} (u^+ - u_d^+)\, d\Gamma_N^+
+ \frac{1}{2} \int_{\Gamma_N^-} (u^- - u_d^-)^{\top} C^{-\frac{1}{2}} C^{-\frac{1}{2}} (u^- - u_d^-)\, d\Gamma_N^- \\
& + \frac{\alpha_0}{2} \int_{\Gamma_C} |t|^2\, d\Gamma_C
+ \frac{\alpha_1}{2} \int_{\Gamma_C} |\nabla t|^2\, d\Gamma_C. \qquad (17)
\end{aligned}
$$
As in Section 1.2, we will identify the previous functions with their values at the mesh nodes. Define a set of finite element basis functions χ ± i (with 3 degrees of freedom at each mesh node) defined on Γ ± N . Then, define the ground mass matrix
$$
M_G = \begin{pmatrix} M_G^{+} & 0 \\ 0 & M_G^{-} \end{pmatrix}, \qquad (18)
\qquad \text{with } \big[ M_G^{\pm} \big]_{ij} = \int_{\Gamma_N} \chi_i^{\pm} \cdot \chi_j^{\pm}\, d\Gamma_N.
$$
Let us also define the matrices M_{F_0} and M_{F_1} as
$$
\big[ M_{F_0} \big]_{ij} = \int_{\Gamma_C} \theta_i \cdot \theta_j\, d\Gamma_C,
\qquad
\big[ M_{F_1} \big]_{ij} = \int_{\Gamma_C} \nabla\theta_i \cdot \nabla\theta_j\, d\Gamma_C.
$$
Recall that θ j are the finite element basis functions defined on Γ C . Finally, in view of the domain decomposition method, the mesh nodes located on the ground will be on either side of the crack. Therefore, the measured displacement and the restriction to the ground nodes of the computed displacement (denoted u G ) can be written as
$$
u_d = \begin{pmatrix} u_d^{+} \\ u_d^{-} \end{pmatrix}, \qquad
u_G = \begin{pmatrix} u_G^{+} \\ u_G^{-} \end{pmatrix}.
$$
Consider then the reduction matrix O R such that
O R X = u G ,
where X is the solution of (12). The discrete version of the cost function J defined by (17) is then
$$
J_d(t) = \frac{1}{2} (O_R X - u_d)^{T} C^{-\frac{1}{2}} M_G\, C^{-\frac{1}{2}} (O_R X - u_d)
+ \frac{\alpha_0}{2}\, t^{T} M_{F_0}\, t
+ \frac{\alpha_1}{2}\, t^{T} M_{F_1}\, t. \qquad (19)
$$
The optimality conditions for the minimizer of the discrete cost function J_d are obtained via the characterization of the saddle point of the following (discrete) Lagrangian function:
$$
\mathcal{L}_d(X, t, \varphi) = J_d(t) - \big\langle K X - (L_{\Omega} f + L_C\, t),\; \varphi \big\rangle.
$$
Computing the partial derivatives of L_d and cancelling them leads to the discrete counterpart of the adjoint problem (14):
$$
K \varphi = O_R^{T}\, C^{-\frac{1}{2}} M_G\, C^{-\frac{1}{2}} (O_R X - u_d),
$$
which rewrites, denoting u_G = O_R X,
$$
K \varphi = O_R^{T}\, C^{-\frac{1}{2}} M_G\, C^{-\frac{1}{2}} (u_G - u_d), \qquad (20)
$$
where K is the (symmetric) matrix of System (12). The gradient of J_d at any point t is then:
$$
\nabla J_d(t) = \alpha_0 M_{F_0}\, t + \alpha_1 M_{F_1}\, t + L_C^{T}\, \varphi. \qquad (21)
$$
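To make the adjoint-based evaluation concrete, here is a hypothetical NumPy sketch (not taken from the paper) of computing J_d(t) and its gradient (21): one state solve (12), one adjoint solve (20), then the algebraic gradient. The callable K_solve (e.g. a stored sparse factorization of K), the matrices O_R, M_G, C_inv_sqrt (standing for C^{-1/2}), M_F0, M_F1, L_Omega and L_C, and the data u_d and f are all assumed to have been assembled beforehand.

```python
import numpy as np

def cost_and_grad(t, K_solve, O_R, M_G, C_inv_sqrt, M_F0, M_F1,
                  L_Omega, L_C, f, u_d, alpha0, alpha1):
    """Evaluate J_d(t) and grad J_d(t) with one state solve and one adjoint solve."""
    X = K_solve(L_Omega @ f + L_C @ t)          # state equation (12)
    r = O_R @ X - u_d                           # misfit on the ground nodes
    W = C_inv_sqrt @ (M_G @ (C_inv_sqrt @ r))   # C^{-1/2} M_G C^{-1/2} r
    J = 0.5 * r @ W \
        + 0.5 * alpha0 * t @ (M_F0 @ t) \
        + 0.5 * alpha1 * t @ (M_F1 @ t)
    phi = K_solve(O_R.T @ W)                    # adjoint equation (20); K is symmetric
    grad = alpha0 * (M_F0 @ t) + alpha1 * (M_F1 @ t) + L_C.T @ phi
    return J, grad
```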
2.3 Practical minimization algorithm
In a previous work (see [2]), we presented two methods to minimize a simpler version of our cost function J_d: the conjugate gradient algorithm and a quasi-Newton method (low-storage BFGS). These two techniques are natural choices due to the quadratic structure of the cost function. They were studied in depth in terms of computational performance and accuracy. The BFGS method appeared to converge faster when the mesh of the domain Ω is rather coarse. When the mesh becomes finer, the number of unknowns in the problem increases and the two methods tend to give similar results. For the applications we are interested in, we aim at fast processes, that is, we will most of the time consider meshes that are rather coarse except in the neighborhood of the crack Γ_C. Therefore we will use a BFGS algorithm (see [4,5,7,10]) in our numerical tests. It involves the construction of a sequence of approximations of the inverse of the Hessian matrix of the cost function J_d. We use a limited-storage version of the algorithm called L-BFGS (see [8]). The method is presented in Algorithm 1, with the necessary adaptations to our problem. As usual with this type of method, sequences are built and will be denoted as follows:
• t k : traction force vector,
• X k : solution of the state equation (12),
• u k : displacement field on the ground,
• φ k : solution of the adjoint state equation (20),
• g k : gradient of the cost function J d at point t k ,
• d k , w k : displacement directions for the optimization,
• ρ k : optimal displacement step.
The number k ≥ 0 represents the iteration number. Notice that the underlying quadratic form in the cost function allows us to compute explicitly the optimal step size at each iteration, via formula (28). The detailed computation is presented in the Appendix.
In the algorithm, the domain decomposition technique is used to solve the discrete state and adjoint state equations (12) and (20). Notice that the matrix K is symmetric and does not change during the iterative process, which means that the computation cost due to matrix factorization is reduced.
The algorithm iterates until the gradient of J_d becomes sufficiently small, as specified in Algorithm 1 below.
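To make the loop structure explicit before the listing of Algorithm 1, the following Python sketch (ours; all callables are placeholders for the finite element building blocks) shows the overall flow of the quasi-Newton iteration with the closed-form step size of (28).

```python
import numpy as np

def minimize_Jd(t0, solve_state, solve_adjoint_gradient, step_size,
                apply_H, update_H, tol=1e-14, max_iter=200):
    """Schematic BFGS-type loop of Algorithm 1 (not the actual implementation)."""
    t = t0.copy()
    X = solve_state(t)                        # state equation (12) with traction t
    g = solve_adjoint_gradient(t, X)          # adjoint (20), then gradient (21)
    g0_norm, d, memory = np.linalg.norm(g), -g, None
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol * g0_norm:
            break                             # convergence criterion on the gradient
        W = solve_state(d)                    # response to the search direction d
        rho = step_size(t, d, X, W)           # explicit optimal step, Eq. (28)
        t_new, X = t + rho * d, X + rho * W   # linearity of the state equation
        g_new = solve_adjoint_gradient(t_new, X)
        memory = update_H(memory, t_new - t, g_new - g)   # store (s_k, y_k) pairs
        d = -apply_H(memory, g_new)           # new descent direction
        t, g = t_new, g_new
    return t
```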
Numerical tests and validation with synthetic data
The numerical application is performed via a synthetic test simulating a flat volcano as follows. We consider a semi-infinite elastic domain centered at the origin, with radius 100 km and extending down to [0, −20] km (see Figure 3(a)), with the elasticity parameters: Young modulus E = 5000.0 MPa and Poisson ratio ν = 0.25. A horizontal circular fracture is defined with radius r = 1 km, located at (x, y, z) = (0, 0, −0.3) km beneath the flat topography. The mesh generation and level set implementation are explained in [1, 2]. The ground surface is free to move and the other sides are fixed, i.e. they satisfy the boundary conditions on Γ_N and Γ_D respectively. We generated a synthetic solution u_d associated to a discontinuous traction t ± = (0, 0, ±1.5) MPa, applied on a part of
Algorithm 1 Algorithm to minimize J_d
Require: t_0, u_d and ε > 0
  k ← 0
  Solve (12) with t = t_0 → X_0
  u_0 ← O_R X_0
  Solve (20) with u = u_0 → φ_0
  Initial gradient: g_0 ← α_0 M_F0 t_0 + α_1 M_F1 t_0 + L_C φ_0
  Initial direction: d_0 ← −g_0
  H_0 ← I_C
  while ‖g_{k+1}‖_{Γ_C} ≥ ε ‖g_0‖_{Γ_C} do
    Solve (12) with t = d_k → W_k
    w_k ← O_R W_k
    Compute step size ρ_k by (28) with u = u_k, t = t_k, d = d_k, w = w_k
    t_{k+1} = t_k + ρ_k d_k
    u_{k+1} = u_k + ρ_k w_k
    Solve (20) with u = u_{k+1} → φ_{k+1}
    Compute g_{k+1} via (15) with t_{k+1} and φ_{k+1}
    Compute H_{k+1} via formula (30)
    d_{k+1} = −H_{k+1} g_{k+1}
    k ← k + 1
  end while
the fracture Γ C , as shown in Figure 3(b) (the yellow patches) and t ± = (0, 0, 0) on the rest of the fracture (the blue patches). In some numerical experiments, this traction is interpreted as a pressure p ± = 1.5 MPa applied on the yellow zone.
Finding a robust and efficient method to compute appropriate regularization parameters α_0 and α_1 for a given mesh and noisy data is not straightforward. The admissible solution t depends on the chosen α_0 and α_1, and choosing them well is crucial to obtain a good approximation of the solution to the optimization problem. For choosing the optimal parameters, a series of tests is organized using graphical tools. Indeed, three important criteria are compared by varying α_0 and α_1 as follows: we set α_0 ∈ {1.0E−12, 1.0E−11, . . . , 1.0E0} and α_1 ∈ {1.0E−5, 1.0E−4, . . . , 1.0E5}. Then we compare: A relative ground error defined by:
E u = Γ N |u − u d | 2 dΓ N Γ N |u d | 2 dΓ N × 100.
A relative error of the traction t on the fracture source, defined by:
E_t = ∫_{Γ_C} |t − t_exact|² dΓ_C / ∫_{Γ_C} |t_exact|² dΓ_C × 100.
Note that an inevitable numerical error is already produced when computing the traction t on the fracture source. The use of a non-conformal mesh to implement the fracture by the level set method is the reason for this error, which can be reduced by using a finer mesh around the fracture. Finally, the iteration number needed, for a given ε = 1.0E−14 in Algorithm 1, to satisfy the convergence criterion
∫_{Γ_C} |g_{k+1}|² dΓ_C / ∫_{Γ_C} |g_0|² dΓ_C < ε.
The numerical tests presented in Figure 4 show that α_1 = 1 can be chosen as an acceptable parameter. Moreover, it seems that the above criteria are less sensitive to α_0 between 1.0E−7 and 1.0E−4. Accordingly, we employ the L-curve, another graphical tool, to trade off between two criteria: the misfit
‖u − u_d‖ = ( ∫_{Γ_N} |u − u_d|² dΓ_N )^{1/2},   (22)
and the norm of the traction, or smoothing,
‖t‖ = ( ∫_{Γ_C} |t|² dΓ_C )^{1/2}.   (23)
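The two L-curve coordinates (22)-(23) are plain weighted L2 norms and can be computed from the nodal values and the corresponding mass matrices; a minimal sketch (our notation) is:

```python
import numpy as np

def lcurve_point(u, u_d, t, M_ground, M_fracture):
    """Misfit ||u - u_d|| (Eq. 22) and smoothing ||t|| (Eq. 23) as
    mass-matrix weighted L2 norms over the ground and the fracture."""
    r = u - u_d
    misfit = np.sqrt(r @ (M_ground @ r))        # (int_{Gamma_N} |u - u_d|^2)^(1/2)
    smoothing = np.sqrt(t @ (M_fracture @ t))   # (int_{Gamma_C} |t|^2)^(1/2)
    return misfit, smoothing
```

Scanning α_1, solving the inverse problem for each value and plotting the resulting (smoothing, misfit) pairs in log-log scale produces the L-curves discussed below; the corner closest to the origin gives the retained α_1.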
The L-curves are depicted by setting α_0 = 1.0E−7 and varying α_1. The closest point to the origin shows the best compromise between misfit and smoothing, and it gives the acceptable α_1. In the particular case where we aim to find the pressure, the traction t may be replaced by the pressure p. In Figure 5, for the above fracture and mesh, two L-curves corresponding to both the pressure p and the traction t are illustrated. The fracture pressure corresponding to α_1 = 1.0E1, an appropriate choice, is presented in Figure 5(c). As shown in Figures 6 and 7, in 3D realistic volcanoes, where we are interested in large-scale problems, values around 1.0E1 are still acceptable. Normal and tangential tractions are depicted in Figure 7. Note that we take (0, 0, −1) as the unit normal vector n. We then aim to study the impact of the fracture depth on the approximated admissible solution. To do so, the numerical experiments are repeated for the fracture at −0.9 km depth. The best combination, α_1 = 1.0E−1 and α_0 between 1.0E−7 and 1.0E−4, is shown by the red boxes in Figure 8. The L-curves depicted for p and t in Figure 9(a, b) show that the closest point to the origin for the 0.9 km case is α_1 = 1.0E0. The normal stress projected on the fracture presented in Figure 9(c), compared to the exact solution, confirms an acceptable approximated solution. As in the previous case, values around 1.0E0 for α_1 are acceptable (see Figures 12 and 13). A comparison of the iteration numbers presented in Table 1 indicates that the number of iterations increases for greater values of α_1.
Table 1: Comparison between the number of iterations for two depths, −0.3 and −0.9 km, and when the unknowns are the pressure p and the traction t, for α_0 = 1.0E−7.
As seen in the numerical simulations, we may trust the L-curve results to find an appropriate α_1 for more complicated fracture geometries and meshes, which clearly helps us to reduce the computational cost.
3 Taking into account the earth-satellite directions
3.1 Mathematical framework and new optimization problem
Before going further, let us first remark that the previous section can easily be adapted to the case where the measurements of the displacement field are performed only on a part of the ground Γ N ⊂ Γ N . The cost function to be minimized becomes
J(t) = 1 2 Γ N (u − u d ) C −1 (u − u d ) dΓ N + α 0 2 Γ C |t| 2 dΓ C + α 1 2 Γ C |∇t| 2 dΓ C .
Rewriting the optimality conditions and adapting the algorithms is straightforward. This is what is actually implemented in our software. However, assuming the displacement field to be known in Cartesian coordinates is not realistic for the applications. In this section we aim at adapting our work to the case where measurements are provided by satellite radar interferometry (as e.g. in [6]). We will first state precisely what measurements are made and then adapt our methods to this new framework.
Let u d = (u x , u y , u z ) be a displacement field on Γ N , written in cartesian coordinates. A satellite will aim at the ground to measure a displacement, and the resulting measurement will be in the form p u d where the aiming direction (also called earth-satellite direction) p = (p x , p y , p z ) is a unit vector oriented from the ground to the satellite. Let then N ≥ 1 be the number of satellites, each associated to a direction p i = (p ix , p iy , p iz ) for i = 1, . . . , N . We can build a matrix P : R 3 → R N as P = (p 1 , p 2 , · · · , p N ) , so that the actual measurement is
R d = Pu d .
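For illustration, here is a small NumPy sketch (ours) of the projection onto the earth-satellite directions; the two example unit vectors are placeholders and not the directions of Table 2.

```python
import numpy as np

def project_to_los(u, directions):
    """u: (n_nodes, 3) ground displacements; directions: unit vectors p_i.
    Returns R_d of shape (n_nodes, N) with R_d[j, i] = p_i . u[j]."""
    P = np.asarray(directions, dtype=float)             # (N, 3)
    P = P / np.linalg.norm(P, axis=1, keepdims=True)    # enforce unit vectors
    return u @ P.T

# placeholder line-of-sight directions (not the Table 2 values)
R_d = project_to_los(np.zeros((10, 3)), [(0.38, -0.09, 0.92), (-0.38, -0.09, 0.92)])
```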
The vector field R d ∈ R N is defined on Γ N , and the matrix P also operates on the solutions of System (1)-(4). Hence, given the field R d , our new optimization problem consists now in finding the traction vector t minimizing the following cost function:
J(t) = 1 2 Γ N (Pu − R d ) C − 1 2 C − 1 2 (Pu − R d ) dΓ N + α 0 2 Γ C |t| 2 dΓ C + α 1 2 Γ C |∇t| 2 dΓ C ,(24)
where α 0 > 0 and α 1 > 0 are the regularization parameters. The optimality conditions for this problem are derived in the same way as in Section 2.1. We will not go into details about this and focus on the discrete version of the problem. Using the definitions of Sections 1.2 and 2.2, and in view of (19) and (24), the discrete cost function becomes
J d (t) = 1 2 (PO R X − R d ) T C − 1 2 M G C − 1 2 (PO R X − R d ) + α 0 2 (t M F 0 t) + α 1 2 (t M F 1 t). (25)
where X is the solution of the discrete system (12). This finite dimensional problem then boils down to finding the saddle point of the following Lagrangian
L d (X, t, φ) = J d (t) − KX − (L Ω f + L C t), φ .
Computing the partial derivative of L d and then cancelling them, leads to the discrete adjoint problem (14):
Kφ = O R P C − 1 2 M G C − 1 2 (PO R X − R d ),(26)
and the gradient of J d is:
∇J d = α 0 M F 0 t + α 1 M F 1 t + L C φ.(27)
The minimization algorithm (Algorithm 1) can then be adapted to the new adjoint state and gradient to obtain the numerical approximation of the traction t on the crack, using the surface measurements R_d provided by radar interferometry.
Applications to synthetic test
The numerical applications here proceed in three different steps. First, we aim at adapting the theoretical results to the Earth-satellite directions. For the sake of simplicity, we assume the covariance is an identity matrix. Next, in order to reduce the observed data, the identity covariance matrix is reduced to the nodal mesh points. We then consider a problem with a dense covariance matrix adapted to a limited number of observed data. The vector directions are chosen using the InSAR satellite directions listed in Table 2. The numerical tests are performed with one to four radar looks to confirm the theoretical results. We set the same configuration as in the previous section for the problems in Cartesian coordinates: the horizontal circular fracture beneath the flat topography located at 0.3 km (see Figure 3). We still take α_0 = 10^−7 and, by depicting the L-curves, we find the best α_1 with the best compromise between the misfit and the traction norm ‖t‖ or pressure ‖p‖ (see Figure 10). Normal and tangential tractions projected on the fracture corresponding to the best α_1 = 1 are presented in the Supplementary Material, Figure 14. A comparison between the iteration numbers listed in Table 3 for the observed synthetic data provided by different numbers of radar looks shows that the use of more radar looks leads to a decrease in the number of iterations.
Table 3: Comparison between the number of iterations taking into account the earth-satellite directions with identity covariance matrix in InSAR unit vector directions. The fracture is located at −0.3 km, when solving for pressure p and traction t. Here, we set α_0 = 1.0E−7.
In a more realistic scenario, the number of observed data is limited. In the previous numerical experiments, we assumed a synthetic u_d in P2 finite elements. The degrees of freedom of classical P2 Lagrange elements on a 3D tetrahedron are located at the vertices (nodal mesh points) and at the midpoints of the edges. For realistic volcano phenomena (a three-dimensional elasticity problem), using P2 finite elements guarantees a good approximated solution. However, we aim to reduce the synthetic u_d to the nodal mesh points on the ground surface, as for P1 finite elements. On the other hand, to keep the P2 finite elements for the elasticity problem, we should keep the dimension of the covariance C^{-1/2} and the mass matrix M_G in P2 elements. Therefore, from an implementation point of view, the degrees of freedom related to the midpoints of the edges in C^{-1/2} and M_G are set to zero. Despite the increase in the number of iterations, the numerical experiments still show very satisfactory results (see Supplementary Material, Figure 15).
Another step towards reality is to use dense covariance matrices for a limited number of data, obtained by InSAR and cGNSS.
To do that, we are using the DefVolc interfaces. The dense covariance matrices are provided by DefVolc, a pre- and post-processing software. We started by creating the synthetic u_d and the dense covariance matrix in the S4 radar look. The results obtained in Figure 11 and in the first row of Table 4 confirm the adaptability of our method to dense covariance matrices. In reality, most of the time, the atmospheric contribution causes masked and noisy InSAR data. Therefore, some numerical experiments are performed by creating masked and noisy synthetic u_d and covariance matrices. The numerical experiments are presented in Figure 11. The L-curves are depicted for the synthetic u_d projected onto the S4 and S6 directions: first without any mask and noise in the data, and then for the different possible cases obtained by adding the mask and noise to the synthetic data. We summarize the appropriate choices for α_1 and the number of iterations needed to achieve convergence with these α_1, for the different cases of synthetic u_d, in Table 4. Despite choosing a greater α_1 for the masked data, the number of iterations is reduced. However, for the noisy data, the convergence of the inversion process is much harder. Moreover, the vector sizes of the synthetic u_d for each case are listed in Table 4.
Table 4: Comparison between the number of iterations for the fracture at 0.3 km depth and when the unknowns are pressure p and traction t for different radar looks with identity covariance matrix.
Figure 11: L-curves used to find α_1 representing the best compromise between the data fit (equation (22)) and the smoothing (equation (23)) in the cost function (equation (24)) taking into account the earth-satellite directions with the dense covariance matrix. The fracture is located at −0.3 km, when solving for the traction t. The larger points indicate the best compromise. The normal and tangential tractions are presented for the appropriate α_1. The synthetic data u_d is projected in the S4 and S6 radar looks. (a) Without any noise and mask, (b) with mask, (c) with noise, (d) with noise and mask. Here, we set α_0 = 1.0E−7.
Appendix
The optimal step size
This section is dedicated to some mathematical details for interested readers. The first part is concerned with computing the optimal step size. To simplify the presentation, let us introduce the bilinear form c_N, defined on Γ_N, by
c N (w, w) = Γ N w C −1 w dΓ N ,
so that the cost functional in (13) becomes
J(t) := 1 2 c N (u − u d , u − u d ) + α 0 2 Γ C |t| 2 dΓ C + α 1 2 Γ C |∇t| 2 dΓ C .
Then the directional derivative of J in the direction of a given d reads
∂J ∂t (t), d α 0 ,α 1 = c N (u − u d , w) + α 0 Γ C (t · d) dΓ C + α 1 Γ C (∇t · ∇d) dΓ C ,
where w is the solution to (5) with t = d. Therefore, we may compute the optimal step size ρ, with a search direction d by solving
⟨∂J/∂t (t + ρd), d⟩_{α_0,α_1} = 0, that is
c_N(u + ρw − u_d, w) + α_0 ∫_{Γ_C} (t + ρd) · d dΓ_C + α_1 ∫_{Γ_C} (∇t + ρ∇d) · ∇d dΓ_C = 0,
which gives
ρ [ c_N(w, w) + α_0 ∫_{Γ_C} (d · d) dΓ_C + α_1 ∫_{Γ_C} (∇d · ∇d) dΓ_C ] + c_N(u − u_d, w) + α_0 ∫_{Γ_C} (t · d) dΓ_C + α_1 ∫_{Γ_C} (∇t · ∇d) dΓ_C = 0.
The optimal step size is therefore
ρ* = − [ c_N(u − u_d, w) + α_0 ∫_{Γ_C} (t · d) dΓ_C + α_1 ∫_{Γ_C} (∇t · ∇d) dΓ_C ] / [ c_N(w, w) + α_0 ∫_{Γ_C} (d · d) dΓ_C + α_1 ∫_{Γ_C} (∇d · ∇d) dΓ_C ].   (28)
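In the discrete setting, (28) can be evaluated with the fracture mass and stiffness matrices standing for the two integrals; a minimal sketch (our matrix names) is:

```python
import numpy as np

def optimal_step(u, u_d, t, d, w, cN, M_F0, M_F1, alpha0, alpha1):
    """Optimal step rho* of Eq. (28); cN(a, b) is the covariance-weighted
    inner product on the ground, M_F0/M_F1 realize the L2/H1 terms on Gamma_C."""
    num = cN(u - u_d, w) + alpha0 * (t @ (M_F0 @ d)) + alpha1 * (t @ (M_F1 @ d))
    den = cN(w, w) + alpha0 * (d @ (M_F0 @ d)) + alpha1 * (d @ (M_F1 @ d))
    return -num / den
```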
The optimal step size for Earth-Satellite direction
As previously, the optimal step size for the Earth-satellite directions is obtained by solving ⟨∂J/∂t (t + ρd), d⟩_{α_0,α_1} = 0, that is
c_N(Pu + ρPw − R_d, Pw) + α_0 ∫_{Γ_C} (t + ρd) · d dΓ_C + α_1 ∫_{Γ_C} (∇t + ρ∇d) · ∇d dΓ_C = 0,
where w is still the solution to (5) with t = d. This gives the optimal step size
ρ* = − [ c_N(Pu − R_d, Pw) + α_0 ∫_{Γ_C} (t · d) dΓ_C + α_1 ∫_{Γ_C} (∇t · ∇d) dΓ_C ] / [ c_N(Pw, Pw) + α_0 ∫_{Γ_C} (d · d) dΓ_C + α_1 ∫_{Γ_C} (∇d · ∇d) dΓ_C ].   (29)
L-BFGS Update
The second part is concerned with the L-BFGS update in the minimization Algorithm 1:
H_{k+1} = (I − θ_k s_k y_k^⊤) H_k (I − θ_k y_k s_k^⊤) + θ_k s_k s_k^⊤,   (30)
with s_k = t_{k+1} − t_k, y_k = g_{k+1} − g_k, θ_k = 1 / (y_k^⊤ s_k).
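A dense-matrix version of the update (30) reads as follows (our sketch; a limited-memory implementation would instead keep only the last few (s_k, y_k) pairs, as in [8]).

```python
import numpy as np

def bfgs_update(H, s, y):
    """Inverse-Hessian update of Eq. (30):
    H_{k+1} = (I - theta s y^T) H (I - theta y s^T) + theta s s^T."""
    theta = 1.0 / (y @ s)
    V = np.eye(len(s)) - theta * np.outer(s, y)
    return V @ H @ V.T + theta * np.outer(s, s)
```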
Supplementary Material
Figure 1: Volcanic domain Ω and its boundaries.
Figure 2: Splitting a volcanic cracked domain by domain decomposition method.
Figure 3: Configuration of the synthetic test corresponding to a horizontal circular fracture beneath a flat topography. (a) Progressive mesh used. The mesh has a radius of 100 km. The fine mesh region has a 1.5 km radius; the intermediate mesh size goes from 1.5 to 15 km radius; further away the mesh gets coarser. (b) The source is a pressurized disk with a 1 km radius located at different depths. The yellow patch is submitted to a normal traction t_exact = (0, 0, 1.5) MPa and the blue patch has a null traction t_exact = (0, 0, 0). (c) Surface synthetic displacements obtained by this pressure.
Figure 4: Systematic exploration of the smoothing parameters α_0 and α_1 for minimizing the cost function in equation (13). The source is located at −0.3 km depth beneath the flat topography. The acceptable combination of smoothing parameters is found by comparing (a) relative ground error, (b) relative error of traction on the disk and (c) iteration number. The source has 291 unknowns. The best compromises are indicated by the red boxes.
Figure 5: L-curves used to find α_1 representing the best compromise between the data fit (equation (22)) and the smoothing (equation (23)) in the cost function (equation (13)). The fracture is located at −0.3 km. (a) L-curve when solving for pressure p. The larger points indicate the best compromise. (b) L-curve when solving for traction t. The larger points indicate the best compromise. (c) Fracture pressure corresponding to the best α_1 = 1.0E1. (d) Normal and (e) tangential tractions projected on the fracture corresponding to the best α_1 = 1.0E1. Here, we set α_0 = 1.0E−7.
Figure 6: Fracture pressure located at −0.3 km, corresponding to (a) α_1 = 1.0E−1.
Figure 7: Normal and tangential tractions projected on the fracture located at −0.3 km, corresponding to (a) α_1 = 1.0E0, (b) α_1 = 1.0E1 and (c) α_1 = 1.0E2. The directions of the tangential stress are shown by red vectors.
Figure 8: Systematic exploration of the smoothing parameters α_0 and α_1 for minimizing the cost function in equation (13). The source is located at −0.9 km depth beneath the flat topography. The acceptable combination of smoothing parameters is found by comparing (a) relative ground error, (b) relative error of traction on the disk and (c) iteration number. The source has 107 unknowns. The best compromises are indicated by the red boxes.
Figure 9: L-curves used to find α_1 representing the best compromise between the data fit (equation (22)) and the smoothing (equation (23)) in the cost function (equation (13)). The fracture is located at −0.9 km. (a) L-curve when solving for pressure p. The larger points indicate the best compromise. (b) L-curve when solving for traction t. The larger points indicate the best compromise. (c) Fracture pressure corresponding to the best α_1 = 1.0E1. (d) Normal and (e) tangential tractions projected on the fracture corresponding to the best α_1 = 1.0E0. Here, we set α_0 = 1.0E−7.
Figure 10: L-curves used to find α_1 representing the best compromise between the data fit (equation (22)) and the smoothing (equation (23)) in the cost function (equation (24)) taking into account the earth-satellite directions with identity covariance matrix. The fracture is located at −0.3 km, when solving for pressure p and traction t. The larger points indicate the best compromise. (a) in S4 (b) in S4 and S6 (c) in S4, S6 and TSXA (d) in S4, S6, TSXA and TSXD directions. Here, we set α_0 = 1.0E−7.
Figure 12: Fracture pressure located at −0.9 km, corresponding to (a) α_1 = 1.0E−1, (b) α_1 = 1.0E1 and (c) α_1 = 1.0E2.
Figure 13: Normal and tangential tractions projected on the fracture located at −0.9 km, corresponding to (a) α_1 = 1.0E−1, (b) α_1 = 1.0E0 and (c) α_1 = 1.0E1. The directions of the tangential stress are shown by red vectors.
Figure 14: Fracture stress located at −0.3 km, first row normal stress and second row tangential stress, corresponding to (a) α_1 = 1.0E1 for the S4 direction, (b) α_1 = 1.0E2 for the S4, S6 directions and (c) α_1 = 1.0E2 for the S4, S6, TSXA directions with identity covariance matrix. The directions of the tangential stress are shown by red vectors.
Figure 15: (a) L-curves for a circular fracture located at −0.3 km, corresponding to α_1. (b), (c) Fracture stress with reduced identity covariance matrix for α_1 = 1.0E0 for the S4 radar look.
Table 2: InSAR Unit vector directions

radar look          α1      Unknown       Number of iterations   Number of unknowns
S4                  1.0E1   Pressure p    93                     291
S4                  1.0E1   Traction t    108                    873
S4 S6               1.0E1   Pressure p    59                     291
S4 S6               1.0E1   Traction t    80                     873
S4 S6 TSXA          1.0E1   Pressure p    51                     291
S4 S6 TSXA          1.0E1   Traction t    58                     873
S4 S6 TSXA TSXD     1.0E1   Pressure p    50                     291
S4 S6 TSXA TSXD     1.0E1   Traction t    58                     873
References
[1] O. Bodart, V. Cayol, F. Dabaghi, and J. Koko. Fictitious domain method for an inverse problem in volcanoes. In Domain Decomposition Methods in Science and Engineering XXV (DD 2018), Lecture Notes in Computational Science and Engineering, vol. 138, Springer, 2020.
[2] O. Bodart, V. Cayol, F. Dabaghi, and J. Koko. An inverse problem in an elastic domain with a crack: a fictitious domain approach. Submitted, 2021.
[3] O. Bodart, V. Cayol, S. Court, and J. Koko. XFEM-based fictitious domain method for linear elasticity model with crack. SIAM Journal on Scientific Computing, 38(2):B219-B246, 2016.
[4] C. G. Broyden. The convergence of a class of double-rank minimization algorithms 1. General considerations. IMA Journal of Applied Mathematics, 6(1):76-90, 1970.
[5] R. Fletcher. A new approach to variable metric algorithms. The Computer Journal, 13(3):317-322, 1970.
[6] Y. Fukushima, V. Cayol, and P. Durand. Finding realistic dike models from interferometric synthetic aperture radar data: The February 2000 eruption at Piton de la Fournaise. Journal of Geophysical Research: Solid Earth, 110(B3).
[7] D. Goldfarb. A family of variable-metric methods derived by variational means. Mathematics of Computation, 24:23-26, 1970.
[8] J. Nocedal. Updating quasi-Newton matrices with limited storage. Mathematics of Computation, 35(151):773-782, 1980.
[9] Y. Renard and J. Pommier. GetFEM++: An open source generic C++ library for finite element methods. http://home.gna.org/getfem.
[10] D. F. Shanno. Conditioning of quasi-Newton methods for function minimization. Mathematics of Computation, 24:647-656, 1970.
[11] A. Tarantola. Inverse Problem Theory and Methods for Model Parameter Estimation. Society for Industrial and Applied Mathematics, 2005.
| []
|
[
"An Efficient Split Fine-tuning Framework for Edge and Cloud Collaborative Learning",
"An Efficient Split Fine-tuning Framework for Edge and Cloud Collaborative Learning"
]
| [
"Shaohuai Shi [email protected] ",
"Qing Yang \nPeng Cheng Laboratory\nShenzhenChina\n",
"Yang Xiang [email protected] \nPeng Cheng Laboratory\nShenzhenChina\n",
"Shuhan Qi ",
"Xuan Wang [email protected] ",
"\nHarbin Institute of Technology\nShenzhenChina\n"
]
| [
"Peng Cheng Laboratory\nShenzhenChina",
"Peng Cheng Laboratory\nShenzhenChina",
"Harbin Institute of Technology\nShenzhenChina"
]
| []
| To enable the pre-trained models to be fine-tuned with local data on edge devices without sharing data with the cloud, we design an efficient split fine-tuning (SFT) framework for edge and cloud collaborative learning. We propose three novel techniques in this framework. First, we propose a matrix decomposition-based method to compress the intermediate output of a neural network to reduce the communication volume between the edge device and the cloud server. Second, we eliminate particular links in the model without affecting the convergence performance in fine-tuning. Third, we implement our system atop PyTorch to allow users to easily extend their existing training scripts to enjoy the efficient edge and cloud collaborative learning. Experiments results on 9 NLP datasets show that our framework can reduce the communication traffic by 96 times with little impact on the model accuracy. | 10.48550/arxiv.2211.16703 | [
"https://export.arxiv.org/pdf/2211.16703v1.pdf"
]
| 254,095,898 | 2211.16703 | 77b50a8887d3cb699988253749cee1362f91b1a8 |
An Efficient Split Fine-tuning Framework for Edge and Cloud Collaborative Learning
Shaohuai Shi [email protected]
Qing Yang
Peng Cheng Laboratory
ShenzhenChina
Yang Xiang [email protected]
Peng Cheng Laboratory
ShenzhenChina
Shuhan Qi
Xuan Wang [email protected]
Harbin Institute of Technology
ShenzhenChina
An Efficient Split Fine-tuning Framework for Edge and Cloud Collaborative Learning
Index Terms-AI system, cloud-edge collaborative training, split learning, matrix decomposition
To enable the pre-trained models to be fine-tuned with local data on edge devices without sharing data with the cloud, we design an efficient split fine-tuning (SFT) framework for edge and cloud collaborative learning. We propose three novel techniques in this framework. First, we propose a matrix decomposition-based method to compress the intermediate output of a neural network to reduce the communication volume between the edge device and the cloud server. Second, we eliminate particular links in the model without affecting the convergence performance in fine-tuning. Third, we implement our system atop PyTorch to allow users to easily extend their existing training scripts to enjoy the efficient edge and cloud collaborative learning. Experiments results on 9 NLP datasets show that our framework can reduce the communication traffic by 96 times with little impact on the model accuracy.
I. INTRODUCTION
In recent years, pre-trained language models (PLMs) (or called foundation models [1]) [2]- [5] have achieved significant breakthroughs in many downstream natural language processing (NLP) applications like text generation [6], language translation [7], etc. Once PLMs are well trained with pretraining, they can be used in many scenarios with fine-tuning, where the model is fine-tuned on task-specific datasets with only several epochs going through the datasets. Compared to training from scratch, fine-tuning on PLMs normally takes much faster time (i.e., several epochs on task-specific datasets) and higher accuracy, so it becomes a common practice in many NLP tasks. For example, top-ranked models on GLUE [8] and SQuAD [9] benchmarks are fine-tuned from PLMs.
On the other hand, with the exponential growth of edge devices (e.g., mobile and Internet of Things, IoT), lots of generated data are privacy sensitive and cannot be shared for training models. The fine-tuning technique is very suitable for local training on edge devices as they can keep their data private to learn a model for local usage. However, due to the low computational resources and the memory limitation of edge devices, keeping all training processes on devices may not be possible or it takes a very long training time. To alleviate this problem, split learning (SL) [10] has become a promising distributed learning paradigm to enable resource-constrained edge devices to train deep neural networks (DNNs) with the help of powerful cloud servers without exposing their data to the server [11]. Specifically, SL splits the DNN into two parts (one part is stored on the edge and the other part on the cloud) at a particular layer. Meanwhile, modern DNN training (or fine-tuning) mainly uses stochastic gradient descent (SGD) and its variants (e.g., Adam) with backpropagation [12] to update model parameters iteratively. At each iteration, the training algorithm loads a mini-batch of local data to do the feed-forward computations, and then does backpropagation computations to calculate the gradients to update the model parameters. As the model is split into two parts in SL, the client should send the activation outputs to the server in the feed-forward pass, and the server sends the gradient w.r.t. the activation to the client in the backpropagation pass for updating model parameters locally [10], [13].
However, because the bandwidth between the edge devices and the server (e.g., 1-1000Mb/s) is typically much smaller than the bandwidth between two servers (e.g., 1-200Gb/s) in a data center, the exchange of activation outputs and their gradients between the client and the server is very slow. For example, fine-tuning a popular BERT BASE model (around 110 million parameters) [2] on an Nvidia V100 GPU takes 120ms per iteration, while the communication volume per iteration is 340MB, which requires 2300ms for communication in SL under a 1000Mb/s connection. It means that the introduced communication cost hinders the advantage of SL in making use of powerful servers for training. While there exist some studies [14]-[16] trying to reduce training or fine-tuning costs on edge devices, they fail to address the communication problem in SL.
To this end, in this work, we propose an efficient split fine-tuning framework, SFT. Specifically, we first identify the low-rank property of weight parameters and their gradients in fine-tuning BERT models ( §IV-B). Then, we propose a novel compression approach ( §III) to reduce the communication cost of exchanging data between the edge device and the cloud server by decomposing a single feed-forward layer into three much smaller feed-forward layers based on matrix decomposition while requiring no extra computation overheads. We implement our framework 1 atop PyTorch to allow users to easily utilize our SFT with pre-trained models with little impact on model accuracy ( §III-E). To show the effectiveness of our SFT, we conduct extensive experiments on GLUE [8] and SQuAD [9] datasets with the pre-trained BERT [2] model. Experimental results show that SFT can reduce 96× communication volume than SL with little impact on the model accuracy.
The rest of the paper is organized as follows. We first introduce some background and related work in Section II. Then we present our proposed system in Section III. After that, we demonstrate the experimental studies in Section IV. We finally conclude the paper in Section V.
II. BACKGROUND AND RELATED WORK
A. The feed-forward layer (FFN) in Transformer
Existing popular PLMs in NLP are mainly built on the Transformer [17] architecture. As shown in Fig. 1(a), a PLM consists of a series of transformer layers (or blocks), each of which contains an attention layer and a feed-forward layer (FFN) with residual connections. Since each layer requires a residual connection (Add & Norm), the input size and the output size of the layer should be identical, which means the shapes of all input and output tensors within the transformer layer or between transformer layers are identical. This is an important feature causing extensive communication traffic in SL for fine-tuning such models. See below for more details.
B. Split learning
Split learning (SL) [10], [18] has provided an emerging solution for edge and cloud collaborative learning without exposing the private data on the edge while enjoying the powerful computational resources from the cloud servers. As shown in Fig. 1(b), a DNN model is split into two parts at a particular layer. One part (say net 1 ) is stored on the edge and the other part (say net 2 ) is stored on the cloud. The number of layers on each part may be different, and more layers typically take higher computational costs. Thus, in SL, the data located on the edge is loaded for feed-forward computation on net 1 , and its activation output is transferred (upload) to the cloud. We assume the activation output is a tensor a with shape (B, M, N ), where B is the mini-batch size and M and N are two dimensions of a matrix of the hidden feature representation. Note that M and N may be the explicit dimensions of the hidden feature as different types of layers (e.g., FFN, attention, etc.) have different structures, but they all can be organized as a matrix. For the transformerbased architecture, as we analyzed in the above subsection, the shape is the same no matter which layer is chosen as the split layer in SL. During the backpropagation pass, the gradient w.r.t. a, which is denoted as δ and has the same shape with a, is calculated on the cloud side and transferred to the edge for calculating the gradients w.r.t. the model parameters for updating the model.
It is seen that the computations of training have been partially (net 2 ) offloaded to the server to utilize the powerful server to reduce the training time for edge devices. However, the exchange data (i.e., a and δ) has a large number of elements, which requires significant communication time at each training iteration and becomes the performance bottleneck. In this work, we will present how to reduce the communicating volume of a and δ without sacrificing model accuracy.
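The exchange pattern of SL can be summarized by the following PyTorch sketch (ours, with the transport between edge and cloud abstracted by send/recv placeholders); it only illustrates which tensors cross the network.

```python
import torch
import torch.nn.functional as F

def sl_iteration(net1, net2, x, y, opt1, opt2, send, recv):
    # ---- edge side ----
    a = net1(x)                         # activation a of shape (B, M, N)
    send(a.detach())                    # upload a to the cloud
    # ---- cloud side ----
    a_srv = recv().clone().requires_grad_(True)
    loss = F.cross_entropy(net2(a_srv), y)
    loss.backward()                     # fills a_srv.grad = delta, same shape as a
    opt2.step(); opt2.zero_grad()
    send(a_srv.grad)                    # download the gradient w.r.t. the activation
    # ---- edge side ----
    a.backward(recv())                  # continue back-propagation through net1
    opt1.step(); opt1.zero_grad()
```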
C. Activation or model compression
Some works in reducing a and δ are activation compression techniques [16], [19]- [22]. These studies try to compress the activation outputs for each layer to save computational costs and memory for storing the temporary data in activation, which can be classified as the quantization method and the pruning method with tensor decomposition. Particularly, Evans et al., [20] propose the AC-GC lossy compression algorithm to dynamically compress the activation via quantization and an optimization objective of maximizing the compression ratio while minimizing the accuracy loss. While the quantization technique can maximally reduce the storage of a (with 1bit) by 32 times compared to the 32-bit counterpart, too aggressive compression easily introduces an accuracy loss in model training [20] in practice. The tensor decomposition approaches [14], [16], [21] are to decompose the parameter tensors to small-size approximate tensors, thus reducing the computational and memory costs. Particularly, MPO [14], [16] decomposes an original parameter matrix (say w) into n (n ≥ 2) small matrices (i.e., w → [w 1 , w 2 , ..., w n ]). Even though MPO achieves high compression ratios with slight accuracy loss, it requires particular tricks like dimension squeezing and takes extra computational costs for finding the layer with the least reconstruction error, which makes the optimization procedure difficult to be applied in edge and cloud collaborative learning.
III. SFT: THE SPLIT FINE-TUNING FRAMEWORK
A. Overview
The architecture overview of our proposed SFT is shown in Fig. 1(c). The key idea is twofold: 1) the FFN layer is decomposed into three smaller FFN layers (FFN-1, FFN-2, and FFN-3) after loading the pre-trained parameters, and 2) the residual connection in the original FFN layer is eliminated, but the model convergence is guaranteed. The communication volume between the edge and cloud at the split layer of FFN-1 becomes much smaller than the original FFN output, thus significantly reducing the communication time in collaborative learning. The details are provided as follows.
B. System architecture
As the high communication cost is caused by the large dimension of a and δ at each iteration, we propose to decompose w to multiple small tensors, so that the generated activation output is with small dimensions. Specifically, given a transformer-based model, we first load the pre-trained parameters to the original model architecture. In SFT, we only need to compress the layer that needs to be communicated between the edge and the cloud. According to our observation that decomposition does not affect the model accuracy ( §IV-B), the weights in FFN layers in fine-tuning PLMs are mostly low-rank matrices. We choose the FFN layer as the split layer in SL. Formally, for an FFN layer at layer l with weight matrix w l and the input a l−1 (the output of its previous layer), its output can be represented as
a l = a l−1 w l .(1)
Assuming that SL splits the DNN into two parts at layer l, a l ∈ R M ×N should be communicated from the edge to the cloud in the forward pass, and its gradient δ l ∈ R M ×N should be communicated from the cloud to the edge in the backward pass. Note that in fine-tuning tasks, w l has been initialized with pre-trained parameters. Our goal is to compress a l and δ l such that the communication volume is small enough to eliminate the communication bottleneck in SL.
In SFT, we decompose the split layer (i.e., layer l) whose weights are denoted by w ∈ R N ×H to three matrices via singular value decomposition (SVD), i.e.,
w = uΣv,(2)
where u ∈ R N ×R , Σ ∈ R R×R is a diagonal matrix whose diagonal entries are singular values of w, and v ∈ R R×H . R ≤ min{N, H} is the rank of w. With the decomposed matrices, Eq. (1) becomes
a l = a l−1 uΣv = (a l−1 u)(Σv).(3)
Let â_l = a_{l−1} u and â_l ∈ R^{M×R}. Note that â_l = a_{l−1} u is equivalent to constructing an FFN layer whose weight is u. Similarly, a_l = â_l (Σv) is equivalent to constructing two FFN layers whose weights are Σ and v respectively. Instead of splitting the DNN in the original architecture at layer l, SFT splits its decomposed form at u. It means that the activation output on the edge side is â_l ∈ R^{M×R}, which should be communicated to the server side in the forward pass. The corresponding gradient δ̂_l ∈ R^{M×R} is communicated in the backward pass. Thus, the communication volume is reduced from M × N to M × R, i.e., the communication time is shortened by N/R times theoretically. Note that SFT decomposes a single FFN layer into three smaller FFN layers after loading the pre-trained parameters but before fine-tuning. The three constructed FFN layers are initialized with the SVD decomposition and they are tuned every iteration in fine-tuning. Due to the low-rank feature of the weight matrix of the FFN layer during fine-tuning, we can choose an extremely small value of R without affecting the convergence performance in fine-tuning. In our experiments, setting R = 1 or R = 8 can almost preserve the model accuracy (§IV). In summary, SFT can reduce the communication traffic by N/R times over SL.
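As an illustration (a sketch of ours, not the released SFT code), a pre-trained nn.Linear can be turned into the three smaller FFNs of Eq. (3) by a rank-R truncated SVD as follows.

```python
import torch
import torch.nn as nn

def decompose_ffn(linear: nn.Linear, rank: int):
    """Split a pre-trained Linear into three Linear layers whose composition
    approximates it; PyTorch stores weights as (out, in), so w of Eq. (2)
    corresponds to linear.weight.T of shape (N, H)."""
    W = linear.weight.data.T                                   # (N, H)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U_r, S_r, Vh_r = U[:, :rank], S[:rank], Vh[:rank, :]
    ffn1 = nn.Linear(W.shape[0], rank, bias=False)             # kept on the edge
    ffn2 = nn.Linear(rank, rank, bias=False)                   # on the cloud
    ffn3 = nn.Linear(rank, W.shape[1], bias=linear.bias is not None)  # on the cloud
    ffn1.weight.data = U_r.T.contiguous()                      # x -> x U_r
    ffn2.weight.data = torch.diag(S_r)                         # y -> y diag(S_r)
    ffn3.weight.data = Vh_r.T.contiguous()                     # y -> y Vh_r
    if linear.bias is not None:
        ffn3.bias.data = linear.bias.data.clone()
    return ffn1, ffn2, ffn3
```

For a HuggingFace BERT, this would be applied to the FFN of the chosen split layer (e.g., something like model.encoder.layer[11].intermediate.dense, depending on the model wrapper); only the small output of ffn1, of width R, then crosses the network.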
C. Algorithm
The SFT algorithm is shown in Algorithm 1, where some comments illustrate that some code is only executed on the edge and some are only on the cloud. The inputs are the neural network architecture net, the split layer l, and the number of iterations I for fine-tuning. In Algorithm 1, lines 1-3 are splitting the model, loading pre-trained models, and reconstructing the split layer based on SVD. The for loop in lines 4-14 is the training procedure in both the edge and cloud sides. For each iteration, the edge first loads the training data (line 5) and then performs the feed-forward on the first part of the network (line 6), whose results (â) are sent to the cloud (line 7). After the cloud receives the activation from the client, it performs feed-forward on the second part of the network (line 8), followed by the backward computation with loss (lines 9-10). The gradient (δ) w.r.t. the activation is sent from the cloud to the edge (line 11), followed by back-propagation on the edge side (line 12). After that, the models are updated simultaneously on the edge and the cloud (lines 13-14). Our system enables users with existing training scripts to be able to explore the cloud to fine-tune their models without exposing the data to the server.
D. Performance analysis
SFT has two main goals: 1) making fine-tuning possible on a low-memory edge device that cannot store the whole model, and 2) enabling edge devices to exploit cloud servers to fine-tune a model more efficiently. The first goal is an obvious advantage of SFT when low-memory devices cannot store the whole model. For the second goal, however, we should consider whether SFT can reduce the overall fine-tuning time. Assume that the original iteration time on the edge to fine-tune a model net is denoted by t_naive = t_edge(net). When we use SFT, the fine-tuning time per iteration is
t sf t = t edge (net 1 ) + t cloud (net 2 ) + t comm ,(4)
where t_cloud(net2) is the computation time on the cloud and t_comm is the communication time between the edge and the cloud. Thus, SFT can be used if t_sft < t_naive. As the cloud server typically has much higher computational power than the edge device, t_edge(net1) + t_cloud(net2) should become much smaller than t_naive. However, t_comm should not be too large so that one can enjoy the efficiency of SFT. t_edge(net1) and t_cloud(net2) depend on the split layer. A lower split layer indicates that more computational workloads are uploaded to the server for calculation, which would make t_edge(net1) smaller and t_cloud(net2) larger, and vice versa. As the cloud is much more powerful than the edge, we expect the split layer to be as low as possible, but a lower layer may sacrifice the model accuracy (§IV-B). Therefore, we have a trade-off between accuracy and efficiency in SFT. t_comm depends on the rank we use for decomposition. A larger rank makes t_comm higher. Therefore, the two parameters (i.e., split layer and rank) should be well tuned for achieving better end-to-end training performance in SFT. In this work, we do not develop a strategy for tuning them, but we present some observations from the experiments to help tune them.
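The break-even condition t_sft < t_naive can be checked with a back-of-the-envelope helper like the following (ours; all inputs are user-provided estimates).

```python
def sft_is_worthwhile(t_edge_net1, t_cloud_net2, comm_bytes, bandwidth_bps, t_naive):
    """Evaluate t_sft = t_edge(net1) + t_cloud(net2) + t_comm (Eq. 4) and
    compare it with local training. Times in seconds, bandwidth in bit/s."""
    t_comm = 8.0 * comm_bytes / bandwidth_bps
    t_sft = t_edge_net1 + t_cloud_net2 + t_comm
    return t_sft < t_naive, t_sft
```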
E. Implementation
To enable users to easily use our SFT framework, we implement a distributed optimizer named "SFTOptimizer" atop the PyTorch optimizer to integrate the layer decomposition of the model and the communication between the edge and the cloud. To use SFT, users only need two more lines of code to extend their original training scripts. An example is shown in Listing 1, where lines 2-3 are inserted into the original training code to use SFT.
1 optim = torch.optim.Adam(model.parameters(), ...)
2 role = 'edge' if edge else 'cloud' # +++
3 optim = SFTOptimizer(optim, role=role, ...) # +++
4 for i in range(epochs):
IV. EXPERIMENTAL STUDIES
A. Experimental settings
Testbed. We use two Nvidia Tesla V100 GPUs in a single node as an emulation environment. One GPU is used to simulate an edge device, and one GPU is used to work as a cloud server.
DNN and datasets. We use the popular language model, BERT BASE [2], which has 12 layers and 110M parameters, as our experimental neural network. The dimension of the split layer is (M, N ) = (3072, 768). Its corresponding pretrained model is downloaded from the well-trained parameters at HuggingFace 2 . We fine-tune the model on 9 datasets from GLUE [8] and SQuAD [9] for different downstream tasks including named entity recognition, textual entailment, coreference resolution, etc.
B. Convergence performance
SFT w/ the residual connection. We first demonstrate the convergence results of the SVD decomposition for replacing the large FFN with three small FFNs for fine-tuning, which shows the low-rank feature of FFN weights. We use SST-2 and QNLI datasets to study the convergence property of our SFT as shown in Fig. 2. Due to the page limit, we do not show the results of other datasets as they have similar patterns. The results show that the weight matrix in the split layer can be decomposed as a rank-1 matrix without sacrificing the model accuracy. In some scenarios, rank-1 decomposition is better than the baseline. For example, decomposing at layers 2 or 3 (11 or 12) 3 is much better than the baseline in SST-2 (QNLI).
How to determine which layer should be split to achieve the highest efficiency in a given environment is also of importance to explore, and we will leave it as our future work. SFT w/o the residual connection. Since we also need to eliminate the residual connection in the original transformer layer in SFT, we conduct experiments to verify the convergence performance by eliminating the residual connection and decomposing the FFN with SVD. The results are shown in Fig. 3, which shows that the model accuracy tends to decrease when the split layer is chosen at lower layers. However, it can still preserve the model accuracy when splitting at higher layers. Thus, it is possible to use SFT to enjoy the powerful cloud server to accelerate the fine-tuning tasks for edge devices without sharing the data.
Model accuracy in all datasets. Based on the convergence results in Fig. 3, we use a rank of 8, 16, and 32 for decomposition at layer 11 in SFT (denoted as SFT(l=11,R=8), SFT(l=11,R=16), and SFT(l=11,R=32) respectively). Therefore, the communication cost can be reduced by 768/8=96 times in the chosen BERT model if R=8. The model accuracy on the validation sets of all 9 chosen datasets is shown in Table I. The results show that different ranks make little difference and a higher rank does not guarantee a higher accuracy. For example, SFT(l=11,R=8) achieves higher accuracy than SFT(l=11,R=16) on the QQP and RTE datasets. In most cases, SFT(l=11,R=8) preserves the model accuracy compared to the baseline. In the particular case of RTE, SFT(l=11,R=8) achieves much lower accuracy than the baseline due to the extremely small size (only around 2,500 training samples) of the dataset. The results show that SFT (using a rank of 8) makes fine-tuning tasks possible to train in a collaborative learning environment so that the edge does not need to expose the data to the server.
3 The smaller layer index means the lower layer
C. Estimated iteration performance
To show the end-to-end performance of our SFT compared with SL on the BERT BASE model, we benchmark the time performance on a V100 GPU using SST-2 (other datasets have similar patterns). Let t bert (gpu, nlayers) denote the wallclock time per iteration training the BERT BASE model on a particular gpu using nlayers layers of the model. The full 12 layer BERT BASE on SST-2 takes t bert (V100, 12) = 124ms (5) with a mini-batch size of 32 and a sequence length of 66. Since the 12 layers of BERT BASE have an identical architecture, each layer has the same computational workloads and thus has the same computation time. We can estimate the iteration time of one layer as t bert (V100, 1) = 124/12ms = 10.3ms.
Assume that the edge side is a relatively new Nvidia edge device, XAVIER-NX, with 21 TOPs AI performance, and the cloud side is a V100 GPU with 130 TOPs AI performance, which means the cloud server is around 6 times faster than the edge device. Therefore, on a XAVIER-NX GPU, we have t bert (XAVIER, 12) = 6t bert (V100, 12) = 6 × 124 = 744ms (7) and t bert (XAVIER, 1) = 6t bert (V100, 1) = 6 × 10.3 = 60.3ms.
(8) The communication traffic with SL is 32 × 3072 × 768 × 4 = 288MB, while it is 32 × 3072 × 8 × 4 = 3MB with our SFT. Assuming that the typical bandwidth between the edge and the cloud is 1000Mbps Ethernet 4 , the communication times of SL and SFT are 2,300ms and 24ms respectively. Thus, the iteration times with local training (on an edge device), with SL (on an edge and a cloud), and with SFT (on an edge and a cloud) are t naive = t bert (XAVIER, 12) = 744ms,
t sl = t bert (XAVIER, 10) + t bert (V100, 2) + t comm = (60.3 × 10 + 10.3 × 2 + 2300)ms = 2923.6ms, (10) and t sf t = t bert (XAVIER, 10) + t bert (V100, 2) + t comm (11) = (60.3 × 10 + 10.3 × 2 + 24)ms = 647.6ms (12) respectively. The results show that the introduced communication time in SL is very high, making it even slower than the local training. Our SFT, on the other hand, can achieve faster training time (14% reduction in the end-to-end training time) by reducing the communication volume between the cloud server and the edge device.
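The communication estimates above can be reproduced with a few lines (our sketch; the exact milliseconds depend on whether MB means 10^6 or 2^20 bytes and on protocol overheads).

```python
def comm_time_ms(batch, rows, cols, bandwidth_mbps, bytes_per_elem=4):
    """One-way transfer time of a (batch, rows, cols) fp32 tensor."""
    volume_bits = batch * rows * cols * bytes_per_elem * 8
    return 1e3 * volume_bits / (bandwidth_mbps * 1e6)

# split-layer activation: (32, 3072, 768) for SL vs (32, 3072, 8) for SFT at 1000 Mb/s
t_sl_comm = comm_time_ms(32, 3072, 768, 1000)   # roughly 2300-2400 ms
t_sft_comm = comm_time_ms(32, 3072, 8, 1000)    # roughly 24-25 ms
```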
D. Discussion
Our experimental results conclude two folds. First, when the edge devices cannot conduct fine-tuning tasks due to their memory constraint, it is possible to enjoy our SFT to fine-tune the model. Second, even if the edge clients can fine-tune the model locally, our SFT can help accelerate the training by exploring the more powerful cloud servers without requiring the data from the edge devices. However, there are still two important problems that should be further studied to make SFT more practical. First, could it be possible to split the layer in the lower layer of the model such that more computational workloads can be offloaded to the server? In our existing results, splitting the lower layer may introduce some accuracy loss. Thus, it is a trade-off between the model accuracy and training efficiency. Enjoying higher training speed may sacrifice some accuracy. Second, as keeping the residual connection can preserve the model accuracy (as shown in Fig. 2) even using rank-1 decomposition, could it be possible to keep the residual connection without introducing significant communication costs between the edge and the cloud? As SFT decomposes the FFN to smaller FFNs which are distributed to the client and server, the residual connection should require the activation data to be transferred between the edge and the cloud, which makes the communication extremely heavy. However, eliminating the residual connection only allows higher layers to be split layers, which means only a small proportion of layers can be uploaded to the server if we want to preserve the model's accuracy.
V. CONCLUSION
In this work, we proposed an efficient split learning framework for fine-tuning tasks, which is called split fine-tuning (SFT). Specifically, we first observed that model weights are normally low-rank in fine-tuning tasks based on our extensive experiments. Then based on the low-rank feature of finetuning, we introduced a novel layer decomposition method using SVD such that we can significantly reduce the communication volume between the edge and the cloud in collaborative learning. We implemented our prototype system atop PyTorch and enable end-users to easily conduct SFT with very little change to their existing training scripts. Extensive experimental results showed that our SFT reduces the communication traffic by 96 times compared to SL with little impact on the model accuracy.
Fig. 1: (a) A typical transformer architecture which consists of multiple transformer layers (say L layers), and each transformer layer (also called a block) has a feed-forward (FFN) layer. (b) Split learning architecture: the full model is split into two parts, each of which is put on the edge and the cloud respectively. (c) Our split fine-tuning framework (SFT): the split layer (FFN) is decomposed into three smaller FFNs, i.e., FFN-1 is located on the edge; FFN-2 and FFN-3 are located on the cloud.
Algorithm 1 SFT: Split fine-tuning on an edge and a cloud
Input: net, l, I
1: Split net to net1 and net2 at layer l;
2: Load pre-trained parameters: net1 on edge and net2 on cloud;
3: Reconstruct layer l with three FFN layers with Eq. (3); keep the first layer on edge and the last two layers on cloud;
4: for i = 1 → I do
5:   Load B training samples: (x_i, y_i);              On edge
6:   Feed-forward with net1: â_l = net1(x_i);           On edge
7:   Edge sends â_l and y_i to Cloud;
8:   Feed-forward with net2: ŷ_i = net2(â_l);           On cloud
9:   Calculate loss with ŷ_i and y_i;                   On cloud
10:  Back-propagation with loss: δ̂_l = loss.backward(); On cloud
11:  Cloud sends δ̂_l to Edge;
12:  Back-propagation with δ̂_l;                         On edge
13:  Update net1;                                        On edge
14:  Update net2;                                        On cloud
15: end for
Fig. 2: Model accuracy on validation tests w/ rank-1 decomposition in SFT while preserving the residual connection. The baseline is the result run with the original fine-tuning algorithm.
Fig. 3: Model accuracy on validation tests w/ rank-8 decomposition in SFT while eliminating the residual connection.
(a) Transformer based pre-train model. (b) Split learning. (c) Split fine-tuning.
TABLE I: Fine-tuning accuracy (the higher the better) in validation sets. For each dataset (the numbers in brackets indicate the sizes of the datasets), we run three independent experiments and calculate their mean.

Algorithm          SST-2   QNLI    MNLI    QQP      CoLA    RTE     STS-B   MRPC    SQuAD
                   (67k)   (105k)  (364k)  (91.2k)  (8.5k)  (2.5k)  (7k)    (3.7k)  (88k)
Baseline           92.54   91.24   84.56   90.73    55.3    66.06   88.38   85.33   88.25
SFT(l=11,R=8)      92.43   90.98   83.98   90.93    57.13   64.25   86.46   84.81   88.33
SFT(l=11,R=16)     92.31   91.22   84.33   90.75    57.35   62.09   86.78   83.47   88.75
SFT(l=11,R=32)     92.77   91.04   84.27   90.99    57.87   62.81   87.46   84.23   88.56
Our code is available in https://openi.pcl.ac.cn/Encore/splitfinetuning.
https://huggingface.co/bert-base-uncased
We also assume the bandwidth can be fully utilized, while it cannot in practice. The main purpose of the emulation is to demonstrate the potential benefits of our SFT in real-world environments, and the performance also varies with different configurations.
| []
|
[
"Coupling spin defects in hexagonal boron nitride to monolithic bullseye cavities",
"Coupling spin defects in hexagonal boron nitride to monolithic bullseye cavities"
]
| [
"Johannes E Fröch [email protected] \nSchool of Mathematical and Physical Sciences\nUniversity of Technology Sydney\n2007UltimoNew South WalesAustralia\n",
"Lesley Spencer \nSchool of Mathematical and Physical Sciences\nUniversity of Technology Sydney\n2007UltimoNew South WalesAustralia\n\nARC Centre of Excellence for Transformative Meta-Optical Systems (TMOS)\nUniversity of Technology Sydney\n2007UltimoNew South WalesAustralia\n",
"Mehran Kianinia \nSchool of Mathematical and Physical Sciences\nUniversity of Technology Sydney\n2007UltimoNew South WalesAustralia\n\nARC Centre of Excellence for Transformative Meta-Optical Systems (TMOS)\nUniversity of Technology Sydney\n2007UltimoNew South WalesAustralia\n",
"Daniel Totonjian \nSchool of Mathematical and Physical Sciences\nUniversity of Technology Sydney\n2007UltimoNew South WalesAustralia\n",
"Minh Nguyen \nSchool of Mathematical and Physical Sciences\nUniversity of Technology Sydney\n2007UltimoNew South WalesAustralia\n",
"Vladimir Dyakonov \nExperimental Physics 6 and Würzburg-Dresden Cluster of Excellence ct.qmat\nJulius Maximilian University of Würzburg\n97074WürzburgGermany\n",
"Milos Toth \nSchool of Mathematical and Physical Sciences\nUniversity of Technology Sydney\n2007UltimoNew South WalesAustralia\n\nARC Centre of Excellence for Transformative Meta-Optical Systems (TMOS)\nUniversity of Technology Sydney\n2007UltimoNew South WalesAustralia\n",
"Sejeong Kim \nDepartment of Electrical and Electronic Engineering\nUniversity of Melbourne\n3010VictoriaAustralia\n",
"Igor Aharonovich [email protected] \nSchool of Mathematical and Physical Sciences\nUniversity of Technology Sydney\n2007UltimoNew South WalesAustralia\n\nARC Centre of Excellence for Transformative Meta-Optical Systems (TMOS)\nUniversity of Technology Sydney\n2007UltimoNew South WalesAustralia\n"
]
| [
"School of Mathematical and Physical Sciences\nUniversity of Technology Sydney\n2007UltimoNew South WalesAustralia",
"School of Mathematical and Physical Sciences\nUniversity of Technology Sydney\n2007UltimoNew South WalesAustralia",
"ARC Centre of Excellence for Transformative Meta-Optical Systems (TMOS)\nUniversity of Technology Sydney\n2007UltimoNew South WalesAustralia",
"School of Mathematical and Physical Sciences\nUniversity of Technology Sydney\n2007UltimoNew South WalesAustralia",
"ARC Centre of Excellence for Transformative Meta-Optical Systems (TMOS)\nUniversity of Technology Sydney\n2007UltimoNew South WalesAustralia",
"School of Mathematical and Physical Sciences\nUniversity of Technology Sydney\n2007UltimoNew South WalesAustralia",
"School of Mathematical and Physical Sciences\nUniversity of Technology Sydney\n2007UltimoNew South WalesAustralia",
"Experimental Physics 6 and Würzburg-Dresden Cluster of Excellence ct.qmat\nJulius Maximilian University of Würzburg\n97074WürzburgGermany",
"School of Mathematical and Physical Sciences\nUniversity of Technology Sydney\n2007UltimoNew South WalesAustralia",
"ARC Centre of Excellence for Transformative Meta-Optical Systems (TMOS)\nUniversity of Technology Sydney\n2007UltimoNew South WalesAustralia",
"Department of Electrical and Electronic Engineering\nUniversity of Melbourne\n3010VictoriaAustralia",
"School of Mathematical and Physical Sciences\nUniversity of Technology Sydney\n2007UltimoNew South WalesAustralia",
"ARC Centre of Excellence for Transformative Meta-Optical Systems (TMOS)\nUniversity of Technology Sydney\n2007UltimoNew South WalesAustralia"
]
| []
| Color centers in hexagonal boron nitride (hBN) are becoming an increasingly important building block for quantum photonic applications. Herein, we demonstrate the efficient coupling of recently discovered spin defects in hBN to purposely designed bullseye cavities. We show that the allmonolithic hBN cavity system exhibits an order of magnitude enhancement in the emission of the coupled boron vacancy spin defects. In addition, by comparative finite-difference time-domain modelling, we shed light on the emission dipole orientation, which has not been experimentally demonstrated at this point. Beyond that, the coupled spin system exhibits an enhanced contrast in optically detected magnetic resonance readout and improved signal to noise ratio. Thus, our experimental results supported by simulations, constitute a first step towards integration of hBN spin defects with photonic resonators for a scalable spin-photon interface.Hexagonal boron nitride, a naturally occurring van der Waals crystal, is becoming a prevalent platform to study nanophotonics and light matter interaction at the nanoscale. 1, 2 Of particular importance is its ability to host optically active defects that have been recently studied as promising solid state quantum emitters. 3-16 Furthermore, several of these defects exhibit a spin -photon interface, with a clear optically detected magnetic resonance (ODMR) even at room temperature.[17][18][19][20]This class of defects that exhibit ODMR, is highly sought after for quantum sensing, quantum information and integrated quantum photonics applications.21,22 | 10.1021/acs.nanolett.1c01843 | [
"https://arxiv.org/pdf/2105.12317v1.pdf"
]
| 235,195,856 | 2105.12317 | 344175ce266da7678527f3dbadad3ecac9b06afb |
Coupling spin defects in hexagonal boron nitride to monolithic bullseye cavities
Johannes E Fröch [email protected]
School of Mathematical and Physical Sciences
University of Technology Sydney
2007UltimoNew South WalesAustralia
Lesley Spencer
School of Mathematical and Physical Sciences
University of Technology Sydney
2007UltimoNew South WalesAustralia
ARC Centre of Excellence for Transformative Meta-Optical Systems (TMOS)
University of Technology Sydney
2007UltimoNew South WalesAustralia
Mehran Kianinia
School of Mathematical and Physical Sciences
University of Technology Sydney
2007UltimoNew South WalesAustralia
ARC Centre of Excellence for Transformative Meta-Optical Systems (TMOS)
University of Technology Sydney
2007UltimoNew South WalesAustralia
Daniel Totonjian
School of Mathematical and Physical Sciences
University of Technology Sydney
2007UltimoNew South WalesAustralia
Minh Nguyen
School of Mathematical and Physical Sciences
University of Technology Sydney
2007UltimoNew South WalesAustralia
Vladimir Dyakonov
Experimental Physics 6 and Würzburg-Dresden Cluster of Excellence ct.qmat
Julius Maximilian University of Würzburg
97074WürzburgGermany
Milos Toth
School of Mathematical and Physical Sciences
University of Technology Sydney
2007UltimoNew South WalesAustralia
ARC Centre of Excellence for Transformative Meta-Optical Systems (TMOS)
University of Technology Sydney
2007UltimoNew South WalesAustralia
Sejeong Kim
Department of Electrical and Electronic Engineering
University of Melbourne
3010VictoriaAustralia
Igor Aharonovich [email protected]
School of Mathematical and Physical Sciences
University of Technology Sydney
2007UltimoNew South WalesAustralia
ARC Centre of Excellence for Transformative Meta-Optical Systems (TMOS)
University of Technology Sydney
2007UltimoNew South WalesAustralia
Coupling spin defects in hexagonal boron nitride to monolithic bullseye cavities
* These Authors contributed equally Corresponding
Color centers in hexagonal boron nitride (hBN) are becoming an increasingly important building block for quantum photonic applications. Herein, we demonstrate the efficient coupling of recently discovered spin defects in hBN to purposely designed bullseye cavities. We show that the all-monolithic hBN cavity system exhibits an order of magnitude enhancement in the emission of the coupled boron vacancy spin defects. In addition, by comparative finite-difference time-domain modelling, we shed light on the emission dipole orientation, which has not been experimentally demonstrated at this point. Beyond that, the coupled spin system exhibits an enhanced contrast in optically detected magnetic resonance readout and an improved signal-to-noise ratio. Thus, our experimental results, supported by simulations, constitute a first step towards integration of hBN spin defects with photonic resonators for a scalable spin-photon interface. Hexagonal boron nitride, a naturally occurring van der Waals crystal, is becoming a prevalent platform to study nanophotonics and light-matter interaction at the nanoscale. 1, 2 Of particular importance is its ability to host optically active defects that have been recently studied as promising solid state quantum emitters. 3-16 Furthermore, several of these defects exhibit a spin-photon interface, with a clear optically detected magnetic resonance (ODMR) even at room temperature. [17][18][19][20] This class of defects that exhibit ODMR is highly sought after for quantum sensing, quantum information and integrated quantum photonics applications. 21,22
One specific example is the negatively charged boron vacancy (VB -) defect which has a triplet ground state, with zero field splitting of ~ 3.5 GHz and a broad emission around 810 nm. 17 While its level structure is the focus of growing research, many fundamental aspects are still unknown. These include the position of its zero phonon line and its detailed photophysical properties, as well as the fundamental reasons for the relatively low quantum efficiency (QE). [23][24][25][26] To circumvent the latter, emission enhancement must be realized, for example by coupling the VBdefects to plasmonic or dielectric resonators. Integration of spin defects with photonic waveguides or cavities is also ultimately required to improve photon collection efficiencies and is critical for many applications in quantum photonics and quantum sensing. [27][28][29] In this work we demonstrate the integration of the VBinto a dielectric cavity -specifically a bullseye cavity. [30][31][32][33][34][35] The rationale behind the choice for this particular device type is given by three factors. First, the dipole of the VBemitter has not been experimentally determined. While photonic crystal cavities or plasmonic cavities are typically designed to enhance either in plane or out of plane dipole orientations, a bullseye cavity improves the collection efficiency of any dipole orientation. Second, the ensemble emission of the VBcovers a wide spectral range from 750 nm to 850 nm, which does not narrow at cryogenic temperature. The resonance of the bullseye (several tens of nanometers) can match a broad range at the maximum of that emission, which facilitates efficient coupling to the device. Finally, the bullseye cavity is a planar device (unlike, for example, solid immersion lenses or nanoscale pillars) that dramatically enhances the collection efficiency, especially for lower numerical aperture lenses, and is therefore particularly suitable for van der Waals and 2D materials.
To enable the largest possible spatial overlap of the VBensemble with the cavity fields, we employed a monolithic approach. Here, the bullseye cavity hosting the VBdefects is fabricated entirely from hBN, as shown schematically in Figure 1(a). The monolithic hBN bullseye cavity facilitates coupling to the VBwith a strong directionality of emission into a narrow angle of the far field, enabling improved spin readout of the emitter in hBN. The inset is a schematic of the hBN lattice hosting the VBspin defects.
The device fabrication is described in detail in the methods section and follows the principal steps as described in our previous studies. 36,37 Briefly, hBN was transferred from a high quality bulk crystal onto 285 nm SiO2 and suitable flakes of desired thickness (~ 290 nm) were identified by optical contrast. The sample was then coated with a polymer resist and patterned via electron beam lithography. Subsequently, reactive ion etching was used to cast the pattern from the resist into the hBN. A representative optical image of a hBN flake after fabrication is shown in Figure 1 For our work, we based the lattice defining parameter a on the second order bragg condition a=λ/neff, 38 for which we assumed an effective refractive index of neff ~ 1.7 at λ = 800 nm (n‖= 2.1) 39 with further structure defining parameters given by the central disk diameter d0, and the air gap between rings g, with 9 rings in total. As noticeable in Figure 1(b), we tuned these parameters in order to cover a larger range of spectral resonances. In the following, the notations BE1-BE8 correspond to 8 different groups of bullseye cavities with different resonances that were engineered by tuning the structural parameters a, d0, and g. Specifically, the scaling was set in increments of 5% smaller/larger relative to the parameters of device BE3, defined by a=475 nm, d0=950 nm, and g=180 nm.
After fabrication, the entire flake was homogeneously irradiated with a focused ion beam (FIB) to generate vacancies. Previous studies employed heavier ions like nitrogen or xenon to generate the VB- defects, 40 which results in a rather shallow implantation depth. To engineer the emitters throughout the entire depth of the hBN cavity, we instead used a focused hydrogen beam (further detailed in the Methods and Supporting Information 1). We now turn to a detailed characterization of the photonic functionality. All experiments were conducted at room temperature using a lab-built confocal microscope setup with a 0.9 NA objective and a 532 nm CW laser as the excitation source. Figure 2(a) shows spectra of the bullseye cavities for each scaling (blue color scale) in comparison to the emission collected from pristine hBN (red).
Notably, we observed that devices BE3 coupled to the central wavelength of the VB- emission at 795 nm, while BE1 and BE2 show enhanced VB- emission at shorter wavelengths of 740 nm and 765 nm, respectively. The BE4 and BE5 cavity modes were observed at longer wavelengths of 825 nm and 855 nm, respectively. Devices with larger scaling factors (BE6-BE8) showed weakly coupled modes or no mode at all (Supporting Information 2). The quality factor, as determined by a Lorentzian fit for BE3, is on the order of ~ 100 (Figure 2(a) inset). We note that we did not observe modes before ion beam irradiation. This is direct evidence that the light coupling to the cavity stems from VB- ensembles as opposed to background emission (e.g. surface contamination or other luminescent defects). Furthermore, the equidistant distribution of modes with device scaling indicates that the emission is coupled to the same type of resonance (further discussed below).
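As an illustration of how such a quality factor can be extracted, the sketch below fits a Lorentzian to a synthetic cavity-mode spectrum and reports Q = λ0/FWHM; the spectrum, noise level, and initial guesses are placeholders, not the measured data.

```python
# Minimal sketch: extract a cavity Q-factor from a spectrum by fitting a
# Lorentzian and taking Q = lambda_0 / FWHM. The data below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(wl, amplitude, center, fwhm, offset):
    return amplitude * (fwhm / 2) ** 2 / ((wl - center) ** 2 + (fwhm / 2) ** 2) + offset

# Hypothetical spectrum: a mode near 795 nm with ~8 nm FWHM on a flat background.
wavelength = np.linspace(770, 820, 400)
counts = lorentzian(wavelength, 900, 795.0, 8.0, 120) + np.random.normal(0, 15, wavelength.size)

p0 = [800, 795, 10, 100]  # initial guess: amplitude, center, FWHM, offset
popt, _ = curve_fit(lorentzian, wavelength, counts, p0=p0)
amplitude, center, fwhm, offset = popt
print(f"center = {center:.1f} nm, FWHM = {fwhm:.1f} nm, Q = {center / fwhm:.0f}")
```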
We observed modes among all devices of the sets BE1-BE5, with PL enhancements as outlined in Figure 2(b). On average, the bullseye cavities with a lattice constant of a = 475 nm, corresponding to BE3, showed the largest gain in PL intensity (up to a factor of ~ 6.5). Even for detuned devices we still observed on average a PL enhancement by a factor of ~ 3, and for some devices up to ~ 6. Yet, in absolute numbers, devices of scaling BE3 showed the largest increase in total PL intensity, due to the closest match of the mode to the center of the VB- emission. Regardless, the fact that the enhancement is on par among the different scalings shows equivalently efficient coupling of the bullseye cavity to the VB- throughout its entire emission range. This indicates that the dipole properties of the emitter (orientation and strength) are homogeneous throughout this range.
To further characterize the system, we studied the saturation behavior of the VB- ensembles under increasing excitation power. Measurements for bare hBN compared to the bullseye cavities are shown in Figure 2(c). Here we determined the intensity from the integrated spectrometer counts over an emission range from 775 nm to 805 nm (inset). The measured data set is fitted to the equation I = Isat·P/(P + Psat), where Isat is the saturation intensity and Psat is the saturation power of the excitation laser. We determined saturation intensities of 242 and 123 and saturation powers of 16.4 mW and 42.3 mW for VB- emission coupled to a bullseye cavity and from plain hBN, respectively. The enhancement factor for PL is ~ 6 in the undersaturation regime and ~ 2 in the saturation regime.
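The saturation analysis can be reproduced in a few lines; the sketch below fits the stated model I = Isat·P/(P + Psat) with scipy, using synthetic data generated for illustration rather than the measured counts.

```python
# Minimal sketch: fit the saturation model I = I_sat * P / (P + P_sat) to
# power-dependent PL data. The data points below are synthetic placeholders,
# not the measured values from the paper.
import numpy as np
from scipy.optimize import curve_fit

def saturation(power_mw, i_sat, p_sat_mw):
    return i_sat * power_mw / (power_mw + p_sat_mw)

power_mw = np.array([0.5, 1, 2, 4, 8, 12, 16, 24, 32])  # excitation power (mW)
intensity = saturation(power_mw, 242.0, 16.4) * (1 + np.random.normal(0, 0.03, power_mw.size))

popt, _ = curve_fit(saturation, power_mw, intensity, p0=[200.0, 10.0])
i_sat, p_sat = popt
print(f"I_sat = {i_sat:.0f} (integrated counts), P_sat = {p_sat:.1f} mW")
```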
Figure 2. PL characterization. a) Spectra of devices with varying lattice constants. The inset shows a Lorentzian fit of the mode for a device BE3 with a Q-factor of ~ 100. b) Comparison of the PL enhancement for various devices relative to emission in an unstructured hBN film. Measurements of bullseye cavities are represented as blue circles. The mean and standard deviation are represented for each scaling set as red squares and error bars, respectively. c) Comparison of PL saturation for the VB- emitter in a bullseye cavity (blue) and unstructured hBN (red), derived by integrating over the indicated spectral range of the VB- (inset).
To gain deeper insight into the enhancement mechanism, we simulated various models based on the Finite Difference Time Domain (FDTD) method (see Methods for details). Specifically, as the dipole orientation of the defect has not been experimentally determined, we consider two cases: an in-plane and an out-of-plane dipole relative to the hBN sheet, which couple to a TE-like and a TM-like mode of the bullseye cavity, respectively. Structural parameters obtained from an SEM image of BE3 are used for the simulation, and the results are shown in Figure 3(a). A TM (TE) mode occurs at 690 nm (785 nm), plotted as red (green) curves, while the experimental spectrum measured from the bullseye cavity shows a mode at 795 nm (blue curve). The large difference between the TM and TE resonant wavelengths stems from the two refractive indices (i.e. birefringence) 39 of hBN. Here, the experimental resonance wavelength nearly matches the TE mode, and it is unlikely to be the TM mode, even considering fabrication imperfections that can cause a discrepancy between the simulated and experimental mode positions. Regardless, we further emphasize that with larger device scaling, the TE-like mode shifts towards longer wavelengths (Figure 2(a)) and no further modes emerge. This is an indication of an extremely weak out-of-plane emission dipole component. Despite the identification of the emission dipole, we note that the absorption dipole may not be co-aligned, 41 which thus remains a topic for future experiments.
Figure 3. Simulation and far field emission. a) Spectral comparison between device BE3 (blue) and FDTD simulations assuming an out-of-plane (red) and an in-plane dipole (green). b) Top view of the field intensity inside the bullseye cavity. c) Side view of the field intensity. d) and e) show the simulated far field intensity at 783 nm for a VB- emitter with an in-plane emission dipole in unstructured hBN and integrated in a bullseye cavity, respectively. The intensity distribution in the unstructured hBN was multiplied by a factor of 100 to make it visible. f) Average over several far field measurements of the VB- from unstructured hBN. g) Far field measurement from the center of device BE3.
Figures 3(b) and 3(c) display the simulated electric field |E| for the in-plane dipole at the center of the bullseye cavity, showing a highly localized field at the center of the device, with a clear directionality and collimated emission perpendicular to the bullseye plane. This becomes more evident in simulations of the far field intensity, compared for the emission from unstructured hBN and emission from a VB- center coupled to a bullseye cavity in Figure 3(d) and (e), respectively. Notably, the far field emission from the color center inside the structure is highly directional, with the intensity into low angles (10°) increased by a factor of ~ 2000. Figure 3(f, g) show back focal plane images of the pristine hBN and the bullseye cavity, with field profiles in excellent agreement with the simulation. Due to this directionality effect, higher collection efficiencies can be expected with objectives of lower NA, yielding even higher experimental count rates (by factors of ~ 10-100). Moreover, the directional emission from the bullseye cavity would be well suited for coupling into a single mode fiber, due to the close NA match, which is a requirement for integration towards several practical applications. We note that, in support of our prior conclusion of the in-plane dipole orientation, we observed in simulation that the far field intensity distribution for an out-of-plane dipole would not match the experimental results observed here (Supporting Information 3).
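To make the link between directionality and collection efficiency concrete, the sketch below integrates an assumed, axially symmetric angular intensity profile over the acceptance cone of a given NA; the Gaussian far-field profile used here is an illustrative assumption, not the FDTD output.

```python
# Minimal sketch: fraction of far-field power collected within a given NA,
# assuming an axially symmetric angular intensity profile I(theta).
# The Gaussian profile below is purely illustrative, not the simulated pattern.
import numpy as np

def collection_fraction(intensity, theta, na):
    """Fraction of emitted power inside the acceptance cone of a given NA."""
    theta_max = np.arcsin(min(na, 1.0))
    weights = intensity * np.sin(theta)      # solid-angle weighting
    dtheta = theta[1] - theta[0]
    total = np.sum(weights) * dtheta
    collected = np.sum(weights[theta <= theta_max]) * dtheta
    return collected / total

theta = np.linspace(0.0, np.pi / 2, 2000)
directional = np.exp(-(np.degrees(theta) / 10.0) ** 2)  # emission concentrated within ~10 degrees
isotropic = np.ones_like(theta)                         # uniform emission into the upper half-space

for na in (0.3, 0.65, 0.9):
    print(f"NA = {na}: directional {collection_fraction(directional, theta, na):.2f}, "
          f"isotropic {collection_fraction(isotropic, theta, na):.2f}")
```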
Finally, to showcase the enhanced spin readout properties in the bullseye cavity, we measure the ODMR signature of the VB- ensembles at room temperature. The level scheme of the VB- is shown in Figure 4(a) and consists of a fully non-degenerate triplet ground state (3A2'), with a zero field splitting of ~ 3.5 GHz between the ms = 0 and ms = +/-1 states. For the ODMR measurements, a copper microwire (~ 20 µm diameter) was placed in the vicinity (a few tens of µm) of the bullseye cavities, and the microwave frequency was swept from 3.1 to 3.7 GHz while the total PL counts were recorded with an avalanche photodiode. A PL map of the structure next to the wire is shown in Figure 4(b), where ODMR spectra were taken from the center of the bullseye cavity and outside of the structured region. Here, the PL counts from the center of the bullseye cavity are a factor of ~ 5 higher relative to unstructured hBN, as directly apparent from the PL map. The high directionality of the emission becomes beneficial here, because the microwire obstructs some of the collection into larger angles for emission from unstructured hBN. However, this is negligible for emitters inside the bullseye cavity, as the emission is highly directional (discussed in Figure 3).
A direct comparison of an average over 10 microwave sweeps is shown in Figure 4(c). Due to an improved signal-to-background ratio, the ODMR spectra of the VB- inside a bullseye cavity (Figure 4(c), upper panel) display a contrast of ~ 5.4 %, as compared to ~ 3.6 % for the signal from the pristine hBN. Due to the improved PL counts from the bullseye cavity we also achieved a better signal-to-noise ratio, indicated by a lower value of the average standard deviation (1.2 % vs. 1.7 %). The improved ODMR contrast and signal-to-noise ratio are an important outcome that can enable single-sweep readout and further integration of hBN devices with other 2D materials for sensing applications. We emphasize that the demonstration of an improved ODMR readout is not necessarily guaranteed. Specifically, fabrication may introduce rough sidewalls and lattice distortion, which can affect the VB- spins in a detrimental way by introducing non-radiative decay paths that reduce the ODMR contrast. However, the results show clearly that the device structure is suitable and the fabrication did not affect the spin defect.
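The contrast and noise figures quoted here can be computed from repeated sweeps as in the sketch below, which uses synthetic ODMR sweeps (resonance position, linewidth, and noise level are assumptions) rather than the measured data.

```python
# Minimal sketch: estimate ODMR contrast and sweep-to-sweep noise from repeated
# microwave frequency sweeps of normalized PL. The data are synthetic placeholders
# (resonance position, width, and noise level are assumptions), not measurements.
import numpy as np

rng = np.random.default_rng(0)
freq_ghz = np.linspace(3.1, 3.7, 300)

def synthetic_sweep(contrast=0.054, center=3.47, hwhm=0.02, noise=0.012):
    """One normalized PL sweep with a Lorentzian ODMR dip plus Gaussian noise."""
    dip = 1.0 - contrast * hwhm**2 / ((freq_ghz - center) ** 2 + hwhm**2)
    return dip + rng.normal(0.0, noise, freq_ghz.size)

sweeps = np.array([synthetic_sweep() for _ in range(10)])
mean_sweep = sweeps.mean(axis=0)
std_sweep = sweeps.std(axis=0, ddof=1)

baseline = np.median(mean_sweep)                      # off-resonance PL level
contrast = (baseline - mean_sweep.min()) / baseline   # fractional ODMR contrast
print(f"contrast ~ {100 * contrast:.1f} %, mean std across sweeps ~ {100 * std_sweep.mean():.1f} %")
```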
In summary, we fabricated monolithic hBN bullseye cavities and demonstrated the first coupling of VB- spin defects to a photonic cavity. We achieved a significant enhancement of the collected PL signal (6-fold and 10-fold for collection at ~ 60° and ~ 10°, respectively). Using FDTD modelling, we further presented strong evidence that the emission dipole of the VB- spin defects is in-plane, as spectra and far-field images match almost perfectly. Ultimately, we proved the advanced functionality of the fabricated devices by showcasing an improved contrast and signal-to-noise ratio for ODMR measurements. Our results constitute an important step forward in employing hBN for integrated quantum photonic devices.
Figure 1(b) shows a representative optical image of an exfoliated hBN flake after fabrication; the structures are well defined and etched entirely through to the underlying substrate, as shown by the high-resolution SEM image in Figure 1(c).
Figure 1. The Bullseye Cavity. a) A schematic of a hBN bullseye cavity on SiO2 generating collimated emission into free space. The top right inset shows a schematic of the VB- spin defect in the hBN lattice, where nitrogen and boron are depicted as blue and green spheres, respectively. The optically active VB- spin defect is illustrated as a red arrow. b) Optical microscope image of an exfoliated hBN flake on 285 nm SiO2 with an array of fabricated devices. The various colors of the flake correspond to different thicknesses. c) An SEM image of a bullseye cavity.
Figure 4. Optically Detected Magnetic Resonance. a) The level structure of the VB- defect with a triplet ground state, indicated by the 0 and +/-1 states. b) PL map of the BE cavity taken for ODMR measurements. The color scale represents normalized PL counts. c) ODMR spectra of the VB- inside (top panel) and outside of the cavity (bottom panel). The solid lines correspond to the mean over 10 scans; the shaded regions represent the standard deviation of the mean.
Acknowledgement. We acknowledge the Australian Research Council (CE200100010, DP190101058) and the Asian Office of Aerospace Research and Development (FA2386-20-1-4014). V. D. acknowledges financial support from the DFG through the Würzburg-Dresden Cluster of Excellence on Complexity and Topology in Quantum Matter - ct.qmat (EXC 2147, project-id 39085490).
1. Caldwell, J. D.; Aharonovich, I.; Cassabois, G.; Edgar, J. H.; Gil, B.; Basov, D. N., Photonics with Hexagonal Boron Nitride. Nat. Rev. Mater. 2019, 4, 552-567.
2. Zhang, J.; Tan, B.; Zhang, X.; Gao, F.; Hu, Y.; Wang, L.; Duan, X.; Yang, Z.; Hu, P., Atomically Thin Hexagonal Boron Nitride and Its Heterostructures. Advanced Materials 2021, 33, 2000769.
3. Konthasinghe, K.; Chakraborty, C.; Mathur, N.; Qiu, L.; Mukherjee, A.; Fuchs, G. D.; Vamivakas, A. N., Rabi Oscillations and Resonance Fluorescence from a Single Hexagonal Boron Nitride Quantum Emitter. Optica 2019, 6, 542-548.
4. Hoese, M.; Reddy, P.; Dietrich, A.; Koch, M. K.; Fehler, K. G.; Doherty, M. W.; Kubanek, A., Mechanical Decoupling of Quantum Emitters in Hexagonal Boron Nitride from Low-Energy Phonon Modes. Science Advances 2020, 6, eaba6038.
5. Ziegler, J.; Klaiss, R.; Blaikie, A.; Miller, D.; Horowitz, V. R.; Alemán, B. J., Deterministic Quantum Emitter Formation in Hexagonal Boron Nitride Via Controlled Edge Creation. Nano Lett. 2019, 19, 2121-2127.
6. Proscia, N. V.; Shotan, Z.; Jayakumar, H.; Reddy, P.; Dollar, M.; Alkauskas, A.; Doherty, M. W.; Meriles, C. A.; Menon, V. M., Near-Deterministic Activation of Room Temperature Quantum Emitters in Hexagonal Boron Nitride. Optica 2018, 5, 1128.
7. Vogl, T.; Lecamwasam, R.; Buchler, B. C.; Lu, Y.; Lam, P. K., Compact Cavity-Enhanced Single-Photon Generation with Hexagonal Boron Nitride. ACS Photonics 2019, 6, 1955-1962.
8. Breitweiser, S. A.; Exarhos, A. L.; Patel, R. N.; Saouaf, J.; Porat, B.; Hopper, D. A.; Bassett, L. C., Efficient Optical Quantification of Heterogeneous Emitter Ensembles. ACS Photonics 2020, 7, 288-295.
9. Exarhos, A. L.; Hopper, D. A.; Patel, R. N.; Doherty, M. W.; Bassett, L. C., Magnetic-Field-Dependent Quantum Emission in Hexagonal Boron Nitride at Room Temperature. Nat. Commun. 2019, 10, 222.
10. Stern, H. L.; Wang, R.; Fan, Y.; Mizuta, R.; Stewart, J. C.; Needham, L.-M.; Roberts, T. D.; Wai, R.; Ginsberg, N. S.; Klenerman, D.; Hofmann, S.; Lee, S. F., Spectrally Resolved Photodynamics of Individual Emitters in Large-Area Monolayers of Hexagonal Boron Nitride. ACS Nano 2019, 13, 4538-4547.
11. Li, X.; Scully, R. A.; Shayan, K.; Luo, Y.; Strauf, S., Near-Unity Light Collection Efficiency from Quantum Emitters in Boron Nitride by Coupling to Metallo-Dielectric Antennas. ACS Nano 2019, 13, 6992-6997.
12. Nicholas, V. P.; Harishankar, J.; Xiaochen, G.; Gabriel, L.-M.; Zav, S.; Weidong, Z.; Carlos, A. M.; Vinod, M. M., Microcavity-Coupled Emitters in Hexagonal Boron Nitride. Nanophotonics 2020, 9, 2937-2944.
13. Fröch, J. E.; Kim, S.; Mendelson, N.; Kianinia, M.; Toth, M.; Aharonovich, I., Coupling Hexagonal Boron Nitride Quantum Emitters to Photonic Crystal Cavities. ACS Nano 2020, 14, 7085-7091.
14. Khatri, P.; Ramsay, A. J.; Malein, R. N. E.; Chong, H. M. H.; Luxmoore, I. J., Optical Gating of Photoluminescence from Color Centers in Hexagonal Boron Nitride. Nano Lett. 2020, 20, 4256-4263.
15. Feldman, M. A.; Marvinney, C. E.; Puretzky, A. A.; Lawrie, B. J., Evidence of Photochromism in a Hexagonal Boron Nitride Single-Photon Emitter. Optica 2021, 8, 1-5.
16. Comtet, J.; Glushkov, E.; Navikas, V.; Feng, J.; Babenko, V.; Hofmann, S.; Watanabe, K.; Taniguchi, T.; Radenovic, A., Wide-Field Spectral Super-Resolution Mapping of Optically Active Defects in Hexagonal Boron Nitride. Nano Lett. 2019, 19, 2516-2523.
17. Gottscholl, A.; Kianinia, M.; Soltamov, V.; Orlinskii, S.; Mamin, G.; Bradac, C.; Kasper, C.; Krambrock, K.; Sperlich, A.; Toth, M.; Aharonovich, I.; Dyakonov, V., Initialization and Read-out of Intrinsic Spin Defects in a Van Der Waals Crystal at Room Temperature. Nature Materials 2020, 19, 540-545.
18. Chejanovsky, N.; Mukherjee, A.; Geng, J.; Chen, Y.-C.; Kim, Y.; Denisenko, A.; Finkler, A.; Taniguchi, T.; Watanabe, K.; Dasari, D. B. R.; Auburger, P.; Gali, A.; Smet, J. H.; Wrachtrup, J., Single-Spin Resonance in a Van Der Waals Embedded Paramagnetic Defect. Nature Materials 2021.
19. Stern, H. L.; Jarman, J.; Gu, Q.; Eizagirre Barker, S.; Mendelson, N.; Chugh, D.; Schott, S.; Tan, H. H.; Sirringhaus, H.; Aharonovich, I.; Atatüre, M., Room-Temperature Optically Detected Magnetic Resonance of Single Defects in Hexagonal Boron Nitride. arXiv e-prints 2021, arXiv:2103.16494.
20. Mendelson, N.; Chugh, D.; Reimers, J. R.; Cheng, T. S.; Gottscholl, A.; Long, H.; Mellor, C. J.; Zettl, A.; Dyakonov, V.; Beton, P. H.; Novikov, S. V.; Jagadish, C.; Tan, H. H.; Ford, M. J.; Toth, M.; Bradac, C.; Aharonovich, I., Identifying Carbon as the Source of Visible Single-Photon Emission from Hexagonal Boron Nitride. Nature Materials 2021, 20, 321-328.
21. Atatüre, M.; Englund, D.; Vamivakas, N.; Lee, S.-Y.; Wrachtrup, J., Material Platforms for Spin-Based Photonic Quantum Technologies. Nat. Rev. Mater. 2018, 3, 38-51.
22. Awschalom, D. D.; Hanson, R.; Wrachtrup, J.; Zhou, B. B., Quantum Technologies with Optically Interfaced Solid-State Spins. Nat. Photonics 2018, 12, 516-527.
23. Ivády, V.; Barcza, G.; Thiering, G.; Li, S.; Hamdi, H.; Chou, J.-P.; Legeza, Ö.; Gali, A., Ab Initio Theory of the Negatively Charged Boron Vacancy Qubit in Hexagonal Boron Nitride. npj Computational Materials 2020, 6, 41.
24. Reimers, J. R.; Shen, J.; Kianinia, M.; Bradac, C.; Aharonovich, I.; Ford, M. J.; Piecuch, P., Photoluminescence, Photophysics, and Photochemistry of the VB- Defect in Hexagonal Boron Nitride. Physical Review B 2020, 102, 144105.
25. Liu, W.; Li, Z. P.; Yang, Y. Z.; Yu, S.; Meng, Y.; Wang, Z. A.; Guo, N. J.; Yan, F. F.; Li, Q.; Wang, J. F.; Xu, J. S.; Dong, Y.; Chen, X. D.; Sun, F. W.; Wang, Y. T.; Tang, J. S.; Li, C. F.; Guo, G. C., Rabi Oscillation of VB- Spin in Hexagonal Boron Nitride. arXiv e-prints 2021, arXiv:2101.11220.
26. Gottscholl, A.; Diez, M.; Soltamov, V.; Kasper, C.; Sperlich, A.; Kianinia, M.; Bradac, C.; Aharonovich, I.; Dyakonov, V., Room Temperature Coherent Control of Spin Defects in Hexagonal Boron Nitride. Science Advances 2021, 7, eabf3630.
27. Chakravarthi, S.; Chao, P.; Pederson, C.; Molesky, S.; Ivanov, A.; Hestroffer, K.; Hatami, F.; Rodriguez, A. W.; Fu, K.-M. C., Inverse-Designed Photon Extractors for Optically Addressable Defect Qubits. Optica 2020, 7, 1805-1811.
28. Bhaskar, M. K.; Riedinger, R.; Machielse, B.; Levonian, D. S.; Nguyen, C. T.; Knall, E. N.; Park, H.; Englund, D.; Lončar, M.; Sukachev, D. D.; Lukin, M. D., Experimental Demonstration of Memory-Enhanced Quantum Communication. Nature 2020, 580, 60-64.
29. Wan, N. H.; Lu, T.-J.; Chen, K. C.; Walsh, M. P.; Trusheim, M. E.; De Santis, L.; Bersin, E. A.; Harris, I. B.; Mouradian, S. L.; Christen, I. R.; Bielejec, E. S.; Englund, D., Large-Scale Integration of Artificial Atoms in Hybrid Photonic Circuits. Nature 2020, 583, 226-231.
30. Sapienza, L.; Davanço, M.; Badolato, A.; Srinivasan, K., Nanoscale Optical Positioning of Single Quantum Dots for Bright and Pure Single-Photon Emission. Nat. Commun. 2015, 6, 7833.
31. Li, L.; Chen, E. H.; Zheng, J.; Mouradian, S. L.; Dolde, F.; Schröder, T.; Karaveli, S.; Markham, M. L.; Twitchen, D. J.; Englund, D., Efficient Photon Collection from a Nitrogen Vacancy Center in a Circular Bullseye Grating. Nano Lett. 2015, 15, 1493-1497.
32. Moczała-Dusanowska, M.; Dusanowski, Ł.; Iff, O.; Huber, T.; Kuhn, S.; Czyszanowski, T.; Schneider, C.; Höfling, S., Strain-Tunable Single-Photon Source Based on a Circular Bragg Grating Cavity with Embedded Quantum Dots. ACS Photonics 2020, 7, 3474-3480.
33. Xia, S.; Aoki, T.; Gao, K.; Arita, M.; Arakawa, Y.; Holmes, M. J., Enhanced Single-Photon Emission from GaN Quantum Dots in Bullseye Structures. ACS Photonics 2021.
34. Abudayyeh, H.; Lubotzky, B.; Blake, A.; Wang, J.; Majumder, S.; Hu, Z.; Kim, Y.; Htoon, H.; Bose, R.; Malko, A. V.; Hollingsworth, J. A.; Rapaport, R., Single Photon Sources with near Unity Collection Efficiencies by Deterministic Placement of Quantum Dots in Nanoantennas. APL Photonics 2021, 6, 036109.
35. Liu, J.; Su, R.; Wei, Y.; Yao, B.; Silva, S. F. C. d.; Yu, Y.; Iles-Smith, J.; Srinivasan, K.; Rastelli, A.; Li, J.; Wang, X., A Solid-State Source of Strongly Entangled Photon Pairs with High Brightness and Indistinguishability. Nat. Nanotechnol. 2019, 14, 586-593.
36. Fröch, J. E.; Hwang, Y.; Kim, S.; Aharonovich, I.; Toth, M., Photonic Nanostructures from Hexagonal Boron Nitride. Advanced Optical Materials 2019, 7, 1801344.
37. Kim, S.; Fröch, J. E.; Christian, J.; Straw, M.; Bishop, J.; Totonjian, D.; Watanabe, K.; Taniguchi, T.; Toth, M.; Aharonovich, I., Photonic Crystal Cavities from Hexagonal Boron Nitride. Nat. Commun. 2018, 9, 2623.
38. Peyskens, F.; Chakraborty, C.; Muneeb, M.; Van Thourhout, D.; Englund, D., Integration of Single Photon Emitters in 2D Layered Materials with a Silicon Nitride Photonic Chip. Nat. Commun. 2019, 10, 4435.
39. Yu, K.; Rah, Y.; Kim, S.; Jin, Y., Optical Analysis of the Refractive Index and Birefringence of Hexagonal Boron Nitride from the Visible to near-Infrared. Opt. Lett. 2019.
40. Kianinia, M.; White, S.; Fröch, J. E.; Bradac, C.; Aharonovich, I., Generation of Spin Defects in Hexagonal Boron Nitride. ACS Photonics 2020, 7, 2147-2152.
41. Jungwirth, N. R.; Fuchs, G. D., Optical Absorption and Emission Mechanisms of Single Defects in Hexagonal Boron Nitride. Phys. Rev. Lett. 2017, 119, 057401.
| []
|
[
"SOCIAL DISTANCING AND COVID-19: RANDOMIZATION INFERENCE FOR A STRUCTURED DOSE-RESPONSE RELATIONSHIP",
"SOCIAL DISTANCING AND COVID-19: RANDOMIZATION INFERENCE FOR A STRUCTURED DOSE-RESPONSE RELATIONSHIP"
]
| [
"B O Zhang [email protected] \nDepartment of Statistics\nThe Wharton School\nUniversity of Pennsylvania\n\n",
"Siyu Heng [email protected] \nGraduate Group in Applied Mathematics and Computational Science\nSchool of Arts and Sciences\nUniversity of Pennsylvania\n\n",
"Ting Ye *[email protected] \nDepartment of Statistics\nThe Wharton School\nUniversity of Pennsylvania\n\n",
"Dylan S Small †[email protected] \nDepartment of Statistics\nThe Wharton School\nUniversity of Pennsylvania\n\n"
]
| [
"Department of Statistics\nThe Wharton School\nUniversity of Pennsylvania\n",
"Graduate Group in Applied Mathematics and Computational Science\nSchool of Arts and Sciences\nUniversity of Pennsylvania\n",
"Department of Statistics\nThe Wharton School\nUniversity of Pennsylvania\n",
"Department of Statistics\nThe Wharton School\nUniversity of Pennsylvania\n"
]
| []
| Social distancing is widely acknowledged as an effective public health policy combating the novel coronavirus. But extreme forms of social distancing like isolation and quarantine have costs and it is not clear how much social distancing is needed to achieve public health effects. In this article, we develop a design-based framework to test the causal null hypothesis and make inference about the dose-response relationship between reduction in social mobility and COVID-19 related public health outcomes. We first discuss how to embed observational data with a time-independent, continuous treatment dose into an approximate randomized experiment, and develop a randomization-based procedure that tests if a structured dose-response relationship fits the data. We then generalize the design and testing procedure to accommodate a time-dependent treatment dose in a longitudinal setting. | 10.1214/22-aoas1613 | [
"https://arxiv.org/pdf/2011.06917v3.pdf"
]
| 226,955,999 | 2011.06917 | 847539aa081353ce906451e70940827416e00169 |
SOCIAL DISTANCING AND COVID-19: RANDOMIZATION INFERENCE FOR A STRUCTURED DOSE-RESPONSE RELATIONSHIP
B O Zhang [email protected]
Department of Statistics
The Wharton School
University of Pennsylvania
Siyu Heng [email protected]
Graduate Group in Applied Mathematics and Computational Science
School of Arts and Sciences
University of Pennsylvania
Ting Ye *[email protected]
Department of Statistics
The Wharton School
University of Pennsylvania
Dylan S Small †[email protected]
Department of Statistics
The Wharton School
University of Pennsylvania
SOCIAL DISTANCING AND COVID-19: RANDOMIZATION INFERENCE FOR A STRUCTURED DOSE-RESPONSE RELATIONSHIP
Submitted to the Annals of Applied Statistics
Social distancing is widely acknowledged as an effective public health policy combating the novel coronavirus. But extreme forms of social distancing like isolation and quarantine have costs and it is not clear how much social distancing is needed to achieve public health effects. In this article, we develop a design-based framework to test the causal null hypothesis and make inference about the dose-response relationship between reduction in social mobility and COVID-19 related public health outcomes. We first discuss how to embed observational data with a time-independent, continuous treatment dose into an approximate randomized experiment, and develop a randomization-based procedure that tests if a structured dose-response relationship fits the data. We then generalize the design and testing procedure to accommodate a time-dependent treatment dose in a longitudinal setting. Finally, we apply the proposed design and testing procedures to investigate the effect of social distancing during the phased reopening in the United States on public health outcomes using data compiled from sources including Unacast ™ , the United States Census Bureau, and the County Health Rankings and Roadmaps Program. We rejected a primary analysis null hypothesis that stated the social distancing from April 27, 2020, to June 28, 2020, had no effect on the COVID-19-related death toll from June 29, 2020, to August 2, 2020 (p-value < 0.001), and found that it took more reduction in mobility to prevent exponential growth in case numbers for non-rural counties compared to rural counties.
Introduction.
1.1. Social distancing, a pilot study, and dose-response relationship. Social distancing is widely acknowledged as one of the most effective public health strategies to reduce transmission of the novel coronavirus (Lewnard and Lo, 2020). There seemed to be ample evidence from China (Lau et al., 2020) and Italy (Sjödin et al., 2020) that a strict lockdown and practice of social distancing could have a substantial effect on reducing disease transmission, but social distancing has economic, psychological and societal costs (Acemoglu et al., 2020; Atalan, 2020; Grover et al., 2020; Sheridan et al., 2020; Venkatesh and Edirappuli, 2020). How much social distancing is needed to achieve the desired public health effect? In this article, we measure the level of social distancing using data on daily percentage change in total distance traveled compared to the pre-coronavirus level (data compiled and made available by Unacast™) and investigate the causal relationship between social distancing and COVID-related public health outcomes.
We conducted a pilot study in March to investigate the effect of social distancing during the first week of President Trump's 15 Days to Slow the Spread campaign (March 16-22, 2020) on the influenza-like illness (ILI) percentage two and three weeks later. We tested the causal null hypothesis and found some weak evidence (p-value = 0.08) that better social distancing had an effect on ILI percentage three weeks later. In Supplementary Material A, we described in detail our pilot study. A protocol of the design and analysis was posted on arXiv (https://arxiv.org/abs/2004.02944) before outcome data were available and analyzed.
In addition to the causal null hypothesis, the "dose-response relationship" between the degree of social distancing and potential public health outcomes under various degrees of social distancing is also of great interest. Infectious disease experts seemed to express sentiments that the effect of social distancing on public health outcomes might be small or even negligible under a small degree of social distancing, but much more substantial under a large degree of social distancing. In an interview with the British Broadcasting Corporation (BBC Radio 4, 2020), director of the National Institute of Allergy and Infectious Diseases (NIAID), Dr. Anthony S. Fauci said: "We never got things down to baseline where so many countries in Europe and the UK and other countries did -they closed down to the tune of about 97 percent lockdown. In the United States, even in the most strict lockdown, only about 50 percent of the country was locked down. That allowed the perpetuation of the outbreak that we never did get under very good control". Perhaps Dr. Fauci was proposing a hypothesis that the treatment dose, i.e., level of social distancing, played a very important role, and the causal effect of social distancing as a public health strategy combating coronavirus transmission is likely to be very different depending on the extent to which it is practiced (see, e.g., Gelfand et al. (2021)). We would like to formalize and test the hypothesis concerning a dose-response relationship between social distancing and public health outcomes.
1.2. Reopening, causal null hypothesis, dose-response kink model, and connection to epidemiological models. Starting late April and early May, many states in the U.S. started phased reopening. States and local governments differed in their reopening timelines; people in different states and counties also differed in their social mobility during the process: some ventured out; some continued to stay at home as much as possible. Figure 1 plots the 7-day rolling average of percentage change in total distance traveled of all counties in the U.S., from mid-March to late May. It is evident that as many counties started to ease social distancing measures, we saw less reduction in distance traveled; in fact, in many counties, distance traveled started to return to and even supersede the pre-coronavirus level.
In this article, we leverage the county-level social mobility data since phased reopening in the U.S. to study the relationship between social mobility and its effect on public health outcomes. Let t_0 denote a baseline period, T some endpoint of interest, z_{t0:T} a longitudinal measurement of change in social mobility in county n from t_0 to T, and Y_{n,T}(z_{t0:T}) county n's potential public health outcome at time T under the social mobility trajectory z_{t0:T}, e.g., the number of patients succumbing to COVID-19 at time T. Our first scientific query is about the causal null hypothesis: had the social mobility trajectory changed from z_{t0:T} to z'_{t0:T}, would the potential public health outcome at time T change at all? In other words, does Y_{n,T}(z_{t0:T}) = Y_{n,T}(z'_{t0:T}) hold for all z_{t0:T} ≠ z'_{t0:T}? Suppose that we have enough evidence from observational data to reject this causal null hypothesis; our second query then is about the dose-response relationship between the level of social distancing and its effect on the potential public health outcome. To illustrate, one such dose-response relationship (among many other candidates) is the following dose-response kink model (see Figure 2 for an illustration):
$$H_0^K: \quad Y_{n,T}(z) = Y_{n,T}(z^*), \ \forall z \leq \tau, \quad \text{and} \quad Y_{n,T}(z) - Y_{n,T}(\tau) = \beta(z - \tau), \ \forall z > \tau, \quad \forall n = 1, 2, \cdots, N, \qquad (1)$$

for some τ and β, where z captures some aggregate dose of the social mobility trajectory z_{t0:T}, e.g., the average reduction in social mobility from t_0 to T, and z^* a reference dose level. Model (1) states that the potential health-related outcome (e.g., daily death toll, test positivity rate, etc.) at time T would remain unchanged, equal to the potential outcome under the reference level, when the aggregate dose z is less than a certain threshold τ, and then increases at a rate proportional to how much z exceeds the threshold. Model (1) succinctly captures two key features policy makers may be most interested in: τ, the minimum dose that "activates" the treatment effect, and β, how fast the potential outcome changes as the dose exceeds the threshold. Model (1) may remind readers of the "broken line regression" models in regression analysis; see Zhang and Singer (2010, Chapter 4). The key difference here is that Model (1) and other dose-response relationships in this article are about the contrast in potential outcomes, not the observed outcomes in a regression analysis.
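To make Model (1) concrete, the short Python sketch below evaluates the implied effect Y(z) − Y(z^*) as a function of the dose; the function name kink_effect and the example values are ours, not part of the paper.

```python
import numpy as np

def kink_effect(z, tau, beta):
    """Effect Y(z) - Y(z*) implied by the dose-response kink model (1):
    zero for doses at or below the activation threshold tau, and
    linear with slope beta for doses above tau."""
    z = np.asarray(z, dtype=float)
    return np.where(z <= tau, 0.0, beta * (z - tau))

# Example: with tau = 1 and beta = 0.5, a dose of 3 shifts the outcome by 1.0
print(kink_effect([0.5, 1.0, 3.0], tau=1.0, beta=0.5))  # [0.  0.  1. ]
```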
Our analysis in this article complements standard analyses based on epidemiological models, e.g., the SIR (susceptible-infected-recovered) compartment models (Brauer and Castillo-Chavez, 2012). The primary interest of epidemiological models is to understand infectious disease dynamics, in particular how the public health outcome trajectory evolves over time.
To investigate a dose-response relationship, we only posit a parsimonious model on the contrast between potential outcomes at time T under different doses, e.g., Y n,T (z) − Y n,T (τ ) in (1), not on the disease dynamics that generate the outcome Y n,T (z). In other words, a parsimonious dose-response relationship does not preclude nonlinear infectious disease dynamics, e.g., those based on the compartment models; moreover, our primary inferential target, the causal null hypothesis, does not impose any restriction on the infectious disease mechanism.
1.3. Our contribution. We have three goals in this article. First, we propose a simple, model-free randomization-based procedure that tests if a causal null hypothesis or a structured dose-response relationship, e.g., the dose-response kink model, fits the data in a static setting with a time-independent, continuous or many-leveled treatment dose. To be specific,
an empirical researcher posits a structured dose-response relationship that she finds scientifically meaningful, parsimonious, and flexible enough to describe the data at hand; our developed procedure can then be applied to test if such a postulated dose-response relationship is sufficient to describe the causal relationship. If the hypothesis is rejected, empirical researchers are then advised to re-examine the scientific theory underpinning the postulated model; otherwise, the model seems a good starting point for data analysis. In this way, our method can be deemed a model-free "diagnostic test" for a dose-response relationship, and more broadly a test of the underlying scientific theory. In our application, the treatment and outcome are both longitudinal. Our second goal is to generalize the proposed design and testing procedure to the longitudinal setting. We define a notion of cumulative dose for a time-varying treatment dose trajectory, and discuss how to embed observational longitudinal data into an approximate randomized controlled trial in order to permute two treatment trajectories. Finally, we closely examine our assumptions in the context of an infectious disease transmission mechanism and apply the developed design and testing procedure to characterize the dose-response relationship between reduction in social mobility and public health outcomes during the reopening phases in the U.S., using county-level data we compiled from sources including Unacast™, the United States Census Bureau, and the County Health Rankings and Roadmaps Program (Remington, Catlin and Gennuso, 2015). The rest of the article is organized as follows. Sections 2 and 3 study how to investigate a dose-response relationship using nonbipartite matching in a static setting. Section 4 incorporates interference and considers the spillover effects. Section 5 extends the method to longitudinal studies. Section 6 describes the design of the case study and Section 7 presents results and extensive sensitivity analyses. Section 8 concludes with a discussion.

Fig 2: The dose-response kink model.
2. Investigating the dose-response relationship via nonbipartite matching.
2.1. Observational data with a continuous treatment dose in a static setting. Suppose there are N = 2I units, indexed by n = 1, 2, · · · , N . Each unit is associated with a vector of observed covariates X n , an observed treatment dose assignment Z obs n , and an observed outcome Y obs n . The vector of observed covariates X n are collected before the treatment assignment and not affected by the treatment. Let Z n denote the treatment dose assignment of unit n, Z the set of all possible treatment doses, z ∈ Z a realization of Z n , and |Z| the cardinality of Z. For a binary treatment, |Z| = 2; for a continuous treatment dose, |Z| is an infinite number. In most applications, Z is an ordered set (either partially ordered or totally ordered) with a (partial or total) order defined in light of the application.
Let Y_n(z) denote the potential outcome that unit n exhibits under the dose assignment z, assuming no interference among units (Rubin, 1980, 1986). Each unit n is associated with a possibly infinite array of potential outcomes {Y_n(z), z ∈ Z}. We will assume consistency so that Y^obs_n = Y_n(Z^obs_n). A causal estimand is necessarily a contrast between potential outcomes. Each unit n is associated with a collection of unit-level causal effects {f_n(z, z') = Y_n(z) − Y_n(z'), ∀z, z' ∈ Z}. Table 1 summarizes all information regarding these N units, where we let Z = {0, 1, 2, · · · } be a countable set for ease of exposition. Table 1 is referred to as a science table in the literature (Rubin, 2005). In a causal inference problem, the fundamental estimands of interest are the arrays of potential outcomes in Table 1; the task of uncovering the arrays of potential outcomes is challenging because one and only one of the potentially infinite array of potential outcomes for each unit is actually observed.
Table 1: The science table. Each row corresponds to one unit n = 1, · · · , N and records its covariates X_n, its observed dose Z^obs_n, its (mostly unobserved) potential outcomes Y_n(0), Y_n(1), · · · , Y_n(z'), · · · , and its unit-level causal effects {Y_n(z) − Y_n(z'), z, z' ∈ Z}. Unit-level causal effects are summarized by a unit-level dose-response relationship, e.g., Y_n(z) − Y_n(z^*) = f_n(z; z^*, θ_n), and unit-level dose-response relationships are in turn summarized over a common set of units to yield summary causal effects.
One unique feature of problems with a continuous treatment dose assignment is that the unit-level causal effect is an infinite set of comparisons between any two potential outcomes Y_n(z) and Y_n(z'), unlike with a binary treatment where the unit-level causal effect unambiguously refers to a comparison between Y_n(1) and Y_n(0). Let z^* ∈ Z denote an arbitrary reference dose. Observe that Y_n(z) − Y_n(z') = Y_n(z) − Y_n(z^*) − {Y_n(z') − Y_n(z^*)}, and the collection of contrasts {Y_n(z) − Y_n(z^*), z ∈ Z} is sufficient for summarizing all pairwise comparisons of potential outcomes. With a binary treatment, a "summary causal effect" (Rubin, 2005) is defined as a comparison between Y_n(1) and Y_n(0) over the same collection of units, e.g., the mean unit-level difference for females. With a continuous treatment dose, we first summarize the causal effects with a "unit-level dose-response relationship" for each unit n. For example, one simple unit-level dose-response relationship states that Y_n(z) − Y_n(z^*) = τ_0, ∀z ∈ Z; in words, for unit n, the causal effect when comparing treatment dose z to the reference dose z^* is equal to a constant τ_0 regardless of the dose z ∈ Z. We may then summarize such unit-level dose-response relationships for a collection of units. For example, one such summary may state that a structured dose-response relationship f(z; z^*, θ) holds for all counties in the U.S.; this summary can be represented by the following null hypothesis:

$$H_0^1: \quad Y_n(z) - Y_n(z^*) = f(z; z^*, \theta), \ \text{for all counties in the U.S. indexed by } n, \ \text{for some } \theta.$$

We first develop a simple, randomization-based testing procedure to assess hypotheses of the form H^1_0. The work most relevant to our development is Ding, Feller and Miratrix (2016), who studied testing the existence of treatment effect variation in a randomized controlled trial with a time-independent binary treatment. In a randomization-based inferential procedure, the potential outcomes (i.e., the infinite collection {Y_n(z), ∀z ∈ Z, ∀n} in Table 1) are held fixed, and the only probability distribution that enters statistical inference is the randomization distribution that describes the treatment dose assignment. The key step here is to properly embed the observational data into an approximately randomized experiment (Rosenbaum, 2002, 2010; Bind and Rubin, 2019), as we are ready to describe.
2.2. Embedding observational data with a time-independent, continuous treatment into an as-if randomized experiment via nonbipartite matching. In a randomized controlled experiment, physical randomization creates "the reasoned basis" for drawing causal inference (Fisher, 1935). In the absence of physical randomization, as with retrospective observational data, one strategy is to use statistical matching to embed observational data into a hypothetical randomized controlled trial (Rosenbaum, 2002, 2010; Rubin, 2007; Ho et al., 2007; Stuart, 2010; Bind and Rubin, 2019) by matching subjects with the same (or at least very similar) estimated propensity score or observed covariates and forging two groups that are well-balanced in observed covariates.
One straightforward design to handle observational data with a continuous treatment is to dichotomize the continuous treatment based on some prespecified threshold and create a binary treatment out of the dichotomization scheme. For instance, let Z denote a measure of social distancing; one can define counties with the social distancing measure above the median as the "above-median," or "treated" group, and the others as the "below-median," or "control" (or "comparison") group. One may then pair counties in the "above-median" group to those in the "below-median" group via a standard bipartite matching algorithm (for instance, via the R package optmatch by Hansen, 2007), and test the null hypothesis that social distancing has no effect on the outcome. Such a strategy is often seen in empirical research, probably because of its simplicity; however, dichotomizing the continuous treatment inevitably censors the rich information contained in the original, continuous dose and prevents researchers from studying the dose-response relationship.
To address this limitation, Lu et al. (2001, 2011) proposed optimal nonbipartite matching. In a nonbipartite matching, units with similar observed covariates but different treatment doses are paired. Suppose there are N = 2I units, e.g., counties in the U.S. in our application. In the design stage, distances {δ_ij, i = 1, · · · , N, j = 1, · · · , N} are calculated between each pair of units and an N × N distance matrix is constructed (Lu et al., 2001, 2011; Baiocchi et al., 2010). Some commonly used distances δ_ij include the Mahalanobis distance between observed covariates X_i and X_j and the rank-based robust Mahalanobis distance. Researchers may further modify the distance to incorporate specific design aspects of the study. For instance, in a study that involves effect modification, researchers are advised to match exactly or near-exactly on the effect modifier (Rosenbaum, 2005), e.g., the geographic location of the county, and such an aspect of design can be pursued by adding a large penalty to δ_ij if counties i and j are not from the same geographic region.
An optimal nonbipartite matching algorithm then divides these N = 2I units into I nonoverlapping pairs of two units such that the total within-matched-pair distance is minimized. Nonbipartite matching allows more flexible pairing compared to bipartite matching based on a dichotomization scheme, and preserves the continuous nature of the treatment, which is essential for investigating a dose-response relationship.
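As a rough illustration of this design step, the sketch below pairs 2I units by (approximately) minimizing the total within-pair Mahalanobis distance. It is a toy stand-in, expressed as a min-weight perfect matching via networkx, for the optimal nonbipartite matching implementations referenced in the text; the function name and the weight transformation are ours.

```python
import numpy as np
import networkx as nx

def nonbipartite_match(X, penalty=None):
    """Pair 2I units into I non-overlapping pairs by (approximately) minimizing
    the total within-pair Mahalanobis distance.
    X: (2I, p) array of observed covariates; penalty: optional (2I, 2I) array of
    extra distances (e.g., to force near-exact matching on an effect modifier)."""
    n = X.shape[0]
    VI = np.linalg.pinv(np.cov(X, rowvar=False))   # (pseudo-)inverse covariance matrix
    diff = X[:, None, :] - X[None, :, :]
    D = np.sqrt(np.clip(np.einsum("ijk,kl,ijl->ij", diff, VI, diff), 0, None))
    if penalty is not None:
        D = D + penalty
    # Min-weight perfect matching, obtained as a max-weight matching on reversed weights.
    G = nx.Graph()
    w_max = D.max()
    for i in range(n):
        for j in range(i + 1, n):
            G.add_edge(i, j, weight=w_max - D[i, j] + 1.0)
    pairs = nx.max_weight_matching(G, maxcardinality=True)
    return sorted(tuple(sorted(p)) for p in pairs)

# Toy usage: pair 10 units measured on 3 covariates
rng = np.random.default_rng(0)
print(nonbipartite_match(rng.normal(size=(10, 3))))
```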
Suppose that we have formed I matched pairs of 2 units so that index ij, i = 1, · · · , I, j = 1, 2, uniquely identifies a unit. We follow Rosenbaum (1989) and Heng et al. (2019) and define the following potential outcomes after nonbipartite matching. DEFINITION 2.1 (Potential Outcomes After Nonbipartite Matching). Let Z obs i1 ∨ Z obs i2 = max(Z obs i1 , Z obs i2 ) and Z obs i1 ∧ Z obs i2 = min(Z obs i1 , Z obs i2 ) denote the maximum and minimum of two observed treatment doses in each matched pair i. We define the following two potential outcomes for each unit ij:
$$Y_{Tij} \triangleq Y_{ij}(Z^{obs}_{i1} \vee Z^{obs}_{i2}), \qquad Y_{Cij} \triangleq Y_{ij}(Z^{obs}_{i1} \wedge Z^{obs}_{i2}),$$
where we abuse the notation and use subscripts T and C to denote the potential outcomes under the maximum and minimum of two observed doses within each matched pair, respectively.
Write F = {X_ij, Y_Tij, Y_Cij, i = 1, · · · , I, j = 1, 2}, where Y_Tij and Y_Cij are defined in Definition 2.1, Z^obs_∨ = (Z^obs_11 ∨ Z^obs_12, · · · , Z^obs_I1 ∨ Z^obs_I2), and Z^obs_∧ = (Z^obs_11 ∧ Z^obs_12, · · · , Z^obs_I1 ∧ Z^obs_I2). As always in randomization inference (Rosenbaum, 2002, 2010; Ding, Feller and Miratrix, 2016), we condition on observed covariates, potential outcomes, and observed dose assignments, i.e., we do not model X or the potential outcomes, and rely on the treatment assignment mechanism to draw causal conclusions. The law that describes the treatment dose assignment in each matched pair i is
$$\pi_{i1} = P(Z_{i1} = Z^{obs}_{i1} \vee Z^{obs}_{i2},\ Z_{i2} = Z^{obs}_{i1} \wedge Z^{obs}_{i2} \mid F, Z^{obs}_{\vee}, Z^{obs}_{\wedge}),$$

and π_i2 = 1 − π_i1. In an ideal randomized experiment, experimenters use physical randomization (e.g., coin flips) to ensure π_i1 = π_i2 = 1/2: for matched pair i with two treatment doses Z^obs_i1 and Z^obs_i2, a fair coin is flipped; if the coin lands heads, the first unit is assigned Z^obs_i1 and the second unit Z^obs_i2, and vice versa if the coin lands tails. The design stage of an observational study aims to approximate this ideal (yet unattainable) hypothetical experiment by matching units with similar covariates X so that π_i1 ≈ π_i2 after matching. In this way, nonbipartite matching embeds observational data with a continuous treatment dose into a randomized experiment; this induced randomization scheme will serve as our "reasoned basis" for inferring any causal effect, including a dose-response relationship. As is always true with retrospective observational studies, a careful design may alleviate, but most likely never eliminate, bias due to residual imbalance in X or unmeasured confounding variables. The departure from randomization, i.e., π_i1 ≠ π_i2, is investigated via a sensitivity analysis (Rosenbaum, 1989, 2002, 2010).
3. Randomization-based inference for a dose-response relationship.
3.1. Randomization inference for τ = τ 0 and β = β 0 in the dose-response kink model. Endowed with the randomization scheme induced by nonbipartite matching, we now turn to statistical inference. We first consider testing the dose-response kink model for a fixed τ = τ 0 and β = β 0 for all units, i.e.,
$$H^{\tau_0,\beta_0}_{0,\text{kink}}: \quad Y_{ij}(z) = Y_{ij}(z^*), \ \forall z \leq \tau_0, \quad \text{and} \quad Y_{ij}(z) - Y_{ij}(\tau_0) = \beta_0 (z - \tau_0), \ \forall z > \tau_0, \quad \forall i, j.$$
Under H^{τ_0,β_0}_{0,kink}, the entire dose-response relationship for subject ij is known up to Y_ij(z^*). Fortunately, we do observe one point on the dose-response curve, namely Y_ij(Z^obs_ij); hence, the entire dose-response curve for subject ij is fixed, and both potential outcomes Y_ij(Z^obs_i1 ∧ Z^obs_i2) and Y_ij(Z^obs_i1 ∨ Z^obs_i2) can then be imputed for each unit ij. In matched pair i, for the unit with Z_ij = Z^obs_i1 ∧ Z^obs_i2, the potential outcome under Z^obs_i1 ∧ Z^obs_i2 is the observed outcome Y^obs_ij, and the potential outcome under Z^obs_i1 ∨ Z^obs_i2 is

$$\begin{cases} Y^{obs}_{ij}, & Z^{obs}_{i1} \vee Z^{obs}_{i2} \leq \tau_0;\\ Y^{obs}_{ij} + \beta_0\,(Z^{obs}_{i1} \vee Z^{obs}_{i2} - \tau_0), & Z^{obs}_{i1} \wedge Z^{obs}_{i2} \leq \tau_0 \text{ and } Z^{obs}_{i1} \vee Z^{obs}_{i2} > \tau_0;\\ Y^{obs}_{ij} + \beta_0\,(Z^{obs}_{i1} \vee Z^{obs}_{i2} - Z^{obs}_{i1} \wedge Z^{obs}_{i2}), & Z^{obs}_{i1} \wedge Z^{obs}_{i2} > \tau_0. \end{cases} \qquad (2)$$

Analogously, for the unit with Z_ij = Z^obs_i1 ∨ Z^obs_i2, the potential outcome under Z^obs_i1 ∨ Z^obs_i2 is the observed outcome Y^obs_ij, and the potential outcome under Z^obs_i1 ∧ Z^obs_i2 is

$$\begin{cases} Y^{obs}_{ij}, & Z^{obs}_{i1} \vee Z^{obs}_{i2} \leq \tau_0;\\ Y^{obs}_{ij} - \beta_0\,(Z^{obs}_{i1} \vee Z^{obs}_{i2} - \tau_0), & Z^{obs}_{i1} \vee Z^{obs}_{i2} > \tau_0 \text{ and } Z^{obs}_{i1} \wedge Z^{obs}_{i2} \leq \tau_0;\\ Y^{obs}_{ij} - \beta_0\,(Z^{obs}_{i1} \vee Z^{obs}_{i2} - Z^{obs}_{i1} \wedge Z^{obs}_{i2}), & Z^{obs}_{i1} \wedge Z^{obs}_{i2} > \tau_0. \end{cases} \qquad (3)$$

Table 2 illustrates the imputation scheme by imputing the missing potential outcome for each subject under the null hypothesis H^{τ_0,β_0}_{0,kink} with τ_0 = 0.3 and β_0 = 1.

Table 2: Imputed science table when testing the dose-response kink model with τ_0 = 0.3 and β_0 = 1. Two units in each pair i are arranged so that i1 has a smaller dose and i2 a larger dose. For each unit, one and only one potential outcome is observed and the other one imputed under H^{τ_0,β_0}_{0,kink}.
| Unit ij | Observed dose Z^obs_ij | Observed Y_ij(Z^obs_i1 ∧ Z^obs_i2) | Observed Y_ij(Z^obs_i1 ∨ Z^obs_i2) | Imputed Y_ij(Z^obs_i1 ∧ Z^obs_i2) | Imputed Y_ij(Z^obs_i1 ∨ Z^obs_i2) |
| 11 | 0.2 | Y^obs_11 | ? | Y^obs_11 | Y^obs_11 |
| 12 | 0.4 | ? | Y^obs_12 | Y^obs_12 | Y^obs_12 |
| 21 | 0.9 | Y^obs_21 | ? | Y^obs_21 | Y^obs_21 + 0.3 × (2.2 − 1) |
| 22 | 2.2 | ? | Y^obs_22 | Y^obs_22 − 0.3 × (2.2 − 1) | Y^obs_22 |
| 31 | 1.4 | Y^obs_31 | ? | Y^obs_31 | Y^obs_31 + 0.3 × (1.9 − 1.4) |
| 32 | 1.9 | ? | Y^obs_32 | Y^obs_32 − 0.3 × (1.9 − 1.4) | Y^obs_32 |
| ... | ... | ... | ... | ... | ... |
| I1 | Z_I1 | Y^obs_I1 | ? | Y^obs_I1 | impute according to scheme (2) |
| I2 | Z_I2 | ? | Y^obs_I2 | impute according to scheme (3) | Y^obs_I2 |

Let ij' denote the unit with dose Z^obs_i1 ∧ Z^obs_i2 in matched pair i, Y^obs_min = {Y^obs_ij', i = 1, · · · , I}, and F_min(·) the CDF of Y^obs_min. Analogously, let ij'' denote the unit with dose Z^obs_i1 ∨ Z^obs_i2 and Y^obs_max = {Y^obs_ij'', i = 1, · · · , I}. For each Y^obs_ij'' ∈ Y^obs_max, define the transformed outcome Ỹ^obs_ij'' to be unit ij'''s potential outcome under the dose Z^obs_i1 ∧ Z^obs_i2 according to (3). Let Ỹ^obs_max = {Ỹ^obs_ij'', i = 1, · · · , I} denote the collection of transformed outcomes, and F^tr_max(·) its CDF. The null hypothesis H^{τ_0,β_0}_{0,kink} can then be tested by comparing the following Kolmogorov-Smirnov-type (KS) test statistic

$$t_{KS}(\tau_0, \beta_0) = \sup_y \left| F_{\min}(y) - F^{tr}_{\max}(y) \right| \qquad (4)$$
evaluated at the observed data to a reference distribution generated based on the imputed science table (e.g., Table 2) and enumerating all 2^I possible randomizations: within each matched pair i, either unit i1 receives Z^obs_i1 ∨ Z^obs_i2 and exhibits Y^obs_i1 = Y_i1(Z^obs_i1 ∨ Z^obs_i2) while i2 receives Z^obs_i1 ∧ Z^obs_i2 and exhibits Y^obs_i2 = Y_i2(Z^obs_i1 ∧ Z^obs_i2), or unit i1 receives Z^obs_i1 ∧ Z^obs_i2 and exhibits Y^obs_i1 = Y_i1(Z^obs_i1 ∧ Z^obs_i2) while i2 receives Z^obs_i1 ∨ Z^obs_i2 and exhibits Y^obs_i2 = Y_i2(Z^obs_i1 ∨ Z^obs_i2).
In principle, any test statistic can be combined with this randomization scheme to yield a valid test. We motivate the test statistic (4) in Supplementary Material B. Note that when τ 0 = ∞ or β 0 = 0, H τ0,β0 0,kink reduces to the following causal null hypothesis: H 0,null : Y ij (z) = Y ij (z * ), ∀z ∈ Z, ∀i = 1, · · · , I, j = 1, 2, and the developed procedure can be used to test H 0,null .
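A minimal Python sketch of this randomization test for a fixed (τ_0, β_0) is given below. It assumes the inputs are already arranged as matched pairs, with the smaller-dose unit's dose and outcome in z_lo, y_lo and the larger-dose unit's in z_hi, y_hi; the function names are ours, and the KS-type statistic uses the absolute CDF difference as in (4).

```python
import numpy as np

def t_ks(y_min, y_max_transformed):
    """KS-type statistic (4): sup_y |F_min(y) - F^tr_max(y)| over the pooled sample points."""
    grid = np.concatenate([y_min, y_max_transformed])
    F1 = np.searchsorted(np.sort(y_min), grid, side="right") / len(y_min)
    F2 = np.searchsorted(np.sort(y_max_transformed), grid, side="right") / len(y_max_transformed)
    return np.abs(F1 - F2).max()

def kink_shift(z_lo, z_hi, tau0, beta0):
    """Imputed difference Y(z_hi) - Y(z_lo) under the kink model, per schemes (2)-(3)."""
    return beta0 * (np.maximum(z_hi - tau0, 0.0) - np.maximum(z_lo - tau0, 0.0))

def randomization_test_kink(z_lo, z_hi, y_lo, y_hi, tau0, beta0, mc=100_000, seed=0):
    """Monte Carlo randomization test of H_{0,kink}^{tau0,beta0} for I matched pairs.
    z_lo/y_lo: dose and outcome of the smaller-dose unit in each pair;
    z_hi/y_hi: dose and outcome of the larger-dose unit."""
    rng = np.random.default_rng(seed)
    shift = kink_shift(z_lo, z_hi, tau0, beta0)
    t_obs = t_ks(y_lo, y_hi - shift)                    # statistic (4) at the observed data
    I = len(z_lo)
    ref = np.empty(mc)
    for k in range(mc):
        flip = rng.integers(0, 2, size=I).astype(bool)  # within-pair coin flips
        lo_k = np.where(flip, y_hi - shift, y_lo)       # outcome of the unit assigned the smaller dose
        hi_k = np.where(flip, y_lo + shift, y_hi)       # outcome of the unit assigned the larger dose
        ref[k] = t_ks(lo_k, hi_k - shift)
    return (ref >= t_obs).mean()                        # exact (Monte Carlo) p-value

# Example mirroring the simulated illustration that follows (tau = 1, beta = 0.5):
rng = np.random.default_rng(1)
z = np.sort(rng.uniform(0, 4, size=(200, 2)), axis=1)
y = rng.normal(size=(200, 2)) + 0.5 * np.maximum(z - 1.0, 0.0)
print(randomization_test_kink(z[:, 0], z[:, 1], y[:, 0], y[:, 1], tau0=1.0, beta0=0.5, mc=2000))
```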
We illustrate the procedure using the following example. We generate I = 200 matched pairs of 2 units, each with Z^obs_ij ~ Unif[0, 4], Y_ij(0) ~ Normal(0, 1), and Y^obs_ij = Y_ij(Z^obs_ij) following Model (1) with τ = 1 and β = 0.5. We test the null hypothesis H^{τ_0,β_0}_{0,kink} with τ_0 = 1 and β_0 = 0.5 using the test statistic (4). The left panel of Figure 3 plots the empirical distributions F_min(y) (blue) and F^tr_max(y) (red), and t_KS(1, 0.5) = 0.08 for the observed data. Instead of enumerating all 2^I = 2^200 possible treatment dose assignments, we draw with replacement 100,000 samples from all 2^200 possible configurations. The right panel of Figure 3 plots the reference distribution based on these 100,000 samples. Such a "sampling with replacement" strategy is referred to as a "modified randomization test" in the literature (Dwass, 1957; Pagano and Tritchler, 1983) and is known to still preserve the level of the test. In this way, a p-value equal to 0.445 is obtained in this simulated dataset and the null hypothesis H^{τ_0,β_0}_{0,kink} with τ_0 = 1 and β_0 = 0.5 is not rejected. The p-value is exact, as the procedure does not resort to any asymptotic theory and works in small samples.

Fig 3: Simulated example with I = 200 matched pairs, Z^obs_ij ~ Unif[0, 4], Y_ij(0) ~ Normal(0, 1), τ = 1, and β = 0.5. We test the null hypothesis H^{τ_0,β_0}_{0,kink} with τ_0 = 1 and β_0 = 0.5. The left panel plots F_min(y), the empirical CDF of Y^obs_min (blue), and F^tr_max(y), the empirical CDF of the transformed outcomes Ỹ^obs_max (red). The test statistic t_KS(1, 0.5) evaluated at the observed data is 0.08. The right panel plots the exact reference distribution of the test statistic given the sample and under the null hypothesis. The reference distribution is generated using 100,000 Monte Carlo draws from the 2^200 randomization configurations. The red dashed line plots the position of the observed test statistic. The exact p-value in this case is 0.445.
3.2. Testing the dose-response kink model. Let H^K_0 denote a composite hypothesis that is equal to the union of H^{τ_0,β_0}_{0,kink} over all τ = τ_0 and β = β_0, i.e.,

$$H_0^K = \bigcup_{\tau_0, \beta_0} H^{\tau_0, \beta_0}_{0,\text{kink}}.$$
In other words, the activation dose τ and the slope β are nuisance parameters to be taken into account. One strategy for testing H^K_0 is to take the supremum p-value over the entire range of (τ, β); another commonly used strategy (Berger and Boos, 1994) is to first construct a confidence set for (τ, β) and then take the supremum of the p-values over the (τ, β) values in this confidence set. This latter strategy is particularly useful when the treatment dose and/or the outcome of interest are not bounded, so that τ and β are not bounded; for some applications in the causal inference literature, see Nolen and Hudgens (2011), Ding, Feller and Miratrix (2016), and Zhang et al. (2021). In Supplementary Material C, we discuss how to construct a bounded level-γ confidence set for (τ, β) based on inverting a variant of the Wilcoxon rank sum test statistic, and its properties. Being able to reject H^K_0 suggests evidence against the postulated dose-response relationship; otherwise, the model is deemed sufficient to characterize the dose-response relationship for the data at hand. We illustrate the procedure using the following example. We generate I = 200 matched pairs of 2 units with Z^obs_ij ~ Unif[0, 4], Y_ij(0) ~ Normal(0, 1), and Y^obs_ij = Y_ij(Z^obs_ij) = Y_ij(0) + 2 · 1{0 ≤ Z^obs_ij ≤ 1} + 1 · 1{1 < Z^obs_ij ≤ 4}. Figure 4 plots the p-values in log scale against τ_0 and β_0. The maximum p-value is obtained at τ_0 = 3.8 and β_0 = 0.4 and is equal to 0.004. The null hypothesis H^K_0, i.e., that the dose-response relationship follows a kink model, can be rejected at level 0.05 for this simulated dataset.

Fig 4: The true dose-response model is Y_ij(z) = Y_ij(0) + 2 · 1{0 ≤ z ≤ 1} + 1 · 1{1 < z ≤ 4}. We let Y_ij(0) ~ N(0, 1) and I = 200. We test H^{τ_0,β_0}_{0,kink} and plot the p-value in log scale against τ_0 and β_0 values. The maximum p-value is obtained at τ_0 = 3.8 and β_0 = 0.4 and equal to 0.004. The null hypothesis H^K_0 is hence rejected at level 0.05 for this simulated dataset.
3.3. Testing any structured dose-response model. Our discussion above suggests a general model-free, randomization-based framework to test any structured dose-response relationship. Here, we say a dose-response relationship is "structured" if it is characterized by a few structural parameters. Consider the following structured dose-response relationship model:
$$H_0^{\text{dose-response}}: \quad Y_{ij}(z) - Y_{ij}(z^*) - f(z; z^*, \theta) = 0, \quad \forall i = 1, \cdots, I, \ j = 1, 2, \ \text{for some } \theta,$$
where z * ∈ Z is a reference dose, and f (· ; z * , θ) is a univariate function that satisfies f (z * ; z * , θ) = 0 and is parametrized by a p-dimensional vector of structural parameters θ ∈ R p . Algorithm 1 summarizes a general procedure testing H dose-response 0 at level α. In Supplementary Material D, we briefly discuss and illustrate how to sequentially test a few dose-response relationships ordered in their model complexity.
Algorithm 1: Testing H_0^{dose-response}: Y_ij(z) − Y_ij(z^*) − f(z; z^*, θ) = 0, ∀i, j, for some θ.

1. Construct CI_θ, a level-γ confidence set for the structural parameter θ.
2. For each θ_0 ∈ CI_θ, do the following steps:
   a) Compute the test statistic t_obs. For each unit ij with Z_ij = Z^obs_i1 ∨ Z^obs_i2, i.e., the unit with the maximum dose in each matched pair i, define the following transformed outcome:
      $$\widetilde{Y}^{obs}_{ij} = Y^{obs}_{ij} - f(Z^{obs}_{i1} \vee Z^{obs}_{i2}; z^*, \theta_0) + f(Z^{obs}_{i1} \wedge Z^{obs}_{i2}; z^*, \theta_0). \qquad (5)$$
      Let F^tr_max(·) denote the empirical CDF of {Ỹ^obs_ij, i = 1, · · · , I} and F_min(·) the empirical CDF of the observed outcomes of the units ij with Z_ij = Z^obs_i1 ∧ Z^obs_i2. Calculate t_obs = sup_y |F_min(y) − F^tr_max(y)|.
   b) Impute the science table. For each unit ij with Z_ij = Z^obs_i1 ∧ Z^obs_i2, impute Y_ij(Z^obs_i1 ∧ Z^obs_i2) = Y^obs_ij and Y_ij(Z^obs_i1 ∨ Z^obs_i2) = Y^obs_ij + f(Z^obs_i1 ∨ Z^obs_i2; z^*, θ_0) − f(Z^obs_i1 ∧ Z^obs_i2; z^*, θ_0); for each unit ij with Z_ij = Z^obs_i1 ∨ Z^obs_i2, impute Y_ij(Z^obs_i1 ∨ Z^obs_i2) = Y^obs_ij and Y_ij(Z^obs_i1 ∧ Z^obs_i2) = Y^obs_ij − f(Z^obs_i1 ∨ Z^obs_i2; z^*, θ_0) + f(Z^obs_i1 ∧ Z^obs_i2; z^*, θ_0).
   c) Generate a reference distribution. Sample with replacement MC = 100,000 dose assignment configurations from the 2^I possible configurations. For each sampled dose assignment configuration Z_k, calculate t^(k)_KS(θ_0) according to Step (a). Let F_{θ_0} denote the distribution of {t^(k)_KS(θ_0), k = 1, 2, · · · , MC}.
   d) Compute the p-value p_{θ_0} by comparing t_obs to the reference distribution F_{θ_0}, i.e.,
      $$p_{\theta_0} = \frac{1}{\text{MC}} \sum_{k=1}^{\text{MC}} \mathbb{1}\left\{t^{(k)}_{KS}(\theta_0) \geq t_{obs}\right\}.$$
3. Let p_max = sup_{θ_0 ∈ CI_θ} p_{θ_0} and reject the null hypothesis H_0^{dose-response} at level α if p_max + γ ≤ α.
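A minimal, illustrative implementation of Algorithm 1 might look as follows; all function names are ours, the confidence set for θ is supplied by the user as a finite grid, and f(z, theta) is assumed to be vectorized in z.

```python
import numpy as np

def _ks(a, b):
    """sup_y |ECDF_a(y) - ECDF_b(y)| over the pooled sample points."""
    grid = np.concatenate([a, b])
    Fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    Fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.abs(Fa - Fb).max()

def sup_p_value(z_lo, z_hi, y_lo, y_hi, f, theta_confidence_set,
                gamma=0.001, alpha=0.05, mc=10_000, seed=0):
    """Sketch of Algorithm 1: theta_confidence_set is a finite grid over a
    level-gamma confidence set for theta (Berger-Boos device)."""
    rng = np.random.default_rng(seed)
    I = len(z_lo)
    p_max = 0.0
    for theta0 in theta_confidence_set:
        shift = f(z_hi, theta0) - f(z_lo, theta0)       # imputed Y(z_hi) - Y(z_lo), step (b)
        t_obs = _ks(y_lo, y_hi - shift)                 # step (a)
        ref = np.empty(mc)
        for k in range(mc):                             # step (c): MC draws of the assignment
            flip = rng.integers(0, 2, size=I).astype(bool)
            lo_k = np.where(flip, y_hi - shift, y_lo)   # outcome of the unit given the smaller dose
            hi_k = np.where(flip, y_lo + shift, y_hi)   # outcome of the unit given the larger dose
            ref[k] = _ks(lo_k, hi_k - shift)
        p_max = max(p_max, (ref >= t_obs).mean())       # step (d)
    return p_max, (p_max + gamma) <= alpha              # step 3: reject if p_max + gamma <= alpha

# Example (reusing kink_effect from the first sketch) over a tiny grid of (tau, beta):
# p, reject = sup_p_value(z_lo, z_hi, y_lo, y_hi,
#                         f=lambda z, th: kink_effect(z, tau=th[0], beta=th[1]),
#                         theta_confidence_set=[(1.0, 0.5), (1.2, 0.4)])
```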
4. Relaxing the SUTVA: dose-response relationship under interference.
4.1. Potential outcomes under interference. We relax the stable unit treatment value assumption in this section and consider inference for a structured dose-response relationship under interference. To this end, we collect the treatment doses of all study units in our matched-pair design and use $\mathbf{Z} = (Z_{11}, Z_{12}, \cdots, Z_{I1}, Z_{I2})$ to represent the treatment dose configuration, with $\mathbf{z}$ being its realization. We further let $\mathbf{Z}^{obs}$ denote the observed treatment dose configuration of all 2I study units and
$$Y_{ij}(\mathbf{Z}) := Y_{ij}(Z_{11}, \cdots, Z_{I2}) \qquad (6)$$

unit ij's potential outcome, which is random only through the randomness in the treatment dose configuration $\mathbf{Z}$. The SUTVA states that for all pairs of configurations $\mathbf{z}$ and $\mathbf{z}'$, $z_{ij} = z'_{ij}$ implies $Y_{ij}(\mathbf{z}) = Y_{ij}(\mathbf{z}')$; in other words, $Y_{ij}(\mathbf{Z})$ depends on $\mathbf{Z}$ only through its dependence on $Z_{ij}$.
Definition (6) is in a most general form and useful when the scientific interest lies in testing the null hypothesis of no direct or spillover effect under arbitrary interference pattern. To further explore the dose-response relationship in the presence of the spillover effect, researchers need to model the local interference structure possibly based on units' spatial relationship (e.g., closeness of counties in our case study). To this end, we assume study units are connected through an undirected network with a symmetric, 2I × 2I adjacency matrix G. Matrix G has its rows and columns arranged in the order corresponding to unit 11, 12, · · · , I1, I2 after nonbipartite matching. If unit ij and i j are connected, then the corresponding entry in G is equal to 1 and otherwise 0. The diagonal entries of G are defined to be 0.
Our reasoned basis for testing any causal null hypothesis under interference will still be the randomization scheme endowed by the nonbipartite matching. We have two goals. First, we show that the test developed for H 0,null under the SUTVA remains a valid level-α test for a null hypothesis of no direct or spillover effect under arbitrary interference pattern. Second, we relax the dose-response relationship H 0,kink by modeling various forms of local interference pattern using the adjacency matrix G.
4.2. No direct or spillover effect. Following Rosenbaum (2007), Bowers, Fredrickson and Panagopoulos (2013), and Athey, Eckles and Imbens (2018), a null hypothesis of no direct or spillover effect states that H_{0,direct or spillover}: $Y_{ij}(\mathbf{z}) = Y_{ij}(\mathbf{z}')$, ∀i = 1, · · · , I, j = 1, 2, and all pairs of treatment dose configurations $\mathbf{z}$ and $\mathbf{z}'$ of the 2I study units. Under H_{0,direct or spillover}, the unit-level potential outcome of each study unit under any treatment dose configuration $\mathbf{z}$ can still be imputed; in fact, $Y_{ij}(\mathbf{z}) = Y_{ij}(\mathbf{Z}^{obs})$ for any $\mathbf{z}$. Any test statistic (e.g., the Kolmogorov-Smirnov statistic used in Algorithm 1) that depends on units' potential outcomes (possibly under interference) is random only through its dependence on the treatment dose configurations of all study units; therefore, the null distribution of the test statistic can again be inferred by enumerating the 2^I different configurations of $\mathbf{Z}$ as discussed in Section 3. In other words, the testing procedure for H_{0,null} is still exact and has correct level for testing H_{0,direct or spillover}. Moreover, since H_{0,direct or spillover} does not impose any interference pattern, rejecting H_{0,null} implies rejecting H_{0,direct or spillover} under arbitrary interference patterns.

4.3. Dose-response relationship under local interference modeling. Testing the null hypothesis is often regarded as a starting point of causal analysis (Imbens and Rubin, 2015). Next, we build up a causal hypothesis regarding a dose-response relationship allowing for local interference. Our construction is guided by the following general principles, adapted from the literature on interference (Hong and Raudenbush, 2006; Bowers, Fredrickson and Panagopoulos, 2013; Athey, Eckles and Imbens, 2018).

Principle I: The total effect of treatment dose configuration $\mathbf{z}$ compared to a reference dose configuration $\mathbf{z}^*$ can be decomposed into a dose-response direct effect due to ij's own treatment dose z_ij and a spillover effect due to other study units' treatment doses, so that
$$Y_{ij}(\mathbf{z}) - Y_{ij}(\mathbf{z}^*) = f(z_{ij}; z^*_{ij}, \theta) + g(\mathbf{z}_{-ij}; \mathbf{z}^*_{-ij}),$$

where f(z_ij; z^*_ij, θ) is a dose-response direct effect as described in Section 3, $\mathbf{z}_{-ij}$ (resp. $\mathbf{z}^*_{-ij}$) denotes the treatment doses (resp. reference treatment doses) of all study units except ij, and g(·) is a function modeling the spillover effect. For a binary treatment, $\mathbf{z}^* = \mathbf{0}$ is referred to as a uniformity trial (Rosenbaum, 2007).

Principle II: The spillover effect depends only on the aggregate, excess treatment doses of ij's neighbors with respect to the reference dose configuration, so that
$$Y_{ij}(\mathbf{z}) - Y_{ij}(\mathbf{z}^*) = f(z_{ij}; z^*_{ij}, \theta) + g(\langle \mathbf{z} - \mathbf{z}^*, G_{ij,\bullet} \rangle),$$

where $G_{ij,\bullet}$ is the ij-th row of the adjacency matrix G.
Principle III: The spillover effect is always dominated by the dose-response direct effect, in the sense that

$$\|G_{ij,\bullet}\|_0^{-1} \cdot \langle \mathbf{z} - \mathbf{z}^*, G_{ij,\bullet} \rangle \leq z_{ij} - z^*_{ij} \quad \text{implies} \quad g(\langle \mathbf{z} - \mathbf{z}^*, G_{ij,\bullet} \rangle) \leq f(z_{ij}; z^*_{ij}, \theta). \qquad (7)$$

One simple modeling strategy for $g(\langle \mathbf{z} - \mathbf{z}^*, G_{ij,\bullet} \rangle)$ that satisfies (7) is to scale the magnitude of the dose-response direct effect towards zero.
To illustrate the three principles above, we consider a concrete example of a causal hypothesis under local interference. We consider a causal null hypothesis that states that the direct effect is proportional to the dose difference, i.e., f(z_ij; z^*_ij, θ) = β(z_ij − z^*_ij). We then model the local interference pattern by scaling the direct effect using a logistic function, so that $g(\langle \mathbf{z} - \mathbf{z}^*, G_{ij,\bullet} \rangle) = C \times f(z_{ij}; z^*_{ij}, \theta)$ with $C = 1/(1 + \exp\{-k(\langle \mathbf{z} - \mathbf{z}^*, G_{ij,\bullet} \rangle - s)\})$. According to this specification, the spillover effect modeled by $g(\langle \mathbf{z} - \mathbf{z}^*, G_{ij,\bullet} \rangle)$ trivially satisfies the third principle above, as the multiplication factor C is always upper bounded by 1. The causal null hypothesis then becomes

$$H_{0,\text{interference}}: \quad Y_{ij}(\mathbf{z}) - Y_{ij}(\mathbf{z}^*) = \beta(z_{ij} - z^*_{ij}) \cdot \left[1 + \frac{1}{1 + \exp\{-k(\langle \mathbf{z} - \mathbf{z}^*, G_{ij,\bullet} \rangle - s)\}}\right].$$
Statistical inference in the presence of interference parameters (k, s) depends on one's perspective on (k, s) (Bowers, Fredrickson and Panagopoulos, 2013). Inference may proceed by regarding interference parameters as sensitivity parameters and researchers could report how confidence sets of the dose-response relationship parameters in the direct effect (e.g., β in H 0,interference ) change as interference parameters change. For fixed interference parameters (k 0 , s 0 ), we can test β = β 0 in H 0,interference by imputing potential outcomes for each study unit and each of the 2 I treatment dose configurations Z under H 0,interference , choosing a test statistic t(Y( Z), Z) that is a function of potential outcomes of all study units Y( Z) and random only via its dependence on Z, generating the randomization-based reference distribution of t(Y( Z), Z), and comparing the observed test statistic t(Y( Z obs ), Z obs ) to this reference distribution.
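For concreteness, the small sketch below computes the total effect Y_ij(z) − Y_ij(z^*) implied by this illustrative H_{0,interference} for fixed (β_0, k_0, s_0); the function name, the toy adjacency matrix, and the numeric values are ours.

```python
import numpy as np

def interference_effect(z, z_star, G, beta0, k0, s0):
    """Total effect Y(z_vec) - Y(z*_vec) for each unit under the illustrative
    H_{0,interference}: direct effect beta0 * (z - z*) scaled up by a logistic
    function of the neighbors' aggregate excess dose <z - z*, G_{ij,.}>."""
    excess = z - z_star
    neighbor_excess = G @ excess                      # <z - z*, G_{ij,.}> for each unit
    C = 1.0 / (1.0 + np.exp(-k0 * (neighbor_excess - s0)))
    return beta0 * excess * (1.0 + C)

# Toy usage: 4 units connected in a line
G = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
z = np.array([0.2, 0.4, 0.1, 0.5])
print(interference_effect(z, z_star=0.0, G=G, beta0=1.0, k0=2.0, s0=0.3))
```

With (k_0, s_0) fixed, these imputed contrasts complete the science table, and the within-pair permutation test can be run exactly as before to build a confidence set for β.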
5. Extension to longitudinal studies with a time-varying treatment.
5.1. Treatment dose trajectory and potential outcome trajectory. In our application, the treatment dose evolves over time and the public-health-related outcomes, e.g., county-level COVID-19 related death toll, may depend on the treatment dose trajectory. We first consider the no-interference case. Let t 0 denote a baseline period and t 1 , t 2 , · · · , t i , · · · , T subsequent treatment periods. Fix t 0 ≤ t i ≤ t j and let Z be the set of all possible treatment doses at each time point. Let
$$Z_{t_i:t_j} = (Z_{t_i}, Z_{t_i+1}, \cdots, Z_{t_j}) \in \underbrace{\mathcal{Z} \times \cdots \times \mathcal{Z}}_{t_j - t_i + 1 \text{ copies}}$$
denote the random treatment dose trajectory of one study unit from t_i to t_j (Robins, 1986; Bojinov and Shephard, 2019), z_{ti:tj} one realization of Z_{ti:tj}, and Z^obs_{n,ti:tj} = (Z^obs_{n,ti}, · · · , Z^obs_{n,tj}) the observed treatment dose trajectory of unit n from t_i to t_j. In our application, t_0 denotes the start of the phased reopening and Z_{ti:tj} the trajectory of daily percentage change in total distance traveled from t_i to t_j. We are interested in the effect of a sustained period of treatment on some future outcome. We assume that the treatment dose at time t temporally precedes the outcome at time t. Fix a time t and let Y_{n,t}(z_{t0:t}) = Y_{n,t}(z_{t0}, z_{t1}, · · · , z_t) denote the potential outcome of unit n at time t under the treatment dose trajectory Z_{n;t0:t} = z_{t0:t}. We assume consistency so that Y^obs_{n,t} = Y_{n,t}(Z^obs_{n;t0:t}). Finally, we let Y_{n;ti:tj}(z_{t0:tj}) denote unit n's potential outcome trajectory from time t_i to t_j under the treatment dose trajectory z_{t0:tj}.

5.2. Covariate history and sequential randomization assumption. One unique feature of longitudinal data is that the observed outcome trajectory up to time t − 1 may confound the treatment dose at time t; this is particularly true in our application: if the COVID-19 related case and death numbers were high during the last week in a county, then residents may be more wary of the disease and reduce social mobility this week. Following the literature on longitudinal studies, we let L_{n,t} denote the time-dependent covariate process of unit n up to but not including time t; L_{n,t} contains both time-independent covariates X_n and time-dependent covariates like the observed outcomes {Y^obs_{n,t0}, Y^obs_{n,t1}, · · · , Y^obs_{n,t−1}}. We further assume the sequential randomization assumption (SRA) (Robins, 1998), which states that conditional on the treatment history up to time t − 1 and the covariate process up to time t, the treatment dose assignment at time t is independent of the potential outcome trajectories, i.e.,

$$Y_{n;t_0:T}(z_{t_0:T}) \perp\!\!\!\perp Z_{n,t} \mid Z_{n;t_0:t-1} = z_{n;t_0:t-1}, \ L_{n,t}, \quad \forall z_{t_0:T}.$$
This assumption holds if residents' adopting the social distancing measures at time t depends on (1) their history of adopting social distancing measures, (2) time-independent covariates, and (3) observed daily COVID-19 related case numbers and death toll up to time t − 1. See also Mattei, Ricciardi and Mealli (2019) for a relaxed version of this assumption.
5.3. Cumulative treatment dose, W-equivalence, and dose-response relationship in a longitudinal setting. One general recipe for drawing causal inference from longitudinal data is to model the marginal distribution of the counterfactual outcomes Y_{n,t}(z_{t0:t}), or the marginal joint distribution of Y_{n;ti:tj}(z_{t0:t}), as a function of the treatment trajectory and baseline covariates; see Robins (1986, 1994), Robins, Greenland and Hu (1999), and Robins, Hernán and Babette (2000) for seminal works. For example, one simple model may state that the N units are i.i.d. samples from a superpopulation such that the counterfactual mean of the outcome at time t depends on the treatment dose trajectory and the time-independent covariates X through a known functional form g(·), i.e., E[Y_t(Z_{t0:t}) | X] = g(Z_{t0:t}, X; β), and the interest lies in efficient estimation of the structural parameters β.
In the infectious disease context, modeling the potential outcomes is a daunting task, and our interest here lies in testing a structural dose-response relationship in a less model-dependent way. To proceed, we generalize the notion of "dose" from the static to the longitudinal setting. Consider the following weighted difference between two treatment dose trajectories z_{ti:tj} and z'_{ti:tj}:

$$\| z_{t_i:t_j} - z'_{t_i:t_j} \|_{\mathcal{W}} = \sum_{t_i \leq t' \leq t_j} w(t') \cdot (z_{t'} - z'_{t'}), \qquad (8)$$

where $\mathcal{W}$ is a shorthand for the weight function $\mathcal{W}(t') = \{w(t') \mid 0 \leq w(t') \leq 1 \text{ and } \sum_{t_i \leq t' \leq t_j} w(t') = 1\}$. Let z^*_{ti:tj} denote a reference trajectory, e.g., z^*_{ti:tj} = (−0.5, · · · , −0.5) corresponding to a 50% reduction in total distance traveled from t_i to t_j. For each treatment dose trajectory z_{ti:tj}, we define its "cumulative dose" as the weighted difference between z_{ti:tj} and z^*_{ti:tj}.
DEFINITION 5.1 (Cumulative Dose). Let z_{ti:tj} be a realization of the treatment dose trajectory Z_{ti:tj}. Its cumulative dose with respect to the reference trajectory z^*_{ti:tj} and the weight function W is

$$\text{CD}(z_{t_i:t_j}; z^*_{t_i:t_j}, \mathcal{W}) = \| z_{t_i:t_j} - z^*_{t_i:t_j} \|_{\mathcal{W}},$$

where $\| \cdot \|_{\mathcal{W}}$ is defined in (8).
REMARK 1. The cumulative dose of a treatment dose trajectory is defined with respect to a reference trajectory and a weight function. The choices of the reference trajectory and weight function should be guided by expert knowledge so that the cumulative dose reflects some scientifically meaningful aspect of the treatment dose trajectory. For instance, in a longitudinal study of the effect of zidovudine (AZT), an antiretroviral medication, on mortality, Robins, Hernán and Babette (2000) defined the cumulative dose to be the aggregate AZT dose during the treatment period, i.e., the reference dose z^*_{t0:t} = (0, · · · , 0) and CD(z_{t0:t}; z^*_{t0:t}, W) = Σ_{t0 ≤ t' ≤ t} z_{t'}.
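A minimal sketch of Definition 5.1, assuming uniform weights by default; the function name and the toy trajectory are ours.

```python
import numpy as np

def cumulative_dose(z_traj, z_ref, weights=None):
    """Cumulative dose CD(z; z*, W) = sum_t w(t) * (z_t - z*_t) as in Definition 5.1.
    By default, uniform weights w(t) = 1 / (number of periods) are used."""
    z_traj = np.asarray(z_traj, dtype=float)
    z_ref = np.broadcast_to(np.asarray(z_ref, dtype=float), z_traj.shape)
    if weights is None:
        weights = np.full(z_traj.shape[-1], 1.0 / z_traj.shape[-1])
    return (weights * (z_traj - z_ref)).sum(axis=-1)

# Example: a 63-day mobility trajectory against a -50% reference with uniform weights
traj = np.full(63, -0.30)                 # 30% reduction in mobility every day
print(cumulative_dose(traj, z_ref=-0.50)) # ~0.20
```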
A collection of treatment dose trajectories is said to be "W-equivalent" if they have the same cumulative dose with respect to the same weight function and reference trajectory.
DEFINITION 5.2 (W-Equivalence). Two treatment dose trajectories z_{ti:tj} and z'_{ti:tj} are said to be W-equivalent with respect to the reference trajectory z^*_{ti:tj}, written as z_{ti:tj} ≡_W z'_{ti:tj}, if CD(z_{ti:tj}; z^*_{ti:tj}, W) = CD(z'_{ti:tj}; z^*_{ti:tj}, W). Treatment dose trajectories that are equivalent to z_{ti:tj} form an equivalence class, denoted as

$$[z_{t_i:t_j}]_{\mathcal{W}} = \left\{ z'_{t_i:t_j} \ \middle| \ \text{CD}(z'_{t_i:t_j}; z^*_{t_i:t_j}, \mathcal{W}) = \text{CD}(z_{t_i:t_j}; z^*_{t_i:t_j}, \mathcal{W}) \right\}.$$
Equipped with Definition 5.1 and 5.2, we are ready to state a major assumption that facilitates extending a dose-response relationship to longitudinal settings.
ASSUMPTION 1 (Potential outcomes under W-equivalence). Let [z_{t0:t}]_W be an equivalence class as defined in Definition 5.2 with respect to ‖·‖_W and a reference trajectory z^*_{t0:t}. Then the unit-level potential outcomes at time t, Y_{n,t}(·), satisfy: Y_{n,t}(z_{t0:t}) = Y_{n,t}(z'_{t0:t}), ∀ z_{t0:t}, z'_{t0:t} ∈ [z_{t0:t}]_W.
EXAMPLE. In the study of AZT's effect on mortality, Z t0:t represents the AZT dose trajectory from t 0 to t. Let Y n,30 = 1 if unit n dies at time t = 30 and 0 otherwise. Assumption 1 applied to Y n,30 then states that patient n's 30-day mortality status depends on the AZT trajectory from t 0 to t only through some "cumulative dose" captured by CD(z t0:t ; z * t0:t , W) (e.g., the aggregate dose; see Remark 1). REMARK 2. Although Assumption 1 and its variants are often assumed in the literature on longitudinal studies to reduce the number of potential outcomes (Robins, Hernán and Babette, 2000, Section 7), its validity needs to be evaluated on a case-by-case basis. We evaluated Assumption 1 in the infectious disease dynamics context using standard compartment model before invoking it in our application; see Supplementary Material H for details.
We now extend the dose-response relationship to a longitudinal setting. DEFINITION 5.3 (Unit-Level Dose-Response Relationship in Longitudinal Studies). Let CD(z t0:t ; z * t0:t , W) be a cumulative dose defined in Definition 5.1 and f n (·; θ n ) a univariate dose-response model parametrized by θ n such that f n (0; θ n ) = 0. Suppose that Assumption 1 holds. A unit-level dose-response relationship for unit n states that (9) Y n,t (z t0:t ) − Y n,t (z * t0:t ) = f n CD(z t0:t ; z * t0:t , W); θ n .
REMARK 3. Observe that when z t0:t = z * t0:t , the LHS of (9) evaluates to 0 and the RHS evaluates to f n CD(z * t0:t ; z * t0:t , W); θ n = f n {0; θ n } = 0.
REMARK 4. Let z_{t0:t} and z'_{t0:t} be two treatment dose trajectories such that z_{t0:t} ≠ z'_{t0:t} but CD(z_{t0:t}; z^*_{t0:t}, W) = CD(z'_{t0:t}; z^*_{t0:t}, W). For the dose-response relationship (9) to be well-defined, we necessarily have Y_{n,t}(z_{t0:t}) = Y_{n,t}(z'_{t0:t}), which is guaranteed by Assumption 1.
REMARK 5. Similar to the static setting considered in Section 2, the dose-response relationship (9) can be thought of as a parsimonious summary of unit-level causal effects from a sustained period of treatment.
5.4. Embedding longitudinal data into an experiment and testing a dose-response relationship. Let i = 1, 2, · · · , I index I pairs of two units matched on the covariate process, L_{i1,t} = L_{i2,t}, but with different observed trajectories, Z^obs_{i1;t0:t} ≠ Z^obs_{i2;t0:t}. Units i1 and i2 are each associated with the following two potential outcomes at time t: Y_{ij,t}(Z^obs_{i1;t0:t}) and Y_{ij,t}(Z^obs_{i2;t0:t}), i = 1, · · · , I, j = 1, 2, in parallel with Definition 2.1 in the static setting. Write F_t = {L_{ij,t}, Y_{ij,t}(Z^obs_{i1;t0:t}), Y_{ij,t}(Z^obs_{i2;t0:t}), i = 1, · · · , I, j = 1, 2}. Let ij' denote the unit with the minimum cumulative dose in matched pair i and ij'' the other unit, and write Z^obs_{∧;t0:t} = {Z^obs_{1j';t0:t}, · · · , Z^obs_{Ij';t0:t}} and Z^obs_{∨;t0:t} = {Z^obs_{1j'';t0:t}, · · · , Z^obs_{Ij'';t0:t}}. By iteratively applying the sequential randomization assumption, it is shown in Supplementary Material E that

$$\pi_{i1} = P(Z_{i1;t_0:t} = Z^{obs}_{i1;t_0:t},\ Z_{i2;t_0:t} = Z^{obs}_{i2;t_0:t} \mid F_t, Z^{obs}_{\wedge;t_0:t}, Z^{obs}_{\vee;t_0:t}) = P(Z_{i1;t_0:t} = Z^{obs}_{i2;t_0:t},\ Z_{i2;t_0:t} = Z^{obs}_{i1;t_0:t} \mid F_t, Z^{obs}_{\wedge;t_0:t}, Z^{obs}_{\vee;t_0:t}) = \pi_{i2} = 1/2.$$
REMARK 6. In the static setting, it suffices to match on observed covariates to embed data into an approximate experiment; in the longitudinal setting, one needs to match on the covariate process L t including the time-independent covariates and observed outcomes during the treatment period. REMARK 7. Our framework is different from the balance risk set matching of Li, Propert and Rosenbaum (2001). According to Li, Propert and Rosenbaum (2001)'s setup, units receive a binary treatment at most once in the entire study period. Our framework is also different from Imai, Kim and Wang (2018). Imai, Kim and Wang (2018)'s primary interest is the treatment effect of an intervention at a particular time point t; hence, Imai, Kim and Wang (2018) pair a subject receiving treatment at time t to subjects with the same treatment dose and covariate process up to time t − 1 but not receiving the treatment at time t. In sharp contrast, we are focusing on the causal effect of a sustained period of treatment, similar to the setup in Robins, Hernán and Babette (2000). The entire treatment dose trajectory is the unit to be permuted and our design reflects this aspect.
Consider testing the following dose-response relationship in a longitudinal study:
$$H_0^L: \quad Y_{ij,t}(z_{t_0:t}) - Y_{ij,t}(z^*_{t_0:t}) = f\left(\text{CD}(z_{t_0:t}; z^*_{t_0:t}, \mathcal{W}); \theta\right), \quad \forall i = 1, \cdots, I, \ j = 1, 2, \ \text{for some } \theta,$$

where z^*_{t0:t} is a reference trajectory, CD(z_{t0:t}; z^*_{t0:t}, W) a cumulative dose, and f(·; θ) a dose-response relationship of scientific interest. Within each matched pair are two observed treatment dose trajectories Z^obs_{i1;t0:t} and Z^obs_{i2;t0:t}. We observe the potential outcome that i1 exhibits under Z^obs_{i1;t0:t}, i.e., Y_{i1,t}(Z^obs_{i1;t0:t}) = Y^obs_{i1,t}; moreover, we are able to impute Y_{i1,t}(·) evaluated at Z^obs_{i2;t0:t} under H^L_0 and Assumption 1:

$$Y_{i1,t}(Z^{obs}_{i2;t_0:t}) = Y^{obs}_{i1,t} + f\left(\text{CD}(Z^{obs}_{i2;t_0:t}; z^*_{t_0:t}, \mathcal{W}); \theta\right) - f\left(\text{CD}(Z^{obs}_{i1;t_0:t}; z^*_{t_0:t}, \mathcal{W}); \theta\right). \qquad (11)$$

Similarly, we have Y_{i2,t}(Z^obs_{i2;t0:t}) = Y^obs_{i2,t} and can impute

$$Y_{i2,t}(Z^{obs}_{i1;t_0:t}) = Y^{obs}_{i2,t} + f\left(\text{CD}(Z^{obs}_{i1;t_0:t}; z^*_{t_0:t}, \mathcal{W}); \theta\right) - f\left(\text{CD}(Z^{obs}_{i2;t_0:t}; z^*_{t_0:t}, \mathcal{W}); \theta\right). \qquad (12)$$

Table 3 summarizes the observed and imputed information. The problem has now been reduced to the static setting, except that instead of permuting the two scalar treatment doses, we now permute two treatment dose trajectories. Randomization-based testing procedures like the one discussed in Section 3 in the static setting can be readily applied to testing (1) θ = θ_0 for a fixed θ_0 and (2) the validity of a postulated dose-response relationship H^L_0.

Table 3: Observed and imputed potential outcomes Y_{ij,t}(Z^obs_{i1;t0:t}) and Y_{ij,t}(Z^obs_{i2;t0:t}) for each unit ij in the matched samples.
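A minimal sketch of the imputation step (11)-(12); the function name is ours, and the commented usage shows how the longitudinal test reduces to the static machinery once cumulative doses replace scalar doses.

```python
import numpy as np

def impute_pairmate_outcome(y_obs, cd_own, cd_other, f, theta):
    """Imputation (11)-(12): the outcome a unit would exhibit under its pair-mate's
    trajectory equals its observed outcome plus the difference of the dose-response
    model evaluated at the two cumulative doses."""
    return y_obs + f(cd_other, theta) - f(cd_own, theta)

# With the science table completed this way, the static machinery applies verbatim:
# treat each pair's two cumulative doses as the two "doses" and permute them within
# pairs, e.g., by calling sup_p_value(cd_lo, cd_hi, y_lo, y_hi, f, theta_grid)
# from the Algorithm 1 sketch above.
```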
5.5. Time lag and lag-incorporating weights. One unique aspect of our application is that there is typically a time lag between social distancing and its effect on public-health-related outcomes. We formalize this in Assumption 2.
ASSUMPTION 2 (Time Lag). The treatment trajectory is said to have an "ℓ-lagged effect" on unit n's potential outcomes at time t if Y_{n,t}(z_{t0}, z_{t1}, · · · , z_{t−ℓ}, z_{t−ℓ+1}, · · · , z_t) = Y_{n,t}(z_{t0}, z_{t1}, · · · , z_{t−ℓ}, z'_{t−ℓ+1}, · · · , z'_t), for all z_{t0}, · · · , z_{t−ℓ}, z_{t−ℓ+1}, · · · , z_t, and z'_{t−ℓ+1}, · · · , z'_t.
In words, Assumption 2 says that the outcome of interest at time t depends on the entire treatment dose trajectory only up to time t − ℓ. Assumption 2 holds in particular when Y_{n,t} measures the number of people succumbing to COVID-19 at time t. Researchers estimated that the time lag between contracting COVID-19 and exhibiting symptoms (i.e., the so-called incubation period) had a median of 5.1 days and could be as long as 11.5 days (Lauer et al., 2020), and the time lag between the onset of COVID-19 symptoms and death ranged from 2 to 8 weeks (Testa et al., 2020; World Health Organization, 2020). Therefore, it may be reasonable to believe that the number of COVID-19 related deaths at time t does not depend on social distancing practices in the ℓ days immediately preceding t for some properly chosen ℓ. Assumption 2 may be further combined with Assumption 1 to state that unit n's potential outcomes at time t depend on the entire treatment dose trajectory z_{t0:t} only via some cumulative dose from time t_0 to t − ℓ, by defining the cumulative dose with respect to some lag-incorporating weight function W_lag that assigns 0 weight to the treatment doses immediately preceding time t.
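A lag-incorporating weight function of this kind can be sketched as follows; the function name and the 98-day/35-day example (loosely mirroring the application's time frame) are ours.

```python
import numpy as np

def lag_weights(n_days, lag):
    """Weight function W_lag: zero weight on the last `lag` days of the trajectory
    and uniform weight on the remaining days, so that the cumulative dose ignores
    doses too recent to have affected the outcome (Assumption 2)."""
    w = np.zeros(n_days)
    w[: n_days - lag] = 1.0 / (n_days - lag)
    return w

# Example: a 98-day trajectory with a 35-day lag; combine with cumulative_dose()
w = lag_weights(98, 35)
print(w.sum())   # ~1.0
```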
REMARK 8. Suppose that the time lag assumption holds for potential outcomes Y_{n,t}(·), · · · , Y_{n,t+ℓ−1}(·), and let g : R^ℓ → R be a function that maps these potential outcomes to an aggregate outcome g{Y_{n,t}(·), · · · , Y_{n,t+ℓ−1}(·)}. One immediate consequence of Assumption 2 is that g{Y_{n,t}(·), · · · , Y_{n,t+ℓ−1}(·)} depends on the entire treatment dose trajectory Z_{t0:t+ℓ−1} only via Z_{t0:t−1}; moreover, we may invoke Assumption 1 and further state that g{Y_{n,t}(·), · · · , Y_{n,t+ℓ−1}(·)} depends on the entire treatment dose trajectory Z_{t0:t+ℓ−1} only via some cumulative dose of Z_{t0:t−1}. Dose-response relationships, statistical matching, and the testing procedures described in Sections 5.3 and 5.4 then hold by replacing Y_{n,t}(·) with the aggregate outcome g{Y_{n,t}(·), · · · , Y_{n,t+ℓ−1}(·)} where appropriate. Details are provided in Supplementary Material F.

5.6. Incorporating interference. One may further allow the outcome of interest of unit ij to depend not only on its own cumulative dose, but also on the cumulative doses of neighboring units, as described in Section 4. Let $\mathbf{Z}_{t_0:t} = (Z_{11;t_0:t}, \cdots, Z_{I2;t_0:t})$ denote the random treatment dose trajectories from t_0 to t of all study units, $\mathbf{z}_{t_0:t}$ its realization, $\mathbf{z}^*_{t_0:t} = (z^*_{t_0:t}, \cdots, z^*_{t_0:t})$ a collection of reference dose trajectories, and $Y_{ij,t}(\mathbf{Z}_{t_0:t})$ the potential outcome. Finally, collect the cumulative doses of all study units in $\mathbf{z}_{\text{cumu}} = (\text{CD}(z_{11;t_0:t}; z^*_{t_0:t}, \mathcal{W}), \cdots, \text{CD}(z_{I2;t_0:t}; z^*_{t_0:t}, \mathcal{W}))$. We stress that each entry of $\mathbf{Z}_{t_0:t}$ is itself a random trajectory, while each entry of $\mathbf{z}_{\text{cumu}}$ is a scalar. Synthesizing our development in Sections 4 and 5.4, we consider testing a dose-response relationship in a longitudinal study under interference by modeling the contrast between $Y_{ij,t}(\mathbf{z}_{t_0:t})$ and $Y_{ij,t}(\mathbf{z}^*_{t_0:t})$. Combining Principles I and II in Section 4 with Assumption 1, we have a causal null hypothesis of the form
$$H^L_{0,\text{interference}}: \quad Y_{ij,t}(\mathbf{z}_{t_0:t}) - Y_{ij,t}(\mathbf{z}^*_{t_0:t}) = f\left(\text{CD}(z_{ij;t_0:t}; z^*_{t_0:t}, \mathcal{W}); \theta\right) + g(\langle \mathbf{z}_{\text{cumu}}, G_{ij,\bullet} \rangle),$$
where f CD(z ij;t0:t ; z * t0:t , W); θ captures the dose-response direct effect and g( z cumu , G ij, • ) models a spillover effect that depends only on the cumulative doses of ij's neighboring units. Simple parametric models as described in Section 4.3 can be readily applied to model the spillover effect. By imputing under H L 0,interference and permuting the two treatment dose trajectories within each matched pair as described in Section 5.4, one can readily conduct randomization-based inference to construct confidence sets for structural parameters in the dose-response relationship while treating interference parameters in the g(·) model as sensitivity parameters.
6. Social distancing and COVID-19 during phased reopening: study design.
6.1. Data: time frame, granularity, cumulative dose, outcome, and covariate history. The first state in the U.S. to reopen was Georgia, on April 24th, 2020. We hence consider data from April 27th, the first Monday following April 24th, to August 2nd, the first Sunday in August, in the primary analysis. We choose a Monday (April 27th) as the baseline period and a Sunday (August 2nd) as the endpoint because social distancing and public-health-related outcomes data exhibited consistent weekly periodicity (Unnikrishnan, 2020).
We analyze the data at a county-level granularity and use the county-level percentage change in the total distance traveled compiled by Unacast™ as the continuous, time-varying treatment dose. We consider a two-month treatment period from April 27th (Monday) to June 28th (Sunday). According to the data compiled by Unacast™, counties cut social mobility by at most 50% during most of the phased reopening; hence, we set the reference dose trajectory to be −0.5 throughout the treatment period and define a notion of cumulative dose with respect to this reference dose trajectory and a uniform weighting scheme that assigns equal weight to each day during the treatment period. In a sensitivity analysis, we further repeated all dose-response analyses using different notions of cumulative dose based on different weighting schemes. In Supplementary Material H, we assess the appropriateness of Assumption 1 in the context of standard epidemiological models using simulation studies. The primary outcome of interest is the cumulative COVID-19 related death toll per 100,000 people from June 29th (Monday) to August 2nd (Sunday), a total of five weeks. The county-level COVID-19 case numbers and death tolls are both obtained from the New York Times COVID-19 data repository (The New York Times, 2020).
As discussed in Section 5.4, we matched counties similar on covariates, including time-independent covariates and time-dependent covariate processes, in order to embed the data into an approximate randomized experiment. Specifically, we matched on the following time-independent baseline covariates: female (%), black (%), Hispanic (%), above 65 (%), smoking (%), driving alone to work (%), flu vaccination (%), some college (%), number of membership associations per 10,000 people, rural (0/1), poverty rate (%), population, and population density (residents per square mile). These county-level covariates were derived from the census data collected by the United States Census Bureau and the County Health Rankings and Roadmaps Program (Remington, Catlin and Gennuso, 2015). Moreover, we matched on the number of new COVID-19 cases and new COVID-19 related deaths per 100,000 people every week from April 20th–26th to June 23rd–29th.
6.2. Statistical matching, matched samples, and assessing balance. A total of 1,211 matched pairs of two counties were formed using optimal nonbipartite matching (Lu et al., 2001, 2011). We matched exactly on the covariate "rural (0/1)" for later subgroup analysis and balanced all other 32 covariates. We added a mild penalty on the cumulative dose so that two counties within the same matched pair had a tangible difference in their cumulative doses, and added 20% sinks to eliminate 20% of counties for whom no good match can be found (Baiocchi et al., 2010; Lu et al., 2011). Following the advice in Rubin (2007), the design was conducted without any access to the outcome data in order to assure the objectivity of the design.
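A simplified sketch of the matching step follows. The paper's own implementation uses optimal nonbipartite matching in R (Lu et al., 2001, 2011); the version below only illustrates the ingredients: a covariate distance, a penalty that discourages pairing counties with similar cumulative doses, sink nodes that absorb 20% of the counties, and a minimum-cost pairing solved here with networkx by maximizing negated weights. The distance, the penalty form, and all parameter names are assumptions made for illustration.

```python
import numpy as np
import networkx as nx

def match_counties(X, dose, n_sinks, dose_penalty=1.0):
    """Pair counties so that the total (covariate distance + penalty) is small.

    X       : (n, p) array of covariates to balance
    dose    : (n,) array of cumulative doses
    n_sinks : number of sink nodes; each sink absorbs one county, which is then discarded
    """
    n = X.shape[0]
    Xs = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)      # crude standardization
    G = nx.Graph()
    for i in range(n):
        for j in range(i + 1, n):
            cov_dist = float(np.linalg.norm(Xs[i] - Xs[j]))   # stand-in for a Mahalanobis distance
            # penalty is large when the two cumulative doses are similar,
            # pushing matched pairs toward a tangible dose difference
            penalty = dose_penalty * float(np.exp(-abs(dose[i] - dose[j])))
            G.add_edge(i, j, weight=-(cov_dist + penalty))
    for s in range(n, n + n_sinks):                           # sinks cost nothing to match
        for i in range(n):
            G.add_edge(i, s, weight=0.0)
    matching = nx.max_weight_matching(G, maxcardinality=True)
    return [tuple(sorted(e)) for e in matching if max(e) < n]  # county-county pairs only

# Toy usage: 20 counties, 3 covariates, 4 sinks (20% of counties eliminated).
rng = np.random.default_rng(1)
pairs = match_counties(rng.normal(size=(20, 3)), rng.uniform(0, 0.6, size=20), n_sinks=4)
print(len(pairs))  # 8 county-county pairs; 4 counties absorbed by sinks
```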
Within each matched pair, the county with the smaller cumulative dose is referred to as the "better social distancing" county, and the other as the "worse social distancing" county. Appendix A shows where the 1,211 better social distancing counties and the other 1,211 worse social distancing counties are located in the U.S., and Figure 5 plots the average daily percentage change in total distance traveled and the average daily COVID-19 related death toll per 100,000 people during the treatment period (April 27th to June 28th) in two groups. It is evident that two groups differ in their extent of social distancing, but are very similar in their daily COVID-19 related death toll per 100,000 people during the treatment period. Finally, Appendix B summarizes the balance of all 33 covariates in two groups after matching. All variables have standardized differences less than 0.15 and are considered sufficiently balanced (Rosenbaum, 2002). In Supplementary Material G.1, we further plot the cumulative distribution functions of important variables in two groups. A detailed pre-analysis plan, including matched samples and specification of a primary analysis and three secondary analyses, can be found via doi:10.13140/RG.2.2.23724.28800.
7. Social distancing and COVID-19 during phased reopening: outcome analysis.
7.1. Primary analysis: causal null hypothesis regarding the death toll. Fix t0 = April 27th and T = June 28th. Let Z_{t0:T} = z_{t0:T} denote a treatment dose trajectory from t0 to T and Y_t(·) the potential COVID-19 related deaths at time t. We specify the time-lag parameter ℓ = 35 so that T + ℓ corresponds to August 2nd. As discussed in Remark 8, we consider the aggregate outcome Y_agg(·) = g{Y_{T+1}(·), ···, Y_{T+ℓ}(·)} = Σ_{T+1 ≤ t ≤ T+ℓ} Y_t(·). Our primary analysis tests the following causal null hypothesis for the 1,211 × 2 = 2,422 counties in our matched samples:
$$H_{0,\mathrm{primary}}:\quad Y_{ij,\mathrm{agg}}(z_{t_0:T}) - Y_{ij,\mathrm{agg}}(z^{*}_{t_0:T}) = 0,\quad \forall z_{t_0:T},\ \forall i = 1, \cdots, I = 1211,\ j = 1, 2.$$
This null hypothesis states that the treatment dose trajectory from April 27th to June 28th had no effect whatsoever on the COVID-19 related death toll from June 29th to August 2nd.
The top left panel of Figure 6 plots F_min (the CDF of the better social distancing counties' observed outcomes) and F^tr_max (the CDF of the worse social distancing counties' observed outcomes under H_{0,primary}); we calculate the test statistic t_KS = 0.735 and contrast it to a reference distribution generated using 1,000,000 samples from all possible 2^1211 randomizations; see the top right panel of Figure 6. In this way, an exact p-value equal to 2.06 × 10^{-4} is obtained and the causal null hypothesis H_{0,primary} is rejected at the 0.05 level. Moreover, as detailed in Sections 4 and 5.6, rejecting the null hypothesis H_{0,primary} also implies rejecting the null hypothesis of no direct or spillover effect under an arbitrary interference pattern.
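Schematically, the randomization test works as follows: under the sharp null H_{0,primary} the observed outcomes are fixed, and within each matched pair either county could have been the better social distancing one with probability 1/2, so the reference distribution is generated by independent within-pair label swaps. The sketch below uses a Kolmogorov-Smirnov-type distance between the two groups' empirical CDFs; the exact statistic and the number of draws used in the paper may differ, and all names are illustrative.

```python
import numpy as np

def ks_statistic(y_better, y_worse):
    """Largest gap between the two groups' empirical CDFs over all observed values."""
    grid = np.concatenate([y_better, y_worse])
    F_b = (y_better[None, :] <= grid[:, None]).mean(axis=1)
    F_w = (y_worse[None, :] <= grid[:, None]).mean(axis=1)
    return float(np.max(np.abs(F_b - F_w)))

def randomization_p_value(y_better, y_worse, n_draws=10_000, seed=1):
    """Monte Carlo p-value over independent within-pair label swaps.

    The paper samples 1,000,000 of the 2^I possible randomizations; a smaller
    default is used here to keep this naive implementation fast."""
    rng = np.random.default_rng(seed)
    y_better = np.asarray(y_better, float)
    y_worse = np.asarray(y_worse, float)
    t_obs = ks_statistic(y_better, y_worse)
    I = y_better.size
    hits = 0
    for _ in range(n_draws):
        swap = rng.integers(0, 2, size=I).astype(bool)   # which pairs flip labels
        yb = np.where(swap, y_worse, y_better)
        yw = np.where(swap, y_better, y_worse)
        hits += ks_statistic(yb, yw) >= t_obs
    return (hits + 1) / (n_draws + 1)                     # valid Monte Carlo p-value
```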
We further conducted two sensitivity analyses to assess the no unmeasured confounding assumption and the time lag assumption we made in the primary analysis. In the first sensitivity analysis, we allowed the dose trajectory assignment probabilities π_{i1} and π_{i2} as in (10) to be biased from the randomization probability and then generated the reference distribution with this biased randomization probability; specifically, we considered a biased treatment assignment model where log(Γ_i) = log{π_{i1}/π_{i2}} in each matched pair i was proportional to the absolute difference in the cumulative doses of the two units in the pair (π_{i1} = π_{i2} = 1/2 and Γ_i = 1 in a randomized experiment for all i). We found that our primary analysis conclusion would hold up to Γ_i having a median as large as 3.82. See Supplementary Material G.3.1 for details. In the second sensitivity analysis, we repeated the primary analysis using a shorter time lag ℓ = 28 days and the result was similar; see Supplementary Material G.3.2 for details.
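The first sensitivity analysis only changes how the within-pair swap indicators are drawn: instead of fair coin flips, pair i is flipped with a probability determined by Γ_i, with log Γ_i proportional to the within-pair gap in cumulative dose. A minimal, illustrative modification of the sketch above follows; the proportionality constant and the direction of the bias are assumptions, not the paper's specification.

```python
import numpy as np

def biased_swap_probabilities(dose_better, dose_worse, gamma_scale=1.0):
    """Per-pair probability of flipping the observed labels in the sensitivity analysis.

    log(Gamma_i) is taken proportional to the absolute within-pair difference in
    cumulative doses; gamma_scale = 0 gives Gamma_i = 1 and recovers randomization."""
    gap = np.abs(np.asarray(dose_better, float) - np.asarray(dose_worse, float))
    gamma = np.exp(gamma_scale * gap)
    return 1.0 / (1.0 + gamma)

# Inside the Monte Carlo loop of randomization_p_value, replace the fair coin flip by:
#   p_flip = biased_swap_probabilities(dose_b, dose_w, gamma_scale)
#   swap = rng.random(I) < p_flip
```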
Our primary analysis results suggested that different social distancing trajectories during the treatment period had an effect on the COVID-19-related death toll in the subsequent weeks. This causal conclusion stands under an arbitrary interference pattern and is robust to unmeasured confounding.
7.2. Secondary analysis I: secondary outcome. Let Y_{ij,case,agg} denote the cumulative COVID-19 cases per 100,000 people from June 29th to July 12th (corresponding to time lag ℓ = 14 days), as specified in our pre-analysis plan. We test the following null hypothesis concerning the secondary outcome Y_{ij,case,agg}:
$$H_{0,\mathrm{secondary}}:\quad Y_{ij,\mathrm{case,agg}}(z_{t_0:T}) - Y_{ij,\mathrm{case,agg}}(z^{*}_{t_0:T}) = 0,\quad \forall z_{t_0:T},\ \forall i = 1, \cdots, I = 1211,\ j = 1, 2.$$
The exact p-value is less than 10^{-5}; see the bottom panels of Figure 6. In a sensitivity analysis, we repeated the test with a shorter time lag ℓ = 10 days and the result was similar; see Supplementary Material G.3.3 for details. Our result suggests strong evidence that social distancing from April 27th to June 28th had an effect on cumulative COVID-19 cases per 100,000 people from June 29th to July 12th in our matched samples.
7.3. Secondary analysis II: explore dose-response relationship. Let z*_{t0:T} denote a reference trajectory equal to −0.50 for all t0 ≤ t ≤ T (corresponding to a 50% reduction in total distance traveled from April 27th to June 28th), W_lag a weight function that assigns equal weight to all t such that t0 ≤ t ≤ T and 0 otherwise, and a cumulative dose CD(z_{t0:T}; z*_{t0:T}, W_lag) defined with respect to z*_{t0:T} and W_lag. We invoke Assumption 1 so that Y_agg(·) depends on Z_{t0:T} = z_{t0:T} only via CD(z_{t0:T}; z*_{t0:T}, W_lag), and consider testing the following dose-response kink model concerning the aggregate case number Y_{ij,case,agg}:
$$H_{0,\mathrm{kink}}:\quad Y_{ij,\mathrm{case,agg}}(z_{t_0:T}) - Y_{ij,\mathrm{case,agg}}(z^{*}_{t_0:T}) = 0,\quad \forall z_{t_0:T}\ \text{such that}\ \mathrm{CD}(z_{t_0:T}; z^{*}_{t_0:T}, W_{\mathrm{lag}}) \leq \tau,\ \text{and}$$
$$\log\{Y_{ij,\mathrm{case,agg}}(z_{t_0:T})\} - \log\{Y_{ij,\mathrm{case,agg}}(z^{*}_{t_0:T})\} = \beta \cdot \{\mathrm{CD}(z_{t_0:T}; z^{*}_{t_0:T}, W_{\mathrm{lag}}) - \tau\},\quad \forall z_{t_0:T}\ \text{such that}\ \mathrm{CD}(z_{t_0:T}; z^{*}_{t_0:T}, W_{\mathrm{lag}}) > \tau,$$
$$\forall i = 1, \cdots, I = 1211,\ j = 1, 2. \qquad (13)$$
This dose-response relationship states that the potential COVID-19 case number from June 29th to July 12th remains the same as the potential outcome under Z_{t0:T} = z*_{t0:T}, i.e., strict social distancing that reduces total distance traveled by 50% every day from April 27th to June 28th, when the cumulative dose (defined w.r.t. z*_{t0:T} and W_lag) is less than some threshold τ; after the cumulative dose exceeds this threshold, the COVID-19 case number increases exponentially at a rate proportional to how much the cumulative dose exceeds the threshold. We tested (13) for different τ = τ0 and β = β0 combinations; the maximum p-value, obtained at (τ0, β0) = (0.48, 10.0), is equal to 0.417 and hence the kink model (13) cannot be rejected. The left panel of Figure 7 plots the level-0.1 and level-0.05 confidence sets of (τ, β). The right panel of Figure 7 plots three dose-response curves with baseline (i.e., 50% reduction) case number equal to 1 per 100,000 people for three selected (τ0, β0) pairs in the level-0.1 confidence set. In a sensitivity analysis, we repeated the analysis by considering two different specifications of cumulative dose, one assigning more weight to early days of the phased reopening and the other towards the end of the phased reopening. Confidence set results look similar under different specifications; see Supplementary Material G.3.4 for details.
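The confidence sets are obtained by test inversion: each (τ0, β0) on a grid is retained if the corresponding randomization test does not reject. The sketch below shows the key imputation step, in which each county's potential outcome under its pair partner's cumulative dose is imputed from its observed outcome under the hypothesized kink model, and a simple paired statistic is re-randomized by within-pair label swaps. The statistic used here (a mean log contrast) is only a stand-in for the rank-based statistic described in the supplementary material; names and defaults are illustrative.

```python
import numpy as np

def kink_log_shift(cd, tau, beta):
    """log Y(cd) - log Y(z*) implied by the hypothesized dose-response kink model."""
    return beta * np.maximum(np.asarray(cd, float) - tau, 0.0)

def kink_p_value(y_b, y_w, cd_b, cd_w, tau, beta, n_draws=2000, seed=0):
    """Randomization p-value for H_0,kink at (tau, beta).

    Under the null, each county's outcome under its partner's dose can be imputed;
    flipping a pair's labels replaces both observed outcomes by the imputed ones."""
    rng = np.random.default_rng(seed)
    y_b, y_w = np.asarray(y_b, float), np.asarray(y_w, float)
    cd_b, cd_w = np.asarray(cd_b, float), np.asarray(cd_w, float)
    # outcomes the pair would have shown had the within-pair assignment been flipped
    y_b_flip = y_w * np.exp(kink_log_shift(cd_b, tau, beta) - kink_log_shift(cd_w, tau, beta))
    y_w_flip = y_b * np.exp(kink_log_shift(cd_w, tau, beta) - kink_log_shift(cd_b, tau, beta))
    stat = lambda yb, yw: float(np.mean(np.log(yw + 1e-9) - np.log(yb + 1e-9)))
    t_obs = stat(y_b, y_w)
    draws = np.empty(n_draws)
    for b in range(n_draws):
        flip = rng.integers(0, 2, size=y_b.size).astype(bool)
        draws[b] = stat(np.where(flip, y_b_flip, y_b), np.where(flip, y_w_flip, y_w))
    # two-sided Monte Carlo p-value around the randomization mean
    center = draws.mean()
    return (np.sum(np.abs(draws - center) >= abs(t_obs - center)) + 1) / (n_draws + 1)

# Confidence set: keep every (tau_0, beta_0) on a grid whose p-value exceeds the level alpha.
```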
The confidence set of the threshold parameter τ is tightly centered around 0.5, suggesting that as a county's average percentage change in total distance traveled from April 27th to June 28th increases from −50% to around −5% to 5%, the potential COVID-19 case number from June 29th to July 12th would largely remain unchanged; however, once beyond this threshold, the case number would rise exponentially and could incur an increase as large as 10-fold when a county's average distance traveled increased by about 20% compared to the pre-coronavirus level.
7.4. Secondary analysis III: subgroup analysis and differential dose-response relationship. We also conducted subgroup analyses by repeating the primary and secondary analyses described in Sections 7.1 to 7.3 on 462 matched pairs of two non-rural counties and 749 matched pairs of two rural counties. P-values when testing the primary analysis hypothesis H_{0,primary} concerning the death toll and the secondary analysis hypothesis H_{0,secondary} concerning the case number are 0.004 and less than 10^{-5}, respectively, in the non-rural subgroup, and 0.008 and 0.009, respectively, in the rural subgroup. We also allowed a differential dose-response relationship between social distancing and case numbers in rural and non-rural counties and constructed confidence sets for (τ, β) separately; see Figure 8. We further repeated the subgroup analyses under different specifications of the cumulative dose and the results were similar; see Supplementary Material G.3.4 for details.
A comparison of the confidence sets for the non-rural counties (top left panel of Figure 8) and rural counties (bottom left panel of Figure 8) revealed an intriguing pattern: while the level-0.1 confidence set of the activation threshold τ is centered around the range 0.4-0.6 for the rural counties, it contains 0 for the non-rural counties; moreover, the level-0.1 confidence set of rural counties covers a much larger range of β values compared to that of the non-rural counties. Together, these results suggest that the activation dose required to trigger exponential growth in case numbers in rural counties seemed much larger than that in non-rural counties; however, once exponential growth in case numbers was incurred, the growth seemed more rapid in rural counties.
7.5. Dose-response relationship under local interference. We next applied the methodology developed in Sections 4 and 5.6 to obtain corrected confidence sets of (τ, β) under local interference. To this end, we collect the 2 × 1,211 copies of the reference dose trajectory z*_{t0:T} in the vector z̃*_{t0:T} and the cumulative doses of all study units during the treatment period in z_cumu = (CD(z_{11;t0:T}; z*_{t0:T}, W_lag), ···, CD(z_{I2;t0:T}; z*_{t0:T}, W_lag)). We consider relaxing the dose-response relationship by incorporating local interference as follows:
$$H_{0,\mathrm{kink,interference}}:\quad Y_{ij,\mathrm{case,agg}}(\widetilde{\mathbf{z}}_{t_0:T}) - Y_{ij,\mathrm{case,agg}}(\widetilde{\mathbf{z}}^{*}_{t_0:T}) = 0,\quad \forall \widetilde{\mathbf{z}}_{t_0:T}\ \text{such that}\ \mathrm{CD}(z_{ij;t_0:T}; z^{*}_{t_0:T}, W_{\mathrm{lag}}) \leq \tau,\ \text{and}$$
$$\log\{Y_{ij,\mathrm{case,agg}}(\widetilde{\mathbf{z}}_{t_0:T})\} - \log\{Y_{ij,\mathrm{case,agg}}(\widetilde{\mathbf{z}}^{*}_{t_0:T})\} = \beta \cdot \{\mathrm{CD}(z_{ij;t_0:T}; z^{*}_{t_0:T}, W_{\mathrm{lag}}) - \tau\} \cdot \Bigg[\, 1 + \underbrace{\frac{1}{1 + \exp\{-k(\|\mathbf{G}_{ij,\bullet}\|_0^{-1} \cdot \langle \mathbf{z}_{\mathrm{cumu}}, \mathbf{G}_{ij,\bullet}\rangle - s)\}}}_{\text{Spillover Effect Factor}\ C} \,\Bigg],$$
$$\forall \widetilde{\mathbf{z}}_{t_0:T}\ \text{such that}\ \mathrm{CD}(z_{ij;t_0:T}; z^{*}_{t_0:T}, W_{\mathrm{lag}}) > \tau,\ \forall i = 1, \cdots, I = 1211,\ j = 1, 2.$$
According to this null hypothesis, there is no direct effect if county ij's cumulative dose is below some threshold τ; hence, there is no spillover effect in this case by Principle III described in Section 4. Once county ij's cumulative dose is above the threshold, this triggers exponential growth captured by the dose-response direct effect plus a spillover effect. The magnitude of the spillover effect is equal to the direct effect multiplied by a spillover effect factor C. This spillover effect factor depends on the average cumulative dose of ij's neighbors but is always upper bounded by 1, so that the spillover effect is no larger than the direct effect (see Section 4.3). In rare cases when a county has no neighbor, C is defined to be 0 so that there is no spillover effect. We used the county adjacency file provided by the United States Census Bureau (U.S. Census Bureau) as our adjacency matrix G.
The interference parameters (k, s) in the above model are easy to interpret and specify. For instance, (k, s) = (5.0, 1.0) corresponds to a small spillover effect of approximately 1% of the direct effect when neighbors' average cumulative dose is 0.10 (corresponding to an average 40% reduction in social mobility compared to the pre-pandemic level during the treatment period) and approximately 8% of the direct effect when neighbors' average cumulative dose is 0.50 (corresponding to social mobility remaining the same as the pre-pandemic level during the treatment period). In this way, the interference parameters (k, s) carry concrete meanings and can be easily tuned and communicated to the audience.
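The spillover effect factor is straightforward to compute from the county adjacency matrix and the vector of cumulative doses; the sketch below reproduces the two illustrative values quoted above for (k, s) = (5, 1). Array names and the toy neighborhood are assumptions made for illustration.

```python
import numpy as np

def spillover_factor(z_cumu, adjacency_row, k=5.0, s=1.0):
    """Spillover effect factor C = 1 / (1 + exp(-k * (average neighbor cumulative dose - s))).

    Returns 0 when the county has no neighbors, so that there is no spillover effect."""
    adjacency_row = np.asarray(adjacency_row, float)
    n_neighbors = adjacency_row.sum()
    if n_neighbors == 0:
        return 0.0
    avg_neighbor_dose = float(adjacency_row @ np.asarray(z_cumu, float)) / n_neighbors
    return float(1.0 / (1.0 + np.exp(-k * (avg_neighbor_dose - s))))

# Reproduce the two illustrative values quoted in the text for (k, s) = (5, 1):
row = np.ones(4)                                  # a county with four neighbors
print(spillover_factor(0.10 * np.ones(4), row))   # ~0.011, about 1% of the direct effect
print(spillover_factor(0.50 * np.ones(4), row))   # ~0.076, about 8% of the direct effect
```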
The left panel of Figure 9 plots the level-0.1, 0.05, and 0.005 confidence sets of (τ, β) when the interference parameters (k, s) = (5, 1). The right panel of Figure 9 further illustrates the inferred dose-response direct effects (solid lines) and the associated spillover effects (dotted lines) under (k, s) = (5, 1) for (τ, β) = (0.44, 2.5) (red lines) and (0.48, 5.0) (blue lines).
The level-0.05 confidence set of the dose-response direct effect contains similar τ values but considerably smaller β values compared to assuming no interference and modeling the total effect using a dose-response kink model (see the left panel of Figure 7). This makes intuitive sense, as the total effect has now been decomposed into the dose-response direct effect and a spillover effect due to neighboring counties.
8. Discussion. We studied in detail the effect of social distancing during the early phased reopening in the United States on the COVID-19 related death toll and case numbers using our compiled county-level data. To address the statistical challenge brought by a time-dependent, continuous treatment dose trajectory, we developed a design-based framework based on nonbipartite matching to embed observational data with a time-dependent, continuous treatment dose trajectory into a randomized controlled experiment. This embedding induces a randomization scheme that we then used to conduct randomization-based, model-free statistical inference for causal relationships, including testing a causal null hypothesis, a structured dose-response relationship, and a causal null hypothesis under local interference modeling.
Upon applying the proposed design and testing procedures to the mobility and COVID-19 data, we found very strong evidence against the causal null hypothesis and supportive of a causal effect of social distancing during the early phases of reopening on subsequent COVID-19-related death and case numbers. Our finding complements many recent studies based on standard epidemiological models (Koo et al., 2020) and structural equation modeling (Chernozhukov, Kasahara and Schrimpf, 2021; Bonvini et al., 2021) from a unique perspective, and once again confirms the important role of social distancing (as captured by a reduction in mobility in this article) in combating the novel coronavirus. Our transparent comparison of two groups of similar counties makes our findings digestible and easy to communicate to the general public.
In a dose-response analysis, we found that the confidence set of the dose needed to activate exponential growth was tightly centered and its magnitude suggested that once the total distance traveled returned to or even superseded the pre-coronavirus level, it would have a devastating effect on the COVID-19 case numbers by contributing to exponential growth. Moreover, in a subgroup analysis where we allowed a differential dose-response relationship, we found that more stringent social distancing would be needed to avoid devastating exponential growth for non-rural counties; however, once the exponential growth was incurred, the growth appeared more rapid in rural counties. This striking difference in dose-response relationship between rural and non-rural communities agrees with experts' assessment of the transmission dynamics. Given its clinical features, the rate of virus reproduction is likely higher in large, urban areas due to more reproductive opportunities afforded by denser populations (Souch and Cossman, 2020) and this may explain the absence of an "activation dose" in non-rural counties (see top left panel of Figure 8). On the other hand, although rural residents have less social interaction compared to non-rural counterparts, they often have more underlying medical conditions and are more likely to present for treatment at more advanced stages of disease (Callaghan et al., 2021), which may partly explain why rural communities seemed to incur more drastic exponential growth in case numbers once the activation dose was exceeded (see bottom left panel of Figure 8).
The design-based approach and analysis proposed in this article has its limitations. First, we used social mobility data as a proxy measure for social distancing. It would be interesting to look at other aspects of social distancing, e.g., closure of borders, reduction in aviation travel, etc., in future works. Second, in order to permute two treatment dose trajectories in a longitudinal setting, one necessarily needs to match on observed outcomes during the treatment period and compare outcomes after the treatment period; therefore, in a longitudinal setting, the method is suited only for applications where the effect of a time-varying treatment is not immediate, e.g., the effect of precautionary measures on the death toll. Third, when the sample size is limited, the interference parameters are treated as sensitivity parameters that researchers vary, rather than population parameters for which researchers construct confidence sets. The proposed method also has its unique strengths: it embeds the noisy observational data into an approximate randomized controlled trial and has a clear "reasoned basis" (Fisher, 1935) when testing the causal null hypothesis, and researchers can always conduct a sensitivity analysis to investigate how causal conclusions would change when the randomization assumption is relaxed. The method developed in this article can be readily applied to many practical problems where there is a continuous exposure and the scientific interest lies in testing a dose-response relationship. Understanding a dose-response relationship is central to many scientific disciplines like public health (Gorell et al., 1999; Farrelly et al., 2005), pharmacology (Tallarida and Jacob, 2012), and toxicology (Calabrese and Baldwin, 2003), among many others.
Fig 1: County-averaged 7-day rolling average (black solid line), middle 50% (dark shade), and middle 90% (light shade) of percentage change in total distance traveled, e.g., −0.35 corresponds to a 35% reduction in total distance traveled compared to the pre-coronavirus period. The first week of the 15 Days to Slow the Spread campaign (March 16-22) is marked in red and the first week of reopening in blue.
Fig 3: An illustrative example. I = 200.
Fig 4: The probability contour plot (in log scale) against values of τ0 and β0. The true dose-response ...
Fig 5: Trajectories of the average daily percentage change in total distance traveled (dashed lines) and average daily COVID-19 related death toll per 100,000 people (solid lines) in 1,211 better social distancing counties (blue) and 1,211 worse social distancing counties (red). We saw a sharp contrast in the level of social distancing but little difference in COVID-19 related deaths during this treatment period.
Fig 6: Top left panel: CDFs of cumulative COVID-19 related deaths per 100,000 people in the better social distancing (blue) and worse social distancing (red) groups. Top right panel: randomization-based reference distribution. The exact p-value is 2.06 × 10^{-4}. Bottom left panel: CDFs of cumulative COVID-19 cases per 100,000 people in the better social distancing (blue) and worse social distancing (red) groups. Bottom right panel: randomization-based reference distribution. The exact p-value is less than 10^{-5}.
Fig 7: Left panel: contour plot of p-values when testing H_{0,kink} as in (13) against τ = τ0 and β = β0. Maximum p-value is obtained at (τ0, β0) = (0.48, 10.0) (red marker). Three isopleths (0.1, 0.05, and 0.005) are plotted. Right panel: dose-response relationships for selected (τ0, β0) in the 0.1 confidence set with baseline Y_{ij,case,agg}(z*_{t0:T}) equal to 1 per 100,000. The red line corresponds to (τ0, β0) = (0.46, 5.0), the blue line to (0.48, 10.0), and the orange line to (0.50, 12.0).
Fig 8: Top left panel: contour plot of p-values when testing H_{0,kink} against τ = τ0 and β = β0 in 462 matched pairs of two non-rural counties. Three isopleths (0.1, 0.05, and 0.005) are plotted. Top right panel: dose-response relationships for selected (τ0, β0) in the 0.1 confidence set as in the top left panel with baseline Y_{ij,case,agg}(z*_{t0:T}) equal to 1 per 100,000. The red line corresponds to (τ0, β0) = (0.10, 3.0), the blue line to (0.38, 8.0), and the orange line to (0.41, 12.0). Bottom left panel: contour plot of p-values when testing H_{0,kink} against τ = τ0 and β = β0 in 749 matched pairs of rural counties. Three isopleths (0.1, 0.05, and 0.005) are plotted. Bottom right panel: dose-response relationships for selected (τ0, β0) in the 0.1 confidence set as in the bottom left panel with baseline Y_{ij,case,agg}(z*_{t0:T}) equal to 1 per 100,000. The red line corresponds to (τ0, β0) = (0.5, 10.0), the blue line to (0.55, 15.0), and the orange line to (0.60, 20.0).
Fig 9: Left panel: contour plot of p-values when testing H_{0,kink,interference} against τ = τ0 and β = β0 under interference parameters (k, s) = (5, 1). Maximum p-value is obtained at (τ0, β0) = (0.44, 2.5) (red marker). Three isopleths (0.1, 0.05, and 0.005) are plotted. Right panel: dose-response direct effect (solid lines) and the associated spillover effects (dotted lines) for selected (τ0, β0) in the 0.05 confidence set with baseline Y_{ij,case,agg}(z*_{t0:T}) equal to 1 per 100,000. Two red lines correspond to (τ0, β0) = (0.44, 2.5) and blue lines to (0.48, 5.0).
Fig 10: Map of 1,211 better social distancing (light blue) and 1,211 worse social distancing counties (red) in the matched analysis. Unmatched counties are in white.
TABLE 1: Science table of N = 2I units for a countable set Z = {0, 1, 2, · · · }.
TABLE 2.
Algorithm 1: Randomization Inference for a Dose-Response Relationship (Pseudo Algorithm). Input: I matched pairs after nonbipartite matching and a dose-response relationship model H_0^{dose-response}.
TABLE 3: Imputed science table when testing a dose-response relationship in a longitudinal setting. For each unit, one and only one potential outcome is observed; however, the other potential outcome can be imputed under Assumption 1 and H^L_0. (Column headings include: Units; Obs. Treatment Dose Trajectory Z^obs_{ij;t0:t}; Cumulative Dose of Z^obs_{ij;t0:t}; Observe One Potential Outcome; Imputed Potential Outcomes.)
APPENDIX A: MAP OF 1,211 BETTER AND 1,211 WORSE SOCIAL DISTANCING COUNTIES
APPENDIX B: BALANCE TABLE AFTER STATISTICAL MATCHING
SUPPLEMENTARY MATERIAL
Pilot study, technical details, and further details on the case study. Supplementary Material A provides details on the pilot study described in Section 1.1 in the main article. Supplementary Material B motivates the Kolmogorov-Smirnov-type test statistic considered in the main article. Supplementary Material C discusses how to construct a confidence set for nuisance parameters (τ, β) in a dose-response kink model based on a variant of the rank sum test. Supplementary Material D illustrates how to test a sequence of dose-response relationships ordered according to their model complexity. Supplementary Material E derives the treatment dose trajectory assignment probability in each matched pair. Supplementary Material F provides details on generalizing the dose-response relationship to an aggregate outcome. Supplementary Material G provides further details on the case study, including maps of the 1,211 better and worse social distancing counties in the matched samples, the balance table, a closer examination of the distributions of some important variables after matching, separate analyses of rural and non-rural counties, and numerous sensitivity analyses. Supplementary Material H assesses Assumption 1 using a standard epidemiological model.
code and data.zip: Data and R code implementing the statistical matching and randomization inference.
BBC RADIO 4 (2020). Best of Today. https://www.bbc.co.uk/programmes/p08jn7g4.
ACEMOGLU, D., CHERNOZHUKOV, V., WERNING, I. and WHINSTON, M. D. (2020). A multi-risk SIR model with optimally targeted lockdown. Technical Report, National Bureau of Economic Research.
ATALAN, A. (2020). Is the lockdown important to prevent the COVID-19 pandemic? Effects on psychology, environment and economy-perspective. Annals of Medicine and Surgery 56 38-42.
ATHEY, S., ECKLES, D. and IMBENS, G. W. (2018). Exact p-values for network interference. Journal of the American Statistical Association 113 230-240.
BAIOCCHI, M., SMALL, D. S., LORCH, S. and ROSENBAUM, P. R. (2010). Building a stronger instrument in an observational study of perinatal care for premature infants. Journal of the American Statistical Association 105 1285-1296.
BERGER, R. L. and BOOS, D. D. (1994). P values maximized over a confidence set for the nuisance parameter. Journal of the American Statistical Association 89 1012-1016.
BIND, M.-A. C. and RUBIN, D. B. (2019). Bridging observational studies and randomized experiments by embedding the former in the latter. Statistical Methods in Medical Research 28 1958-1978.
BOJINOV, I. and SHEPHARD, N. (2019). Time series experiments and causal estimands: exact randomization tests and trading. Journal of the American Statistical Association 114 1665-1682.
BONVINI, M., KENNEDY, E., VENTURA, V. and WASSERMAN, L. (2021). Causal Inference in the Time of Covid-19. arXiv preprint arXiv:2103.04472.
BOWERS, J., FREDRICKSON, M. M. and PANAGOPOULOS, C. (2013). Reasoning about interference between units: A general framework. Political Analysis 97-124.
BRAUER, F. and CASTILLO-CHAVEZ, C. (2012). Mathematical Models in Population Biology and Epidemiology 2. Springer.
U.S. CENSUS BUREAU. County Adjacency File.
CALABRESE, E. J. and BALDWIN, L. A. (2003). Hormesis: the dose-response revolution. Annual Review of Pharmacology and Toxicology 43 175-197.
CALLAGHAN, T., LUECK, J. A., TRUJILLO, K. L. and FERDINAND, A. O. (2021). Rural and urban differences in COVID-19 prevention behaviors. The Journal of Rural Health 37 287-295.
CHERNOZHUKOV, V., KASAHARA, H. and SCHRIMPF, P. (2021). Causal impact of masks, policies, behavior on early covid-19 pandemic in the US. Journal of Econometrics 220 23-62.
DICKENS, B. L., KOO, J. R., WILDER-SMITH, A. and COOK, A. R. (2020). Institutional, not home-based, isolation could contain the COVID-19 outbreak. The Lancet 395 1541-1542.
DING, P., FELLER, A. and MIRATRIX, L. (2016). Randomization inference for treatment effect variation. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 78 655-671.
DWASS, M. (1957). Modified randomization tests for nonparametric hypotheses. The Annals of Mathematical Statistics 181-187.
FARRELLY, M. C., DAVIS, K. C., HAVILAND, M. L., MESSERI, P. and HEALTON, C. G. (2005). Evidence of a dose-response relationship between "truth" antismoking Ads and youth smoking prevalence. American Journal of Public Health 95 425-431.
FISHER, R. A. (1935). The Design of Experiments. Oliver and Boyd, London and Edinburgh.
GELFAND, M. J., JACKSON, J. C., PAN, X., NAU, D., PIEPER, D., DENISON, E., DAGHER, M., VAN LANGE, P. A., CHIU, C.-Y. and WANG, M. (2021). The relationship between cultural tightness-looseness and COVID-19 cases and deaths: a global analysis. The Lancet Planetary Health 5 e135-e144.
GORELL, J. M., RYBICKI, B. A., JOHNSON, C. C. and PETERSON, E. L. (1999). Smoking and Parkinson's disease: a dose-response relationship. Neurology 52 115-115.
GROVER, S., SAHOO, S., MEHRA, A., AVASTHI, A., TRIPATHI, A., SUBRAMANYAN, A., PATTOJOSHI, A., RAO, G. P., SAHA, G., MISHRA, K. et al. (2020). Psychological impact of COVID-19 lockdown: An online survey from India. Indian Journal of Psychiatry 62 354.
HANSEN, B. B. (2007). Optmatch: Flexible, optimal matching for observational studies. R News 7 18-24.
HENG, S., ZHANG, B., HAN, X., LORCH, S. A. and SMALL, D. S. (2019). Instrumental Variables: to Strengthen or not to Strengthen? arXiv preprint arXiv:1911.09171.
HO, D. E., IMAI, K., KING, G. and STUART, E. A. (2007). Matching as nonparametric preprocessing for reducing model dependence in parametric causal inference. Political Analysis 15 199-236.
HONG, G. and RAUDENBUSH, S. W. (2006). Evaluating kindergarten retention policy: A case study of causal inference for multilevel observational data. Journal of the American Statistical Association 101 901-910.
IMAI, K., KIM, I. S. and WANG, E. (2018). Matching methods for causal inference with time-series cross-section data. https://imai.fas.harvard.edu/research/files/tscs.pdf.
IMBENS, G. W. and RUBIN, D. B. (2015). Causal Inference in Statistics, Social, and Biomedical Sciences. Cambridge University Press.
KOO, J. R., COOK, A. R., PARK, M., SUN, Y., SUN, H., LIM, J. T., TAM, C. and DICKENS, B. L. (2020). Interventions to mitigate early spread of SARS-CoV-2 in Singapore: a modelling study. The Lancet Infectious Diseases 20 678-688.
LAU, H., KHOSRAWIPOUR, V., KOCBACH, P., MIKOLAJCZYK, A., SCHUBERT, J., BANIA, J. and KHOSRAWIPOUR, T. (2020). The positive impact of lockdown in Wuhan on containing the COVID-19 outbreak in China. Journal of Travel Medicine 27 taaa037.
LAUER, S. A., GRANTZ, K. H., BI, Q., JONES, F. K., ZHENG, Q., MEREDITH, H. R., AZMAN, A. S., REICH, N. G. and LESSLER, J. (2020). The incubation period of coronavirus disease 2019 (COVID-19) from publicly reported confirmed cases: estimation and application. Annals of Internal Medicine 172 577-582.
LEWNARD, J. A. and LO, N. C. (2020). Scientific and ethical basis for social-distancing interventions against COVID-19. The Lancet Infectious Diseases 20 631.
LI, Y. P., PROPERT, K. J. and ROSENBAUM, P. R. (2001). Balanced risk set matching. Journal of the American Statistical Association 96 870-882.
LU, B., ZANUTTO, E., HORNIK, R. and ROSENBAUM, P. R. (2001). Matching with doses in an observational study of a media campaign against drug abuse. Journal of the American Statistical Association 96 1245-1253.
LU, B., GREEVY, R., XU, X. and BECK, C. (2011). Optimal nonbipartite matching and its statistical applications. The American Statistician 65 21-30.
MATTEI, A., RICCIARDI, F. and MEALLI, F. (2019). Bayesian Inference for Sequential Treatments Under Latent Sequential Ignorability. Journal of the American Statistical Association.
NOLEN, T. L. and HUDGENS, M. G. (2011). Randomization-based inference within principal strata. Journal of the American Statistical Association 106 581-593.
WORLD HEALTH ORGANIZATION (2020). Report of the WHO-China joint mission on coronavirus disease 2019 (COVID-19).
PAGANO, M. and TRITCHLER, D. (1983). On obtaining permutation distributions in polynomial time. Journal of the American Statistical Association 78 435-440.
REMINGTON, P. L., CATLIN, B. B. and GENNUSO, K. P. (2015). The county health rankings: rationale and methods. Population Health Metrics 13 11.
ROBINS, J. (1986). A new approach to causal inference in mortality studies with a sustained exposure period-application to control of the healthy worker survivor effect. Mathematical Modelling 7 1393-1512.
ROBINS, J. M. (1994). Correcting for non-compliance in randomized trials using structural nested mean models. Communications in Statistics - Theory and Methods 23 2379-2412.
ROBINS, J. M. (1998). Marginal structural models. In: 1997 Proceedings of the Section on Bayesian Statistical Science, Alexandria, VA: American Statistical Association, 1998; 1-10.
ROBINS, J. M., GREENLAND, S. and HU, F.-C. (1999). Estimation of the causal effect of a time-varying exposure on the marginal mean of a repeated binary outcome. Journal of the American Statistical Association 94 687-700.
ROBINS, J., HERNÁN, M. A. and BABETTE, B. (2000). Marginal Structural Models and Causal Inference in Epidemiology. Epidemiology 11 550-560.
ROSENBAUM, P. R. (1989). Sensitivity analysis for matched observational studies with many ordered treatments. Scandinavian Journal of Statistics 227-236.
ROSENBAUM, P. R. (2002). Observational Studies. Springer.
ROSENBAUM, P. R. (2005). Heterogeneity and causality: Unit heterogeneity and design sensitivity in observational studies. The American Statistician 59 147-152.
ROSENBAUM, P. R. (2007). Interference between units in randomized experiments. Journal of the American Statistical Association 102 191-200.
ROSENBAUM, P. R. (2010). Design of Observational Studies. Springer.
RUBIN, D. B. (1980). Randomization analysis of experimental data: The Fisher randomization test comment. Journal of the American Statistical Association 75 591-593.
RUBIN, D. B. (1986). Statistics and causal inference: Comment: Which ifs have causal answers. Journal of the American Statistical Association 81 961-962.
RUBIN, D. B. (2005). Causal inference using potential outcomes: Design, modeling, decisions. Journal of the American Statistical Association 100 322-331.
RUBIN, D. B. (2007). The design versus the analysis of observational studies for causal effects: parallels with the design of randomized trials. Statistics in Medicine 26 20-36.
SHERIDAN, A., ANDERSEN, A. L., HANSEN, E. T. and JOHANNESEN, N. (2020). Social distancing laws cause only small losses of economic activity during the COVID-19 pandemic in Scandinavia. Proceedings of the National Academy of Sciences 117 20468-20473.
SJÖDIN, H., WILDER-SMITH, A., OSMAN, S., FAROOQ, Z. and ROCKLÖV, J. (2020). Only strict quarantine measures can curb the coronavirus disease (COVID-19) outbreak in Italy, 2020. Eurosurveillance 25 2000280.
SOUCH, J. M. and COSSMAN, J. S. (2020). A commentary on rural-urban disparities in COVID-19 testing rates per 100,000 and risk factors. The Journal of Rural Health.
STUART, E. A. (2010). Matching methods for causal inference: A review and a look forward. Statistical Science 25 1-21.
TALLARIDA, R. J. and JACOB, L. S. (2012). The Dose-Response Relation in Pharmacology. Springer Science & Business Media.
TESTA, C. C., KRIEGER, N., CHEN, J. T. and HANAGE, W. P. (2020). Visualizing the lagged connection between COVID-19 cases and deaths in the United States: An animation using per capita state-level data (January 22, 2020 - July 8, 2020). Technical Report, The Harvard Center for Population and Development Studies, Munich.
THE NEW YORK TIMES (2020). Coronavirus (Covid-19) Data in the United States. https://github.com/nytimes/covid-19-data. Accessed: 2020-09-30.
UNNIKRISHNAN, C. (2020). Globally Coherent Weekly Periodicity in the Covid-19 Pandemic. medRxiv.
VENKATESH, A. and EDIRAPPULI, S. (2020). Social distancing in covid-19: what are the mental health implications? BMJ 369.
ZHANG, H. and SINGER, B. H. (2010). Recursive Partitioning and Applications. Springer Science & Business Media.
ZHANG, B., HENG, S., MACKAY, E. J. and YE, T. (2021). Bridging preference-based instrumental variable studies and cluster-randomized encouragement experiments: study design, noncompliance, and average cluster effect ratio. Biometrics.
| []
|
[
"Learning quantum many-body systems from a few copies",
"Learning quantum many-body systems from a few copies"
]
| [
"Cambyse Rouzé \nDepartment of Mathematics\nTechnische Universität München\n85748GarchingGermany\n",
"Daniel Stilck França \nQMATH\nDepartment of Mathematical Sciences\nUniversity of Copenhagen\nDenmark\n\nUniv Lyon\nENS Lyon\nUCBL\nCNRS\nF-69342Lyon Cedex 07Inria, LIPFrance\n"
]
| [
"Department of Mathematics\nTechnische Universität München\n85748GarchingGermany",
"QMATH\nDepartment of Mathematical Sciences\nUniversity of Copenhagen\nDenmark",
"Univ Lyon\nENS Lyon\nUCBL\nCNRS\nF-69342Lyon Cedex 07Inria, LIPFrance"
]
| []
| Estimating physical properties of quantum states from measurements is one of the most fundamental tasks in quantum science. In this work, we identify conditions on states under which it is possible to infer the expectation values of all quasi-local observables of a state from a number of copies that scales polylogarithmically with the system's size and polynomially on the locality of the target observables. We show that this constitutes a provable exponential improvement in the number of copies over state-of-the-art tomography protocols. We achieve our results by combining the maximum entropy method with tools from the emerging fields of classical shadows and quantum optimal transport. The latter allows us to fine-tune the error made in estimating the expectation value of an observable in terms of how local it is and how well we approximate the expectation value of a fixed set of few-body observables. We conjecture that our condition holds for all states exhibiting some form of decay of correlations and establish it for several subsets thereof. These include widely studied classes of states such as one-dimensional thermal and high-temperature Gibbs states of local commuting Hamiltonians on arbitrary hypergraphs or outputs of shallow circuits. Moreover, we show improvements of the maximum entropy method beyond the sample complexity that are of independent interest. These include identifying regimes in which it is possible to perform the postprocessing efficiently as well as novel bounds on the condition number of covariance matrices of many-body states. arXiv:2107.03333v3 [quant-ph] 6 Apr 2023 | null | [
"https://export.arxiv.org/pdf/2107.03333v3.pdf"
]
| 235,755,103 | 2107.03333 | 5c5a2ec3e686ec2edab1b0c76b76aacff8ab010f |
Learning quantum many-body systems from a few copies
Cambyse Rouzé
Department of Mathematics
Technische Universität München
85748GarchingGermany
Daniel Stilck França
QMATH
Department of Mathematical Sciences
University of Copenhagen
Denmark
Univ Lyon
ENS Lyon
UCBL
CNRS
F-69342Lyon Cedex 07Inria, LIPFrance
Learning quantum many-body systems from a few copies
(Dated: April 7, 2023)
Estimating physical properties of quantum states from measurements is one of the most fundamental tasks in quantum science. In this work, we identify conditions on states under which it is possible to infer the expectation values of all quasi-local observables of a state from a number of copies that scales polylogarithmically with the system's size and polynomially on the locality of the target observables. We show that this constitutes a provable exponential improvement in the number of copies over state-of-the-art tomography protocols. We achieve our results by combining the maximum entropy method with tools from the emerging fields of classical shadows and quantum optimal transport. The latter allows us to fine-tune the error made in estimating the expectation value of an observable in terms of how local it is and how well we approximate the expectation value of a fixed set of few-body observables. We conjecture that our condition holds for all states exhibiting some form of decay of correlations and establish it for several subsets thereof. These include widely studied classes of states such as one-dimensional thermal and high-temperature Gibbs states of local commuting Hamiltonians on arbitrary hypergraphs or outputs of shallow circuits. Moreover, we show improvements of the maximum entropy method beyond the sample complexity that are of independent interest. These include identifying regimes in which it is possible to perform the postprocessing efficiently as well as novel bounds on the condition number of covariance matrices of many-body states. arXiv:2107.03333v3 [quant-ph] 6 Apr 2023
I. INTRODUCTION
The subject of quantum tomography has as its goal devising methods for efficiently obtaining a classical description of a quantum system from access to experimental data. However, all tomographic methods for general quantum states inevitably require resources that scale exponentially in the size of the system [1,2], be it in terms of the number of samples required or the post-processing needed to perform the task.
Fortunately, most of the physically relevant quantum systems can be described in terms of a (quasi)-local structure. These range from that of a local interaction Hamiltonian corresponding to a finite temperature Gibbs state to that of a shallow quantum circuit. Hence, locality is a physically motivated requirement that brings the number of parameters describing the system to a tractable number. Effective tomographic procedures should be able to incorporate this information. And, indeed, departing from physically motivated assumptions, many protocols in the literature achieve a good recovery guarantee in trace distance from a number of copies that scales polynomially with system size [3][4][5][6][7][8]. Furthermore, in many cases one is interested in learning only physical properties of the state on which tomography is being performed. These are mostly encoded into the expectation values of quasi-local observables that often only depend on reduced density matrices of subregions of the system. By Helstrom's theorem, obtaining a good recovery guarantee in trace distance is equivalent to demanding that the expectation values of all bounded observables are close for the two states, a much larger class of observables than quasi-local ones.
It is in turn desirable to design tomographic procedures that can take advantage of the fact that we wish to only approximate quasi-local observables, instead of demanding a recovery in trace distance. And some methods in the literature take advantage of that. For instance, the overlapping tomography or classical shadows methods of [9][10][11][12] allow for approximately learning all k-local reduced density matrices of an n-qubit state up to error ε with failure probability δ using O(e^{ck} k log(nδ^{-1}) ε^{-2}) copies without imposing any assumptions on the underlying state.
This constitutes an exponential improvement in the system size compared to the previously mentioned many-body setting at the expense of an undesirable exponential dependency in the locality of the observables.
In light of the previous discussion, it is natural to ask the guiding question of our work: is it possible to devise a tomography protocol that has a sample complexity that is logarithmic in system size and polynomial in the locality of the observables we wish to estimate?
At first this might sound like a tall order: as we show in Section G by importing results of [13], even if we depart from the assumption that the underlying state we wish to learn is a high-temperature product state on n qubits, the number of samples required to obtain an estimate that is ε-close in trace distance to the target state scales like Ω(nε^{-2}). Thus, in order to obtain a sample complexity that is logarithmic in system size we cannot quantify closeness in trace distance and need to resort to more physically motivated distinguishability measures. Moreover, we show in Section G that even for product states the classical shadows protocol will fail to produce a good estimate for k-local observables if the number of samples is not exponential in k. We conclude that protocols like shadow tomography on their own cannot achieve our goal of a sample complexity that is polynomial in the locality of the underlying observables and need to be combined with other estimation methods in a nontrivial way.
In spite of these challenges, we provide an affirmative answer for the guiding question above for a large class of physically motivated states. We achieve this by combining two insights. First, we observe that recently introduced Wasserstein distances [14-19] are better suited than the trace distance to estimate by how much the expectation values of physically motivated observables can differ on two states. We introduce these distances and motivate this claim below. But in summary the Wasserstein distance quantifies how well we can distinguish states through observables whose expectation value does not change much when we apply a unitary acting only on a few qubits. By focusing on the Wasserstein distance instead of the trace distance we can bypass the Ω(nε^{-2}) lower bound we mentioned previously. Intuitively, this means that exponentially fewer samples are required to estimate all such local expectation values than arbitrary, global ones.
The second insight is to combine techniques from quantum optimal transport with the well-established maximum entropy method [20] and the classical shadows protocols in a novel way. In particular, we will demonstrate that so-called transportation cost inequalities [14][15][16][17][18][19] allow us to control how well we approximate the expectation value of k-local observables by how well we approximate certain observables that only act on a constant number of qubits. Thus, we only use the shadows protocol to estimate the expectation of many observables that are highly local, the regime in which classical shadows excels, and bypass the exponential scaling of only using shadows for such an estimation task. This way we obtain a provable exponential improvement over known methods of many-body tomography [3][4][5][6][7][8] that focus on the trace distance and recent shadow tomography or overlapping tomography techniques [9][10][11][12], as summarized in Table I. Examples where we obtain exponential improvements include thermal states of 1D systems, high-temperature thermal states of commuting Hamiltonians on arbitrary hypergraphs, and outputs of shallow circuits. Furthermore, based on results by [21][22][23], we conjecture that our results should even hold for any high-temperature Gibbs state. More ambitiously, we conjecture that our results can be extended to states exhibiting exponential decay of correlations. This would allow us to extend our findings to classes of states that are not known to be tractable classically, such as ground states of gapped Hamiltonians in higher dimensional lattices [24].
The main ingredients to obtain our improvements are so-called transportation cost (TC) inequalities [25]. They allow us to bound the difference of expectation values of Lipschitz observables, a concept we will review shortly, on two states by their relative entropy. Such inequalities constitute a powerful tool from the theory of optimal transport [26] and are traditionally used to prove sharp concentration inequalities [27, Chapter 3]. Moreover, they have been recently extended to quantum states [15,17,19]. By combining such inequalities with the maximum entropy principle, we are able to easily control the relative entropy between the states and, thus, the difference of expectation values of Lipschitz observables.
TABLE I. Summary of underlying assumptions and sample complexity of other approaches to perform tomography on quantum many-body states.
Our revisit of the maximum entropy principle is further motivated by recent breakthroughs in Hamiltonian learning [5,22], shadow tomography [9], the understanding of correlations and computational complexity of quantum Gibbs states [21,[28][29][30][31] and quantum functional inequalities [19,32] that shed new light on this seasoned technique.
Before we summarize our contributions in more detail, we first define and revise the main concepts required for our results, namely Lipschitz observables, transportation cost inequalities and the maximum entropy principle.
A. Lipschitz observables
In the classical setting, given a metric d on a sample space S, the regularity of a function f : S → R can be quantified by its Lipschitz constant [27, Chapter 3]
$$\|f\|_{\mathrm{Lip}} = \sup_{x\ne y\in S}\frac{|f(x)-f(y)|}{d(x,y)}. \qquad (1)$$
For instance, if we consider functions on the n-dimensional hypercube {−1, 1}^n endowed with the Hamming distance, the Lipschitz constant quantifies by how much a function can change per flipped spin. It should then be clear that physical quantities like average magnetization have a small Lipschitz constant. Some recent works [15,17] extended this notion to the noncommutative setting, but we will focus on the approach of [17] in the main text. This is justified by the fact that it is more intuitive and technically simpler. For the approach followed in [17], the Lipschitz constant of an observable on n qudits is defined as [33]
$$\|O\|_{Lip,\square} := \sqrt{n}\,\max_{1\le i\le n}\ \max_{\substack{\rho,\sigma\in\mathcal{D}_{d^n}\\ \mathrm{tr}_i[\rho]=\mathrm{tr}_i[\sigma]}}\mathrm{tr}\left[O(\rho-\sigma)\right], \qquad (2)$$
where $\mathcal{D}_{d^n}$ denotes the set of n-qudit states.
For instance, consider the observable
$$O = n^{-1}\sum_{i=1}^{n}\ \bigotimes_{j=i}^{i+k} Z_j, \qquad (3)$$
where for each site j, $Z_j$ denotes the Pauli observable Z acting on site j and we take addition modulo n. It is not difficult to see that $\|O\|_{Lip,\square} = 2k\,n^{-1/2}$, while $\|O\|_{\infty} = 1$. We refer to the discussion in Fig. 1 for another example.
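To see where the $2k\,n^{-1/2}$ scaling comes from, here is a short back-of-the-envelope derivation of a matching upper bound (our own, writing $T_i = \bigotimes_{j=i}^{i+k}Z_j$ for the i-th term; it gives $2(k+1)n^{-1/2}$, i.e. the value quoted above up to whether one counts k or k+1 sites per term). If $\mathrm{tr}_l[\rho]=\mathrm{tr}_l[\sigma]$, only the terms $T_i$ whose support contains site $l$ (at most $k+1$ of them) contribute to $\mathrm{tr}[O(\rho-\sigma)]$:
$$\bigl|\mathrm{tr}[O(\rho-\sigma)]\bigr| = n^{-1}\Bigl|\sum_{i:\,l\in\mathrm{supp}(T_i)}\mathrm{tr}[T_i(\rho-\sigma)]\Bigr| \le n^{-1}(k+1)\,\max_i\|T_i\|_\infty\,\|\rho-\sigma\|_{\mathrm{tr}} \le \frac{2(k+1)}{n},$$
$$\text{so that, by Eq. (2)},\qquad \|O\|_{Lip,\square}\le \sqrt{n}\cdot\frac{2(k+1)}{n}=\frac{2(k+1)}{\sqrt{n}}.$$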
Moreover, one can show that shallow local circuits or short-time local dynamics satisfying a Lieb-Robinson bound cannot substantially increase the Lipschitz constant of an observable when evolved in the Heisenberg picture. That is, if we have that Φ * t is the quantum channel that describes some local dynamics at time/depth t in the Heisenberg picture and it satisfies a Lieb-Robinson bound, then we have:
$$\|\Phi_t^{*}(O)\|_{Lip,\square} = O\bigl(e^{vt}\,\|O\|_{Lip,\square}\bigr),$$
where v denotes the Lieb-Robinson velocity. This result is discussed in more detail in Section B 1 of the supplemental material. Thus, averages over local observables and short-time evolutions thereof all belong to the class of observables that have a small Lipschitz constant when compared to generic observables. These facts justify our claim that quasi-local observables are Lipschitz. Once we are given a Lipschitz constant on observables or a set of quasi-local observables, we can define a Wasserstein-1 distance on states by duality [15,17]. The latter quantifies how well we can distinguish two states by their action on regular or local observables and is given by
$$W_1(\rho,\sigma) := \sup_{O:\ \|O\|_{Lip,\square}\le 1}\mathrm{tr}\left[O(\rho-\sigma)\right]. \qquad (4)$$
The definition (4) is in direct analogy with the variational definition of the trace distance, given by:
$$\|\rho-\sigma\|_{\mathrm{tr}} := \sup_{O:\ \|O\|_{\infty}\le 1}\mathrm{tr}\left[O(\rho-\sigma)\right]. \qquad (5)$$
Note, however, that the two quantities have different scalings. To illustrate this point, let us consider the observable in Eq. (3). If we measure the distance in trace distance, then we need that $\|\rho-\sigma\|_{\mathrm{tr}}\le\epsilon$ to ensure that σ approximates the expectation value of ρ up to ε on O. On the other hand, as $\|O\|_{Lip,\square}\le 2k\,n^{-1/2}$, the bound $W_1(\rho,\sigma)\le \epsilon\sqrt{n}/(2k)$ is sufficient to guarantee the same approximation. This difference in scaling is at the heart of our results, as we will see now.
B. Transportation cost inequalities
The paragraphs before motivated the idea that observables with a small Lipschitz constant capture quasilocal observables, and thus, that controlling the Wasserstein distance between two states gives rise to a more physically motivated distance measure than the trace distance. However, it is a priori not clear how to effectively control the Wasserstein distance between states, as it does not admit a closed formula in terms of eigenvalues like the trace distance.
In this work, we will achieve this by relating Wasserstein distances to the relative entropy between two states, D(ρ σ) := tr [ρ (log(ρ) − log(σ))], for σ of full-rank. This can be achieved through the notion of a transportation cost inequality: an n-qudit state σ is said to satisfy a transportation cost inequality with parameter α > 0 if the Wasserstein distance of σ to any other state ρ can be controlled by their relative entropy, i.e.
$$W_1(\rho,\sigma) \le \sqrt{\frac{D(\rho\|\sigma)}{2\alpha}} \qquad (6)$$
holds for all states ρ ∈ D d n . Such inequalities are particularly powerful whenever the constant α does not depend on the system size n or does so at most inverse polylogarithmically, and can be thought of as a strengthening of Pinsker's inequality.
Transportation cost inequalities are closely related to the notion of Gaussian concentration [15,17,34], i.e. that Lipschitz functions strongly concentrate around their mean. Establishing analogues of such concentration inequalities for quantum many-body systems has been a fruitful line of research in the last years and they are related to fundamental questions in statistical physics, see e.g. [21,[35][36][37][38]. Although we are certain that inequalities like Eq. (6) also shed new light on this matter, here we will focus on their application to learning a classical description of a state through maximum entropy methods. We refer to Table II for a summary of classes of states known to satisfy it, as discussed in more detail below. Unfortunately, for some important classes the inequalities are only known for the more technically involved variations of the Wasserstein distance, and we refer the reader to the supplemental material, Section B 2 for precise definitions.
Recent works have established transportation cost inequalities with α either constant or logarithmic in system size for several classes of Gibbs states of commuting Hamiltonians [17,19,32]. In summary, they are known to hold for local Hamiltonians on arbitrary hypergraphs at high enough temperature or in 1D. In this work we enlarge the class of examples by showing them for outputs of short-depth circuits in Sec. C 1. Note that Eq. (6) is trivial for pure states, as then the relative entropy between that state and any other is always +∞. Thus, we first find an appropriate full-rank approximation of the pure state for which the inequality holds, as we will discuss below.
C. Maximum-entropy methods
Let us now show how transportation cost inequalities can be combined with maximum entropy methods. Such methods start from the assumption that we are given a set of self-adjoint, linearly independent observables over an n-qudit system, $E_1,\ldots,E_m\in\mathcal{M}^{sa}_{d^n}$, with $\|E_i\|_\infty\le1$, a maximal inverse temperature β > 0 and the promise that the state we wish to learn can be expressed as:
$$\sigma \equiv \sigma(\lambda) = \frac{\exp\left(-\beta\sum_{i=1}^{m}\lambda_i E_i\right)}{Z(\lambda)}, \qquad (7)$$
where $\lambda\in\mathbb{R}^m$ with sup norm $\|\lambda\|_\infty\le1$ and $Z(\lambda)=\mathrm{tr}\left[\exp\left(-\beta\sum_{i=1}^{m}\lambda_iE_i\right)\right]$ is the partition function.
Denoting by e(λ) the vector with components e i (λ) = tr [E i σ(λ)], the crux of the maximum entropy method is that λ is the unique optimizer of
$$\min_{\substack{\mu\in\mathbb{R}^m\\ \|\mu\|_\infty\le1}}\ \log(Z(\mu)) + \beta\sum_{i=1}^{m}\mu_i\,e_i(\lambda), \qquad (8)$$
which gives us a convex variational principle to learn the state given e(λ). We refer to Sec. A for a discussion of the maximum entropy principle and its properties. Typical examples of observables E i are e.g. all 2−local Pauli observables corresponding to edges of a given graph. This models the situation in which we are guaranteed that the state is a thermal state of a Hamiltonian with a known locality structure. More generally, for most of the examples discussed here the E i are given as follows: we depart from a hypergraph G = (V, E) and assume that there is a maximum radius r 0 ∈ N such that, for any hyperedge A ∈ E, there exists a vertex v ∈ V such that the ball B(v, r 0 ) centered at v and of radius r 0 includes A. The E i are then given by a basis of the traceless matrices on each hyperedge A. This definition captures the notion of a local Hamiltonian w.r.t. the hypergraph.
Our framework also encompasses pure states after making an appropriate approximation. For instance, in this article we will also consider the outputs of shallow quantum circuits and believe our framework extends to unique ground states of gapped Hamiltonians, which are pure. Indeed, although it might not be a priori clear, we will show below how the outputs of constant-depth circuits are contained in the class of ground states of gapped, commuting Hamiltonians.
Suppose that $|\psi\rangle = U|0\rangle^{\otimes n}$ is the output of an unknown shallow circuit U of depth L with respect to a known interaction graph (V, E) on n vertices. That is,
$$U = \prod_{\ell\in[L]}\ \prod_{e\in E_\ell} U_{\ell,e},$$
where each $E_\ell\subset E$ is a subset of non-intersecting edges. Thus, in this setting the locality of the circuit is known, but the underlying unitary is not. As we then show in Theorem C.1, the state $|\psi\rangle$ is $\epsilon^2$-close in Wasserstein distance to the Gibbs state σ corresponding to the Hamiltonian with local terms $UZ_iU^\dagger$ at the inverse temperature $\beta = \log(\epsilon^{-1})$. By a simple light-cone argument we can bound the support of each $UZ_iU^\dagger$, since we know the underlying structure of the circuit. We then show in Thm. C.1 that it is indeed possible to efficiently learn the outputs of such circuits as long as the support of each time-evolved $Z_i$ is at most logarithmic in system size.
We see from Eq. (8) that the expectation values of the E i completely characterize the state σ(λ). But it is possible to obtain a more quantitative version of this statement through the following identity, also observed in [5]:
$$D(\sigma(\mu)\|\sigma(\lambda)) + D(\sigma(\lambda)\|\sigma(\mu)) = -\beta\,\langle\lambda-\mu\,|\,e(\lambda)-e(\mu)\rangle. \qquad (9)$$
In addition to showing that if e(λ) = e(µ) then σ(µ) = σ(λ), Eq. (9) implies that by controlling how well the local expectation values of one state approximate another, we can also easily control their relative entropies. In particular, if m = O(n) and $\|e(\mu)-e(\lambda)\|_1 = O(\epsilon n)$ for some ε > 0, we obtain from an application of Hölder's inequality that:
$$D(\sigma(\mu)\|\sigma(\lambda)) \le D(\sigma(\mu)\|\sigma(\lambda)) + D(\sigma(\lambda)\|\sigma(\mu)) = O(\beta\epsilon n). \qquad (10)$$
We refer to Section A for more details. Thus, if we can find a state that approximates the expectation value of each $E_i$ up to ε, we are guaranteed to have a $O(\beta\epsilon)$ relative entropy density. This observation is vital to ensure that the maximum entropy principle still yields a good estimate of the state even under some statistical noise in the vector of expectation values e(λ). Indeed, the variational principle of Eq. (8) would allow us to recover the state exactly if we had access to the exact values of e(λ). However, it turns out that solving Eq. (8) with some estimate $\hat e(\lambda)$ such that $\|\hat e(\lambda)-e(\lambda)\|_\infty\le\epsilon$ still yields a Gibbs state σ(µ) satisfying Eq. (10). The maximum entropy problem is a strictly convex optimization problem. Thus, it can be solved efficiently with access to the gradient of the target function. The gradient turns out to be proportional to e(λ)−e(µ), where µ is the current guess for the optimum. Although we will discuss the details of solving the problem later in Sec. A, in a nutshell the maximum entropy problem can be solved efficiently if it is possible to efficiently compute expectation values of the observables $E_i$ on the family of Gibbs states under consideration.
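To make this procedure concrete, here is a minimal self-contained sketch (our own toy example with a small commuting basis, not the implementation used for the numerics in Sec. I E) of the gradient-descent recovery, where the Gibbs expectation values are computed exactly by matrix exponentiation.

```python
# Toy maximum-entropy recovery for a small commuting (diagonal) Gibbs state.
# Illustration only; the basis choice and parameters are hypothetical.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n, beta = 6, 1.0
Z, I = np.diag([1.0, -1.0]), np.eye(2)

def kron_list(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Basis E_i: nearest-neighbour Z_i Z_{i+1} terms (commuting, so everything is diagonal).
E = [kron_list([Z if j in (i, i + 1) else I for j in range(n)]) for i in range(n - 1)]

def expectations(mu):
    rho = expm(-beta * sum(c * e for c, e in zip(mu, E)))
    rho /= np.trace(rho)
    return np.array([np.trace(rho @ e).real for e in E])

lam = rng.uniform(-1, 1, size=len(E))   # unknown parameters of the target state
e_target = expectations(lam)            # stands in for the (noisy) shadow estimates of e(lambda)

mu, eta = np.zeros(len(E)), 0.2         # step size: should be of order 1/U for the smoothness constant U
for _ in range(2000):
    grad = beta * (e_target - expectations(mu))   # gradient of the target in Eq. (8), cf. Eq. (A9)
    if np.linalg.norm(grad) < 1e-6 * np.sqrt(len(E)):
        break
    mu -= eta * grad

print("max parameter error:  ", np.max(np.abs(mu - lam)))
print("max expectation error:", np.max(np.abs(expectations(mu) - e_target)))
```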
D. Combining TC with the maximum entropy principle
Suppose now that each of the $E_i$ acts on at most $l_0$ qudits. Then, by using e.g. the method of classical shadows, we can estimate the expectation values of all $E_i$ up to ε with failure probability at most δ > 0 from $O(4^{l_0}\epsilon^{-2}\log(m\delta^{-1}))$ samples. From our discussion above, we see that this is enough to obtain a state σ(µ) satisfying Eq. (10). Further assuming that we have a TC with some constant α > 0 for σ(λ), we conclude that:
$$|\mathrm{tr}[O(\sigma(\lambda)-\sigma(\mu))]| \le \|O\|_{Lip}\, W_1(\sigma(\lambda),\sigma(\mu)) \le \|O\|_{Lip}\sqrt{\frac{D(\sigma(\mu)\|\sigma(\lambda))}{2\alpha}} = O\left(\sqrt{\beta\epsilon n}\,\|O\|_{Lip}\right).$$
Finally, recall that for sums of k-local operators on a 2D lattice like in Fig. 1, where we have $k = L^2$, the Lipschitz constant satisfies $\|O\|_{Lip} = O(\sqrt{n})$ and we require a precision of $O(\tilde\epsilon\, n/k)$ on such expectation values to obtain a relative error of $\tilde\epsilon > 0$. Putting all these elements together, we conclude that by setting $\epsilon = \tilde\epsilon^{2}/(\beta k^{2})$ we arrive at
$$|\mathrm{tr}[O(\sigma(\lambda)-\sigma(\mu))]| = O(\tilde\epsilon\, k^{-1} n), \qquad (11)$$
which constitutes a relative error for the expectation value. In particular, we see that the sample complexity required to obtain this error was
$$O\left(4^{l_0}\,k^{4}\,\beta^{2}\,\tilde\epsilon^{-4}\log(m)\right). \qquad (12)$$
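For concreteness, the arithmetic behind Eqs. (11)–(12) (our own spelling-out of the steps above, suppressing the TC constant α) is:
$$\sqrt{\beta\epsilon n}\,\|O\|_{Lip}\ \overset{\epsilon=\tilde\epsilon^{2}/(\beta k^{2})}{=}\ \sqrt{\tfrac{\tilde\epsilon^{2}}{k^{2}}\,n}\cdot O(\sqrt n)\ =\ O\!\left(\tilde\epsilon\,\frac{n}{k}\right), \qquad 4^{l_0}\epsilon^{-2}\log m\ =\ 4^{l_0}\,\beta^{2}k^{4}\,\tilde\epsilon^{-4}\log m .$$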
We then obtain:
Theorem I.1 (Learning Gibbs states). Let σ(λ) be a Gibbs state as defined in Eq. (7) and such that each $E_i$ acts on at most $l_0$ qudits. Moreover, suppose that σ(λ) satisfies a TC inequality with α depending at most inverse-logarithmically on the system size. Then with probability of success 1 − δ we can obtain a state σ(µ) such that for all observables $O\in\mathcal{M}_{d^n}$
$$|\mathrm{tr}[O(\sigma(\lambda)-\sigma(\mu))]| \le O\left(\epsilon\sqrt{n}\,\|O\|_{Lip}\right) \qquad (13)$$
from $O\left(4^{l_0}\beta^{2}\,\mathrm{poly}(\epsilon^{-1},\log(m\delta^{-1}))\right)$ samples of σ(λ). Moreover, if it is possible to compute the expectation values of the $E_i$ on σ(τ) for $\|\tau\|_\infty\le1$, then the postprocessing can also be done in polynomial time.
We once again stress that the recovery guarantee in Eq. (13) suffices to give good relative approximations for the expectation value of quasilocal observables. Furthermore, if we did not resort to the Wasserstein distance but rather the trace distance, as in known results for the tomography of many-body states [3-5, 7, 39], the sample complexity would be exponentially worse, as we prove in Sec. G. More precisely, any algorithm that estimates Gibbs states on a lattice at inverse temperatures $\beta = \Omega(n^{-1/2})$ up to trace distance ε requires $\Omega(n\epsilon^{-2})$ samples. Thus, even for states whose inverse temperature goes to 0 as the system size increases, a focus on the trace distance instead of the Wasserstein distance implies an exponentially worse sample complexity.
Theorem I.1 also provides an exponential improvement over shadow techniques in the locality of the observables, as we argue in Sec. G. However, unlike our methods, shadow techniques do not need to make any assumptions on the underlying states. Thus, we see that Theorem I.1 opens up the possibility of highly efficient characterization of quantum states and provably exponentially better sample complexities when compared with recovery in trace distance.
We also remark that it is possible to improve the scaling in accuracy in Eq. (12) from $\tilde\epsilon^{-4}$ to the expected $\tilde\epsilon^{-2}$. In order to do that, it is important to bound the condition number of the Hessian of the log-partition function, as we explain in the methods.
E. Numerical results
We will now compare the performance of our method to the classical shadow protocol [9] to estimate the average of a local observable on a Gibbs state. To ensure that we can still generate samples for a high number of qubits, we will consider the following family of commuting Gibbs states in 1-D:
$$H(\lambda) = -\sum_{k=0}^{n/2-1} S^{2k}\left(\lambda_k\, X_0X_1 + \lambda_{n+k}\, X_0X_1Y_2Y_3\right)S^{-2k}, \qquad (14)$$
where S is the shift operator, $\lambda\in B_\infty(0,1)$ and we assume n is even.

TABLE II. All the estimates are up to polylog(n) factors. We refer to Sec. B 3 for proofs of the TC inequalities used and to Sec. C for how to combine them with maximum entropy methods. In Section D we explain how to obtain the sample complexity by combining Thm. A.1 with strong convexity bounds. For the postprocessing we refer to Section E. The case of shallow circuits is discussed in more detail in Sec. C 1. By lightcone $l_0$ we mean the size of the largest lightcone of each qubit in the circuit. [...], log(n), $\epsilon^{-1}$) samples, an exponentially worse dependency in L. Even for moderate values of L, say L = 5, this can lead to a $10^7$-factor saving in sample complexity and gives an exponential speedup for L = poly(log(n)). Other many-body methods have a poly(L, n, $\epsilon^{-1}$) [3-5, 7, 39] scaling, which in turn is exponentially worse in the system size.

We will then estimate the expectation value of the observable:
$$O = \sum_{k=0}^{n/2-1} S^{2k}\, X_0X_1Y_2Y_3X_4X_5Y_6Y_7\, S^{-2k}. \qquad (15)$$
The results for one particular choice of Gibbs state in this class are shown in Fig. 2. It shows that even for observables of moderate locality like the one in Eq. (15), shadows are outperformed by maximum entropy methods by orders of magnitude. Also note that the estimation error decays like $\sim 1/\sqrt{s}$, where s is the number of samples, showing how the quality of the recovery is essentially independent of the system's size.
We also remark that, to obtain these results, we estimated the expectation values of the $X_iX_{i+1}$ terms by measuring in the X basis on each qubit, and those of the XXYY terms by measuring in a sequence of XXYY bases followed by the same basis shifted by 2.
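As an illustration of this post-processing step, a minimal sketch (our own, with a hypothetical array of single-shot outcomes rather than data from the actual simulation) that turns X-basis measurement records into estimates of the two-body XX correlators could look as follows.

```python
# Minimal sketch: estimate <X_{2k} X_{2k+1}> from single-shot X-basis measurements.
# `outcomes` is a hypothetical (num_shots, n) array of +/-1 eigenvalues of X on each qubit.
import numpy as np

def estimate_xx_terms(outcomes: np.ndarray) -> np.ndarray:
    """Return the empirical means of X_{2k}X_{2k+1} for k = 0, ..., n/2 - 1."""
    num_shots, n = outcomes.shape
    assert n % 2 == 0, "this toy model assumes an even number of qubits"
    # Since the single-qubit X measurements commute, the product of the two
    # outcomes on qubits 2k and 2k+1 is an unbiased estimator of the correlator.
    products = outcomes[:, 0::2] * outcomes[:, 1::2]   # shape (num_shots, n // 2)
    return products.mean(axis=0)

# Example with fake data: 10^3 shots on 8 qubits drawn uniformly at random.
rng = np.random.default_rng(1)
fake = rng.choice([-1, 1], size=(1000, 8))
print(estimate_xx_terms(fake))   # close to 0 for uniformly random outcomes
```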
II. CONCLUSION
In this article we have demonstrated that ideas from quantum optimal transport yield provable exponential improvements in the sample complexity required to learn a quantum state when compared to state of the art methods. More precisely, we showcased how the interplay between maximum entropy methods and the Wasserstein distance, which is mediated by a transportation cost inequality, allows for fine-tuning the complexity of observables whose expectation value we wish to estimate and the number of samples required for that. Through our techniques we essentially settled most questions related to how efficiently it is possible to learn a commuting Gibbs state and significantly advanced our understanding of general Gibbs states. With the impressive growth in the size of quantum devices available in the lab over the last years, we believe that the polylogarithmic-in-system-size sample complexities obtained here will come in handy to calibrate and characterize systems containing thousands or millions of qubits.
We believe that the framework and philosophy we began to develop here will find applications in other areas of quantum information theory. Indeed, although a bound in trace distance is the gold standard and one of the most widely accepted and used measures of distance between quantum states in quantum information and computation, we argued that in many physically relevant settings demanding a trace distance bound might be too strong. More importantly, replacing the trace distance by a Wasserstein distance bound can lead to exponential gains in sample complexity, as in this article. Thus, we believe that this approach is likely to lead to substantial gains and improvements also in other areas like quantum machine learning, process tomography or quantum many-body problems.

FIG. 2. We have set the number of qubits to 100, β = 1.1 and the $\lambda_i$ uniformly at random between 0.5 and 0.9. The x-axis denotes the logarithm of the number of samples in base 10 and the y-axis the error in absolute value with respect to the true value. We ran each protocol 300 times on the same Gibbs state to see how the estimate varied. We see that even with $10^4$ samples the shadows method still has errors of order $10^0$ at the 75th percentile, whereas the maximum entropy method already yields good estimates when the number of samples is of order $10^2$, showcasing that maximum entropy methods outperform classical shadows by orders of magnitude for observables of moderate locality like those in Eq. (15).
Some of the outstanding open questions raised by this article are establishing that a suitable notion of exponential decay of correlations in general implies a transportation cost inequality and showing that TC holds for a larger class of systems. We believe that our framework should also extend to ground states of gapped Hamiltonians in 1D, however such a statement would still require us to refine our bounds. The results presented here also make us conjecture that any high temperature Gibbs state satisfies a TC inequality, even for long range interactions, which would make our techniques applicable to essentially all physically relevant models at high temperature.
Moreover, it would also be interesting to investigate other applications of Gaussian concentration in many-body physics [21,[35][36][37][38] from the angle of transportation cost inequalities.
III. METHODS
A. Summary of the maximum entropy procedure and contributions

Now that we have discussed how our results yield better sample complexities for some classes of states, we discuss the maximum entropy algorithm in more detail and comment on how our results equip it with better performance guarantees.
As the maximum entropy principle in Eq. (A2) corresponds to solving a convex optimization problem, it should come as no surprise that promises on the strong convexity of the underlying functions being optimized can be leveraged to give improved performance guarantees [40]. For the specific case of the maximum entropy principle, strong convexity guarantees translate to bounds of the form
$$L\,I \le \nabla^{2}\log(Z(\mu)) \le U\,I \qquad (16)$$
for constants L, U > 0 and all $\mu\in B_\infty(0,1)$. We refer to Sec. A of the supplemental material for a thorough discussion. We note that in [5] the authors show such results in a more general setting, although with U, L polynomial in n, which is not sufficient for our purposes. For us it will be important to ensure that the condition number of the log-partition function is at most polylogarithmic in system size (i.e. $L^{-1}U = \tilde O(1)$). If we define the function $f:\mu\mapsto\log(Z(\mu))+\beta\sum_{i=1}^{m}\mu_i e_i(\lambda)$, then $\nabla f = \beta(e(\lambda)-e(\mu))$. It then follows from standard properties of strongly convex functions that:
$$\|\lambda - \mu\|_2 \le L^{-1}\beta\,\|e(\lambda)-e(\mu)\|_2 .$$
That is, whenever the expectation values are close, the underlying parameters must be close as well. In this case, we have from $\|e(\lambda)-e(\mu)\|_2 = O(\epsilon\sqrt{m})$ that:
$$-\beta\,\langle\lambda-\mu\,|\,e(\lambda)-e(\mu)\rangle \le \beta\,\|\lambda-\mu\|_2\,\|e(\lambda)-e(\mu)\|_2 = O\left(L^{-1}\beta^{2}\epsilon^{2} m\right). \qquad (17)$$
As we will see later, such strong convexity bounds can indeed be established for physically relevant classes of states; in particular, we show the following. Proposition III.1. For each $\mu\in B_\infty(0,1)$, let σ(µ) be a Gibbs state at inverse temperature β corresponding to the commuting Hamiltonian $H(\mu)=\sum_j\mu_jE_j$ on the hypergraph G = (V, E), where $\mathrm{tr}[E_iE_j]=0$ for all $i\ne j$ and each local operator $E_j$ is traceless on its support. Then for β such that the states σ(µ) satisfy exponential decay of correlations, the Hessian of the log-partition function is bounded by
$$O(\beta^{2})\,I \;\ge\; \nabla^{2}\log Z(\mu) \;\ge\; \Omega\left(\beta^{2}e^{-c\beta}\right)I, \qquad (18)$$
in particular $L^{-1} = O\left(e^{c\beta}\beta^{-2}\right)$,
for some constant c > 0.
After the completion of the first version of this work, in [22, Corollary 4.4] Tang et al. proved strong convexity bounds for high-temperature, (not necessarily geometrically) local Hamiltonians. More precisely, for $\beta = O(k^{-8})$, where k is the maximal number of qudits each term acts on, they show that $L^{-1} \le 2\beta^{-2}$. Although the result in Eq. (18) has the advantage of giving an estimate at any temperature, we see that also for noncommuting Gibbs states strong convexity holds at high enough temperatures.
The flowchart in Figure 3 gives the general scheme behind the maximum entropy method. Besides the exponential improvements in sample complexity laid out in Table II, we also provide structural improvements which we elaborate on while also explaining the general scheme:
a. Input: The input consists of m linearly independent operators $E_i$ of operator norm at most 1, some upper bound β > 0, a precision parameter ε > 0 and a step size η. Moreover, we are given the promise that the state of interest satisfies (7). Although we will be mostly concerned with the case in which the observables are local, we show the convergence of the algorithm in general in Sec. A. The step size should be picked as $\eta = O(U^{-1})$ with U satisfying (16), as explained in Sec. A 1.
b. Require: We assume that we have access to copies of σ(λ) and that we can perform measurements to estimate the expectation values of the observables $E_i$ up to precision ε > 0. For most of the examples considered here, this will mostly require implementing simple, few-qudit measurements.
c. Output: The output is in the form of a vector of parameters µ of a Gibbs state σ(µ) as in Eq. (7). Note that unlike [5], our goal is not to estimate the vector of parameters λ, but rather to obtain an approximation of the state satisfying $\sigma(\lambda)\approx\sigma(\mu)$. Here we will focus on quantifying the output's quality in relative entropy. More precisely, the output of the algorithm is guaranteed to satisfy $D(\sigma(\mu)\|\sigma(\lambda)) = O(\epsilon n)$.
d. Step 1: In this step, we estimate the expectation values of each observable $E_i$ on the state σ(λ) up to an error ε. The resources to be optimized here are the number of samples of σ(λ) we require and the complexity to implement them. Using shadow tomography or Pauli grouping methods [9, 10, 41] we can do so requiring $O(4^{r_0}\epsilon^{-2}\,\mathrm{polylog}(m))$ samples and Pauli or 1-qubit measurements, where $r_0$ is the maximum number of qubits the $E_i$ act on. This is discussed in more detail in Sec. C.
e. Step 2: The maximum entropy problem in Eq. (8) can be solved with gradient descent, as it corresponds to a strictly convex optimization problem [40]. At this step we simply initialize the algorithm to start at the maximally mixed state.
f. Step 3: It turns out that the gradient of the maximum entropy target function at $\mu_t$ is proportional to $e(\mu_t)-e(\lambda)$. Thus, to implement an iteration of gradient descent, it is necessary to compute $e(\mu_t)$, as we assumed we obtained an approximation of e(λ) in Step 1. Moreover, it is imperative to ensure that the algorithm also converges with access to approximations to e(µ) and e(λ). This is because most algorithms to compute $e(\mu_t)$ only provide approximate values [28-30, 42]. In addition, they usually have a poor scaling in the accuracy [29], making it necessary to show that the process converges with rough approximations to $e(\mu_t)$ and e(λ). Here we show that it is indeed possible to perform this step with only approximate computations of expectation values. This allows us to identify classes of states for which the postprocessing can be done efficiently. These results are discussed in more detail in Sec. A 2.
g. Convergence loop: Now that we have seen how to compute one iteration of gradient descent, the next natural question is how many iterations are required to reach the stopping criterium. As this is a strongly convex problem, the convergence speed depends on the eigenvalues of the Hessian of the function being optimized [40, Section 9.1.2]. For max-entropy, this corresponds to bounding the eigenvalues of a generalized covariance matrix. In [5] the authors already showed such bounds for local Hamiltonians implying the convergence of the algorithm in a number of steps depending polynomially in m and logarithmically on the tolerance for a fixed β. Here we improve their bound in several directions. First, we show that the algorithm converges after a polynomial in m number of iterations for arbitrary E i , albeit with a polynomial dependence on the error, as discussed in Sec. A 2. We then specialize to certain classes of states to obtain various improvements. For high-temperature, commuting Hamiltonians we provide a complete picture and show that the condition number of the Hessian is constant in Prop. III.1. This implies that gradient descent converges in a number of iterations that scales logarithmically in system size and error.
h. Stopping condition and recovery guarantees: the stopping condition,
$$\|e(\lambda) - e(\mu_t)\|_2 \le \epsilon\sqrt{m},$$
can be immediately converted to a relative entropy bound between the target state and the current iterate by the identity (9). This justifies its choice as a stopping criterion.
Since we already discussed the sample complexity of the maximum entropy method, let us now discuss some of its computational aspects. There are two quantities that govern the complexity of the algorithm: how many iterations we need to perform until we converge and how expensive each iteration is. As the maximum entropy problem is strongly convex, one can show that $O(UL^{-1}\log(n\epsilon^{-1}))$ iterations suffice to converge. Here again U, L are bounds on the Hessian as in Eq. (16). Nevertheless, we also show how to bypass requiring such bounds in Sec. A 1 and obtain that the maximum entropy algorithm converges after $O(m\epsilon^{-2})$ iterations without any locality assumptions on the $E_i$ or strong convexity guarantees. That is, the number of iterations is at most polynomial in m.
Let us now discuss the cost of implementing each iteration of the algorithm on a classical computer. This boils down to estimating e(µ t ) for the current iterate, which can be achieved in various ways. In the worst case, it is possible to just compute the matrix exponential and the expectation values directly, which yields a complexity of O(d 3n m). However, for many of the classes considered here it is possible to do this computation in polynomial time. For instance, in [29] the authors show that for high-temperature Gibbs states it is possible to approximate the partition function efficiently. Thus, for the case of hightemperature Gibbs states, not necessarily commuting ones, we can do the postprocessing efficiently. It is also worth mentioning tensor network techniques to estimate e(µ t ). As we only require computing the expectation value of local observables, recent works show that it is possible to find tensor network states of constant bond dimension that approximate all expectation values of a given locality well [43][44][45]. From such a representation it is then possible to compute e(µ t ) efficiently in the 1D case by contracting the underlying tensor network. Unfortunately, however, in higher dimensions the contraction still takes exponential time. Table II provides a summary of the complexity of the postprocessing for various classes.
FIG. 3. Flowchart for general maximum entropy algorithms. Input: set of operators $E_1,\ldots,E_m$, β > 0, error tolerance ε > 0, step size η. Target state $\sigma(\lambda)\propto\exp(-\beta\sum_i\lambda_iE_i)$. Require: ability to prepare σ(λ). Output: estimate µ of λ. Step 1: obtain an estimate of $e_i(\lambda)=\mathrm{tr}[\sigma(\lambda)E_i]$ up to ε. Step 2: initialize $\mu_0 = 0$. Step 3: estimate $e(\mu_t)$. If $\|e(\mu_t)-e(\lambda)\|\le\epsilon\sqrt{m}$, output $\mu_t$; otherwise set $\mu_{t+1}=\mu_t-\eta\,(e(\lambda)-e(\mu_t))$, increment $t\to t+1$ and return to Step 3.

It is also worth considering the complexity of the postprocessing with access to a quantum computer, especially for commuting Hamiltonians. As all high-temperature Gibbs states satisfy exponential decay of correlations, the results of [46] imply that high-temperature Gibbs states can be prepared with a circuit of depth logarithmic in system size. Thus, by using the same
method we used to estimate e(λ) we can also estimate $e(\mu_t)$ by using the copies provided by the quantum computer. The complexity of the postprocessing for shadows is linear in system size. Thus, with access to a quantum computer we can perform the post-processing for each iteration in time $\tilde O(m\epsilon^{-2})$. As in this case we showed that the number of iterations is $\tilde O(1)$, we conclude that we can perform the postprocessing in time $\tilde O(m\epsilon^{-2})$. That is, for this class of systems our results give an arguably complete picture regarding the postprocessing, as it can be performed in a time comparable to writing down the vector of parameters, up to polylogarithmic factors. Furthermore, given that the underlying Gibbs states are known to satisfy TC and Prop. III.1 gives essentially optimal bounds on the covariance matrices, we believe that the present work essentially settles the question of how efficiently we can learn such Hamiltonians and corresponding Gibbs states. We discuss this in more detail in Sec. F. Finally, an example of our bounds is illustrated in Fig. 4, where we show that the number of samples required to estimate a local observable to relative precision is essentially system-size independent.

FIG. 4. Error in estimating a Lipschitz observable after performing the maximum entropy reconstruction method. The underlying state is a classical 1D Gibbs state with randomly chosen nearest-neighbor interactions and inverse temperature β = 1. We estimated all the $Z_iZ_{i+1}$ expectation values of the original state from $10^3$ samples. We then computed the upper bound on the trace distance predicted by Eq. (9) and Pinsker's inequality and compared it to the actual discrepancy for a Lipschitz observable on the reconstructed and actual state. The Lipschitz observable was chosen as $\sum_i n^{-1}\,UZ_iZ_{i+2}U^\dagger$, where we picked U as a depth-3 quantum circuit. We observe that the error incurred is essentially independent of system size, and we get good predictions even when the number of samples is smaller than it.
The research of CR has been supported by project QTraj (ANR-20-CE40-0024-01) of the French National Research Agency (ANR) and by a Junior Researcher START Fellowship from the MCQST. DSF and CR are grateful to Richard Kueng, Fernando Brandão and Giacomo De Palma for interesting discussions.
SUPPLEMENTAL MATERIAL
This is the supplemental material to "Learning many-body states from very few copies". We will start in Sec. A with a review of the basic properties of the maximum entropy principle to learn quantum states. This is followed by a discussion of Lipschitz constants, Wasserstein distances and transportation cost inequalities in Sec. B. After that, in Sec. C we discuss more explicitly the interplay between the maximum entropy method and transportation cost inequalities. We then briefly discuss scenarios in which the postprocessing required for the maximum entropy method can be performed efficiently in Sec. E. In Sec. F we discuss a class of examples where we show that all technical results required to obtain the strongest guarantees of our work hold, that is, Gibbs states of commuting Hamiltonians at high enough temperature and 1D commuting Hamiltonians. Finally, in Sec. G we discuss lower bounds on the sample complexity of both shadow protocols and many-body algorithms that focus on a recovery in trace distance.
We start by setting some basic notations. Throughout this article, we denote by M k the algebra of k × k matrices on C k , whereas M sa k denotes the subspace of self-adjoint matrices. The set of quantum states over C k is denoted by D k . Typically, k will be taken as d n for n qudit systems. The trace on M k is denoted by tr. Given two quantum states ρ, σ, we denote by S(ρ) = − tr [ρ log(ρ)] the von Neumann entropy of ρ, and by D (ρ σ) the relative entropy between ρ and σ, i.e. D(ρ σ) = tr [ ρ (log(ρ) − log(σ))] whenever the support of ρ is contained in that of σ and +∞ otherwise. The trace distance is denoted by ρ − σ tr := tr [|ρ − σ|] and the operator norm of an observable by O ∞ . Scalar products are denoted by ·|· . Moreover, we denote the p norm of vectors by · p , and for
x ∈ R m and r ∈ R, B p (x, r) denotes the ball of radius r in p norm around x. The identity matrix is denoted by I. The adjoint of an operator A is denoted by A † and that of a channel Φ with respect to the trace inner product by Φ * . For a hypergraph G = (V, E) we will denote the distance between subsets of vertices induced by the hypergraph by dist.
Appendix A: Maximum entropy principle for quantum Gibbs states

One of the main aspects of this work concerns the effectiveness of the maximum entropy method for the tomography of quantum Gibbs states in various settings and regimes. Thus, we start by recalling some basic properties of the maximum entropy method. Our starting assumption is that the target state is well-described by a quantum Gibbs state with respect to a known set of operators and that we are given an upper bound on the inverse temperature:
Definition A.1 (Gibbs state with respect to observables). Given a set of observables E = {E i } m i=1 , E 1 , . . . , E m ∈ M sa d n being linearly independent with E i ∞ ≤ 1, we call a state σ ∈ D d n a Gibbs state at inverse temperature β > 0 if there exists a vector λ ∈ R m with λ ∞ ≤ 1 such that:
$$\sigma = \exp\left(-\beta\sum_{i=1}^{m}\lambda_iE_i\right)\Big/Z(\lambda), \qquad\text{where}\quad Z(\lambda)=\mathrm{tr}\left[\exp\left(-\beta\sum_{i=1}^{m}\lambda_iE_i\right)\right] \qquad (A1)$$
denotes the partition function. In what follows, we will denote σ by σ(λ) and write $\sum_i\lambda_iE_i = H(\lambda)$, where the dependence of σ(λ) on β is implicitly assumed.
We are mostly interested in the regime where $m \ll d^{n}$. Then the above condition can be interpreted as imposing that the matrix log(σ) is sparse with respect to a known basis E. A canonical example for such states are Gibbs states of local Hamiltonians on a lattice, for which m = O(n) and the observables $E_i$ are taken as tensor products of Pauli matrices acting on neighboring sites. But we could also consider a basis consisting of quasi-local operators or some subspace of Pauli strings.
Next, we review some basic facts about quantum Gibbs states. One of their main properties is that they satisfy a well-known maximum entropy principle [20]. This allows us to simultaneously show that the expectation values of the observables E completely characterize the state σ(λ) and further provides us with a variational principle to learn a description from which we can infer an approximation of other expectation values. Let us start with the standard formulation of the maximum entropy principle:
Proposition A.1 (Maximum entropy principle). Let σ(λ) ∈ D d n be a quantum Gibbs state (A1) with respect to the basis E at inverse temperature β and introduce e i (λ) := tr [σ(λ)E i ] for i = 1, . . . , m. Then σ(λ) is the unique optimizer of the maximum entropy problem:
$$\text{maximize}_{\rho\in\mathcal{D}_{d^n}}\quad S(\rho) \qquad\text{subject to}\quad \mathrm{tr}[E_i\rho]=e_i(\lambda)\ \text{for all } i=1,\ldots,m. \qquad (A2)$$
Moreover, σ(λ) optimizes:
$$\min_{\substack{\mu\in\mathbb{R}^m\\ \|\mu\|_\infty\le1}}\ \log(Z(\mu)) + \beta\sum_{i=1}^{m}\mu_i\,e_i(\lambda). \qquad (A3)$$
Proof. The proof is quite standard, but we include it for completeness. Note that for any state ρ ≠ σ(λ) that is a feasible point of Eq. (A2) we have that:
$$S(\sigma(\lambda)) - S(\rho) = D(\rho\|\sigma(\lambda)) + \mathrm{tr}\left[(\rho-\sigma(\lambda))\log(\sigma(\lambda))\right] = D(\rho\|\sigma(\lambda)) - \beta\sum_{i=1}^{m}\lambda_i\,\mathrm{tr}\left[E_i(\rho-\sigma(\lambda))\right] = D(\rho\|\sigma(\lambda)) > 0,$$
where we have used the fact that tr [E i (ρ − σ(λ))] = 0 for all feasible points and that the relative entropy between two different states is strictly positive. This shows that σ(λ) is the unique solution of (A2). Eq. (A3) is nothing but the dual program of Eq. (A2).
Eq. (A2) above gives a variational principle to find a quantum Gibbs state corresponding to certain expectation values. As it is well-known, one can use gradient descent to solve the problem in Eq. (A3), as it is a strongly convex problem. Various recent works have discussed learning of Gibbs states [5,47,48] and it is certainly not a new idea to do so through maximum entropy methods. Nevertheless, we will discuss how to perform the postprocessing in more detail, as some recent results allow us to give this algorithm stronger performance guarantees. Finally, it should be said that although we draw inspiration from [5], our main goal will be to learn a set of parameters µ ∈ R m such that the Gibbs states σ(µ) and σ(λ) are approximately the same on sufficiently regular observables while optimizing the sample complexity. This is in contrast to the goal of [5], which was to learn the vector of parameters λ. Learning λ corresponds to a stronger requirement, in the sense that if the vectors of parameters are close, then the underlying states are also close, as made precise in the following Prop. A.2.
One of the facts that we are going to often exploit is that it is possible to efficiently estimate the relative entropy between two Gibbs states σ(λ) and σ(µ) given the parameters λ, µ and the expectation values of observables in E. This also yields an efficiently computable bound on the trace distance. Indeed, as observed in [5], we have that:
Proposition A.2. Let σ(µ), σ(λ) ∈ D d n be Gibbs states with respect to a set of observables E at
inverse temperature β. Denote $e(\lambda) = (\mathrm{tr}[\sigma(\lambda)E_i])_i \in\mathbb{R}^m$. Then
$$\|\sigma(\mu)-\sigma(\lambda)\|_{\mathrm{tr}}^{2} \le D(\sigma(\mu)\|\sigma(\lambda)) + D(\sigma(\lambda)\|\sigma(\mu)) = -\beta\,\langle\lambda-\mu\,|\,e(\lambda)-e(\mu)\rangle. \qquad (A4)$$
Proof. The equality in Eq. (A4) follows from a simple manipulation. Indeed:
$$D(\sigma(\mu)\|\sigma(\lambda)) + D(\sigma(\lambda)\|\sigma(\mu)) = \beta\sum_{i=1}^{m}(\lambda_i-\mu_i)\,\mathrm{tr}\left[(\sigma(\mu)-\sigma(\lambda))E_i\right].$$
The bound on the trace distance then follows by applying Pinsker's inequality.
The statement of Proposition A.2 allows us to obtain quantitative estimates on how well a given Gibbs state approximates another one in terms of the expectation values of known observables. In particular, a simple application of Hölder's inequality shows that if two Gibbs states are such that $|\mathrm{tr}[E_i(\sigma(\mu)-\sigma(\lambda))]|\le\epsilon$, then the sum of their relative entropies is at most
$$\beta\,|\langle\lambda-\mu\,|\,e(\lambda)-e(\mu)\rangle| \le \beta\,\|\lambda-\mu\|_\infty\, m\epsilon \le 2m\epsilon\beta, \qquad (A5)$$
where the outer bound arises from our assumption that $\|\lambda\|_\infty,\|\mu\|_\infty\le1$. Moreover, it is straightforward to relate the difference of the target function in Eq. (A3) evaluated at two vectors to the difference of relative entropies between the target state and their corresponding Gibbs states:
Lemma A.1. Let σ(λ) ∈ D d n be a Gibbs state with respect to a set of observables E at inverse temperature β > 0 and define, for any µ ∈ R m ,
$$f(\mu) := \log(Z(\mu)) + \beta\sum_{i=1}^{m}e_i(\lambda)\,\mu_i. \qquad (A6)$$
Then for any other two vectors $\mu,\xi\in\mathbb{R}^m$ with $\|\mu\|_\infty,\|\xi\|_\infty\le1$:
$$f(\mu) - f(\xi) = D(\sigma(\lambda)\|\sigma(\mu)) - D(\sigma(\lambda)\|\sigma(\xi)).$$
Proof. The proof follows from straightforward manipulations.
Thus, we see that a decrease of the target function f when solving the max entropy problem is directly connected to the decrease of the relative entropy between the target state and the current iterate. We will later use this to show the convergence of gradient descent for solving the max entropy problem with arbitrary E. However, before that we discuss how the convergence of the state σ(µ) to σ(λ) is related to the convergence of the parameters µ to λ.
Strong convexity and convergence guarantees
The maximum entropy problem (A2) being a convex problem, it should come as no surprise that properties of the Hessian of the function being optimized are vital to understanding its complexity and stability. For the maximum entropy problem, the Hessian at a point is given by a generalized covariance matrix corresponding to the underlying Gibbs state. As the results of [5] showcase, the eigenvalues of such covariance matrices govern both the stability of Eq. (A3) with respect to µ and the convergence of gradient descent to solve it. To see why, we recall some basic notions of optimization of convex functions and refer to [40] for an overview.
Definition A.2 (Strong convexity). Let C ⊂ R m be a convex set. A twice differentiable function f : C → R is called strongly convex with parameters U, L > 0 if we have for all x ∈ C that:
$$U\,I \ge \nabla^{2}f(x) \ge L\,I.$$
The optimization of strongly convex functions is well understood. Indeed, we have: Proposition A.3. Let $C\subset\mathbb{R}^m$ be a convex set and $f:C\to\mathbb{R}$ be strongly convex with parameters L, U as in the definition above. Then, for all ε > 0, the optimal value $\alpha := \min_{x\in C} f(x)$ is achieved up to error ε by the gradient descent algorithm initiated at $x_0\in C$ with step size $U^{-1}$ after S steps for
$$S \le \frac{U}{L}\,\log\left(\frac{f(x_0)-\alpha}{\epsilon}\right). \qquad (A7)$$
Moreover, the gradient norm satisfies $\|\nabla f(x_k)\|_2^2 \le \delta$ after $S_\nabla$ steps with
$$S_\nabla \le \frac{U}{L}\,\log\left(\frac{2L\,(f(x_0)-\alpha)}{\delta}\right).$$
Finally, we have for all µ, λ ∈ C that:
$$\|\mu-\lambda\|_2 \le L^{-1}\,\|\nabla f(\mu)-\nabla f(\lambda)\|_2. \qquad (A8)$$
Proof. These are all standard results which can be found e.g. in [40, Section 9].
To see the relevance of these results for the maximum entropy problem, we recall the following Lemma:
Lemma A.2. Let C = B ∞ (0, 1) ⊂ R m , let σ(λ) ∈ D d n be a Gibbs state with respect to a set of operators E at inverse temperature β and define f : C → R as in Eq. (A6). Then:
$$(\nabla f(\mu))_i = \beta\,\mathrm{tr}\left[(\sigma(\lambda)-\sigma(\mu))E_i\right] \qquad (A9)$$
and
$$\left(\nabla^{2}f(\mu)\right)_{ij} = \frac{\beta^{2}}{2}\,\mathrm{tr}\left[\{E_j,\Phi_{H(\mu)}(E_i)\}\,\sigma(\mu)\right] - \beta^{2}\,e_i(\mu)\,e_j(\mu), \qquad (A10)$$
with
$$\Phi_{H(\mu)}(E_i) = \int_{-\infty}^{+\infty}\nu_\beta(t)\,e^{-iH(\mu)t}\,E_i\,e^{iH(\mu)t}\,dt,$$
where ν β (t) is a probability density function whose Fourier transform is given by:
$$\hat\nu_\beta(\omega) = \frac{2\tanh(\beta\omega/2)}{\beta\omega}.$$
Proof. The quantum belief propagation theorem [49] states that:
$$\frac{\partial}{\partial\lambda_i}\,e^{-\beta H(\lambda)} = -\frac{\beta}{2}\left\{e^{-\beta H(\lambda)},\,\Phi_{H(\lambda)}(E_i)\right\}.$$
The claim then follows from a simple computation.
Thus, we see that in order to compute the gradient of the target function f for the maximum entropy problem, we simply need to compute the expectation values of observables E on the current state and on the target state. Moreover, the Hessian is given by a generalized covariance matrix of the quantum Gibbs state. That this should indeed be interpreted as a covariance matrix is most easily seen by considering commuting Hamiltonians. Then indeed we have:
$$\left(\nabla^{2}f(\mu)\right)_{ij} = \beta^{2}\left[\mathrm{tr}[\sigma(\mu)E_iE_j] - e_i(\mu)\,e_j(\mu)\right].$$
For any Gibbs state it holds that:
Proposition A.4. For all µ ∈ B ∞ (0, 1) ⊂ R m , inverse temperature β > 0 and set of operators E of cardinality m, we have:
$$\nabla^{2}f(\mu) \le 2\beta^{2}m\,I.$$
Proof. Note that
$$\left|\mathrm{tr}\left[E_i\,\sigma(\mu)\,e^{iH(\mu)t}E_j\,e^{-iH(\mu)t}\right]\right| \le 1,$$
by Hölder's inequality, the submultiplicativity and unitary invariance of the operator norm and the fact that $\|E_i\|_\infty\le1$. Similarly, we have that $|e_i(\mu)|,|e_j(\mu)|\le1$. Thus, by Lemma A.2, $|(\nabla^{2}f(\mu))_{ij}|\le2\beta^{2}$. As $\nabla^{2}f(\mu)$ is an m × m matrix, it follows from Gershgorin's circle theorem that $\nabla^{2}f(\mu)\le2\beta^{2}m\,I$.
The proof above also showcases how exponential decay of correlations can be used to sharpen estimates on the maximal eigenvalue of ∇ 2 f , since in that case ∇ 2 f (µ) ij will have exponentially decaying entries. We discuss this in more detail when we focus on many-body states, for which we also consider the more challenging question of lower bounds.
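For commuting Hamiltonians the covariance expression above is simple to evaluate numerically; the following small sketch (our own toy check on a short ZZ chain with arbitrarily chosen couplings, not from the paper) computes the Hessian and compares its spectrum with the bound of Proposition A.4.

```python
# Toy check of the commuting-case Hessian (generalized covariance matrix).
import numpy as np
from scipy.linalg import expm

n, beta = 4, 0.8
Z, I = np.diag([1.0, -1.0]), np.eye(2)

def kron_list(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

E = [kron_list([Z if j in (i, i + 1) else I for j in range(n)]) for i in range(n - 1)]
mu = np.array([0.3, -0.5, 0.7])          # arbitrary couplings in B_inf(0, 1)

rho = expm(-beta * sum(c * e for c, e in zip(mu, E)))
rho /= np.trace(rho)
e = np.array([np.trace(rho @ Ei).real for Ei in E])

# For commuting E_i the Hessian of log Z(mu) is beta^2 * (tr[rho E_i E_j] - e_i e_j).
hess = np.array([[beta**2 * (np.trace(rho @ Ei @ Ej).real - ei * ej)
                  for Ej, ej in zip(E, e)] for Ei, ei in zip(E, e)])
eigs = np.linalg.eigvalsh(hess)
print("eigenvalues of the Hessian:", eigs)          # all positive: strong convexity
print("crude upper bound 2*beta^2*m:", 2 * beta**2 * len(E))
```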
Convergence with approximate gradient computation and expectation values
Proposition A.3 already establishes the convergence of gradient descent whenever we can compute the gradient exactly and have a bound on L. Moreover, we see from Lemma A.2 that, in order to compute the gradient of the function f above, it suffices to estimate local expectation values. Moreover, it is a standard result that gradient descent is strictly decreasing for strictly convex problems [40].
However, in many settings it is only possible or desirable to compute the expectation values of quantum Gibbs states approximately. Moreover, the expectation values of the target state are only known up to statistical fluctuations. It is then not too difficult to see that gradient descent still offers the convergence guarantees if we only approximately compute the gradient. We state the exact convergence guarantees and precision requirement for completeness.
Theorem A.1 (Computational complexity and convergence guarantees). Let $\sigma(\lambda)\in\mathcal{D}_{d^n}$ be a quantum Gibbs state at inverse temperature β with respect to a set of operators E and let $C_E$ be the computational cost of computing $e'(\mu)$ satisfying $\|e'(\mu)-e(\mu)\|_2\le\delta_\mu$ for $\mu\in B_\infty(0,1)$ and $\delta_\mu > 0$. Moreover, assume that we are given an estimate $e'(\lambda)$ of e(λ) satisfying
$$\|e(\lambda)-e'(\lambda)\|_\infty \le \epsilon \qquad (A11)$$
and that the partition function is strongly convex with parameters U, L. Then gradient descent starting at µ = 0 with step size $\frac{1}{cU}$ and input data $e'(\lambda)$ converges to a state $\sigma(\mu^*)$ satisfying:
$$\|\sigma(\lambda)-\sigma(\mu^*)\|_{\mathrm{tr}}^{2} \le D(\sigma(\lambda)\|\sigma(\mu^*)) + D(\sigma(\mu^*)\|\sigma(\lambda)) = O\left(\beta\delta_\mu\min\{\sqrt{m},\,\beta L^{-1}\delta_\mu\} + \beta\epsilon\,\min\{1,\,L^{-1}\beta\epsilon\}\,m\right)$$
in
$$O\left(\min\left\{U\,C_E\,\beta^{-2}\,n\log(d)\,\delta_\mu^{-2},\ \frac{U\,C_E}{L}\,\log(n\epsilon^{-1})\right\}\right)$$
time.
We will prove this theorem at the end of this section, as we will first need some auxiliary statements. The reader familiar with basic concepts from convex optimization should feel comfortable to skip them. Proposition A.5 (Convergence of gradient descent with constant relative precision). Let $\sigma(\lambda)\in\mathcal{D}_{d^n}$ be a quantum Gibbs state at inverse temperature β with respect to a set of operators E, and for a Gibbs state σ(µ) let $z(\mu)\in\mathbb{R}^m$ be a vector such that
$$\|z(\mu)-\beta\,(e(\lambda)-e(\mu))\|_2 \le \frac{\beta}{4c}\,\|e(\mu)-e(\lambda)\|_2 \qquad (A12)$$
for some c > 10. Then we have that:
$$D\left(\sigma(\lambda)\,\Big\|\,\sigma\!\left(\mu-\tfrac{z(\mu)}{cU}\right)\right) - D(\sigma(\lambda)\|\sigma(\mu)) \le -\frac{9\,\beta^{2}\,\|e(\mu)-e(\lambda)\|_2^{2}}{10\,c\,U}, \qquad (A13)$$
where U is a uniform bound on the operator norm of the Hessian of the function f defined in Eq. (A6).
Proof. From a Taylor expansion and strong convexity we have for any two points µ, ξ that:
$$f(\xi) \le f(\mu) + \langle\nabla f(\mu)\,|\,(\xi-\mu)\rangle + \frac{U}{2}\,\|\xi-\mu\|_2^{2}.$$
Note that $\nabla f(\mu) = \beta\,(e(\lambda)-e(\mu))$ by Eq. (A9). Setting $\xi = \mu - \frac{z}{cU} = \mu + \frac{1}{cU}\left(-\nabla f(\mu) + \nabla f(\mu) - z\right)$ we obtain:
$$f\left(\mu-\frac{z}{cU}\right) \le f(\mu) - \frac{1}{cU}\|\nabla f\|_2^{2} + \frac{1}{cU}\langle\nabla f(\mu)\,|\,\nabla f(\mu)-z\rangle + \frac{1}{2c^{2}U}\left\|-\nabla f(\mu)+\nabla f(\mu)-z\right\|_2^{2}$$
$$\le f(\mu) - \frac{1}{cU}\|\nabla f\|_2^{2} + \frac{1}{cU}\|\nabla f(\mu)\|_2\,\|\nabla f(\mu)-z\|_2 + \frac{1}{2c^{2}U}\left(\|\nabla f(\mu)\|_2 + \|\nabla f(\mu)-z\|_2\right)^{2},$$
where in the last step we used the Cauchy-Schwarz inequality. By our assumption in Eq. (A12) for z ≡ z(µ) we have:
$$f(\mu) - \frac{1}{cU}\|\nabla f\|_2^{2} + \frac{1}{cU}\|\nabla f(\mu)\|_2\,\|\nabla f(\mu)-z\|_2 + \frac{1}{2c^{2}U}\left(\|\nabla f(\mu)\|_2+\|\nabla f(\mu)-z\|_2\right)^{2}$$
$$\le f(\mu) - \frac{1}{cU}\|\nabla f\|_2^{2} + \frac{1}{4c^{2}U}\|\nabla f(\mu)\|_2^{2} + \frac{(1+(4c)^{-1})^{2}}{2c^{2}U}\|\nabla f(\mu)\|_2^{2} = f(\mu) - \frac{1}{cU}\left(1-\frac{1}{4c}-\frac{(1+(4c)^{-1})^{2}}{2c}\right)\|\nabla f(\mu)\|_2^{2}$$
and it can be readily checked that $\frac{1}{4c} + \frac{(1+(4c)^{-1})^{2}}{2c} \le \frac{1}{10}$ for c ≥ 10. To conclude the proof, note that by Lemma A.1:
$$D\left(\sigma(\lambda)\,\Big\|\,\sigma\!\left(\mu-\tfrac{z}{cU}\right)\right) - D(\sigma(\lambda)\|\sigma(\mu)) = f\left(\mu-\tfrac{z}{cU}\right) - f(\mu),$$
and insert $\|\nabla f(\mu)\|_2^{2} = \beta^{2}\,\|e(\mu)-e(\lambda)\|_2^{2}$.
Thus, we see that we make constant progress in the gradient descent algorithm even if we only compute the derivative up to constant relative precision. We now show how to pick our stopping criterion based on approximate computations of the gradient which ensures convergence in polynomial time.
Proposition A.6. Let $\sigma(\lambda)\in\mathcal{D}_{d^n}$ be a quantum Gibbs state at inverse temperature β with respect to a set of operators E. Suppose that at each time step t of gradient descent we compute an estimate $e'(\mu_t)$ of $e(\mu_t)$ that satisfies $\|e'(\mu_t)-e(\mu_t)\|_2\le\delta_\mu$, and set the stopping criterion to be:
$$\|e(\lambda) - e'(\mu^*)\|_2 < (4c+1)\,\delta_\mu$$
for some constant c > 10. Then gradient descent starting at µ = 0 with update rule $\mu_{t+1} := \mu_t - \frac{\beta\,(e(\lambda)-e'(\mu_t))}{cU}$ will converge to a state $\sigma(\mu^*)$ satisfying:
$$\|\sigma(\lambda)-\sigma(\mu^*)\|_{\mathrm{tr}}^{2} \le D(\sigma(\lambda)\|\sigma(\mu^*)) + D(\sigma(\mu^*)\|\sigma(\lambda)) \le 2(4c+1)\,\beta\,\delta_\mu\sqrt{m}$$
after at most $O(U\,\beta^{-2}\,n\log(d)\,\delta_\mu^{-2})$ iterations.
Proof. First, we show that the relative precision bound required for Proposition A.5 holds under these assumptions. By our choice of the stopping criterion, at each time step we have the property that, while we did not stop,
$$\|e(\lambda)-e(\mu_t)\|_2 = \|e(\mu_t)-e'(\mu_t)+e'(\mu_t)-e(\lambda)\|_2 \ge \|e'(\mu_t)-e(\lambda)\|_2 - \|e(\mu_t)-e'(\mu_t)\|_2 \ge (4c+1)\delta_\mu - \delta_\mu = 4c\,\delta_\mu$$
by the reverse triangle inequality. As we assumed that $\|e'(\mu_t)-e(\mu_t)\|_2\le\delta_\mu$, it follows that $(4c)^{-1}\|e(\lambda)-e(\mu_t)\|_2 \ge \|(e'(\mu_t)-e(\lambda)) - (e(\mu_t)-e(\lambda))\|_2$. Multiplying the inequality by β, we see that the conditions of Proposition A.5 are satisfied for $z(\mu_t) := \beta\,(e(\lambda)-e'(\mu_t))$. Let us now show the convergence. By our choice of initial point, we have that:
$$D(\sigma(\lambda)\|\sigma(0)) \le n\log(d).$$
Now, suppose that we did not stop before T iterations. It follows from a telescopic sum argument and Proposition A.5 that:
$$D(\sigma(\lambda)\|\sigma(\mu_T)) \le n\log(d) - T\,\frac{9\,\beta^{2}\,(4c+1)^{2}\,\delta_\mu^{2}}{10\,c\,U},$$
since $\|e'(\mu_t)-e(\lambda)\|_2 \ge (4c+1)\delta_\mu$ at all iterations because we did not halt. As the relative entropy is positive, it follows that $T = O(\beta^{-2}\,U\,c^{-1}\,n\log(d)\,\delta_\mu^{-2})$ before the stopping criterion is met. The recovery guarantee whenever the stopping criterion is met follows from Proposition A.2, the Cauchy-Schwarz inequality and the equivalence of norms $\|\lambda-\mu\|_2 \le \sqrt{m}\,\|\lambda-\mu\|_\infty \le 2\sqrt{m}$.
Since we proved in Proposition A.4 that $U = O(\beta^{2}m)$, it follows that the number of iterations is O(nm). Thus, we see that having a lower bound on the Hessian is not necessary to ensure convergence, but it can speed it up exponentially: Proposition A.7 (Exponential convergence of gradient descent with approximate gradients). In the same setting as Proposition A.5 we have:
$$f\left(\mu-\frac{z(\mu)}{cU}\right) - f(\lambda) \le \left(1-\frac{18\,L}{10\,c\,U}\right)\left(f(\mu)-f(\lambda)\right). \qquad (A14)$$
In particular, gradient descent with approximate gradient computations starting at $\mu_0 = 0$ converges after $O\left(\frac{U}{L}\log(n\epsilon^{-1})\right)$ iterations to µ such that $f(\mu)-f(\lambda)\le\epsilon$.
Proof. For any strongly convex function f and points µ, ξ ∈ C we have that:
$$f(\xi) \ge f(\mu) + \langle\nabla f(\mu)\,|\,(\xi-\mu)\rangle + \frac{L}{2}\,\|\mu-\xi\|_2^{2}.$$
As explained in [40, Chapter 9], the R.H.S. of the equation above is a convex quadratic function of ξ for µ fixed. One can then easily show that its minimum is achieved at $\bar\xi = \mu - \frac{1}{L}\nabla f(\mu)$. From this we obtain:
$$f(\lambda) - f(\mu) \ge -\frac{1}{2L}\,\|\nabla f(\mu)\|_2^{2} = -\frac{\beta^{2}\,\|e(\mu)-e(\lambda)\|_2^{2}}{2L},$$
where the last identity follows from Eq. (A9). By subtracting f (λ) from both sides of the inequality (A13) in Proposition A.5 and rearranging the terms we have that:
$$f\left(\mu-\frac{z(\mu)}{cU}\right) - f(\lambda) \le f(\mu) - f(\lambda) - \frac{9\,\beta^{2}\,\|e(\mu)-e(\lambda)\|_2^{2}}{10\,c\,U} \le f(\mu)-f(\lambda) - \frac{18\,L}{10\,c\,U}\left(f(\mu)-f(\lambda)\right) = \left(1-\frac{18\,L}{10\,c\,U}\right)\left(f(\mu)-f(\lambda)\right).$$
This yields the claim in Eq. (A14). To obtain the second claim, note that applying Eq. (A14) iteratively yields that after k iterations we have, for $\mu_k = \mu_{k-1} - \frac{z(\mu_{k-1})}{cU}$,
$$f(\mu_k) - f(\lambda) \le \left(1-\frac{18\,L}{10\,c\,U}\right)^{k}\left(f(0)-f(\lambda)\right).$$
By our choice of initial point and Lemma A.1 we have that $f(0)-f(\lambda) = O(n)$, which yields the claim after solving for k and noting that $-\log\left(1-\frac{18L}{10cU}\right) = \Omega\left(\frac{L}{U}\right)$. Remark A.1 (Comparison to mirror descent). It is also worth noting that the convergence guarantees of Proposition A.6 and the update rules of gradient descent are very similar to those of mirror descent with the von Neumann entropy as potential, another algorithm used for learning quantum states [50][51][52][53]. In this context, mirror descent would use a similar update rule. However, instead of computing the whole gradient, i.e. all expectation values of the basis, for one iteration, mirror descent just requires us to find one i such that $|e_i(\lambda)-e_i(\mu)|\ge\delta$ and updates the Hamiltonian in the direction i. This implies that the algorithm can be run online while we still estimate some other $e_i$, but we will not analyse this variation in more detail here.
Finally, we assumed so far that we knew the expectation values of the target state, e(λ), exactly. However, it follows straightforwardly from Proposition A.6 that knowing each expectation value up to an error ε is sufficient to ensure that the additional error due to statistical fluctuations is at most of order εm. More precisely, if we have that $\|e(\lambda)-e'(\lambda)\|_\infty\le\epsilon$ for some ε > 0, then any Gibbs state $\sigma(\mu^*)$ satisfying $\|e(\mu^*)-e'(\lambda)\|_2\le\delta$ satisfies:
$$D(\sigma(\lambda)\|\sigma(\mu^*)) + D(\sigma(\mu^*)\|\sigma(\lambda)) \le 2\beta\delta\sqrt{m} + 2\beta\epsilon m$$
by Proposition A.2 and a Cauchy-Schwarz inequality. With these statements at hand we are finally ready to prove Thm. A.1.
Proof of Thm. A.1. We will show in Propositions A.6, A.7 that under the conditions outlined above, the maximum entropy problem will converge to a µ * that satisfies:
$$\|e'(\lambda) - e(\mu^*)\|_2 \le (4c+1)\,\delta_\mu.$$
Without making any assumptions on L we can then bound
$$D(\sigma(\lambda)\|\sigma(\mu^*)) + D(\sigma(\mu^*)\|\sigma(\lambda)) = \beta\,|\langle\lambda-\mu^*\,|\,e(\lambda)-e(\mu^*)\rangle| \le \beta\left(|\langle\lambda-\mu^*\,|\,e(\lambda)-e'(\lambda)\rangle| + |\langle\lambda-\mu^*\,|\,e'(\lambda)-e(\mu^*)\rangle|\right) \le 2\beta\left((4c+1)\,\delta_\mu\sqrt{m} + \epsilon m\right)$$
by Hölder's inequality and our assumptions on $e'(\lambda)$. Let us now discuss how strong convexity can improve these estimates. First note that by strong convexity and Cauchy-Schwarz we have:
$$D(\sigma(\lambda)\|\sigma(\mu^*)) + D(\sigma(\mu^*)\|\sigma(\lambda)) = \beta\,|\langle\lambda-\mu^*\,|\,e(\lambda)-e(\mu^*)\rangle| \le \beta\,\|e(\lambda)-e(\mu^*)\|_2\,\|\lambda-\mu^*\|_2 \le L^{-1}\beta^{2}\,\|e(\lambda)-e(\mu^*)\|_2^{2} \le L^{-1}\beta^{2}\left(\|e'(\lambda)-e(\mu^*)\|_2 + \|e(\lambda)-e'(\lambda)\|_2\right)^{2},$$
which yields the claim.
In short, we see that we can perform the recovery by simply computing the gradient approximately. In particular, as already hinted at in [5], this implies that recent methods developed to approximately compute the partition function of high-temperature quantum Gibbs states can be used to perform the postprocessing in polynomial time [21,[28][29][30]. This and other methods to compute the gradient are discussed in more detail in Sec. E. Furthermore, it should be noted that usually $L = \Omega(\beta^{2})$ in the high temperature regime, making the bound independent of β for such states. We refer to Sec. D for a summary of the cases for which bounds on L are known.
Appendix B: Lipschitz constants and transportation cost inequalities
In this section, we identify conditions under which it is possible to estimate all expectation values of k-local observables up to an error ε by measuring $O(\mathrm{poly}(k,\log(n),\epsilon^{-1}))$ copies of the state, where n is the system size, which constitutes an exponential improvement in some regimes. To obtain this result, we combine the maximum entropy method introduced in Section A with techniques from quantum optimal transport. In order to formalize and prove the result claimed, we resort to transportation cost inequalities and the notion of Lipschitz constants of observables, which we now introduce.
Lipschitz constants and Wasserstein metrics
Transportation cost inequalities, introduced by Talagrand in the seminal paper [25], constitute one of the strongest tools available to show concentration of measure inequalities. In the quantum setting, their study was initiated in [15,17,54,55]. Here we are going to show how they can also be used in the context of quantum tomography. On a high level, a transportation cost inequality for a state σ quantifies by how much the relative entropy with respect to another state ρ is a good proxy to estimate to what extent the expectation values of sufficiently regular observables differ on the states. As maximum entropy methods allow for a straightforward control of the convergence of the learning in relative entropy (cf. Section A), they can be combined to derive strong recovery guarantees. But first we need to define what we mean by a regular observable.
We start by a short discussion of Lipschitz constants and the Wasserstein-1 distance. To obtain an intuitive grasp of these concepts, one way is to first recall the variational formulation of the trace distance of two quantum states σ, ρ:
‖ρ − σ‖_tr = sup_{P=P†, ‖P‖_∞ ≤ 1} tr[P(ρ − σ)] .
Seeing probability distributions as diagonal quantum states, we recover the variational formulation of the total variation distance by noting that we may restrict to diagonal operators P . Thus, the total variation distance quantifies by how much the expectation values of arbitrary bounded functions can differ under the two distributions. However, in many situations we are not interested in expectation values of arbitrary bounded observables, but rather observables that are sufficiently regular. E.g., most observables of physical interest are (quasi)-local. Thus, it is natural to look for distance measures between quantum states that capture the notion that two states do not differ by much when restricting to expectation values of sufficiently regular observables. These concerns are particularly relevant in the context of tomography protocols, as they should be designed to efficiently obtain a state that reflects the expectation values of extensive observables of the system. As we will see, one of the ways of ensuring that the sample complexity of the tomography algorithm reflects the regularity of the observables we wish to recover is through demanding a good recovery in the Wasserstein distance of order 1 [15,17].
In the classical setting [27, Chapter 3], one way to define a Wasserstein-1 distance between two probability distributions is by replacing the optimization over all bounded diagonal observables by that over those that are sufficiently regular: given a metric d on a sample space Ω, we define the Lipschitz constant of a function f : Ω → R to be:
‖f‖_Lip := sup_{x≠y ∈ Ω} |f(x) − f(y)| / d(x, y) .
Denoting the Wasserstein-1 distance by W 1 , it is given for two probability measures p, q on Ω by
W 1 (p, q) := sup f : f Lip≤1 |E p (f ) − E q (f )| . (B1)
That is, this metric quantifies by how much the expectation values of sufficiently regular functions can vary under p and q, in clear analogy to the variational formulation of the trace distance. We refer to [26,27] for other interpretations and formulations of this metric.
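To fix intuition for Eq. (B1) in the simplest classical setting, the short sketch below evaluates the Lipschitz constant of a function and the Wasserstein-1 distance between two distributions on the 3-bit hypercube with the Hamming metric; the distance is computed through the Kantorovich primal linear program, which coincides with the supremum in Eq. (B1) by duality. The example distributions and the use of scipy's linear-programming routine are illustrative choices only.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def lipschitz_constant(f, points, dist):
    """||f||_Lip = sup_{x != y} |f(x) - f(y)| / d(x, y), by brute force."""
    return max(abs(f[i] - f[j]) / dist(points[i], points[j])
               for i in range(len(points)) for j in range(len(points)) if i != j)

def wasserstein1(p, q, points, dist):
    """Kantorovich primal LP; equals sup_{||f||_Lip <= 1} |E_p f - E_q f| by duality."""
    n = len(points)
    cost = np.array([[dist(points[i], points[j]) for j in range(n)] for i in range(n)]).ravel()
    A_eq, b_eq = [], []
    for i in range(n):                      # row marginals equal p
        row = np.zeros(n * n); row[i * n:(i + 1) * n] = 1.0
        A_eq.append(row); b_eq.append(p[i])
    for j in range(n):                      # column marginals equal q
        col = np.zeros(n * n); col[j::n] = 1.0
        A_eq.append(col); b_eq.append(q[j])
    res = linprog(cost, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
    return res.fun

# toy example: uniform distribution vs. point mass on the 3-bit hypercube
pts = list(itertools.product([0, 1], repeat=3))
p = np.full(len(pts), 1 / len(pts))                          # uniform
q = np.array([1.0 if x == (0, 0, 0) else 0.0 for x in pts])  # point mass on 000
f = np.array([sum(x) for x in pts])                          # Hamming weight, ||f||_Lip = 1
print(lipschitz_constant(f, pts, hamming))   # 1.0
print(wasserstein1(p, q, pts, hamming))      # 1.5 = average Hamming weight
```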
a. Quantum Hamming Wasserstein distance
It is not immediately clear how to generalize these concepts to noncommutative spaces. There are by now several definitions of transport metrics for quantum states [15,[17][18][19]. As already noted in the main text, de Palma et al. defined the Lipschitz constant of an observable O ∈ M d n as [17]:
O Lip, = √ n max 1≤i≤n max ρ,σ∈D d n tri[ρ]=tri[σ] tr [O(ρ − σ)] .(B2)
That is, the Lipschitz constant quantifies the amount by which the expectation value of an observable can change when evaluated on two states that only differ on one site. This is in analogy with the Lipschitz constants induced by the Hamming distance on the hypercube, so we denote it with the subscript □. Note that in our definition we added the √n factor, which will turn out to be convenient later. Armed with this definition, we can immediately obtain an analogous definition of the Wasserstein distance in Eq. (B1) for two states ρ, σ:
W 1, (ρ, , σ) := sup O=O † , O Lip, ≤1 tr [O(ρ − σ)] .(B3)
The authors of [17] also put forth the following equivalent expression for the norm:
W_{1,□}(ρ, σ) = (1/(2√n)) min{ Σ_{i=1}^n ‖X^{(i)}‖_1 : ρ − σ = Σ_{i=1}^n X^{(i)}, X^{(i)} ∈ M^{sa}_{d^n}, tr_i[X^{(i)}] = 0 } . (B4)
It is clear that ‖O‖_{Lip,□} ≤ 2√n ‖O‖_∞ always holds by Hölder's inequality, but it can be the case that ‖O‖_{Lip,□} ≪ √n ‖O‖_∞. For instance, consider the n-qubit observables O_1 = Σ_{i=1}^n I_i ⊗ (⊗_{j≠i} Z_j) and O_2 = Σ_{i=1}^n Z_i. Clearly, ‖O_1‖_∞ = ‖O_2‖_∞ = n.
On the other hand, by considering the states ρ = |0⟩⟨0|^{⊗n} and σ = |1⟩⟨1| ⊗ |0⟩⟨0|^{⊗(n−1)}, we see that ‖O_1‖_{Lip,□} ≥ √n (2n − 2) whereas ‖O_2‖_{Lip,□} = 2√n. More generally, it is not difficult to see that if O = Σ_{i=1}^n O_i with ‖O_i‖_∞ ≤ 1, then
‖O‖_{Lip,□} ≤ 2√n max_{1≤j≤n} Σ_i |supp(O_i) ∩ {j}| .
That is, the maximal number of intersections of the supports of the O_i on one qubit. From these examples we see that for local observables, the ratio of the operator norm and the Lipschitz constant reflects the locality of the observable.
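As a small sanity check of the locality bound quoted above, the sketch below evaluates the operator norms of O_1 and O_2 for a handful of qubits together with the upper bound 2√n max_j Σ_i |supp(O_i) ∩ {j}|; the exact Lipschitz constants involve an optimization over pairs of states and are not computed here, so this is only meant to illustrate the gap between the two quantities.

```python
import numpy as np
from functools import reduce

Z = np.diag([1.0, -1.0])
I2 = np.eye(2)

def pauli_z_string(n, support):
    """Tensor product of Z on the qubits in `support` and identity elsewhere."""
    return reduce(np.kron, [Z if q in support else I2 for q in range(n)])

def lipschitz_upper_bound(n, supports):
    """2*sqrt(n) * max_j (# of local terms whose support contains qubit j)."""
    overlap = [sum(1 for s in supports if j in s) for j in range(n)]
    return 2 * np.sqrt(n) * max(overlap)

n = 6
O1_supports = [set(range(n)) - {i} for i in range(n)]   # each term acts on all but one qubit
O2_supports = [{i} for i in range(n)]                   # single-site terms
O1 = sum(pauli_z_string(n, s) for s in O1_supports)
O2 = sum(pauli_z_string(n, s) for s in O2_supports)

print("||O1||_inf =", np.linalg.norm(O1, 2), " Lipschitz bound:", lipschitz_upper_bound(n, O1_supports))
print("||O2||_inf =", np.linalg.norm(O2, 2), " Lipschitz bound:", lipschitz_upper_bound(n, O2_supports))
```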
b. Quantum differential Wasserstein distance
The Wasserstein distance W 1, generalizes the classical Orstein distance, that is the Wasserstein distance corresponding to the Hamming distance on bit strings. Another definition of a Lipschitz constant and its attached Wasserstein distance was put forth in [15], where the construction is based on a differential structure that bears more resemblance to that of the Lipschitz constant of a differentiable function on a continuous sample space, e.g. a smooth manifold [56]. Let us now define the notion of a noncommutative differential structure (see [14]):
Definition B.1 (Differential structure). A set of operators L k ∈ M d n and constants ω k ∈ R defines a differential structure {L k , ω k } k∈K for a full rank state σ ∈ D d n if
1. {L_k}_{k∈K} = {L_k†}_{k∈K};
2. {L_k}_{k∈K} consists of eigenvectors of the modular operator ∆_σ(X) := σXσ^{−1}, with
∆_σ(L_k) = e^{−ω_k} L_k ; (B5)
3. ‖L_k‖_∞ ≤ 1.
Such a differential structure can be used to provide the set of matrices with a Lipschitz constant that is tailored to σ, see e.g. [14,15] for more on this. In order to distinguish that constant from the one defined in (B2), we will refer to it as the differential Lipschitz constant and denote it by X Lip,∇ . It is given by:
‖X‖_{Lip,∇} := ( Σ_{k∈K} (e^{−ω_k/2} + e^{ω_k/2}) ‖[L_k, X]‖²_∞ )^{1/2} . (B6)
The quantity [L k , X] should be interpreted as a partial derivative and is sometimes denoted by ∂ k X for that reason. Then, the gradient of a matrix A, denoted by ∇A with a slight abuse of notations, refers to the vector of operator-valued coordinates (∇A) i = ∂ i A. For ease of notations, we will denote the differential structure by the couple (∇, σ). The notion of a differential structure is also intimately connected to that of the generator of a quantum dynamical semigroup converging to σ [14], and properties of that semigroup immediately translate to properties of the metric. This is because the differential structure can be used to define an operator that behaves in an analogous way to the Laplacian on a smooth manifold, which in turn induces the heat semigroup. We refer to [14,15] for more details. To illustrate the differential structure version of the Lipschitz constant, it is instructive to think of the maximally mixed state. In this case, one possible choice would consist of picking the L k to be all 1−local Pauli strings and ω j = 0. Then the Lipschitz constant turns out to be given by:
‖X‖_{Lip,∇} = ( Σ_{k∈K} ‖P_k X P_k − X‖²_∞ )^{1/2} , (B7)
where P k are all 1−local Pauli matrices. Thus, we see that this measures how much the operator changes if we act locally with a Pauli unitary on it. If we think of an operator as a function and conjugating with a Pauli as moving in a direction, the formula above indeed looks like a derivative. In fact, it is possible to make this connection rigorous, see [14]. As before, the definition in Eq. (B6) yields a metric on states by duality:
W_{1,∇}(ρ, σ) := sup_{X=X†, ‖X‖_{Lip,∇} ≤ 1} |tr[X(ρ − σ)]| .
It immediately follows from the definitions that for any observable X:
|tr [X(ρ − σ)]| ≤ X Lip,∇ W 1,∇ (ρ, σ) .(B8)
Although this geometric interpretation opens up other interesting mathematical connections for this definition, the differential Wasserstein distance has the downside of being state dependent. It however induces a stronger topology than the quantum Hamming Wasserstein distance in some situations (see [19,Proposition 5])). In particular, the results of [19, Proposition 5]) imply that for commutative Gibbs states a TC inequality for W 1,∇ implies the corresponding statement for W 1, .
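Before moving on, the following toy computation evaluates Eq. (B7) for the maximally mixed differential structure on a few qubits, illustrating that single-site observables have an O(1) differential Lipschitz constant while extensive sums grow like √n; the observables chosen are arbitrary examples.

```python
import numpy as np
from functools import reduce

X_, Y_, Z_ = (np.array([[0, 1], [1, 0]], dtype=complex),
              np.array([[0, -1j], [1j, 0]]),
              np.diag([1.0 + 0j, -1.0]))
I2 = np.eye(2, dtype=complex)

def embed(op, site, n):
    """1-local operator `op` acting on qubit `site` of an n-qubit register."""
    return reduce(np.kron, [op if i == site else I2 for i in range(n)])

def lip_grad_maximally_mixed(A, n):
    """Eq. (B7): sqrt( sum_k ||P_k A P_k - A||_inf^2 ) over all 1-local Paulis P_k."""
    total = 0.0
    for site in range(n):
        for P in (X_, Y_, Z_):
            Pk = embed(P, site, n)
            total += np.linalg.norm(Pk @ A @ Pk - A, 2) ** 2
    return np.sqrt(total)

n = 3
O_single = embed(Z_, 0, n)                       # Z on one site: only X_0, Y_0 anticommute with it
O_sum = sum(embed(Z_, i, n) for i in range(n))   # extensive sum of single-site Z's
print(lip_grad_maximally_mixed(O_single, n))     # 2*sqrt(2), independent of n
print(lip_grad_maximally_mixed(O_sum, n))        # 2*sqrt(2*n), grows like sqrt(n)
```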
Local evolution of Lipschitz observables
As already discussed in Subsections B 1 a and B 1 b when we defined . Lip,∇ and . Lip, , Lipschitz constants can be easily controlled by making assumptions on the locality of the operators. Indeed, if we apply a local circuit to a local observable, it is straightforward to control the growth of the Lipschitz constant in terms of the growth of the support of the observable under the circuit. More precisely, in [17, Proposition 13] the authors show such a statement for discrete time evolutions with exact lightcones: if we denote by |L| the size of the largest lightcone of one qubit under a channel Φ, then for any observable O ∈ D d n , Φ * (O) Lip, ≤ 2|L| O Lip, . Here we will extend such arguments to the evolution under local Hamiltonians or Lindbladians. By resorting to Lieb-Robinson bounds, we show that the Lipschitz constants . Lip,∇ and . Lip, of initially local observables evolving according to a quasi-local dynamics increase at most with the effective lightcone of the evolution. Thus, shorttime dynamics and shallow-depth quantum channels can only mildly increase the Lipschitz constant. This further justifies the intuition that observables with small Lipschitz constant reflect physical observables.
Lieb-Robinson (LR) bounds in essence assert that the time evolution of local observables under (quasi)-local short-time dynamics have an effective lightcone. There are various formulations of Lieb-Robinson bounds. Reviewing those in detail is beyond the scope of this work and we refer to [57][58][59][60] and references therein for more details. For studying the behaviour of . Lip,∇ under local evolutions, the most natural formulation is the commutator version: the generator L of a quasi-local dynamics t → Φ t = e tL on n qudits arranged on a graph G = (V, E), with graph distance dist, is said to satisfy a LR bound with LR velocity v if for any observable O A supported on a region A and any other observable B supported on a region B, we have:
[Φ * t (O A ), O B ] ∞ ≤ c (e vt − 1) g(dist(A, B)) O A ∞ O B ∞ ,(LR1)
for g : N → R + a function such that lim x→∞ g(x) = 0. We then have:
Proposition B.1 (Growth of differential Lipschitz constant for local evolutions). Let (∇, σ) be a differential structure on M_{d^n} and let O = Σ_i O_i be an observable with ‖O_i‖_∞ ≤ 1. Let A_i denote the support of each O_i and B_j that of each L_j. Moreover, let t → Φ_t be an evolution satisfying Eq. (LR1) and set o_{i,j}(t) = 2 if A_i ∩ B_j ≠ ∅ and o_{i,j}(t) = c (e^{vt} − 1) g(dist(A_i, B_j)) otherwise. Then:
‖Φ*_t(O)‖²_{Lip,∇} ≤ Σ_{j∈K} (e^{ω_j} + e^{−ω_j}) ( Σ_i o_{i,j}(t) )² .
Proof. The proof follows almost by definition. We have:
‖Φ*_t(O)‖²_{Lip,∇} = Σ_{j∈K} (e^{ω_j} + e^{−ω_j}) ‖[Φ*_t(O), L_j]‖²_∞ .
By a triangle inequality we have that:
‖[Φ*_t(O), L_j]‖²_∞ ≤ ( Σ_i ‖[Φ*_t(O_i), L_j]‖_∞ )²
For any term in the sum we have ‖[Φ*_t(O_i), L_j]‖_∞ ≤ 2 by the submultiplicativity of the operator norm, a triangle inequality and ‖L_j‖_∞ ≤ 1. In case O_i and L_j do not overlap, the stronger bound in Eq. (LR1) holds.
To illustrate this bound more concretely, let us take O = n i=1 Z i , L j acting on [j, j + k] for j = 1, . . . , n − k, and g(dist(i, j)) = e −µ|i−j| , for some constant µ. I.e. we have a 1D differential structure and a local time evolution on a 1D lattice. Then for any j:
Σ_i o_{i,j}(t) = k + (e^{vt} − 1) ( Σ_{i=1}^{j−1} e^{−µ|i−j|} + Σ_{i=j+k+1}^{n} e^{−µ|i−j|} ) ≤ k + (e^{vt} − 1) e^{−µ}/(1 − e^{−µ}) .
Thus,
‖Φ*_t(O)‖_{Lip,∇} ≤ √(n − k) ( k + (e^{vt} − 1) e^{−µ}/(1 − e^{−µ}) ) .
We see that for constant times the Lipschitz constant is still of order √ nk.
Let us now derive a similar, yet somehow stronger, version of Prop. B.1 for . Lip, . In some situations, bounds like (LR1) can be further exploited in order to prove the quasi-locality of Markovian quantum dynamics [59]: for any region C ⊂ D ⊂ V , there exists a local Markovian evolution t → Φ (D) t that acts non-trivially only on region D, and such that for any observable O C supported on region C,
Φ * t − (Φ (D) t ) * (O C ) ∞ ≤ c (e vt − 1) h(d(C, V \D)) O C ∞ ,(LR2)
for some other function h : N → R + such that lim x→∞ h(x) = 0 and constant c > 0. In other words, at small times, the channels Φ * t can be well-approximated by local Markovian dynamics when acting on local observables. Let us now estimate the growth of Lipschitz constants for the definition of [17] given a Lieb-Robinson bound: Proposition B.2. Assume that Φ t satisfies the bound (LR2). Then, for any two quantum states ρ, σ ∈ D d n and any ordering {1, . . . , n} of the graph:
W 1, (Φ t (ρ), Φ t (σ)) ≤ 8 + 2 c (e vt − 1) n i=3 h(d({i · · · n}, {1})) W 1, (ρ, σ) . (B9)
Moreover, for any observable H ∈ M d n ,
Φ * t (H) Lip, ≤ 8 + 2 c (e vt − 1) n i=3 h(d({i · · · n}, {1})) H Lip, .(B10)
Proof. From [17], the Wasserstein distance W 1, arises from a norm . W1 , i.e. W 1, (ρ, σ) = ρ−σ W1 . Moreover, the norm . W1 is uniquely determined by its unit ball B n , which in turn is the convex hull of the set of the differences between couples of neighboring quantum states:
N n = i∈V N (i) n , N (i) n = {ρ − σ : ρ, σ ∈ D d n , tr i (ρ) = tr i (σ)} .
Now by convexity, the contraction coefficient for this norm is equal to
Φ t W1→W1 = max Φ t (X) W1 : X ∈ M sa,0 d n , X W1 ≤ 1 = max X∈Nn Φ t (X) W1 ,
where M sa,0 d n denotes the set of self-adjoint, traceless observables. Let then X ∈ N n . By the expression (B4), and choosing without loss of generality an ordering of the vertices such that tr 1 (X) = 0, we have
Φ t (X) W1 ≤ 1 2 √ n n i=1 I d i−1 ⊗ tr 1···i−1 •Φ t (X) − I d i ⊗ tr 1···i •Φ t (X) 1 = 1 2 √ n n i=1 dµ(U i ) tr 1···i−1 •(Φ t (X) − U i Φ t (X)U † i ) 1 ≤ 1 2 √ n n i=1 dµ(U i ) [U i , tr 1···i−1 •Φ t (X)] 1 ≤ 1 √ n n i=1 tr 1···i−1 •Φ t (X) 1 (1) = 1 √ n n i=1 tr 1···i−1 •(Φ t − Φ (i−k···n) t )(X) 1 (B11)
where µ denotes the Haar measure on one qudit, and where (1) follows from the fact that tr_1(X) = 0, with Φ_t^{(i−k···n)} ≡ Φ_t^{({i−k,...,n})} defined as in Eq. (LR2) with k < i − 1. Next, by the variational formulation of the trace distance and Eq. (LR2), we have for i ≥ 3 that
tr 1···i−1 •(Φ t − Φ (i−k···n) t )(X) 1 = max Oi···n ∞≤1 tr X(Φ * t − Φ (i−k···n) * t )(O i···n ) ≤ max Oi···n ∞ ≤1 (Φ * t − Φ (i−k···n) * t )(O i···n ) ∞ X 1 ≤ c (e vt − 1) h(dist({i · · · n}, {1 · · · i − k − 1})) X 1 (2) ≤ 2 c (e vt − 1) h(dist({i · · · n}, {1 · · · i − k − 1})) √ n X W1 ,
where (2) follows from [17, Proposition 6]. By picking k = i − 2 and inserting this estimate into Eq. (B11) for i ≥ 3 and the trivial estimate tr 1···i−1 •(Φ t − Φ (i−k···n) t )(X) 1 ≤ 2 X 1 for i = 1, 2 we obtain Eq. (B9). Eq. (B10) follows by duality.
Transportation cost inequalities
Although interesting on their own, the relevance of the Lipschitz constants introduced above becomes clearer in our context when we also have a transportation cost inequality [56,61]. A quantum state σ satisfies a transportation cost inequality with constant α > 0 if for any other state ρ it holds that:
W_1(ρ, σ) ≤ √( D(ρ‖σ) / (2α) ) , (B12)
where W 1 ∈ {W 1, , W 1,∇ , W 1,loc } . In what follows, we simply write . Lip and W 1 to denote either of the Lipschitz constants, and their corresponding Wasserstein metrics, defined above. This inequality should be thought of as a stronger version of Pinsker's inequality that is tailored to a state σ and the underlying Wasserstein distance. One of the well-established techniques to establish a transportation cost inequality for W 1,∇ is by exploiting the fact that it is implied by a functional inequality called the modified logarithmic Sobolev inequality. It is beyond the scope of this paper to explain this connection and we refer to e.g. [15] and references therein for a discussion on these topics. But for our purposes it is important to note that in [32] the authors and Capel show modified logarithmic Sobolev inequalities for several classes of Gibbs states. More recently, one of the authors and De Palma derived such transportation cost inequalities for W 1, in [19]. In Theorem B.1 below we summarize the regimes for which transportation cost inequalities are known to hold: Theorem B.1 (transportation cost for commuting Hamiltonians [19,32,62]). Let E 1 , . . . , E m ⊂ M d n be a set of k-local linearly independent commuting operators with E i ∞ ≤ 1. Then σ(λ) satisfies a transportation cost inequality with constant α > 0 for all λ ∈ B ∞ (0, 1) in the following cases:
(i) The E i are classical or nearest neighbour (i.e. k = 2) on a regular lattice and the inverse temperature β < β where β c only depends on k and the dimension of the lattice, for both W 1,∇ , W 1, and α = Ω(1) [32].
(ii) The operators E i are local with respect to a hypergraph G = (V, E) and the inverse temperature satisfies β < β c , where β c only depends on k and properties of the interaction hypergraph for W 1, and α = Ω(1) [19, Theorem 3, Proposition 4].
(iii) The E i are one-dimensional and β > 0, for both W 1,∇ and W 1, and α = Ω(log(n) −1 ) [62].
Moreover, the underlying differential structure (∇, σ) consists of L k acting on at most O(k) qudits.
Theorem B.1 establishes that transportation cost inequalities are satisfied for most classes of commuting Hamiltonians known to have exponential decay of correlations.
Remark B.1. In [19, Proposition 5], the authors show that W 1, ≤ c(k) W 1,∇ holds up to some constant c(k) depending on the locality of the differential structure. This implies that a transportation cost inequality for W 1,∇ implies one for W 1, up to c(k). Thus, although the authors of [32,62] only obtain the result for W 1,∇ , we can use it to translate it to W 1, . We conclude that for commuting Hamiltonians TC are available for essentially all classes of local Hamiltonians for which they are expected to hold.
Appendix C: Combining the maximum entropy method with transportation cost inequalities
With these tools at hand, we are now ready to show that resorting to transportation cost inequalities it is possible to obtain exponential improvements in the sample complexity required to learn a Gibbs state. First, let us briefly review shadow tomography or Pauli regrouping techniques [9][10][11][12]. Although these methods all work under slightly different assumptions and performance guarantees, they have in common that they allow us to learn the expectation value of M k-local observables O 1 , . . . , O M ∈ M 2 n such that O i ∞ ≤ 1 up to an error and failure probability at most 1 − δ by measuring O(e O(k) log(M δ −1 ) −2 ) copies of the state.
For instance, for the shadow methods of [9], we obtain a O(4 k log(M δ −1 ) −2 ) scaling by measuring in 1-qubit random bases. The estimate is then obtained by an appropriate median-of-means procedure for the expectation value of each output string. The computation for obtaining the expectation value of E i through this method then entails evaluating the expectation value of the observables on O(4 k log(M δ −1 ) −2 ) product states. For k-local observables E i , evaluating the expectation value of E i on a product state takes time O(e ck log(M δ −1 ) −2 ) for some c > 0. Thus, we see that for k = O(log(n)) also the postprocessing can be done efficiently.
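For illustration, the sketch below simulates the single-qubit random-Pauli-basis shadow estimator and the median-of-means postprocessing described above on a small product state; the simulation works with dense density matrices and is therefore only feasible for a few qubits, and the parameter choices S, M are arbitrary.

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(0)
I2 = np.eye(2, dtype=complex)
PAULIS = {"X": np.array([[0, 1], [1, 0]], dtype=complex),
          "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
          "Z": np.diag([1.0 + 0j, -1.0])}

def eigbasis(P):
    """Columns are the +1 / -1 eigenvectors of the single-qubit Pauli P."""
    vals, vecs = np.linalg.eigh(P)
    return vecs[:, np.argsort(-vals)]          # column 0 -> +1 eigenvector

def one_shadow(rho, n):
    """Single classical shadow from measuring rho in a random product Pauli basis."""
    bases = [rng.choice(list("XYZ")) for _ in range(n)]
    vecs = [eigbasis(PAULIS[b]) for b in bases]
    probs = np.zeros(2 ** n)
    for idx in range(2 ** n):                  # Born probabilities of each outcome string
        bits = [(idx >> (n - 1 - q)) & 1 for q in range(n)]
        ket = reduce(np.kron, [vecs[q][:, bits[q]] for q in range(n)])
        probs[idx] = np.real(np.conj(ket) @ rho @ ket)
    idx = rng.choice(2 ** n, p=probs / probs.sum())
    bits = [(idx >> (n - 1 - q)) & 1 for q in range(n)]
    locals_ = [3 * np.outer(vecs[q][:, bits[q]], vecs[q][:, bits[q]].conj()) - I2 for q in range(n)]
    return reduce(np.kron, locals_)

def shadow_estimate(rho, O, n, S=50, M=10):
    """Median of M means of S single-shadow estimates of tr[O rho]."""
    means = [np.mean([np.trace(O @ one_shadow(rho, n)).real for _ in range(S)]) for _ in range(M)]
    return np.median(means)

# toy check: rho = |000><000|, O = Z_0 Z_1 (true expectation value +1); the output is noisy
n = 3
zero = np.zeros(2 ** n, dtype=complex); zero[0] = 1.0
rho = np.outer(zero, zero.conj())
O = reduce(np.kron, [PAULIS["Z"], PAULIS["Z"], I2])
print(shadow_estimate(rho, O, n))
```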
The application of such results to maximum entropy methods is then clear: given E 1 , . . . , E m assumed to be at most k-local, with probability at least 1 − δ we can obtain an estimate e (λ) of e(λ) satisfying:
‖e'(λ) − e(λ)‖_p ≤ ε m^{1/p} (C1)
using O(4^k log(mδ^{−1}) ε^{−2}) copies of σ(λ). We then finally obtain our main theorem, Theorem I.1, restated here for the sake of clarity:
Theorem C.1 (Fast learning of Gibbs states). Let σ(λ) ∈ D_{2^n} be an n-qubit Gibbs state at inverse temperature β with respect to a set of k-local operators E = {E_i}_{i=1}^m that satisfies a transportation cost inequality with constant α. Moreover, assume that µ → log(Z(µ)) is L, U strongly convex in B_∞(0, 1). Then
O( 4^k α^{−1} ε^{−2} β log(mδ^{−1}) min{ mβ/(Ln), m²/(α n² ε²) } )
samples of σ(λ) are sufficient to obtain a state σ(µ) that satisfies:
|tr[O(σ(λ) − σ(µ))]| ≤ ε √n ‖O‖_Lip (C2)
for all Lipschitz observables O with probability at least 1 − δ.
Proof. Using the aforementioned methods of [9] we can obtain an estimate e'(λ) satisfying the guarantee of Eq. (A11) with probability at least 1 − δ. We will now resort to the results of Thm. A.1 to obtain guarantees on the output of the maximum entropy algorithm. Now we solve the maximum entropy problem with e'(λ) and set the stopping criterion for the output µ* as
‖e'(λ) − e(µ*)‖_2 < (4c + 1) ε √m .
for c > 10. Then it follows from Thm. A.1 that:
D(σ(µ*)‖σ(λ)) ≤ D(σ(µ*)‖σ(λ)) + D(σ(λ)‖σ(µ*)) = O( β min{ βL^{−1}ε²m, εm } ) . (C3)
This can then be combined with transportation cost inequalities. Indeed, we have:
|tr[O(σ(λ) − σ(µ*))]| ≤ ‖O‖_Lip W_1(σ(µ*), σ(λ)) ≤ ‖O‖_Lip √( D(σ(µ*)‖σ(λ)) / (2α) ) .
Inserting our bound on the relative entropy in Eq. (C3) we obtain:
|tr[O(σ(λ) − σ(µ*))]| = O( ‖O‖_Lip √β min{ (αL)^{−1/2} √β ε √m, √(εm/α) } ) .
We conclude by suitably rescaling ε.
The theorem above yields exponential improvements in the sample complexity to learn the expectation value of certain observables for classes of states that satisfy a transportation cost inequality with α = Ω(log(n)^{−1}). As discussed in Sec. B 2, extensive observables that can be written as a sum of l-local observables have a Lipschitz constant that satisfies ‖O‖_Lip = O(l√n). Shadow-like methods would require O(e^{O(l)} log(mδ^{−1}) ε^{−2}) samples to learn such observables up to an error of εn. Our methods, however, require O(e^{O(k)} poly(l, ε^{−1}) log(mδ^{−1})), which yields exponential speedups in the regime l = poly(log(n)). Of course it should also be mentioned that classical shadows do not require any assumptions on the underlying state. Furthermore, considering the exponential dependency of the sample complexity in the locality for shadow-like methods, we believe that our methods yield practically significant savings already in the regime in which we wish to obtain expectation values of observables with relatively small support. For instance, for high-temperature Gibbs states of nearest neighbor Hamiltonians and observables supported on 15 qubits, shadows require a factor of ∼ 10^7 more samples than solving the maximum entropy problem for obtaining the same precision.
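As a rough back-of-the-envelope check of the factor quoted above, one can compare the 4^l scaling of shadow-type methods with a poly(l) scaling for the maximum entropy approach, dropping all constants and logarithmic factors; this is only a heuristic comparison, not a precise accounting of either method.

```python
# heuristic comparison of sample counts for observables supported on l = 15 qubits
l = 15
shadow_cost = 4 ** l          # exponential-in-locality scaling of shadow-type methods
maxent_cost = l ** 2          # poly(l) scaling assumed for the maximum entropy approach
print(shadow_cost / maxent_cost)   # ~5e6, of the order of the factor quoted above
```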
On the other hand, previous guarantees on learning quantum many-body states [3, 4, 7, 39] required a polynomial in system size precision to obtain a nontrivial approximation, which implies a polynomialtime complexity. Thus, our recovery guarantees are also exponentially better compared to standard many-body tomography results.
Results for shallow circuits
Let us be more explicit on how to leverage our results to also cover the outputs of short depth circuits. To this end, let G = (V, E) be a graph that models the interactions of a unitary circuit and suppose we implement L layers of an unknown unitary circuit consisting of 1 and 2-qubit unitaries laid out according to G. That is, we have an unknown shallow circuit U of depth L with respect to G. More precisely,
U = ∈[L] e∈E U ,e ,(C4)
where each E ⊂ E are a subset of the edges such that any e, e ∈ E do not share a vertex. Our goal is to show how to approximately learn the state |ψ = U |0 ⊗n .
The overall idea consists in finding a Gibbs state approximating |ψ⟩ in Wasserstein distance. We will then find a differential structure for (approximations of) shallow circuits and then show that the (approximation of the) output satisfies a TC inequality with respect to it. Thus, it suffices to control the relative entropy with this approximation to ensure a good approximation in Wasserstein distance. Let us find the appropriate approximation. First, note that for β_ε = log(ε^{−1}) and H_0 = −Σ_{i=1}^n Z_i, where Z_i denotes the Pauli matrix Z = |0⟩⟨0| − |1⟩⟨1| acting on site i, we have that:
D( U|0⟩⟨0|^{⊗n} U† ‖ U e^{−β_ε H_0}/tr[e^{−β_ε H_0}] U† ) = D( |0⟩⟨0|^{⊗n} ‖ e^{−β_ε H_0}/tr[e^{−β_ε H_0}] ) = n D( |0⟩⟨0| ‖ e^{β_ε Z}/tr[e^{β_ε Z}] ) = −nβ_ε ⟨0|Z|0⟩ + n log tr[e^{β_ε Z}] = n log(1 + e^{−2β_ε}) ≤ n e^{−2β_ε} = ε² n . (C5)
Thus, if the states U e −β H 0 tr[e −β H 0 ] U † satisfy a transportation cost inequality with some constant α, this would allow us to conclude that
W_1( U|0⟩⟨0|^{⊗n} U†, U e^{−β_ε H_0}/tr[e^{−β_ε H_0}] U† ) ≤ ε √(n/(2α)) . Moreover, defining H_U = −Σ_{i=1}^n U Z_i U†, we see that U e^{−β_ε H_0}/tr[e^{−β_ε H_0}] U† = e^{−β_ε H_U}/tr[e^{−β_ε H_U}]. As we know the geometry of U, we can bound the support of UZ_iU†. Thus, we only need to find a suitable transportation cost inequality to see that this approximation fits into our framework.
Let us now find a suitable differential structure for the state e^{−β_ε H_0}/tr[e^{−β_ε H_0}]. For simplicity, denote by
τ_p = p|0⟩⟨0| + (1 − p)|1⟩⟨1| and note that e^{−β_ε H_0}/tr[e^{−β_ε H_0}] = τ_p^{⊗n} with p = e^{β_ε}/(e^{β_ε} + e^{−β_ε}). Moreover, let a_i = I^{⊗(i−1)} ⊗ |0⟩⟨1| ⊗ I^{⊗(n−i−1)}
be the annihilation operator acting on qubit i. Defining L_{i,0} = (p(1 − p))^{1/4} a_i and L_{i,1} = L_{i,0}†, we get a differential structure for τ_p^{⊗n} with {L_{i,k}, ω_{i,k}} for i = 1, . . . , n, k = 0, 1, ω_{i,0} = log(p/(1 − p)) and ω_{i,1} = log((1 − p)/p). That this is indeed a differential structure follows from a simple computation. One can readily check that the induced Lipschitz constant is given by:
‖O‖²_{Lip,∇} = Σ_{i=1}^n ( √(p/(1 − p)) + √((1 − p)/p) ) ( ‖[O, L_{i,0}]‖²_∞ + ‖[O, L_{i,1}]‖²_∞ ) = Σ_{i=1}^n ‖[O, a_i]‖²_∞ + ‖[O, a_i†]‖²_∞ .
Thus, we see that the Lipschitz constant takes a particularly simple form for this differential structure. Moreover, it is not difficult to see that {UL_{i,k}U†, ω_{i,k}} provides a differential structure for the state Uτ_p^{⊗n}U†. Indeed, it is easily checked that this new differential structure still gives eigenvectors of the modular operator. Importantly, the result of [63, Theorem 19] establishes that the state τ_p^{⊗n} satisfies a transportation cost inequality with constant 1/2. Putting all these elements together we have: Theorem C.1 (transportation cost for shallow circuits). Let U be an unknown depth L quantum circuit on n qubits defined on a graph G = (V, E) and |ψ⟩ = U|0⟩^{⊗n}. Define for ε > 0, H_U = −Σ_i U Z_i U† and σ(U, ε) = e^{−β_ε H_U}/tr[e^{−β_ε H_U}] with β_ε = log(ε^{−1}). Then for any state ρ and all observables O we have:
|tr[O(|ψ⟩⟨ψ| − ρ)]| ≤ ‖O‖_{Lip,∇} ( ε√n + √(D(ρ‖σ(U, ε))) ) , with ‖O‖²_{Lip,∇} = Σ_{i=1}^n ‖[O, U a_i U†]‖²_∞ + ‖[O, U a_i† U†]‖²_∞ , a_i = I^{⊗(i−1)} ⊗ |0⟩⟨1| ⊗ I^{⊗(n−i−1)}
Proof. We have:
| tr [O(|ψ ψ| − ρ)] | ≤ | tr [O(|ψ ψ| − σ(U))] | + | tr [O(σ(U) − ρ)] |.
The claim then follows from the discussion above, as σ(U) satisfies a transportation cost inequality with constant 1 2 .
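As a consistency check of the differential structure used in this subsection (with the weights written as in the display before Theorem C.1, a reconstructed convention), the following single-qubit computation verifies that L_{i,0} is an eigenoperator of the modular operator of τ_p and that the weighted commutator norms collapse to ‖[O, a]‖² + ‖[O, a†]‖²; the observable and the value of p are arbitrary.

```python
import numpy as np

# single-qubit check of the differential structure for tau_p (illustration only)
p = 0.8
tau = np.diag([p, 1 - p])
a = np.array([[0.0, 1.0], [0.0, 0.0]])          # |0><1|
L0 = (p * (1 - p)) ** 0.25 * a                   # L_{i,0}
L1 = L0.conj().T                                 # L_{i,1}

# modular action: tau L0 tau^{-1} is proportional to L0, i.e. L0 is an eigenoperator
print(np.allclose(tau @ L0 @ np.linalg.inv(tau), (p / (1 - p)) * L0))   # True

# weight * (||[X,L0]||^2 + ||[X,L1]||^2) equals ||[X,a]||^2 + ||[X,a^dag]||^2
w = np.sqrt(p / (1 - p)) + np.sqrt((1 - p) / p)
X = np.array([[0.3, 0.7], [0.7, -0.2]])          # arbitrary Hermitian observable
def op_norm_sq(M):
    return np.linalg.norm(M, 2) ** 2
lhs = w * (op_norm_sq(X @ L0 - L0 @ X) + op_norm_sq(X @ L1 - L1 @ X))
rhs = op_norm_sq(X @ a - a @ X) + op_norm_sq(X @ a.conj().T - a.conj().T @ X)
print(np.isclose(lhs, rhs))                      # True
```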
Of course the result above has the downside that the Lipschitz constant depends on the unknown circuit U. However, as we can estimate the locality of each term Ua i U † , it is also possible to estimate the Lipschitz constant by controlling the overlap of the observable O with each Ua i U † .
The result of Thm. C.1 and the discussion preceding it also give us a method of efficiently learning the outputs of shallow circuits, as illustrated by the following proposition:
Proposition C.1 (Learning the outputs of shallow circuits). Let U be an unknown n-qubit quantum circuit with known locality structure as in Eq. (C4) and |ψ⟩ = U|0⟩^{⊗n}. Moreover, define l_0 as
l_0 = max_{1≤i≤n} |supp(U Z_i U†)| .
For some ε > 0 we have that
O( ε^{−2} 4^{3l_0} log^4(n 4^{l_0} ε^{−1}) log(4^{l_0} n δ^{−1}) ) (C6)
samples of |ψ⟩ suffice to learn a Gibbs state σ(µ) that satisfies
W_{1,∇}(|ψ⟩⟨ψ|, σ(µ)) ≤ √(εn) (C7)
with probability of success at least 1 − δ. Along similar lines, we have that
O( ε^{−2} 4^{3l_0} n² log^4(n 4^{l_0} ε^{−1}) log(4^{l_0} n δ^{−1}) ) (C8)
samples suffice to learn a state σ(λ) such that
‖ |ψ⟩⟨ψ| − σ(λ) ‖_tr ≤ √ε .
Proof. Let E i be a basis of Pauli operators for matrices on the support of UZ i U † . By our assumption on l 0 , we know that for each i we have that |E i | ≤ 4 l0 . For simplicity we will assume that there are no Pauli words that are contained in two distinct E i and we will enumerate all different Pauli words as {E i,j } for 1 ≤ i ≤ n and 1 ≤ j ≤ 4 l0 indicating the elements of the different E i . Thus, there is a λ ∈ R m with m ≤ n4 l0 and λ ∞ ≤ 1 such that
σ(U, ε) ∝ exp( −β_ε Σ_{i=1}^n Σ_{E_{i,j}∈E_i} λ_{i,j} E_{i,j} ) .
Let ε̃ > 0 be given. It follows from Eq. (C5) and Pinsker's inequality that picking ε = ε̃/(4√n) is sufficient to ensure that
‖ |ψ⟩⟨ψ| − σ(U, ε̃/(4√n)) ‖_tr ≤ ε̃/2 . (C9)
Measuring O( 4^{l_0} ε̃^{−2} log(4^{l_0} n δ^{−1}) ) (C10) copies of |ψ⟩ is sufficient to obtain estimates of tr[|ψ⟩⟨ψ| E_{i,j}] up to an ε̃/4 error. By Eq. (C9) and a triangle inequality they are also ε̃/2 away from tr[σ(U, ε̃/(4√n)) E_{i,j}]. Thus, running the maximum entropy principle with these estimates and the basis of operators given by ∪_{i=1}^n E_i will yield us an estimate σ(µ) that satisfies:
D( σ(µ) ‖ σ(U, ε̃/(4√n)) ) ≤ log(4√n/ε̃) ε̃ m ≤ log(4√n/ε̃) ε̃ 4^{l_0} n .
To obtain the estimate in Wasserstein distance in Eq. (C6), we can pick ε̃ = O( ε/(4^{l_0} log²(n 4^{l_0} ε^{−1})) ), as in this case log(4√n/ε̃) ε̃ m ≤ log(4√n/ε̃) ε̃ 4^{l_0} n ≤ εn.
The claim then follows from the results of Thm. C.1 and substituting ε̃ into Eq. (C10). For the statement on the sample complexity for the trace distance, we may pick ε̃ = O( ε/(n 4^{l_0} log²(n 4^{l_0} ε^{−1})) ), which yields the statement after applying Pinsker's inequality.
This shows that shallow circuits can be learned efficiently as long as l 0 = O(log(n)). However, it is not immediately obvious how to also estimate the expectation values of the Gibbs states required to run the maximum entropy algorithm. Thus, at least with the methods presented here, the postprocessing takes exponential time in the number of qubits.
a. Ground states of gapped systems: in light of our results for shallow circuits, it is natural to ask to what extent our framework can be extended to ground states of gapped Hamiltonians, especially in 1D. Thus, let us briefly comment on the technical barriers in the way of such statements. First, notice that the statement of Thm C.1 required inverse temperatures scaling logarithmically in system size for the approximation of the ground state. Most known TC inequalities have an exponential scaling with the inverse temperature and, thus, at this inverse temperature the savings of TC compared to Pinsker are not quadratic, hindering a straightforward extension to gapped systems. There are some nontrivial examples of ground states satisfying a TC inequality with the constant depending inverse-linearly on the temperature, like graph states [64]. But, as they can also be prepared from a constant depth quantum circuit almost by definition, they fall into the assumptions of the previous statement.
However, the results of [43] assert that k-local density matrices of ground states of gapped Hamiltonians in 1D can be approximated by constant depth circuits, giving evidence that our framework should also extend to such states. And to get there, a technical obstacle has to be overcome in the proof of Thm. C.1. Essentially we need to show that local reduced density matrices of the ground state are already well-approximated at inverse temperature log( −1 ). With such a statement we could show that we can still approximate the expectation values of E i at inverse temperature log( −1 ) from samples from the ground state |ψ .
Appendix D: Summary of known strong convexity constants
As we see in the statements of Thm. A.1 and C.1, having a bound on the strong convexity constant L^{−1} can give a quadratic improvement in the sample complexity in terms of the error ε. Here we will briefly summarize the cases for which estimates on L are known in the literature for the classes of states we considered here.
a. General many-body quantum: first, we should mention the results of [5]. There the authors show bounds on L −1 for arbitrary many-body Hamiltonians and temperatures that scales linearly in m. Thus, although certainly nontrivial, these bounds do not improve the sample complexity for the regimes we are interested in this work, namely that of logarithmic sample complexity in system size. To obtain improvements in this regime, L −1 needs to be at most polylogarithmic in system size.
In the case of high-temperature Gibbs states, the recent work of [22] shows that this is indeed the case. I.e., in their Corollary 4.4 they show that for β = O(k −8 ), where k is the locality of the Hamiltonian, we indeed have L −1 = O(β −2 ). It should also be noted that their results do not hold only for geometrically local Hamiltonians, but rather any Hamiltonian such that each term acts on at most k qudits. This implies that for the high temperature regime, for which we also have the TC inequality in Thm. (??), the improved samples complexity yielded by our methods holds. Note, however, that there is a slight mismatch between the inverse temperature range for which the two results hold: for the strong convexity we need β = O(k −8 ), whereas for TC β = O(k −1 ) suffices.
b. Commuting Hamiltonians: as we will prove later in Prop. F.1, in the case of commuting Hamiltonians we have that L −1 = O(e β β −2 ). Thus, for any constant inverse temperature β > 0 we have an improved sample complexity. However, in order to analyse ground states in 1D, our current proof techniques still require an inverse temperature scaling logarithmically in system size, so for such states we do not obtain improvements through strong convexity. We plan to address this gap in future work.
Besides the cases mentioned above, we also considered the case of local circuits in this work. For those there are no nontrivial estimates on L available to the best of our knowledge.
Appendix E: Regimes of efficient postprocessing
The only question we have still to answer is how to perform the postprocessing efficiently, namely how the parameter C E appearing in Theorem C.1 scales and how we obtain the bounds in Table II.
There have been many recent breakthroughs on approximating quantum Gibbs states efficiently on classical computers [21,[28][29][30][31]. The gradient descent method only requires us to estimate the gradient of the partition function for a Gibbs state at each iteration. Thus, any algorithm that can approximate the log-partition function Z(µ) efficiently or approximate e(λ) suffices for our purposes.
For Gibbs states on a lattice, the methods of [29] yield that we can perform such approximations on a classical computer in time polynomial in n for temperatures β < β c = 1/(k8e 3 ), where k is the locality of the Hamiltonian, and inverse polynomial in system size. Thus, we conclude that for this temperature range, which coincides with the range for which the results of ?? hold, C E is polynomial in system size and we can also obtain efficient classical postprocessing.
For the case of Gibbs states of 1D systems, to the best of our knowledge the best results available are those of [31]. They show how to efficiently obtain efficient tensor network approximations for 1D Gibbs states for β = o(log(n)). As such tensor networks can be contracted efficiently as well, this gives an efficient classical algorithm to compute local expectation values of such states, which suffices for our purposes. Thus, the results of [29,31] ensure that for the systems considered in Thm. B.1 we can also perform the postprocessing efficiently on a classical computer.
It is also interesting to consider what happens if we have access to a quantum computer to estimate the gradient. We will discuss the implications of this in the next section for the case of commuting Hamiltonians.
Finally, for local quantum circuits we are not aware at this stage of any method that could yield a better postprocessing complexity than computing the partition function explicitly. This would yield a postprocessing that is exponential in the system size, as it requires diagonalizing an exponentially large matrix.
Appendix F: A complete picture: commuting Gibbs states
In this section, we discuss two classes of states for which the strongest version of our theorems holds, namely that of commuting 1D Gibbs states, and the one of high-temperature commuting Gibbs states on arbitrary hypergraphs. We already discussed that they satisfy transportation cost inequalities in Thm. B.1. Thus, the last missing ingredients to obtain an optimal performance is to show that the partition function is indeed strongly convex. More precisely, we will now establish that, for these classes of states, both the upper and lower bounds on the Hessian of the log partition function are order 1. In addition to that, with access to a quantum computer it is possible to perform the postprocessing in timeÕ(m). As writing down the vector λ takes Ω(m) time, we conclude that the postprocessing can be performed in a time comparable to writing down a solution to the problem. Thus, our procedure is essentially optimal.
Also in the setting of commuting Gibbs states, it is worth noting that after the completion of this work we became aware of [65], which gives another method to learn the Gibbs state and its Hamiltonian that neither involves the maximum entropy method nor requires strong convexity guarantees. Their algorithm works by learning local reduced density matrices and showing that the parameters λ of the Hamiltonian of a commuting Gibbs state can be efficiently estimated from that. In principle, obtaining a bound on λ also suffices for our purposes and we could alternatively use their methods to bypass having to solve the maximum entropy problem for such states. In particular, this means that the postprocessing with their methods could be performed even for temperatures at which the partition function cannot be estimated efficiently but we still have access to samples from the state. However, as we ultimately are interested in the regime in which TC inequalities hold, which corresponds to the high temperature regime, we do not further comment on their results. In this section, we consider a hypergraph G = (V, E) and assume that there is a maximum radius r 0 ∈ N such that, for any hyperedge A ∈ E, there exists a vertex v ∈ V such that the ball B(v, r 0 ) centered at v and of radius r 0 includes A. In what follows, we also denote by S(v, r) the sphere of radius r centered at vertex v ∈ V , and define for all r ∈ N:
B(r) := max v∈V |B(v, r)| , S(r) := max v∈V |S(v, r)| .
Next, we consider a Gibbs state σ(λ) on
H V := v∈V H v , where dim(H v ) = d for all v ∈ V . More precisely, σ(λ) := e −βH(λ) /Z(λ),
where with a slight abuse of notations
H(λ) = Σ_{i=1}^m λ_i E_i = Σ_{A∈E} Σ_{α_A∈[d²]^{|A|}} λ_{α_A} E_{α_A} , (F1)
with E i ∞ = 1 for all i ∈ {1, · · · , m}, [E i , E j ] = 0 for all i, j ∈ {1, · · · , m}, and where the sets {1, · · · , m} and {α A } A∈E,α A ∈[d 2 ] A are in bijection. Note that B(r 0 ) bounds the maximal locality of the Hamiltonian. We also denote by σ A (λ) the Gibbs state corresponding to the restriction of H onto the region A, i.e.
σ A (λ) := e −βH A (λ) tr e −βH A (λ) ,
where
H_A(λ) := Σ_{i : supp(E_i)∩A ≠ ∅} λ_i E_i .
Note that, in general, σ_A(λ) ≠ tr_{A^c}(σ(λ)).
Upper bound on Hessian for commuting Gibbs states with decay of correlations
In this section, we prove tightened strong convexity constants for the log partition function in the case when the Gibbs state arises from a local commuting Hamiltonian at high-temperature. In fact, the upper constant found in Proposition A.4 can be tightened into a system size-independent one under the condition of exponential decay of the correlations in the Gibbs state. Several notions of exponential decay of correlations exist in the literature [32]. Here, we will say that a Gibbs state σ has correlation length ξ if for all observables O A , O B supported on non-overlapping regions A and B respectively, we have that:
| tr [σ O A ⊗ O B ] − tr [σ O A ] tr [σ O B ] | ≤ c O A ∞ O B ∞ e −ξ dist(A,B)
for some constant c > 0. In the classical setting, this condition is known to hold at any inverse temperature for 1D systems, and below a critical inverse temperature β c that depends on the locality of the Hamiltonian when D ≥ 2 [66]. The same bound also holds in the quantum case for 1D systems [67], or above a critical temperature on regular lattices when D ≥ 2 [28,30]. Using these bounds, we obtain the following improvement of Proposition A.4 which shows that for this class of states U = O(1).
Lemma F.1 (Strengthened upper strong convexity at high-temperature and for 1D systems). For each µ ∈ B ∞ (0, 1), let σ(µ) be a Gibbs state at inverse temperature β < β c corresponding to the Hamiltonian defined on the hypergraph G = (V, E) in (F1). Then
∇ 2 log(Z(µ)) ≤ c β 2 B(r 0 ) B(2r 0 ) d 2B(r0) ∞ r=1 e −ξr S(r) I ,
where ξ is the correlation length of the state. Moreover, this result holds for all β > 0 in 1D.
Proof. Let us first use the assumption of commutativity in order to simplify the expression for the Hessian. We find for all
α A ∈ [d 2 ] |A| , α B ∈ [d 2 ] |B| , A, B ∈ E, (∇ 2 log(Z(µ))) α A α B = β 2 tr σ(µ) (E α A − tr [σ(µ) E α A ]) (E α B − tr σ(µ) E α B ) .(F2)
The rest follows similarly to Proposition A.4 from Gershgorin's circle theorem together with the decay of correlations arising at β < β_c: for all α_A,
Σ_{α_B ≠ α_A} | (∇² log(Z(µ)))_{α_A α_B} | ≤ c β² Σ_{α_B ≠ α_A} e^{−ξ dist(A,B)} ,
where we also used that the basis operators {E i } have operator norm 1. The claim then follows by bounding the number of basis operators whose support is at a distance r of A for each r ∈ N: the latter is bounded by the product of (i) the number of vertices in A; (ii) the number of vertices at a distance r of a given vertex; (iii) the number of hyperedges containing a given vertex; and (iv) the number of basis operators corresponding to a given hyperedge. A simple estimate of each of these quantities gives the bound B(r 0 ) S(r) B(2r 0 ) d 2B(r0) . Therefore:
Σ_{α_B ≠ α_A} | (∇² log(Z(µ)))_{α_A α_B} | ≤ c β² B(r_0) B(2r_0) d^{2B(r_0)} Σ_{r=1}^∞ e^{−ξ r} S(r) .
Note that for D-regular graphs and r 0 = O(1) we have B(r 0 ) B(2r 0 ) d 2B(r0) = O(1) and S(r) = O(r D−1 ), giving a scaling of ∇ 2 log(Z(µ)) = O(β 2 ξ D ).
Lower bound on Hessian for commuting Gibbs states
Whenever the Gibbs state is assumed to be commuting, the lower strong convexity constant L can also be made independent of m, by a direct generalization of the classical argument as found in [68][Lemma 7] or [69] (see also [5]).
Before we state and prove our result, let us introduce a few useful concepts: given a full-rank quantum state σ, we refer to the non-commutative weighted L_2 space with inner product ⟨X, Y⟩_σ := tr[σ^{1/2} X† σ^{1/2} Y] as L_2(σ), and denote the corresponding weighted 2-norm of an observable Y by ‖Y‖_{2,σ} := ⟨Y, Y⟩_σ^{1/2}. We denote by R_{A,σ(µ)} the Petz recovery map corresponding to a subregion A ⊂ V with respect to σ(µ). We will also need the notion of a conditional expectation E_A with respect to the state σ(µ) into the region A ⊂ Λ (see [70,71] for more details). For instance, one can choose E_A := lim_{n→∞} R^n_{A,σ(µ)}, where R_{A,σ(µ)} is the Petz recovery map of σ(µ). In other words, the map E_A is a completely positive, unital map that projects onto the algebra N_A of fixed points of R_{A,σ(µ)}. This algebra is known to be expressed as the commutant [71]
N A := {σ(µ) it B(H A )σ(µ) −it ; t ∈ R} .
Moreover, the maps E A commute with the modular operator ∆ σ(µ) (.) := σ(µ)(.)σ(µ) −1 , and for any
X A , Z A ∈ N A and all Y ∈ B(H V ), E A [X A Y Z A ] = X A E A [Y ]Z A .
The commutativity condition for H(µ) implies frustration freeness of the family of conditional expectations {E A } A∈E : for any X ∈ L 2 (σ(µ)), E A (X) 2 2,σ(µ) + (id −E A )(X) 2 2,σ(µ) = X 2 L2(σ(µ)) . The next technical lemma is essential in the derivation of our strong convexity lower bound. With a slight abuse of notations, we use the simplified notations σ x (λ) := σ {x} (λ), H x (λ) := H {x} (λ) and so on.
Lemma F.2. Let H = j µ j E j be a local commuting Hamiltonian on the hypergraph G = (V, E) defined in (F1), each local operator E j is further assumed to be traceless. The following holds for any x ∈ V :
c(x, β) := max_{µ∈B_∞(0,1)} ‖ R_{x,σ_x(µ)}(H_x) − tr[σ_x(µ)H_x] I ‖_{2,σ_x(µ)} / ‖ H_x − tr[σ_x(µ)H_x] I ‖_{2,σ_x(µ)} < 1 .
Proof. We first prove that X = R_{x,σ_x(µ)}(X) is equivalent to ‖R_{x,σ_x(µ)}(X)‖_{2,σ_x(µ)} = ‖X‖_{2,σ_x(µ)}. One direction trivially holds. For the opposite direction, assume that X ≠ R_{x,σ_x(µ)}(X). This means that X = Y + Z, with Y, Z two operators that are orthogonal in L_2(σ_x(µ)), with R_{x,σ_x(µ)}(Y) = Y and Z ≠ 0. Now, since R_{x,σ_x(µ)} is self-adjoint and unital, it strictly contracts elements in the orthogonal complement of its fixed points and we have
R x,σx(µ) (X) 2 2,σx(µ) = Y 2 2,σx(µ) + R x,σx(µ) (Z) 2 2,σx(µ) < Y 2 2,σx(µ) + Z 2 2,σx(µ) = X 2 2,σx(µ) ,
which contradicts the condition of equality of the norms. Now, since the map R_{x,σ_x(µ)} is unital, it suffices to prove that R_{x,σ_x(µ)}(H_x) ≠ H_x, or equivalently E_x(H_x) ≠ H_x, in order to conclude. Let us assume instead that equality holds. This means that, for all observables A_x supported on site x, and all t ∈ R:
[H x , σ(µ) it A x σ(µ) −it ] = σ(µ) it [H x , A x ]σ(µ) −it = 0 ⇒ H x = I x d ⊗ tr x (H x ) .
However, this contradicts the fact that H_x is traceless on site x. Therefore R_{x,σ_x(µ)}(H_x) ≠ H_x and the proof follows.
We are ready to prove our strong convexity lower bound.
Proposition F.1 (Strengthened lower strong convexity constant for commuting Hamiltonians). For each µ ∈ B ∞ (0, 1), let σ(µ) be a Gibbs state at inverse temperature β corresponding to the commuting Hamiltonian H(µ) = j µ j E j on the hypergraph G = (V, E) defined in (F1), where tr [E i E j ] = 0 for all i = j and each local operator E j is traceless on its support. Then the Hessian of the log-partition function is lower bounded by
∇ 2 log Z(µ) ≥ β 2 e −β(B(2r0)+2B(4r0)) d −B(2r0) (1 − c(β) 2 ) I ,
where c(β) := max v∈V c(v, β).
Proof. We first use the assumption of commutativity in order to simplify the expression for the Hessian. As in (F2), we find
(∇ 2 log(Z(µ))) ij = β 2 tr σ(µ) (E i − tr [σ(µ) E i ]) (E j − tr [σ(µ) E j ]) .
Therefore, for any linear combination H ≡ H(λ) = j λ j E j of the basis vectors, we have
ij λ i λ j (∇ 2 log(Z(µ))) ij = β 2 Var σ(µ) (H) ,
with Var σ(µ) (X) := X − tr [σ(µ)X] 2 2,σ(µ) . It is thus sufficient to lower bound the latter. For this, we choose a subregion A ⊆ V such that any basis element E i has support intersecting a unique vertex in A. We lower bound the variance by
H − tr [σ(µ)H] I 2 2,σ(µ) ≥ (id − E A )(H − tr [σ(µ)H] I) 2 2,σ(µ) = H − E A [H] 2 2,σ(µ) ,(F3)
where the first inequality follows by the L 2 (σ(µ)) contractivity of id − E A . Now, the weighted norm can be further simplified into a sum of local weighted norms as follows: first, for any two E i , E j whose supports intersect with a different vertex of A, we have
E A [E i ], E A [E j ] σ(µ) = tr σ(µ) 1/2 E A [E i ]σ(µ) 1/2 E A [E j ] = tr σ(µ)∆ −1/2 σ(µ) • E A [E i ]E A [E j ] = tr [σ(µ)E A [E i ]E A [E j ]](F4)
where in the third line we used the commutation of the modular operator ∆ σ(µ) (X) := σ(µ)Xσ(µ) −1 with E A together with the commutativity of σ(µ) and E i . Now, denoting supp(E i ) ∩ A = {x} and supp(E j ) ∩ A = {y}, we show that
E A [E i ] = E x [E i ] and E A [E j ] = E y [E j ] .(F5)
In order to prove these two identities, we simply need to prove for instance that E x [E i ] belongs to the image algebra N A of E A since N A ⊆ N x by definition. Hence, it is enough to show that E x [E i ] commutes with operators of the form σ(µ) it X A σ(µ) −it for any t ∈ R and any X A ∈ B(H A ). This claim follows from
E x [E i ]σ(µ) it X A σ(µ) −it = σ(µ) it σ(µ) −it E x [E i ]σ(µ) it X A σ(µ) −it = σ(µ) it E x [σ(µ) −it E i σ(µ) it ]X A σ(µ) −it = σ(µ) it E x [E i ]X A σ(µ) −it = σ(µ) it X A E x [E i ]σ(µ) −it = σ(µ) it X A σ(µ) −it E x [E i ] .
where the fourth line follows from the fact that the support of E x [E i ] does not intersect A\{i}, together with the fact that E x [E i ] is locally proportional to I on site x, by definition of N x . Therefore, using (F5) into (F4), we get
E A [E i ], E A [E j ] σ(µ) = tr [σ(µ)E x [E i ]E y [E j ]] = tr [σ(µ)E x E y [E i E j ]] = tr [σ(µ)E i E j ] = E i , E j σ(µ) ,
Theorem F.1. For each µ ∈ B_∞(0, 1), let σ(µ) be a Gibbs state at inverse temperature β corresponding to the commuting Hamiltonian H(µ) = Σ_{j=1}^m µ_j E_j on the hypergraph G = (V, E) defined in (F1), where tr[E_i E_j] = 0 for all i ≠ j, each local operator E_j is traceless on its support and acts on a constant number of qubits, and m = O(n). Moreover, assume that σ(λ) satisfies the conditions of Thm. B.1. Then O(log(n) ε^{−2}) samples of σ(λ) suffice to obtain a µ ∈ B_∞(0, 1) satisfying
tr[O(σ(λ) − σ(µ))] = O( ε √n ‖O‖_{Lip,∇} ) .
with probability at least 1 − p. Moreover, we can find µ in O(poly(n, ε^{−1})) time on a classical computer. With access to a quantum computer, the postprocessing can be performed in Õ(n ε^{−2}) time by only implementing quantum circuits of Õ(1) depth.
Proof. To obtain the claim on the sample complexity, note that for such systems L = Ω(1) by Prop. F.1 and they satisfy a transportation cost inequality by Thm. B.1. Moreover, we can learn the expectation of all E_i up to an error ε and failure probability δ with O(log(nδ^{−1}) ε^{−2}) samples using shadows, as they all have constant support. The claimed sample complexity then follows from Thm. C.1. The classical postprocessing result follows from [29]. The postprocessing with a quantum computer follows from the results of [70], [32] combined with the fact that L^{−1} U = O(1), by also invoking Lemma F.1. Indeed, [32, 70] assert that we can approximately prepare any σ(µ) on a quantum computer with a circuit of depth Õ(1). Moreover, once again resorting to shadows we can estimate e(µ) with Õ(ε^{−2}) samples. We conclude that we can run each iteration of gradient descent in Õ(n ε^{−2}) time. As L^{−1} U = O(1), Thm. A.1 asserts that we converge after Õ(1) iterations, which yields the claim.
The theorem above nicely showcases how the joint use of transportation cost inequalities and strong convexity bounds to improve maximum entropy methods come together. Moreover, with access to a quantum computer, up to polylogarithmic overhead in system size factors, the computational complexity of learning the Gibbs state is comparable to reading out the results of measurements on the system. Together with the polylog sample complexity bounds that we obtain, this justifies us claiming that the result above almost gives the last word on learning such states.
Appendix G: Comparison of sample complexity of previous methods
Our work arguably introduces two technical innovations to the literature of learning and tomography of quantum states that underlie our exponential speedups in sample complexity. The first is the observation that most observables of physical interest have a small Lipschitz constant, and, thus, it might be more motivated to look for good approximations in Wasserstein distance instead of trace distance. The second is that for states that satisfy a TC inequality it suffices to obtain an estimate of the state that has a small relative entropy density with the target state to recover Lipschitz observables. And that finding such an estimate can be achieved from a few samples through a combination of maximum entropy and classical shadows methods.
We will now argue that these two innovations are indeed crucial to ensure our exponential speedups. First, we will show that the shadow protocol will yield bad estimates for the expectation value of local observables with high probability even for product states if the number of samples is not exponential in the locality of the underlying observables. This shows that exploiting the locality of the underlying states is crucial to obtain a polynomial sample complexity in the locality of the observables. After that we will show in Subsec. G 2 that Ω(√n ε^{−2}) samples are necessary for any algorithm that can recover any Gibbs state on a regular lattice at constant temperature up to trace distance ε. This will follow from the results of [13] and showcases the need to focus on the Wasserstein distance instead of the trace distance.
Lower bounds for sample complexity of classical shadows
One of the main advantages of our results compared with the classical shadows method of [9] is that whenever a TC inequality is available, we can learn all k−local observables with a number of samples that grows polynomially in k, whereas classical shadows require a number of samples that grows exponentially in k. However, the classical shadows framework makes no assumption on the underlying state. Thus, it is natural to ask if this undesired exponential scaling of the shadows framework is due to this broader applicability.
In this section we will demonstrate that this is not the case even for one of the simplest imaginable classes of states, namely tensor products of Pauli eigenstates. We will show that if the number of samples is not exponential in the locality of the desired observables, there will always be a k-local observable whose estimate will be wrong with constant probability.
But before we show that, let us briefly recall how the shadow method works. In order to recover local observables on n qubits, the methods of [9] proceed as follows. First, we sample a random unitary U = ⊗_{i=0}^{n−1} U_i, where each U_i is an independent rotation into a Pauli basis. Then we proceed to measure the state of interest ρ in the basis defined by U, obtaining an n-bit classical string b_0 b_1 . . . b_{n−1}. The shadow corresponding to this measurement, ρ̂, is then defined to be given by
ρ̂ = ⊗_{i=1}^n ( 3 U_i† |b_i⟩⟨b_i| U_i − I ) .
We then repeat this procedure SM times for S, M ∈ N, obtaining shadows {ρ̂_{s,m}}_{1≤m≤M, 1≤s≤S}. We then set our estimate of the expectation value of an observable O to be given by
Ô({ρ̂_{s,m}}) = median_{1≤m≤M} ( S^{−1} Σ_{s=1}^{S} tr[ρ̂_{s,m} O] ) . (G1)
That is, we partition the set of shadow samples into M subgroups, take the average on each of them and then take their median. The authors of [9] then proceed to show in Theorem 1 that if we take SM = O(4^k log(N δ^{−1}) ε^{−2}), then with probability at least 1 − δ the expectation value of any given N k-local observables of bounded operator norm will deviate by at most ε from the estimate given in Eq. (G1). We will now show that this exponential scaling in k is unavoidable if we want to obtain nontrivial estimates with high probability. More precisely:
Proposition G.1. Let ρ = ⊗ n i=1 |φ i φ i | be an unknown n qubit state where each |φ i is given by a Pauli eigenstate. For k ≥ log 3 (n) log(n), if SM ≤ n then there is an observable O of the form
O = n −1 n i=1 O i
where each O i is k-local and such that:
tr [Oρ] = 1,Ô({ρ s,m }) = 0 (G2)
with probability at least 1 − e −1 .
Proof. We will let O i = ⊗ i+k j=i P j , where we take addition modulo n and let P i be the Pauli matrix such that P i |φ i = 1, where we assume w.l.o.g. that each Pauli eigenstate corresponds to a +1 eigenstate. It is then clear that tr [Oρ] = 1.
Let us now analyse the performance of shadows. Note that as the random unitaries used to obtain each sample are random Pauli bases, we have that tr [O iρs,m ] = 3 k if the unitary U s,m corresponds to the identity on qubits i, i + 1, . . . , i + k and 0 otherwise. This is because then the string we measure will be rotated to a Pauli basis different from that of P i on at least one of the qubits in that interval. Thus, as we pick the Pauli bases uniformly at random and there are three different possibilities for each one of them, we see that the probability that a shadow is different from 0 on a given O i is 3 −k . By a union bound, the probability that a shadow is different than 0 on any of the n O i is at most n3 −k . As the different shadows are independent, the probability that all of the SM shadows return 0 is at least (1 − n3 −k ) SM . As we picked k ≥ log 3 (n) log(n) and SM ≤ n, we have:
(1 − n3 −k ) SM ≥ (1 − n −1 ) n ≥ e −1 2 for n large enough. Thus, as all shadows will have expectation value 0, the median and means procedure will clearly also output 0, which concludes the proof.
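The counting argument above can also be checked numerically; the toy Monte Carlo sketch below estimates the probability that none of the shadows "sees" any of the n windows, using small illustrative values of n, k and SM rather than the regime of the proposition.

```python
import numpy as np

rng = np.random.default_rng(1)

def prob_all_shadows_miss(n, k, num_shadows, trials=2000):
    """Monte Carlo estimate of the probability that none of `num_shadows`
    random product Pauli bases matches the all-Z basis on any of the n
    cyclic length-k windows, i.e. every single-shadow estimate of every O_i is zero."""
    misses = 0
    for _ in range(trials):
        hit = False
        for _ in range(num_shadows):
            bases = rng.integers(0, 3, size=n)       # 0 = Z, 1 = X, 2 = Y on each qubit
            if any(all(bases[(i + j) % n] == 0 for j in range(k)) for i in range(n)):
                hit = True
                break
        misses += not hit
    return misses / trials

n, k, num_shadows = 12, 6, 12
print(prob_all_shadows_miss(n, k, num_shadows))      # empirical miss probability
print((1 - n * 3 ** -k) ** num_shadows)              # union-bound lower estimate from the proof
```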
In spite of the fact the proof above used quite rough estimates and simple observables and states, it still gives some intuition why classical shadows require an exponential number of samples in the locality. We see that the probability that the shadows "look in the right direction" is exponentially small in the locality of the observables, in the sense that the overlap between the state and a random Pauli eigenstate is likely to be exponentially small. And whenever it does look in a direction in which the underlying state has a significant overlap with that basis it has to compensate that direction exponentially. Thus, if the number of samples is not exponential in the locality, it is unlikely that we will measure in a direction that has significant overlap with our state or there will be significant fluctuations due to the exponential rewarding of the "good" directions.
However, by combining shadows with a locality structure and the maximum entropy principle, as we do in this work, we can bypass the need to measure in random directions for observables with a large locality, bypassing this exponential scaling.
We also note that the authors of [9] already proved the optimality of their protocol by only considering product states in Section 8 of their supplemental material. The main difference between their proof and ours is that we focus on a Lipschitz observable, whereas they focus on observables that only depend on k qubits.
Lower bounds for recovery in trace distance
In the main text we claimed that one of the reasons why we obtain an exponential speedup compared to usual many-body methods is that we focus on a good recovery in the Wasserstein distance instead of trace distance. Moreover, by combining the maximum entropy method with a TC inequality we are able to obtain good recovery guarantees from a constant relative entropy density. That is, as long as two Gibbs states σ(λ), σ(µ) satisfy D(σ(λ)‖σ(µ)) ≤ εn,
for some ε > 0 we already obtain some nontrivial guarantees. In this section we will argue that focusing on the Wasserstein recovery instead of trace distance is essential to obtain nontrivial recovery guarantees from a number of samples scaling logarithmically with the system size. In order to achieve this, we will resort to results of [13, Theorem 1.3]:
Proposition G.2 (Lower bound on the sample complexity in trace distance). Let $G = (V, E)$ be a graph on $n$ vertices and $m$ edges and, for $\lambda \in \mathbb{R}^{15m}$ with $\|\lambda\|_\infty \leq 1$, let $H(\lambda)$ be defined as
$$H(\lambda) = \sum_{i\sim j}\sum_{l=1}^{15}\lambda^{l}_{i,j}\,H^{l}_{i,j},$$
where the $H^{l}_{i,j}$ correspond to some ordering of the nonidentity Pauli strings acting on sites $i, j$. Then, for any $\beta = \Omega(m^{-\frac{1}{2}})$, let $\hat{\sigma}(\lambda)$ be the estimate of $\sigma(\lambda)$ outputted by an algorithm with access to $s$ samples from the state $\sigma(\lambda)$.
… $|f(x) - f(y)|/d(x, y)$. (1)

FIG. 1. Example of observable $O = \sum_i O_i$ for a 2D lattice system of size $n$. Each $O_i$ is supported on an $L \times L$ square ($L = 3$ in the figure). We have $\|O\|_{\mathrm{Lip}} = O(\sqrt{n})$ and $\|O\| = n/L^2$. Thus our methods require $\mathrm{poly}(L, \log(n), \epsilon^{-1})$ samples to estimate the expectation value of all such observables. Shadow-like methods require $\mathrm{poly}(e^{cL^2})$ samples.

FIG. 2. Performance on a Gibbs state from the family of Eq. (14) and the 8-local observable in Eq. (15).

… was supported by VILLUM FONDEN via the QMATH Centre of Excellence under Grant No. 10059 and from the European Research Council (grant agreement no. 81876).

FIG. 4. Error in estimating a Lipschitz observable after performing the maximum entropy reconstruction method. The underlying state is a classical 1D Gibbs state with randomly chosen nearest-neighbor interactions and inverse temperature $\beta = 1$. We estimated all the $Z_iZ_{i+1}$ expectation values of the original state from $10^3$ samples of the original state. We then computed the upper bound on the trace distance predicted by Eq. (9) and Pinsker's inequality and compared it to the actual discrepancy for a Lipschitz observable on the reconstructed and actual states. The Lipschitz observable was chosen as

… by Prop. F.1, and they satisfy a transportation cost inequality by Thm. B.1. Moreover, we can learn the expectation of all $E_i$ up to an error $\epsilon$ and failure probability $\delta$ with $O(\log(n\delta^{-1})\epsilon^{-2})$ samples using shadows, as they all have constant support. The claimed sample complexity then follows from Thm. C.1. The classical postprocessing result follows from [29]. The postprocessing with a quantum computer follows from the results of [70], [32] combined with the fact that $\|L^{-1}\|_U = O(1)$, by also invoking Lemma F.1.

… then set our estimate of the expectation value of an observable $O$ to be given by
$$\hat{O}(\{\rho_{s,m}\}) = \underset{1\leq s\leq S}{\mathrm{median}}\left\{\frac{1}{M}\sum_{m=1}^{M}\mathrm{tr}[\rho_{s,m}\,O]\right\}.\qquad(\mathrm{G1})$$

That is, $\|O\|_{\mathrm{Lip}}$ quantifies the amount by which the expectation value of $O$ changes for states that are equal when tracing out one site. It is clear that $\|O\|_{\mathrm{Lip}} \leq 2\sqrt{n}\,\|O\|_\infty$ always holds by Hölder's inequality, but it can be the case that $\|O\|_{\mathrm{Lip}} \ll \sqrt{n}\,\|O\|_\infty$. For instance, consider for some $k > 0$ the $n$-qubit observable
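For concreteness, here is a minimal sketch (ours, not from the paper) of the median-of-means post-processing that Eq. (G1) describes: the $S\cdot M$ single-shadow estimates $\mathrm{tr}[\rho_{s,m}O]$ are split into $S$ batches of $M$, each batch is averaged, and the median of the batch means is reported.

```python
import numpy as np

def median_of_means(estimates: np.ndarray, n_batches: int) -> float:
    """estimates: flat array of S*M single-shadow values tr[rho_{s,m} O]."""
    batches = np.array_split(estimates, n_batches)        # S batches of ~M values each
    batch_means = np.array([b.mean() for b in batches])   # mean within each batch
    return float(np.median(batch_means))                  # median across batches

# Toy usage with synthetic, heavy-tailed single-shadow estimates around a true value of 0.3.
rng = np.random.default_rng(0)
samples = 0.3 + rng.standard_t(df=2, size=1200)
print(median_of_means(samples, n_batches=12))
```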
[1] Ryan O'Donnell and John Wright. Efficient quantum tomography. In Daniel Wichs and Yishay Mansour, editors, Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, Cambridge, MA, USA, June 18-21, 2016, pages 899-912. ACM, 2016.
[2] Jeongwan Haah, Aram Wettroth Harrow, Zhengfeng Ji, Xiaodi Wu, and Nengkun Yu. Sample-optimal tomography of quantum states. IEEE Trans. Inf. Theory, 63(9):5628-5641, 2017.
[3] Marcus Cramer, Martin B. Plenio, Steven T. Flammia, Rolando Somma, David Gross, Stephen D. Bartlett, Olivier Landon-Cardinal, David Poulin, and Yi-Kai Liu. Efficient quantum state tomography. Nature Communications, 1(1):149, dec 2010.
[4] T. Baumgratz, A. Nüßeler, M. Cramer, and M. B. Plenio. A scalable maximum likelihood method for quantum state tomography. New Journal of Physics, 15(12):125004, dec 2013.
[5] Anurag Anshu, Srinivasan Arunachalam, Tomotaka Kuwahara, and Mehdi Soleimanifar. Sample-efficient learning of interacting quantum systems. Nature Physics, may 2021.
[6] Giacomo Torlai, Guglielmo Mazzola, Juan Carrasquilla, Matthias Troyer, Roger Melko, and Giuseppe Carleo. Neural-network quantum state tomography. Nature Physics, 14(5):447-450, may 2018.
[7] Jun Wang, Zhao-Yu Han, Song-Bo Wang, Zeyang Li, Liang-Zhu Mu, Heng Fan, and Lei Wang. Scalable quantum tomography with fidelity estimation. Phys. Rev. A, 101:032321, Mar 2020.
[8] Jens Eisert, Dominik Hangleiter, Nathan Walk, Ingo Roth, Damian Markham, Rhea Parekh, Ulysse Chabaud, and Elham Kashefi. Quantum certification and benchmarking. Nature Reviews Physics, 2(7):382-390, jul 2020.
[9] Hsin-Yuan Huang, Richard Kueng, and John Preskill. Predicting many properties of a quantum system from very few measurements. Nature Physics, June 2020.
[10] Jordan Cotler and Frank Wilczek. Quantum overlapping tomography. Physical Review Letters, 124(10):100401, 2020.
[11] Andrew Jena, Scott Genin, and Michele Mosca. Pauli partitioning with respect to gate sets. arXiv:1907.07859 [quant-ph], July 2019.
[12] Ophelia Crawford, Barnaby van Straaten, Daochen Wang, Thomas Parks, Earl Campbell, and Stephen Brierley. Efficient quantum measurement of Pauli operators in the presence of finite sampling error. arXiv:1908.06942 [quant-ph], April 2020.
[13] Luc Devroye, Abbas Mehrabian, and Tommy Reddad. The minimax learning rates of normal and Ising undirected graphical models. Electronic Journal of Statistics, 14(1), January 2020.
[14] Eric A. Carlen and Jan Maas. Non-commutative calculus, optimal transport and functional inequalities in dissipative quantum systems. Journal of Statistical Physics, 178(2):319-378, nov 2019.
[15] Cambyse Rouzé and Nilanjana Datta. Concentration of quantum states from quantum functional and transportation cost inequalities. Journal of Mathematical Physics, 60(1):012202, January 2019.
[16] Li Gao, Marius Junge, and Nicholas LaRacuente. Fisher information and logarithmic Sobolev inequality for matrix-valued functions. Annales Henri Poincaré, 21(11):3409-3478, November 2020.
[17] Giacomo De Palma, Milad Marvian, Dario Trevisan, and Seth Lloyd. The quantum Wasserstein distance of order 1. IEEE Transactions on Information Theory, pages 1-1, 2021.
[18] Bobak Toussi Kiani, Giacomo De Palma, Milad Marvian, Zi-Wen Liu, and Seth Lloyd. Quantum earth mover's distance: A new approach to learning quantum data. arXiv preprint arXiv:2101.03037, 2021.
[19] Giacomo De Palma and Cambyse Rouzé. Quantum concentration inequalities. arXiv:2106.15819 [math-ph, physics:quant-ph], June 2021.
[20] E. T. Jaynes. Information theory and statistical mechanics. Physical Review, 106(4):620-630, May 1957.
[21] Tomotaka Kuwahara and Keiji Saito. Gaussian concentration bound and ensemble equivalence in generic quantum many-body systems including long-range interactions. Annals of Physics, 421:168278, 2020.
[22] Jeongwan Haah, Robin Kothari, and Ewin Tang. Optimal learning of quantum Hamiltonians from high-temperature Gibbs states, 2021. arXiv:2108.04842v1.
[23] Huangjun Zhu, Richard Kueng, Markus Grassl, and David Gross. The Clifford group fails gracefully to be a unitary 4-design. arXiv preprint arXiv:1609.08172, 2016.
[24] Matthew B. Hastings and Tohru Koma. Spectral gap and exponential decay of correlations. Communications in Mathematical Physics, 265(3):781-804, August 2006.
[25] M. Talagrand. Transportation cost for Gaussian and other product measures. Geometric and Functional Analysis, 6(3):587-600, may 1996.
[26] Cédric Villani. Optimal Transport, volume 338 of Grundlehren der mathematischen Wissenschaften. Springer Berlin Heidelberg, Berlin, Heidelberg, 2009.
[27] Maxim Raginsky and Igal Sason. Concentration of Measure Inequalities in Information Theory, Communications, and Coding. Now Publishers, Norwell, MA, 2014. OCLC: 1193114287.
[28] M. Kliesch, C. Gogolin, M. J. Kastoryano, A. Riera, and J. Eisert. Locality of temperature. Physical Review X, 4(3):031019, July 2014.
[29] Tomotaka Kuwahara, Kohtaro Kato, and Fernando G. S. L. Brandão. Clustering of conditional mutual information for quantum Gibbs states above a threshold temperature. Phys. Rev. Lett., 124:220601, Jun 2020.
[30] Aram W. Harrow, Saeed Mehraban, and Mehdi Soleimanifar. Classical algorithms, correlation decay, and complex zeros of partition functions of quantum many-body systems. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, pages 378-386, 2020.
[31] Tomotaka Kuwahara, Álvaro M. Alhambra, and Anurag Anshu. Improved thermal area law and quasilinear time algorithm for quantum Gibbs states. Physical Review X, 11(1):011047, March 2021.
[32] Ángela Capel, Cambyse Rouzé, and Daniel Stilck França. The modified logarithmic Sobolev inequality for quantum spin systems: classical and commuting nearest neighbour interactions. arXiv preprint arXiv:2009.11817, 2020.
[33] Note that the definition of [17] has a different normalization and does not have the √n term. This normalization will be convenient to treat the constants of [15] and [17] on an equal footing.
[34] S. G. Bobkov and F. Götze. Exponential integrability and transportation cost related to logarithmic Sobolev inequalities. Journal of Functional Analysis, 163(1):1-28, April 1999.
[35] Fernando G. S. L. Brandao and Marcus Cramer. Equivalence of statistical mechanical ensembles for non-critical quantum systems. arXiv:1502.03263 [cond-mat, physics:quant-ph], February 2015.
[36] Anurag Anshu. Concentration bounds for quantum states with finite correlation length on quantum spin lattice systems. New Journal of Physics, 18(8):083011, aug 2016.
[37] Hal Tasaki. On the local equivalence between the canonical and the microcanonical ensembles for quantum spin systems. Journal of Statistical Physics, 172(4):905-926, August 2018.
[38] Tomotaka Kuwahara and Keiji Saito. Eigenstate thermalization from the clustering property of correlation. Physical Review Letters, 124(20):200604, May 2020.
[39] B. P. Lanyon, C. Maier, M. Holzäpfel, T. Baumgratz, C. Hempel, P. Jurcevic, I. Dhand, A. S. Buyskikh, A. J. Daley, M. Cramer, M. B. Plenio, R. Blatt, and C. F. Roos. Efficient tomography of a quantum many-body system. Nature Physics, 13(12):1158-1162, September 2017.
[40] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[41] Xavier Bonet-Monroig, Ryan Babbush, and Thomas E. O'Brien. Nearly optimal measurement scheduling for partial tomography of quantum states. Physical Review X, 10(3):031064, 2020.
[42] M. B. Hastings. Quantum belief propagation: An algorithm for thermal quantum systems. Physical Review B, 76(20):201102, November 2007.
[43] Alexander M. Dalzell and Fernando G. S. L. Brandão. Locally accurate MPS approximations for ground states of one-dimensional gapped local Hamiltonians. Quantum, 3:187, September 2019.
[44] Álvaro M. Alhambra and J. Ignacio Cirac. Locally accurate tensor networks for thermal states and time evolution. arXiv:2106.00710 [cond-mat, physics:quant-ph], June 2021.
[45] Yichen Huang. Locally accurate matrix product approximation to thermal states. arXiv:2106.03854 [cond-mat, physics:math-ph, physics:quant-ph], June 2021.
[46] Fernando G. S. L. Brandão and Michael J. Kastoryano. Finite correlation length implies efficient preparation of quantum thermal states. Communications in Mathematical Physics, 365(1):1-16, January 2019.
[47] Brian Swingle and Isaac H. Kim. Reconstructing quantum states from local data. Physical Review Letters, 113(26):260501, December 2014. arXiv:1407.2658.
[48] Christian Kokail, Rick van Bijnen, Andreas Elben, Benoît Vermersch, and Peter Zoller. Entanglement Hamiltonian tomography in quantum simulation, 2020. arXiv:2009.09000v1.
[49] M. B. Hastings. Quantum belief propagation: An algorithm for thermal quantum systems. Physical Review B, 76(20), nov 2007.
[50] Scott Aaronson. Shadow tomography of quantum states. In STOC'18 - Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, pages 325-338. ACM, New York, 2018.
[51] Akram Youssry, Christopher Ferrie, and Marco Tomamichel. Efficient online quantum state estimation using a matrix-exponentiated gradient method. New J. Phys., 21(3):033006, 2019.
[52] Fernando G. S. L. Brandão, Amir Kalev, Tongyang Li, Cedric Yen-Yu Lin, Krysta M. Svore, and Xiaodi Wu. Quantum SDP solvers: large speed-ups, optimality, and applications to quantum learning. In 46th International Colloquium on Automata, Languages, and Programming, volume 132 of LIPIcs. Leibniz Int. Proc. Inform., Art. No. 27, 14. Schloss Dagstuhl. Leibniz-Zent. Inform., Wadern, 2019.
[53] Fernando G. S. L. Brandão, Richard Kueng, and Daniel Stilck França. Fast and robust quantum state tomography from few basis measurements. arXiv:2009.08216 [quant-ph], September 2020.
[54] Eric A. Carlen and Jan Maas. An analog of the 2-Wasserstein metric in non-commutative probability under which the fermionic Fokker-Planck equation is gradient flow for the entropy. Communications in Mathematical Physics, 331(3):887-926, 2014.
[55] Eric A. Carlen and Jan Maas. Gradient flow and entropy inequalities for quantum Markov semigroups with detailed balance. Journal of Functional Analysis, 273(5):1810-1869, 2017.
[56] Maxim Raginsky and Igal Sason. Concentration of measure inequalities in information theory, communications and coding. Foundations and Trends in Communications and Information Theory; NOW Publishers: Boston, MA, USA, 2018.
[57] M. B. Hastings. Locality in quantum systems. arXiv:1008.5137 [math-ph, physics:quant-ph], August 2010.
[58] David Poulin. Lieb-Robinson bound and locality for general Markovian quantum dynamics. Physical Review Letters, 104(19):190401, May 2010.
[59] Thomas Barthel and Martin Kliesch. Quasilocality and efficient simulation of Markovian quantum dynamics. Physical Review Letters, 108(23):230504, 2012.
[60] Martin Kliesch, Christian Gogolin, and Jens Eisert. Lieb-Robinson bounds and the simulation of time-evolution of local observables in lattice systems. In Volker Bach and Luigi Delle Site, editors, Many-Electron Approaches in Physics, Chemistry and Mathematics, pages 301-318. Springer International Publishing, Cham, 2014.
[61] Nathael Gozlan and Christian Léonard. Transport inequalities. A survey. arXiv preprint arXiv:1003.3852, 2010.
[62] Ivan Bardet, Ángela Capel, Li Gao, Angelo Lucia, David Pérez-García, and Cambyse Rouzé. Entropy decay for Davies semigroups of a one dimensional quantum lattice. In preparation, 2021.
[63] Salman Beigi, Nilanjana Datta, and Cambyse Rouzé. Quantum reverse hypercontractivity: its tensorization and application to strong converses. Communications in Mathematical Physics, 376(2):753-794, 2020.
[64] Kristan Temme, Fernando Pastawski, and Michael J. Kastoryano. Hypercontractivity of quasi-free quantum semigroups. Journal of Physics A: Mathematical and Theoretical, 47(40):405303, October 2014.
[65] Anurag Anshu, Srinivasan Arunachalam, Tomotaka Kuwahara, and Mehdi Soleimanifar. Efficient learning of commuting Hamiltonians on lattices. Unpublished notes available at Anurag Anshu's website.
[66] Roland L. Dobrushin and Senya B. Shlosman. Completely analytical interactions: constructive description. Journal of Statistical Physics, 46(5):983-1014, 1987.
[67] Huzihiro Araki. Gibbs states of a one dimensional quantum lattice. Communications in Mathematical Physics, 14(2):120-157, 1969.
[68] Marc Vuffray, Sidhant Misra, Andrey Lokhov, and Michael Chertkov. Interaction screening: Efficient and sample-optimal learning of Ising models. In Advances in Neural Information Processing Systems, pages 2595-2603, 2016.
[69] Andrea Montanari et al. Computational implications of reducing data to sufficient statistics. Electronic Journal of Statistics, 9(2):2370-2390, 2015.
[70] Michael J. Kastoryano and Fernando G. S. L. Brandao. Quantum Gibbs samplers: the commuting case. Communications in Mathematical Physics, 344(3):915-957, 2016.
[71] Ivan Bardet, Ángela Capel, and Cambyse Rouzé. Approximate tensorization of the relative entropy for noncommuting conditional expectations. arXiv preprint arXiv:2001.07981, 2020.
(B4)
It follows from an application of Hölder's inequality combined with the variational formulation in Eq. (B2) that $\|O\|_{\mathrm{Lip}} \leq 2\sqrt{n}\,\|O\|_\infty$. However, it can be the case that $\|O\|_{\mathrm{Lip}} \ll \sqrt{n}\,\|O\|_\infty$, and such observables are exactly those that should be thought of as regular. This is because it signals that changing the state locally leads to significantly smaller changes to the expectation value of the observable than global changes do. Two examples of this behaviour are given by the observables:
where in the second line we used that $E_x[E_i]$ is a fixed point of $E_y$, and then that $E_j$ is a fixed point of $E_x$, by the support conditions of $E_i$ and $E_j$. Therefore, the variance on the right-hand side of (F3) can be simplified as …, where we recall that $H_x := \sum_{j:\,x\in\mathrm{supp}(E_j)} \lambda_j E_j$. Now, for any $x \in V$, we denote $x\partial := \{A \in E \mid x \in A\}$ and decompose the Hamiltonian $H(\mu)$ as …. The first and second identities above follow once again from the commutativity of the Hamiltonian, similarly to (F4), where for the second one we also use the disjointness of $x\partial$ and $\mathrm{supp}(K^1_x(\mu))$. The first inequality follows from (F7). The third identity is a consequence of the fact that $E_x$ is a projection with respect to $L^2(\sigma_x(\mu))$. The second inequality follows from $\|R_{x,\sigma(\mu)}[X]\|_{2,\sigma(\mu)} \geq \|E_x[X]\|_{2,\sigma(\mu)}$ for all $X$ (see Proposition 10 in [70]). The last inequality is a consequence of Lemma F.2. Finally, we further bound the weighted $L^2$ norm on the last line of the above inequality in terms of the Schatten 2-norm in order to get …, where $\lambda_{\min}(\sigma_x(\mu))$ denotes the minimum eigenvalue of $\sigma_x(\mu)$. The result follows from the simple estimates $\|K^0_x(\mu)\|_\infty \leq B(4r_0)$ and $\lambda_{\min}(\sigma_x(\mu)) \geq e^{-\beta B(2r_0)} d^{-B(2r_0)}$.

Summary for 1D or high-temperature commuting

Now we genuinely have all the elements in place to essentially give the last word on estimating Lipschitz observables for Gibbs states of nearest-neighbour 1D or high-temperature commuting Hamiltonians.

Proof. This statement immediately follows from [13, Theorem 1.3], which shows the analogous statement when restricted to classical Ising models. As our class of Hamiltonians includes those as a subset, any algorithm that can provide an estimate for this more general class can also find one for the classical instances. Moreover, in the proof of [13, Theorem 1.3] the inverse temperature is absorbed into the coefficients of the Hamiltonian, which are assumed to have 2-norm bounded by a constant independent of the system's size. This is easily seen to be satisfied by our conditions, since $\beta = \Omega(m^{-\frac{1}{2}})$.

The statement above implies in particular that any algorithm that finds an estimate that is close in trace distance for all 2-local Gibbs states on a lattice at constant inverse temperature requires $\Omega(n\epsilon^{-2})$ samples. In contrast, we see that it is possible to obtain an estimate that is $O(\epsilon\sqrt{n})$ close in Wasserstein distance from $O(\epsilon^{-2}\log(n))$ samples of the Gibbs state, which is already sufficient to give nontrivial recovery guarantees for Lipschitz observables. Thus, we see from Prop. G.2 that resorting to the Wasserstein distance instead of the trace distance is essential to obtain recovery guarantees in the regime where the number of samples is logarithmic in the system's size.

Furthermore, it is interesting to note that the proof of [13] is based on a set of Gibbs states of the form given in Eq. (G5), with $\delta = \Theta(m^{-\frac{1}{2}})$ and $s_{i,j} \in \{\pm 1\}$. Their proof then proceeds by finding a large subset of Gibbs states of the form in Eq. (G5) which have a trace distance and relative entropy of constant order. The lower bound on the sample complexity then follows from standard information-theoretic arguments. We believe that this class of examples further illustrates why the trace distance is not necessarily the adequate distance measure when estimating the error on extensive observables. Indeed, for extensive, local observables the class of Gibbs states from the Hamiltonians in Eq. (G5) behaves like the maximally mixed state, as each local term converges to 0 as the system size increases.
Detection of Malfunctioning Modules in Photovoltaic Power Plants using Unsupervised Feature Clustering Segmentation Algorithm
Divyanshi Dwivedi, Pradeep Kumar Yemula, Member, IEEE, and Mayukha Pal, Senior Member, IEEE
Index Terms-Deep learning, feature clustering, hot spot identification, renewable energy sources, segmentation, solar PV panels, and unsupervised learning.
Abstract-Photovoltaic solar energy has evolved into a viable and sustainable source for the generation of electricity. It has effectively emerged as an alternative to conventional electricity generation for developing countries to meet their energy requirements, and many solar power plants have therefore been set up across the globe. However, in these large-scale or remote solar power plants, monitoring and maintenance remain challenging tasks, in particular identifying faulty or malfunctioning cells in photovoltaic (PV) panels. In this paper, we use an unsupervised deep-learning image segmentation model for the detection of internal faults such as hot spots and snail trails in PV panels. Since training or ground-truth labels are generally not available for large solar power plants, the proposed model is well suited because it does not require any prior learning or training: it extracts features from the input image and segments out the faults. We use infrared thermal images of the PV panels as input, passed to a convolutional neural network which assigns cluster labels to the pixels. We then optimize the pixel labels, features, and model parameters using backpropagation based on iterative stochastic gradient descent, and compute a feature similarity loss and a spatial continuity loss so that pixels with similar features receive the same label and noise in the segmentation process is reduced. The effectiveness of the proposed approach is examined on an openly available dataset for the recognition of snail trails and hot spot failures in monocrystalline solar panels.
I. INTRODUCTION
The trend towards generating more power from renewable energy resources stems from the fact that conventional energy sources such as coal, petroleum, and natural gas are being rapidly depleted [1], [2]. In addition, the global energy crisis triggered by Russia's invasion of Ukraine has given renewables unprecedented momentum. Supply disruptions and high fossil-fuel prices during the crisis have led countries to strengthen their policies supporting power generation from renewables. Experts predict that over 2022-2027, renewables will account for over 90% of global electricity capacity expansion and surpass coal [3]. The cumulative power generating capacity of various sources, covering past, present, and projected scenarios, is shown in Figure 1 [4]. For instance, India is set to double its new installations of solar photovoltaic panels to achieve its target of 500 GW by 2030; its renewable capacity additions are shown in Figure 2.
From Figure 2, we can see that recent renewable additions are dominated by solar PV generation installed at the utility scale. Utility-scale solar photovoltaics refers to a large number of solar modules installed together to establish a power plant [5]. Such solar power plants occupy many acres of land. For example, Bhadla Solar Park in Rajasthan, India, established in 2020, is the world's largest solar power plant, covering nearly 14,000 acres of land and generating 2.25 GW of power. However, once such huge power plants are commissioned, their monitoring and maintenance become challenging. Monitoring is an important measure taken to increase the power output of solar PV panels, which can be affected by several factors such as shading and wear and tear due to environmental conditions [6]. To enhance the performance of solar PV panels and generate more power, it is desirable to implement smart monitoring that automatically detects faulty cells in the panels with little human intervention, since manual inspection of PV modules at this scale is simply not feasible. These faults in solar PV panels are usually referred to as hot spots and potential-induced degradation (PID). Hot spots occur when part of a panel is shaded: current cannot flow across the weak cells, so it concentrates in other cells, causing them to overheat and become mechanically damaged [7]. PID occurs because of humidity, heat, or voltage variations in the cell. Although solar panels are designed with a life span of 25 years, these faults can sharply decrease their performance and efficiency. Thus, it is advisable to equip large solar power plants with a monitoring feature to make their operation more reliable and durable.
Monitoring of solar PV panels is traditionally carried out manually through visual inspection or analysis of current-voltage characteristics [8]; another effective approach is to capture infrared (IR) thermographic images of the panels and identify faults from them [9]. The use of IR thermographic images is a reliable technique for identifying faults in solar PV panels. However, capturing thermal images manually at ground level is time-consuming, as observed in [10]: for a 3 MW solar power plant, analysis using ground images took 34 days of inspection time, whereas capturing aerial thermal images reduced the analysis time to approximately 3 hours. It is therefore advantageous to use aerial thermal images for the analysis [11]. Various methods have been proposed to identify faults in solar PV panels from aerial thermal images. For instance, the Canny edge segmentation technique has been used to identify the region of interest (ROI), i.e., the affected region of the panel [12], and Robust Principal Component Analysis (RPCA) has been used to separate sparse, corrupted anomalous components from a low-rank background [13]. Recent review works detail these proposed methods and their inadequacy in addressing the problem [14], [15], [16], [17]; they clearly point out the limitations of the current state of the art, which stem from the complexity and multi-layered nature of these pipelines.
Recently, many researchers have proposed deep-learning frameworks for autonomous PV module monitoring in which the infrared thermal imagery is first labelled into faulty and healthy PV modules; a model is then trained and faults are identified through classification. For instance, a binary classifier combined with a multiclass classifier has been used to detect a fault and its type [18]. Convolutional neural networks and decision tree algorithms have been used for the detection of external faults such as delamination, burn marks, glass breakage, discolouration, and snail trails on solar panels [19]. Much further work has been done in the same context [20], [21], [22], [23], [24]. All of these works require training labels or ground-truth labels to train the proposed models, yet in practice it is not always possible to have ground-truth images of solar PV panels. Thus, these classification techniques cannot be relied on for real-time automatic monitoring of large-scale solar power plants. Another family of techniques uses image segmentation to detect hot spots in solar panels: the Otsu thresholding algorithm and its modified versions have been used to segment the faulty regions, but their segmentation accuracy is low, so they are also not reliable [25], [26].
We propose an unsupervised method that does not require prior training labels or ground truth. Unmanned aerial infrared thermographic images of the solar modules in a power plant are captured and fed directly as input. With the help of convolutional neural network layers, image features are extracted and clustered so that objects are segmented based on these features [27]. The unsupervised learning optimizes the clusters by backpropagation using iterative stochastic gradient descent, and the loss function is designed to also reduce the noise that arises during the segmentation process. Finally, we obtain segmented RGB images from which the hot spots, PID, and snail trails in the solar PV modules can be identified. These RGB images are further converted to greyscale to enhance the image features so that faulty cells, normal cells, and background are easily distinguishable. This unsupervised feature clustering segmentation algorithm has previously been found effective for bone age assessment to diagnose growth disorders using x-ray images [28], for extracting cage aquaculture regions [29], and for segmentation of rock and coal images in the mining industry [30].
The key contributions of this work are as follows:
• A novel method is proposed for the identification of internal faults such as hot spots, snail trails, and potential-induced degradation in solar PV panels.
• The input infrared thermal images are captured using unmanned aerial vehicles and fed to the unsupervised deep learning algorithm, which performs segmentation based on feature clustering.
• The method does not require any prior training labels or ground-truth labels; thus, the proposed algorithm is suitable for integration into any large-scale solar power plant.
The paper is organized as follows: a description of the dataset is provided in Section II, and a detailed explanation of the methodology of the unsupervised learning algorithm for segmentation is presented in Section III. Section IV provides the experimental results on a real dataset of solar PV panels. Finally, Section V concludes the paper.
II. DESCRIPTION OF DATASET
Here, we used the openly accessible dataset from [31], which comprises infrared thermal images of solar photovoltaic panels. Infrared thermographic surveillance is an effective technique for defect identification and analysis. Any object above absolute zero temperature radiates thermal energy in the infrared range, and the radiated thermal energy becomes more intense as temperature increases; thermographic images therefore allow the analysis of temperature variations. Thermal cameras, which capture radiation in the infrared range of the electromagnetic spectrum, are used to acquire these images [32]. Zones with higher temperatures are captured easily by the camera, but the human eye cannot identify such zones directly, so image segmentation techniques are well suited to this task. For solar photovoltaic panels, infrared thermographic images can be captured using unmanned aerial vehicles (UAVs), which fly over the panels with an appropriate camera angle setting.
The considered dataset was captured with a Zenmuse XT camera with a spectral range of 7.5-13 µm and a thermal sensitivity below 50 mK, providing images of size 336×256 in JPG format. The camera glides over the monocrystalline solar panels and captures thermal images. To enhance the features of the thermal images, they are pre-processed and converted into grey-scale images, as shown in Figure 3. Figure 4 shows the histogram distribution of a grey-scale image, i.e., how many times each intensity value occurs in the image. The human eye cannot identify faults such as hot spots or snail trails in these images of a photovoltaic panel. To address this, we propose a deep learning-based segmentation algorithm to identify the hot spots in solar photovoltaic panels. Since these image datasets do not come with prior training labels, an unsupervised learning algorithm is best suited to identify the faults; this also makes the proposed model practical in the real world, where ground-truth labels are typically unavailable.
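For illustration, a minimal sketch of this pre-processing and histogram step (our code, not the authors'; the file name is a placeholder):

```python
# Convert an IR thermal image to grey-scale and compute its intensity histogram,
# mirroring the pre-processing described above. "panel.jpg" is a hypothetical path.
import cv2
import numpy as np

img = cv2.imread("panel.jpg")                      # thermal image saved as JPG
grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)       # grey-scale conversion (Fig. 3)
hist, _ = np.histogram(grey.ravel(), bins=256, range=(0, 256))  # intensity counts (Fig. 4)
print(grey.shape, hist[:10])
```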
III. METHODS AND MATERIALS
The IR thermal images of solar PV panels come in diverse resolutions, and the task is to identify the hot-spot and snail-trail regions while suppressing noise. We exploit an unsupervised deep learning algorithm inspired by the novel and effective unsupervised image-segmentation algorithm proposed in [1]. Implementation of supervised learning algorithms requires pixel-level labelling, ground-truth images, and the original images, whereas unsupervised algorithms require no prior labelling, training images, or ground truth: they extract the features of an image and use a feature learning process to allocate labels. Thus, we can identify the defects in the solar PV panels by segmenting the images based on differentiable feature clustering. Here, we use a convolutional neural network (CNN) to extract pixel-level features and identify the hot spots and snail trails in the input IR image of a solar PV panel.
The flow of the proposed algorithm is shown in Figure 5. It includes CNN layers to extract high-level features, a batch normalization layer to rescale the features and make training faster and more stable, an argmax layer that assigns pseudo labels to the extracted features, and backpropagation of the loss function to train and evaluate the network.
A. Feature Extraction using CNN
An image $I_{PV} \in \mathbb{R}^{X}$ of dimension $L \times W$, with pixel values $\{k_n \in \mathbb{R}^{X}\}$ normalized to $[0,1]$, is fed as input to the initial layer of the algorithm. A $p$-dimensional feature vector $\{m_n \in \mathbb{R}^{p}\}$ is computed from $k_n$ by passing it through $M$ channels of two-dimensional convolutional layers of kernel size $3 \times 3$ with ReLU activation, followed by a batch normalization layer, for the $N$ pixels of the input image. Subsequently, the feature vector $\{m_n\}$ is fed to $q$ channels of a 1D convolutional kernel of size $1 \times 1$ and then passed through a batch normalization layer. Finally, with a linear classifier a response vector $\{rv_n = W_l m_n\}$ is obtained, where $W_l \in \mathbb{R}^{q \times p}$, which is further normalized to $rv'_n$ such that it has zero mean and unit variance [27]. Then the argmax layer is applied to label the cluster $c_n$ for each pixel by selecting the dimension in which $rv'_n$ takes its maximum value; in other words, the cluster label given to a pixel corresponds to the maximum entry of $rv'_n$, which amounts to clustering the feature vectors into $q$ clusters. The $t$-th cluster of the final response $rv'_n$ is given as
$$C_t = \{\, rv'_n \in \mathbb{R}^q \mid rv'_{n,s} \leq rv'_{n,t},\ \forall s \,\},\qquad(1)$$
where $rv'_{n,t}$ represents the $t$-th element of $rv'_n$. This process is equivalent to assigning each pixel to the nearest of $q$ reference points placed at infinite distance along the corresponding axes in $q$-dimensional space.
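To make this stage concrete, here is a minimal PyTorch sketch of the feature extraction and pseudo-labelling described above (our code, not the authors' implementation; the channel count p = 100 is an assumption, while q = 18 follows the maximum number of clusters used later):

```python
import torch
import torch.nn as nn

class FeatureClusterNet(nn.Module):
    """Two 3x3 conv modules producing p-dim features, one 1x1 conv module producing q responses."""
    def __init__(self, in_ch: int = 1, p: int = 100, q: int = 18):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, p, kernel_size=3, padding=1), nn.ReLU(), nn.BatchNorm2d(p),
            nn.Conv2d(p, p, kernel_size=3, padding=1), nn.ReLU(), nn.BatchNorm2d(p),
        )
        self.classifier = nn.Sequential(
            nn.Conv2d(p, q, kernel_size=1), nn.BatchNorm2d(q),  # linear classifier + normalization
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))                # response map rv'_n per pixel

# Pseudo-labelling: each pixel receives the cluster index of its maximal response (argmax layer).
net = FeatureClusterNet()
image = torch.rand(1, 1, 256, 336)          # greyscale thermal image, normalized to [0, 1]
responses = net(image)                      # shape (1, q, H, W)
labels = responses.argmax(dim=1)            # cluster label c_n for every pixel
print(labels.shape, labels.unique().numel())
```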
B. Number of Clusters
In unsupervised image segmentation, the number of distinct cluster labels varies with the input image: an image with many objects will have a high number of clusters, and vice versa. Initially, the number of clusters q used to train the model is kept high. Then, through the feature similarity and spatial continuity constraints, similar and neighbouring pixels are merged, and we eventually end up with a smaller number of clusters. Here, we considered 18 and 4 as the maximum and minimum number of clusters, respectively, for segmenting the image into hot spots, snail trails, background, and normal regions of the solar PV panels.
C. Loss Function
The loss function is used as a constraint for improving the feature similarity and spatial continuity between the pseudo labels assigned to image pixels, which is given as:
$$L = L_{fs} + \alpha L_{sc} = L_{fs}(\{rv'_n, c_n\}) + \alpha L_{sc}(\{rv'_n\}),\qquad(2)$$
where $L_{fs}$ is the feature similarity loss, $L_{sc}$ is the spatial continuity loss, and $\alpha$ is the weight balancing the two terms. As discussed in the previous section, the argmax function provides pseudo labels to the image's pixels according to their features; after the assignment of pseudo labels, the response map is passed through this loss function.
1) Feature Similarity Constraint: This constraint helps stabilise the clusters in the image by reinforcing the similarity of similar features: pixels with similar features should fall within the same cluster, and different clusters should have distinct features. The network weights are adjusted to minimize this similarity loss so that the features important for clustering are extracted. The feature similarity is computed using the cross-entropy loss between $rv'_n$ and $c_n$ as:
$$L_{fs}(rv'_n, c_n) = \sum_{n=1}^{N}\sum_{z=1}^{q} -\delta(z - c_n)\,\ln(rv'_{n,z}),\qquad(3)$$
where $\delta(t) = 1$ if $t = 0$ and $\delta(t) = 0$ otherwise.

2) Spatial Continuity Constraint: It is preferable to have spatial continuity among the clusters of the image's pixels, as it helps to suppress the excess number of labels created by complicated structures and patterns in the image. The spatial continuity constraint is computed by taking the L1 norm of the vertical and horizontal variations of the response map $rv'_n$, implemented by a differential operator. Mathematically, it is defined as
$$L_{sc}(rv'_n) = \sum_{\beta=1}^{W-1}\sum_{\gamma=1}^{L-1} \left\| rv'_{\beta+1,\gamma} - rv'_{\beta,\gamma} \right\|_1 + \left\| rv'_{\beta,\gamma+1} - rv'_{\beta,\gamma} \right\|_1,\qquad(4)$$
where $L$ and $W$ are the length and width of the input image and $rv'_{\beta,\gamma}$ is the response at pixel $(\beta, \gamma)$ of the response map $rv'_n$.
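A minimal PyTorch sketch of these two loss terms (ours, not the reference implementation; the spatial term is averaged rather than summed, a common normalization choice):

```python
import torch
import torch.nn.functional as F

def segmentation_loss(responses: torch.Tensor, alpha: float = 5.0) -> torch.Tensor:
    """responses: (1, q, H, W) response map rv'_n produced by the network."""
    # Feature similarity loss, Eq. (3): cross-entropy against the argmax pseudo labels c_n.
    pseudo_labels = responses.argmax(dim=1)                       # (1, H, W)
    l_fs = F.cross_entropy(responses, pseudo_labels)
    # Spatial continuity loss, Eq. (4): L1 norm of vertical and horizontal differences.
    l_sc = (responses[:, :, 1:, :] - responses[:, :, :-1, :]).abs().mean() + \
           (responses[:, :, :, 1:] - responses[:, :, :, :-1]).abs().mean()
    return l_fs + alpha * l_sc                                    # combined loss of Eq. (2)

# Example with a random response map.
rv = torch.randn(1, 18, 256, 336, requires_grad=True)
print(segmentation_loss(rv).item())
```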
3) Mechanism by Backpropagation: This section details the approach used to train the network for unsupervised image segmentation. After feeding the input image, cluster labels are first predicted with the model parameters held fixed, and the model is then trained on those predicted labels. The prediction of cluster labels is a forward process, whereas training the model is a backward process based on stochastic gradient descent with momentum, which updates the model parameters. The momentum term helps accelerate the gradient vectors in the right direction, leading to faster convergence. We compute the loss and backpropagate it to update the parameters. This forward-and-backward process is run in a loop for E iterations to achieve the final segmentation of the image into clusters, as sketched below.
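A minimal, self-contained sketch of this iterative loop (our code; the default settings follow Table I later in this section, while the channel count p = 100 is an assumption):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def train_unsupervised(image: torch.Tensor, q: int = 18, iters: int = 200,
                       lr: float = 0.1, momentum: float = 0.9, alpha: float = 5.0):
    """image: (1, 1, H, W) greyscale thermal image normalized to [0, 1]."""
    p = 100
    net = nn.Sequential(                                   # three conv modules as in Section III
        nn.Conv2d(1, p, 3, padding=1), nn.ReLU(), nn.BatchNorm2d(p),
        nn.Conv2d(p, p, 3, padding=1), nn.ReLU(), nn.BatchNorm2d(p),
        nn.Conv2d(p, q, 1), nn.BatchNorm2d(q),
    )
    opt = torch.optim.SGD(net.parameters(), lr=lr, momentum=momentum)
    for _ in range(iters):
        opt.zero_grad()
        rv = net(image)                                    # forward pass: response map
        labels = rv.argmax(dim=1)                          # pseudo labels, held fixed this step
        l_fs = F.cross_entropy(rv, labels)                 # feature similarity loss, Eq. (3)
        l_sc = (rv[:, :, 1:, :] - rv[:, :, :-1, :]).abs().mean() + \
               (rv[:, :, :, 1:] - rv[:, :, :, :-1]).abs().mean()  # spatial continuity, Eq. (4)
        loss = l_fs + alpha * l_sc                         # combined loss, Eq. (2)
        loss.backward()                                    # backward pass
        opt.step()                                         # SGD-with-momentum update
    return net(image).argmax(dim=1)                        # final cluster label per pixel

labels = train_unsupervised(torch.rand(1, 1, 256, 336))
print(labels.shape, labels.unique().numel())
```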
The identification of similarity between the features of different pixels is the first criterion that must be addressed when allocating labels to pixels. As discussed above, we first feed the 336×256 infrared thermal image of the solar PV panel to CNN modules for feature extraction. Each CNN module comprises a convolutional layer, a ReLU layer, and a BN layer connected end to end, and we use three such modules: the first two contain two-dimensional convolutional layers of kernel size 3×3, and the last contains a one-dimensional convolutional layer of kernel size 1×1. The output is then passed through the argmax layer to obtain pseudo labels. The network is subsequently trained by computing the loss function and applying backpropagation to enhance the cluster segmentation of the image. All parameters used for the segmentation of the PV panel images are tabulated in Table I. The value chosen for the weight balancing constant in the loss function is based on the loss variation over iterations; the minimum loss was obtained when it was set to 5, as discussed in detail in Section IV.
Other parameters are chosen following the standard settings, for computational ease, as in [29]. The image obtained by the proposed algorithm is in RGB colour; to identify and enhance its features, it is converted to a greyscale image in which the faults in the solar PV panels are detected. The process flow of the images is illustrated in Figure 6.

IV. RESULTS AND DISCUSSION

In this section, we discuss how the proposed model works on the dataset detailed in Section II. The available images of solar PV panels in the dataset are infrared thermal images converted to greyscale, and these greyscale images are the input to the proposed model. For the analysis, we used six images from the dataset and fed them independently to the proposed unsupervised image segmentation algorithm. The methodology of the proposed algorithm has already been discussed in detail in Section III. Initially, when the greyscale thermal image of the solar PV panel is fed as input, the CNN layers compute the feature vectors and the argmax function allots pseudo labels to the image, as shown in Figure 7. These initial labels are obtained using forward propagation and need further optimisation to become the actual cluster labels.
Thus, with the help of backward propagation, the pixel labels and their features are optimized based on iterative stochastic gradient descent. We then compute the similarity loss and the spatial continuity loss so that pixels with similar features receive the same label and spatial continuity is enforced, reducing the noise in the image segmentation process, as shown in Figure 8.
We ran the model for 200 iterations to settle the noise in the clustered pixels and obtain a properly segmented image as output. In addition, the value of the weight balancing constant α in the loss function of equation (2) needs to be fixed. We therefore ran the model for α = 1, α = 5, and α = 10 and plotted the variation of the loss function with the number of iterations, as shown in Figure 9. The loss starts at much higher values for α = 1 and α = 10 than for α = 5. Moreover, at iteration 200, the loss is 0.508 for α = 1, 0.202 for α = 5, and 0.417 for α = 10. Since the loss is lowest for α = 5, we fixed α = 5 for the analysis.
After running the model, we obtain the final segmentation of the solar PV panels through feature clustering. The segmented images contain cluster labels that differentiate the background, the solar PV panel, and the fault defects. Figure 10 shows the six greyscale thermal images of solar PV panels fed as input to the model and the segmented RGB-coloured images obtained as output; different colours (cluster labels) are allotted to the objects in the image so that they are distinguishable. To make the analysis more reliable and easier to deploy in the real world, we further convert the segmented RGB image to a segmented greyscale image, which makes it simpler to detect the faults in the solar PV panels. In the segmented greyscale images of Figure 10, the dark spots are the faulty cells of the panels and are easily identifiable by the human eye. We can therefore say that the proposed framework is suitable for monitoring solar power plants: it does not require any prior training labels or ground-truth labels, and the segmentation of the captured thermal images and the identification of defects are achieved in very little computational time.
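A minimal sketch of this post-processing step (ours, not the authors' code): the per-pixel cluster labels are mapped to an RGB image and then collapsed to greyscale so that faulty regions appear as dark spots.

```python
# Map cluster labels to colours and then to greyscale for easier visual fault inspection.
import numpy as np
import cv2

def labels_to_grey(labels: np.ndarray) -> np.ndarray:
    """labels: (H, W) integer cluster labels from the segmentation network."""
    n = int(labels.max()) + 1
    palette = np.random.default_rng(0).integers(0, 256, size=(n, 3), dtype=np.uint8)
    rgb = palette[labels]                                  # segmented RGB image
    return cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)           # segmented greyscale image

grey = labels_to_grey(np.random.default_rng(1).integers(0, 4, size=(256, 336)))
print(grey.shape, grey.dtype)
```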
V. CONCLUSION

In this paper, we proposed a novel unsupervised image segmentation algorithm based on a convolutional neural network for segmenting faulty cells in solar photovoltaic panels. Infrared thermographic images of the solar panels in a power plant are captured with a thermal camera and pre-processed into greyscale images to extract the important features. The greyscale images are then passed to the CNN layers, which extract the most important features from the input image, and an argmax layer performs the differentiable feature clustering, effectively assigning cluster labels to the pixels of the input image. Further, to achieve better clustering of similar features, backpropagation of the proposed loss function (feature similarity loss and spatial continuity loss) is applied to the normalized response of the convolutional layers. As a result, the method becomes effective at distinguishing faulty cells from normal cells in solar PV panels.
To further ease fault identification, we converted the segmented RGB image to a greyscale image in which the dark spots represent the faulty cells of the panel. Altogether, the proposed algorithm simplifies the segmentation of thermal images of solar PV panels so that defects such as hot spots and snail trails can be identified. The presented process and analysed results clearly show the effectiveness of the proposed algorithm, which can readily be deployed in the real world for monitoring and maintenance of large solar power plants with less manpower and at low cost.
(Corresponding author: Mayukha Pal.) Mrs. Divyanshi Dwivedi is a Data Science Research Intern at ABB Ability Innovation Center, Hyderabad 500084, India and also a Research Scholar at the Department of Electrical Engineering, Indian Institute of Technology, Hyderabad 502205, IN (e-mail: [email protected]). Dr. Pradeep Kumar Yemula is an Assoc. Professor with the Department of Electrical Engineering, Indian Institute of Technology, Hyderabad 502205, IN (e-mail: [email protected]). Dr. Mayukha Pal is a Global R&D Leader - Data Science at ABB Ability Innovation Center, Hyderabad 500084, IN (e-mail: [email protected]).

Fig. 1. Cumulative power generating capacity of various sources from 2011-2027.
Fig. 2. India's renewable capacity additions from 2016-2027.
Fig. 3. Grey-scale image pre-processed from a thermal image of a solar PV panel.
Fig. 4. Histogram distribution of the grey-scale image of a solar PV panel.
Fig. 5. Architecture of the unsupervised image segmentation model for PV panels.
Fig. 6. Process flow diagram for the proposed algorithm.
Fig. 7. Initial pseudo cluster labels allotted to the input thermal image of a solar PV panel.

Algorithm 1: Unsupervised segmentation of solar PV panels
INPUT: I_PV ∈ R^X with dimension L×W
OUTPUT: Hot spots in solar PV panels
INITIALIZE: E ← the number of iterations
for each of the E iterations:
  Feature vector {m_n} ← 2D convolutional layers({k_n})
  Response vector {rv_n} ← 1D convolutional layer({m_n})
  {rv'_n} ← Norm({rv_n})
  {c_n} ← Argmax({rv'_n,t})
  Compute L using equation (2)
  2D convolutional layers, 1D convolutional layer ← update with L
Segmented RGB image
RETURN: Segmented greyscale image

Fig. 8. Implementation of backward propagation using the computed loss for enhancing the cluster formation (up to 200 iterations).
Fig. 9. Loss v/s iteration curve for various values of α.
Fig. 10. Segmentation of input greyscale thermal images of solar PV panels.
TABLE I
HYPERPARAMETERS OF UNSUPERVISED SEGMENTATION ALGORITHM

Hyperparameter                                   Value
Size of IR thermal image of solar panel          336×256
Stochastic gradient descent momentum             0.9
Learning rate                                    0.1
Number of iterations                             200
Weight balancing constant in loss function       5
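For convenience, the Table I settings collected into a single configuration (our sketch; the helper name is ours):

```python
import torch

HPARAMS = {
    "image_size": (336, 256),     # size of IR thermal image of solar panel
    "momentum": 0.9,              # stochastic gradient descent momentum
    "learning_rate": 0.1,
    "iterations": 200,
    "alpha": 5.0,                 # weight balancing constant in the loss of Eq. (2)
}

def make_optimizer(model: torch.nn.Module) -> torch.optim.Optimizer:
    return torch.optim.SGD(model.parameters(),
                           lr=HPARAMS["learning_rate"],
                           momentum=HPARAMS["momentum"])
```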
REFERENCES
[1] C. Dunderdale, W. Brettenny, C. Clohessy, and E. E. van Dyk, "Photovoltaic defect classification through thermal infrared imaging using a machine learning approach," Progress in Photovoltaics: Research and Applications, vol. 28, no. 3, pp. 177-188, 2020.
[2] D. M. Reddy, D. Dwivedi, P. K. Yemula, and M. Pal, "Data-driven approach to form energy resilient smart microgrids with identification of vulnerable nodes in active electrical distribution network," arXiv preprint arXiv:2208.11682, 2022.
[3] I. E. Agency, "Renewables 2022, Analysis and Forecast to 2027," https://iea.blob.core.windows.net/assets/64c27e00-c6cb-48f1-a8f0-082054e3ece6/Renewables2022.pdf, 2022.
[4] D. Dwivedi, S. M. B. K. Victor, P. K. Yemula, P. Chakraborty, and M. Pal, "Evaluation of energy resilience and cost benefit in microgrid with peer-to-peer energy trading," arXiv preprint arXiv:2212.02318, 2022.
[5] --, "Identification of surface defects on solar pv panels and wind turbine blades using attention based deep learning model," arXiv preprint arXiv:2211.15374, 2022.
[6] D. Dwivedi, P. K. Yemula, and M. Pal, "A methodology for identifying resiliency in renewable electrical distribution system using complex network," arXiv preprint arXiv:2208.11682, 2022.
[7] L. E. Montañez, L. M. Valentín-Coronado, D. Moctezuma, and G. Flores, "Photovoltaic module segmentation and thermal analysis tool from thermal images," in 2020 IEEE International Autumn Meeting on Power, Electronics and Computing (ROPEC), vol. 4, 2020, pp. 1-6.
[8] E. D. Aranda, J. A. Gomez Galan, M. S. de Cardona, and J. M. Andujar Marquez, "Measuring the i-v curve of pv generators," IEEE Industrial Electronics Magazine, vol. 3, no. 3, pp. 4-14, 2009.
[9] E. Alfaro-Mejía, H. Loaiza-Correa, E. Franco-Mejía, and L. Hernández-Callejo, "Segmentation of thermography image of solar cells and panels," in Smart Cities, S. Nesmachnow and L. Hernández Callejo, Eds. Springer International Publishing, 2020, pp. 1-8.
[10] S. Gallardo-Saavedra, L. Hernández-Callejo, and O. Duque-Perez, "Image resolution influence in aerial thermographic inspections of photovoltaic plants," IEEE Transactions on Industrial Informatics, vol. 14, no. 12, pp. 5678-5686, 2018.
[11] P. B. Quater, F. Grimaccia, S. Leva, M. Mussetta, and M. Aghaei, "Light unmanned aerial vehicles (uavs) for cooperative inspection of pv plants," IEEE Journal of Photovoltaics, vol. 4, no. 4, pp. 1107-1113, 2014.
[12] J. Tsanakas, D. Chrysostomou, P. Botsaris, and A. Gasteratos, "Fault diagnosis of photovoltaic modules through image processing and canny edge detection on field thermographic measurements," International Journal of Sustainable Energy, vol. 34, no. 6, pp. 351-372, 2015.
[13] Q. Wang, K. Paynabar, and M. Pacella, "Online automatic anomaly detection for photovoltaic systems using thermography imaging and low rank matrix decomposition," Journal of Quality Technology, vol. 54, no. 5, pp. 503-516, 2022.
[14] C. Buerhop, L. Bommes, J. Schlipf, T. Pickel, A. Fladung, and I. M. Peters, "Infrared imaging of photovoltaic modules: a review of the state of the art and future challenges facing gigawatt photovoltaic power stations," Progress in Energy, vol. 4, no. 4, p. 042010, 2022.
[15] Z. Yahya, S. Imane, H. Hicham, A. Ghassane, and E. Bouchini-Idrissi Safia, "Applied imagery pattern recognition for photovoltaic modules' inspection: A review on methods, challenges and future development," Sustainable Energy Technologies and Assessments, vol. 52, p. 102071, 2022.
[16] I. Høiaas, K. Grujic, A. G. Imenes, I. Burud, E. Olsen, and N. Belbachir, "Inspection and condition monitoring of large-scale photovoltaic power plants: A review of imaging technologies," Renewable and Sustainable Energy Reviews, vol. 161, p. 112353, 2022.
[17] M. Waqar Akram, G. Li, Y. Jin, and X. Chen, "Failures of photovoltaic modules and their detection: A review," Applied Energy, vol. 313, p. 118822, 2022.
[18] R. F. Colmenares-Quintero, E. R. Rojas-Martinez, F. Macho-Hernantes, K. E. Stansfield, and J. C. Colmenares-Quintero, "Methodology for automatic fault detection in photovoltaic arrays from artificial neural networks," Cogent Engineering, vol. 8, no. 1, p. 1981520, 2021.
[19] N. V. Sridharan and V. Sugumaran, "Visual fault detection in photovoltaic modules using decision tree algorithms with deep learning features," Energy Sources, Part A: Recovery, Utilization, and Environmental Effects, vol. 0, no. 0, pp. 1-17, 2021.
[20] S. Naveen Venkatesh, B. Rebecca Jeyavadhanam, A. Moradi Sizkouhi, S. Esmailifar, M. Aghaei, and V. Sugumaran, "Automatic detection of visual faults on photovoltaic modules using deep ensemble learning network," Energy Reports, vol. 8, pp. 14382-14395, 2022.
[21] P. Haidari, A. Hajiahmad, A. Jafari, and A. Nasiri, "Deep learning-based model for fault classification in solar modules using infrared images," Sustainable Energy Technologies and Assessments, vol. 52, p. 102110, 2022.
[22] N. Prajapati, R. Aiyar, A. Raj, and M. Paraye, "Detection and identification of faults in a pv module using cnn based algorithm," in 2022 3rd International Conference for Emerging Technology (INCET), 2022, pp. 1-5.
[23] S. Naveen Venkatesh and V. Sugumaran, "Machine vision based fault diagnosis of photovoltaic modules using lazy learning approach," Measurement, vol. 191, p. 110786, 2022.
[24] A. M. Moradi Sizkouhi, M. Aghaei, S. M. Esmailifar, M. R. Mohammadi, and F. Grimaccia, "Automatic boundary extraction of large-scale photovoltaic plants using a fully convolutional network on aerial imagery," IEEE Journal of Photovoltaics, vol. 10, no. 4, pp. 1061-1067, 2020.
[25] A. N. N. Afifah, Indrabayu, A. Suyuti, and Syafaruddin, "Hotspot detection in photovoltaic module using otsu thresholding method," in 2020 IEEE International Conference on Communication, Networks and Satellite (Comnetsat), 2020, pp. 408-412.
[26] --, "A new approach for hot spot solar cell detection based on multi-level otsu algorithm," in 2021 International Seminar on Intelligent Technology and Its Applications (ISITIA), 2021, pp. 278-282.
[27] W. Kim, A. Kanezaki, and M. Tanaka, "Unsupervised learning of image segmentation based on differentiable feature clustering," IEEE Transactions on Image Processing, vol. 29, pp. 8055-8068, 2020.
[28] S. Li, B. Liu, S. Li, X. Zhu, Y. Yan, and D. Zhang, "A deep learning-based computer-aided diagnosis method of x-ray images for bone age assessment," Complex and Intelligent Systems, 2022.
Unsupervised segmentation of cage aquaculture in sar images based on invariant information. J Zhou, C Chu, G Zhou, X Wang, K Wang, J Fan, 2022 14th International Conference on Advanced Computational Intelligence (ICACI). J. Zhou, C. Chu, G. Zhou, X. Wang, K. Wang, and J. Fan, "Unsupervised segmentation of cage aquaculture in sar images based on invariant information," in 2022 14th International Conference on Advanced Com- putational Intelligence (ICACI), 2022, pp. 212-215.
A robust deep unsupervised image segmentation model with application in mining industry. H Imani, I Karatas, S Yalcinkaya, 2022 Innovations in Intelligent Systems and Applications Conference (ASYU). H. Imani, I. Karatas, and S. Yalcinkaya, "A robust deep unsupervised image segmentation model with application in mining industry," in 2022 Innovations in Intelligent Systems and Applications Conference (ASYU), 2022, pp. 1-6.
Dataset for recognition of snail trails and hot spot failures in monocrystalline si solar panels. E Alfaro-Mejía, H Loaiza-Correa, E Franco-Mejía, A D Restrepo-Girón, S E Nope-Rodríguez, Data in Brief. 26104441E. Alfaro-Mejía, H. Loaiza-Correa, E. Franco-Mejía, A. D. Restrepo- Girón, and S. E. Nope-Rodríguez, "Dataset for recognition of snail trails and hot spot failures in monocrystalline si solar panels," Data in Brief, vol. 26, p. 104441, 2019.
Hotspot detection in photovoltaic module using otsu thresholding method. A N N Afifah, A Indrabayu, Syafaruddin Suyuti, 2020 IEEE International Conference on Communication, Networks and Satellite (Comnetsat). A. N. N. Afifah, Indrabayu, A. Suyuti, and Syafaruddin, "Hotspot detection in photovoltaic module using otsu thresholding method," in 2020 IEEE International Conference on Communication, Networks and Satellite (Comnetsat), 2020, pp. 408-412.
| []
|
[
"Revealing the Weaknesses of File Sharing System on Cloud Storages",
"Revealing the Weaknesses of File Sharing System on Cloud Storages"
]
| [
"Adi Fauzi ",
"Qi Rafrastara ",
"Deyu "
]
| []
| [
"International Journal of Advances in Computer Science & Its Applications-IJCSIA"
]
| Cloud storage provides a simple way to share files both privately and publicly. A good Cloud Storage Provider (CSP) is measured not only by access speed or the size of the files that can be shared, but also by the security of the file sharing mechanism itself. In this paper, we analyze the security of file sharing in three Chinese CSPs: Baidu, Weiyun and Kanbox. Each of these CSPs has its own vulnerabilities, which we successfully reveal. We also provide suggestions to counter these weaknesses so that the providers can maintain quality while improving security. | 10.15224/978-1-63248-038-5-03 | [
"https://arxiv.org/pdf/2009.07099v1.pdf"
]
| 167,671,031 | 2009.07099 | 75dd22f0eb096bfc0bc0e6628f1f41e0a693a17f |
Revealing the Weaknesses of File Sharing System on Cloud Storages
Publication Date: 30 October, 2015
Adi Fauzi
Qi Rafrastara
Deyu
Revealing the Weaknesses of File Sharing System on Cloud Storages
International Journal of Advances in Computer Science & Its Applications-IJCSIA
Volume 5, Issue 2. Publication Date: 30 October, 2015. Keywords: cloud computing, cloud storage, file sharing, security.
Cloud storage provides a simple way to share files both privately and publicly. A good Cloud Storage Provider (CSP) is measured not only by access speed or the size of the files that can be shared, but also by the security of the file sharing mechanism itself. In this paper, we analyze the security of file sharing in three Chinese CSPs: Baidu, Weiyun and Kanbox. Each of these CSPs has its own vulnerabilities, which we successfully reveal. We also provide suggestions to counter these weaknesses so that the providers can maintain quality while improving security.
Introduction
Within cloud computing, cloud services were traditionally classified into three well-known categories: Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). With advancing technology, these categories have effectively merged into a single one, called Everything-as-a-Service (XaaS) [1]. This is because most Cloud Service Providers (CSPs) now combine the three services, sometimes together with additional ones such as Storage as a Service, Network as a Service or even Monitoring as a Service, and provide them to users as one bundled system.

One of the cloud computing products currently booming in many parts of the world is cloud storage. It has experienced tremendous growth in the past five years. In China, there is a kind of war in which many giant IT companies have suddenly created new products, become Cloud Storage Providers (CSPs), and tried to attract as many users as possible by promoting their products massively and offering extraordinary features [2].

While western CSPs offer new users free capacity in the range of one to two digits of gigabytes, Chinese CSPs currently offer free capacity in the range of one to two digits of terabytes. At least four CSPs run this kind of special promotion. However, storage capacity is not the only consideration users should think about first. The security level becomes crucial when cloud storage is used to store private data.

This paper consists of six sections. Section 2 discusses cloud storage. Section 3 discusses the sharing methods of CSPs in more detail. The security of CSPs, especially in terms of file sharing, is explained thoroughly in Section 4. Assessments and suggestions are given in Section 5, followed by the conclusion.
II. Cloud Storage
Cloud storage has become very important nowadays. The need for this service keeps growing, as it makes it easy for people to access, synchronize, share and back up data. Users can access their digital content at any time, from anywhere, and with any device (smartphone, tablet, notebook, or desktop PC) [3].

Since some western CSPs were blocked in China, several local CSPs emerged with fantastic offers, such as Baidu (up to 2 TB of storage capacity), Alibaba Kanbox (up to 10 TB) and Tencent Weiyun (up to 10 TB) for free. Such offers are so tempting, especially in China, that many people have been willing to ignore Dropbox, Google Drive and the like. Yet no matter how large the capacity offered, security cannot be neglected; it remains a very important consideration. As mentioned by Zhou et al. [4], security and privacy are regarded as the top concern among nine challenges in cloud computing. Cryptographic mechanisms are used to secure all communication between users and CSPs, such as uploading and downloading data [5] [3]. Another security aspect that deserves more attention is the security of file sharing [3]. This is not a small matter that will never cause serious problems for users: when we share a file, we must be ready to accept all the associated risks. In fact, sharing files with other users sometimes opens a security hole. The following discussion explains this situation by analyzing three popular Chinese CSPs: Baidu [6], Weiyun [7] and Kanbox [8].
III. Sharing Methods on Cloud Storage
The importance of the sharing features in cloud storage is that they make file sharing much easier than using email attachments, which often have file size limitations [9]. In addition, the sharing methods in cloud storage offer functionality that does not exist when sharing through email.
A. Public Sharing
There is no access control in this sharing method. The data is intended for the public, so anyone can obtain it without any authentication or authorization. In this scenario, the data owner generates the URL of the document and publishes it on a website; afterwards anyone on the internet can access or even download the document directly through the given URL. In another scenario, content shared publicly on some CSPs can also be found through a Google search.

Weiyun does not have a public sharing option. Anyone who wants to access documents stored in Weiyun has to sign in first, except for image files: anyone can still view an image at medium resolution without signing in, but an account is required to access the image at full resolution. In contrast, both Baidu and Kanbox offer public sharing.
B. Private Sharing
Authentication is required here. First, the owner must specify who gets access to the data. The CSP then authenticates anyone trying to access the data and checks whether they are on the list. Users must sign in first, and their identity is usually shown in the owner's CSP window.

Among the three Chinese CSPs, only Kanbox does not have a private sharing option; it only supports sharing through a public URL or via e-mail. Private sharing is provided by Baidu and Weiyun.
C. Secret-URL Sharing
Secret-URL sharing is a bridge between public and private sharing. With public sharing the URL is not distributed in a secret way; with private sharing the distribution is secret but an account is needed to access the data. As the middle option, secret-URL sharing combines secret distribution of the link with open access that does not require an account.

In this method, the URL of the document to be shared is distributed secretly through a private channel, such as e-mail. Only those who received the e-mail from the data owner can access the content, without further authentication or authorization. This option is available in Baidu and Kanbox.

Weiyun does have an option to share a URL through e-mail. However, it cannot be categorized as secret-URL sharing, since anyone who receives the e-mail is still required to sign in with a QQ or Weiyun account.
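To make the secret-URL idea concrete, the following minimal Python sketch (our illustration, not taken from any of the analysed CSPs; the domain and path are invented) shows how a provider could generate an unguessable sharing link from a cryptographically random token.

import secrets

def make_secret_share_url(file_id: str) -> str:
    # 32 random bytes give a URL-safe token that is infeasible to guess
    token = secrets.token_urlsafe(32)
    # a real provider would store the mapping token -> file_id server-side;
    # the domain and path below are purely illustrative
    return f"https://csp.example.com/s/{token}?file={file_id}"

print(make_secret_share_url("report-2015.pdf"))

The security of such a scheme rests entirely on the token never being exposed publicly, which is exactly why the distribution channel (for example a private e-mail) matters.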
D. File Sharing Security
File sharing in cloud storage is one of the most interesting topics to discuss, since its mechanism differs from file sharing via email attachment. With an e-mail attachment, the data shared by Alice exists permanently in Bob's e-mail storage. With cloud storage, Alice can decide at any time when to open and close the door to her shared documents, depending on the features provided by the CSP.

According to Chu et al. (2013) [3], there are eight parameters that can be used to assess the security of CSPs in terms of data sharing. Using those eight plus one additional parameter, this paper reveals the security holes of three Chinese CSPs: Baidu, Weiyun and Kanbox.
E. Non-Dead URL
This parameter refers to a link that keeps working after the file has been updated, deleted, or replaced by a new file with the same name.

Weiyun apparently does not change the URL when the file is updated. Suppose Alice wants to share a document with Bob using secret-URL sharing. Bob then has access to her file for as long as that file exists. If both use Weiyun, Bob can still access the file even after Alice has updated it many times. Only when Alice deletes the file does this end: the URL in Bob's hands is deactivated.

This is probably good news when Alice works together with Bob and shares the document through Weiyun: Bob can monitor Alice's progress because a single link always gives him the up-to-date version of the document. In another scenario, however, it can be dangerous when Alice has no intention of sharing the document permanently. She may not realize that Bob can easily collect a lot of information without her permission simply by monitoring the updates to her document. On the other hand, Baidu and Kanbox behave more securely: the URL is not a "non-dead" link, since they change the URL whenever there is an update to the file (see Table II).
F. Uncertain Identities
The objective of private sharing is to make sure that only authorized people can access the data. In practice, not all CSPs follow this rule. The owner should be able to see the identity of the users who access the data, but sometimes there are security holes that prevent the owner from identifying them easily, as explained by Chu et al. in their paper [3].

Say Alice shares the URL with Bob and John using private sharing. Since Bob has two accounts with this CSP, he tries to sign in with his second account, which is not yet listed for Alice's data. After Bob successfully signs in, Alice will be puzzled by this strange account. If the file is shared with only two accounts, it is easy to confirm whether the strange account belongs to Bob or John. But what if the document is shared with more than fifty or a hundred people? It would be difficult to clarify matters once uncertain identities appear. Among the three CSPs, only Kanbox has no private sharing method, so Kanbox is ignored in this part.

Baidu provides a private sharing method. The data owner can share a document with colleagues who are registered and listed as "good friends" (好友), and anyone listed as a "good friend" is guaranteed to already have a Baidu account. When Alice wants to share a document, she only needs to pick the recipients from the "good friends" list and a notification is sent privately to the selected users. Suppose Bob has two Baidu accounts and Alice sends the notification to his first account; then he can only access the data from that first account. The shared data only appears in the selected users' accounts (sign-in required), and no specific URL is exposed when the shared data is accessed. The people who have the privilege to access the data are shown in the Data Sharing Window, with some modifications to hide and protect their full identity.
Example 1:
"[email protected]" will be shown as "us…[email protected]".

Turning to Weiyun, its user interface is very simple and includes only a few features. For sharing it provides only two options: "by URL" and "by mail". There is no specific option for private sharing. However, it is compulsory to sign in first with a QQ or Weiyun account to access the shared file. This condition matches the requirement of private sharing, where users need an account to gain access. For this reason, Weiyun is considered to have a private sharing method.

Suppose Alice wants to share a document with Bob by choosing the "by mail" option, and Bob clicks the link and signs in to Weiyun. It is still difficult to identify who is accessing the document: there is no list of people who have privileges on the document, Alice cannot check whether Bob has accessed it, and she cannot even make sure that Bob accessed the file with his own account. It is also impossible for Alice to know whether anyone else is accessing her shared file.
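The identity masking shown in Example 1 above can be sketched in a few lines of Python (our own illustration of the general idea; the exact masking rules used by Baidu are not documented here).

def mask_email(address: str) -> str:
    # keep the first two and the last character of the account name,
    # hide the rest, and leave the domain visible
    name, _, domain = address.partition("@")
    hidden = name[:2] + "…" + name[-1:] if len(name) > 3 else name[:1] + "…"
    return f"{hidden}@{domain}"

print(mask_email("[email protected]"))  # us…[email protected]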
G. Unauthorized Resharing
Unauthorized resharing is the situation in which Alice has shared a link with Bob privately, but Bob re-shares it with others, who can in turn re-share it again many times. Alice cannot control the link once she releases the URL, whether publicly or privately.

This part also only applies to the CSPs that have a private sharing feature; once again, Kanbox is ignored.

On Baidu there is a private way to share data: no one can open the door without being invited by the owner. Bob can receive the shared file from Alice, but there is no way to re-share it directly. If Bob wants to do so, he first has to save the file to his own cloud storage and then share it manually, which is a different scenario.

Weiyun differs from Baidu with respect to this parameter. Weiyun provides no truly private mechanism: anyone can access the link given by the owner as long as they sign in to a Weiyun account. The owner can send the link to selected colleagues, but at the same time the owner cannot prevent one of those colleagues from re-sharing the given link from their own account. This condition allows what is called "unauthorized resharing".
H. Indiscriminate Accessing URL
This happens when there is no difference between the URL used by Alice (as the owner) and by Bob to access the file. In their paper, Chu et al. [3] mention that Google Drive has this kind of weakness. They illustrate it as follows: if Alice is in a meeting and accesses her file while using a projector, the URL is shown in the browser's address bar. This is dangerous because in Google Drive any file is accessed via its associated URL, even by the file owner.

Unlike Google Drive, the three Chinese CSPs show the page's URL, not the file's URL, in the browser's address bar. This kind of URL is relatively safer and is not accessible to unauthorized users.
I. Non-HTTPS URL for Sharing
HTTPS is important for securing the communication channel in the internet era. It protects not only the transmission of the data involved but also the resource locator [3]. HTTPS is the combination of HTTP and SSL/TLS used to secure the communication between web browser and web server, in particular to prevent various web attacks, including eavesdropping [10] [11].

Unfortunately, these three Chinese CSPs do not use HTTPS as their communication protocol, so the integrity and confidentiality of the data are at stake.
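This property is easy to verify from the client side by inspecting the scheme of the sharing links a CSP hands out, as in the short sketch below (the example links are invented).

from urllib.parse import urlparse

def uses_https(share_url: str) -> bool:
    # a sharing link should use HTTPS so that neither the data in transit
    # nor the resource locator itself can be eavesdropped
    return urlparse(share_url).scheme == "https"

print(uses_https("http://pan.example.com/s/1abCdEf"))   # False: vulnerable
print(uses_https("https://pan.example.com/s/1abCdEf"))  # True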
K. No Privacy on Sharing

Regarding Baidu and Weiyun, neither can be considered to suffer from "no privacy on sharing". Baidu does not have this weakness, but this does not mean that Baidu has very powerful security: it is simply because Baidu has no feature to show the people connected to the shared files. On Weiyun, the invited guests are listed neatly and safely: only the e-mail address of the inviter is shown in full, while other e-mail addresses are protected by hiding some characters of the account name (see Example 1).
L. Sharing of Trash Files
A serious problem occurred in Google Drive, where people could still access a file that had been deleted and moved to the trash folder [3]. Alice deletes the file probably because she no longer wants to share it, but unfortunately she may not be aware that people can still access it even in the trash.

After analyzing the three Chinese CSPs, we conclude that all of them are free from this problem. Baidu and Kanbox change the URL every time a change happens and automatically deactivate the previously shared link. Weiyun, on the other hand, keeps the same link if the owner only updates the content, but when the owner deletes the file the link is removed automatically.
M. Fixed URL
The scenario for this parameter, according to Chu et al. [3], is as follows.

Suppose Alice has shared a URL publicly, but then suddenly changes her mind and wants to share it privately. What happens to the previous URL? Chu et al. [3] report that Google Drive uses the same URL in this scenario. This is insecure because Alice now wants to treat the file as private, while in fact the URL has already been widely spread as a public URL. If Bob already obtained the link when Alice made it public, he can still access this now-secret file, and Alice is probably not aware of the situation.

On Baidu, in the scenario above, the file turns out to still be accessible via the public URL, so there is effectively no privilege once the URL has been publicly shared. The good news is that Baidu provides a feature to cancel the file sharing: if Alice cancels the public sharing in advance, it becomes secure to re-share the file privately.

Weiyun and Kanbox, on the other hand, cannot be measured with this parameter, because Weiyun has no public sharing feature, whereas Kanbox does not provide private sharing.
IV. Assessments and Suggestions
Following the discussion above, we provide a table that summarizes the vulnerabilities of the three Chinese CSPs, namely Baidu, Weiyun and Kanbox, in a simpler way (see Table III). The assessments and suggestions regarding their security weaknesses are discussed as follows.

Baidu and Kanbox handle the "non-dead URL" issue well, whereas Weiyun should be more careful about it. To the best of our knowledge, the link should be invalidated when the file is changed (updated or removed). CSPs can also provide an explicit option for when the owner wants to share a live file, so that people can always access the updated file without the URL changing.

It is better to implement a private sharing method like the one provided by Baidu, since it ensures that the file is only accessed by invited people with registered accounts. However, the owner should also be able to check which users are currently online and accessing the shared file.

Baidu has good protection against "unauthorized resharing": no URL is exposed in private sharing, which reduces the possibility of unauthorized resharing. Other CSPs are recommended to implement this kind of private sharing method. If they insist on using a URL for private sharing, they have to provide an additional mechanism to generate a URL that can be accessed only once: as soon as the user accepts the invitation and signs in to their account, the URL is deactivated immediately [3] (a rough sketch of such a mechanism is given below).
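A one-time invitation URL of the kind suggested above could work roughly as follows (a minimal Python sketch with an in-memory store; a real CSP would persist the tokens, add expiry times and bind them to the invited account in its user database).

import secrets

_pending_invites = {}  # token -> (file_id, invited_user)

def create_invite(file_id: str, invited_user: str) -> str:
    token = secrets.token_urlsafe(32)
    _pending_invites[token] = (file_id, invited_user)
    return f"https://csp.example.com/invite/{token}"

def redeem_invite(token: str, signed_in_user: str):
    # the link works exactly once and only for the invited account
    file_id, invited_user = _pending_invites.pop(token, (None, None))
    if file_id is None or signed_in_user != invited_user:
        return None  # already used, unknown token, or wrong account
    return file_id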
All three Chinese CSPs use different URLs for sharing and for the owner, which means they provide good protection against "indiscriminate accessing URLs".

Up to the completion of this paper, we have not found the reason why these Chinese CSPs do not use HTTPS as their protocol. Since cloud storage involves a client-server system and private data, the communication channel should be secured by implementing HTTPS.

Chu et al. mention that no shortening service supports SSL, so "secret" URLs should not be shortened [3]. However, security is more important than beauty.

When the owner deletes a file from the cloud, the link to that file should be disabled immediately [3]. The Chinese CSPs implement this concept, so they are free from this threat.

Baidu handles private sharing properly, especially regarding the "no privacy on sharing" issue. It would be even better if the owner could choose whether or not other people are allowed to see the sharing list of the shared file [3].

The URL should not stay fixed when the sharing method changes. CSPs have to watch out for this unexpected vulnerability and make sure that every change of the sharing settings is followed by a change of the URL of the shared file. Among the three Chinese CSPs, only Baidu can be measured against this issue, and it handles it well. Later, when Weiyun and Kanbox provide both public and private sharing, we suggest they consider this security hole; Baidu can serve as a good model for countering this problem.
V. Conclusion
Storage media have recently experienced tremendous growth, and virtual hard drives, namely cloud storage, are now widely available. At the same time, file sharing is a daily activity that cannot be avoided by anyone actively using information technology products. These two things are now closely related, since cloud storage makes the sharing process simpler. However, we should be aware of all the vulnerabilities that may not have been realized before.

This paper discussed the security of the file sharing system in cloud storage by analyzing three Chinese CSPs: Baidu, Weiyun and Kanbox. Starting from the parameters proposed by Chu et al. [3], we carried out experiments to test their file sharing security. As a result, several vulnerabilities were found in each CSP, as summarized in Table III.

Baidu, Weiyun and Kanbox each have their own vulnerabilities. By looking at Table III, users can at least learn the security level of each CSP, especially with respect to file sharing. Once more, discussing the security of a system is always interesting because no system is perfect. Based on this research, users must be careful when they share files in the cloud.
TABLE I. FILE SHARING METHODS USED BY BAIDU, WEIYUN AND KANBOX

Types                 Baidu   Weiyun   Kanbox
Public Sharing        Yes     No       Yes
Private Sharing       Yes     Yes      No
Secret-URL Sharing    Yes     No       Yes
TABLE II. NON-DEAD URL

URL still active when:                               Baidu   Weiyun   Kanbox
Update
Delete
Replace (creating a new file with the same name)
TABLE III. SECURITY ASSESSMENTS OF BAIDU, WEIYUN AND KANBOX

Vulnerabilities                    Baidu   Weiyun   Kanbox
Non-Dead URL
Uncertain Identities
Unauthorized Resharing
Indiscriminate Accessing URL
Non-HTTPS URL for Sharing
Non-HTTPS Shortened URL
No Privacy on Sharing
Sharing of Trash Files
Fixed URL
S. J. Nirmala, S. M. S. Bhanu, and A. A. Patel, "A Comparative Study of The Secret Sharing Algorithms for Secure Data in The Cloud," International Journal on Cloud Computing: Services and Architecture (IJCCSA), vol. 2, no. 4, pp. 63-71, August 2012.
Cindy Kuan. (2014, August) Creativehunt. [Online].
Cheng-Kang Chu et al., "Security Concerns in Popular Cloud Storage Services," Pervasive Computing, IEEE, vol. 12, no. 4, pp. 50-57, October-December 2013.
Minqi Zhou, Rong Zhang, Wei Xie, Weining Qian, and Aoying Zhou, "Security and Privacy in Cloud Computing: A Survey," in Sixth International Conference on Semantics, Knowledge and Grids, Beijing, 2010, pp. 105-112.
Renjith P and Sabitha S., "Survey on Data Sharing and Re-Encryption in Cloud," International Journal of Advanced Research in Computer Engineering & Technology (IJARCET), vol. 2, no. 2, pp. 477-480, February 2013.
Baidu Yun WangPan (Baidu Cloud Storage). [Online].
Weiyun. [Online].
KuPan. [Online].
The Agency for Science, Technology and Research (A*STAR). (2014, June) Science Daily. [Online].
Douglas Jacobson and Joseph Idziorek, Computer Security Literacy: Staying Safe in Digital World. Boca Raton, Florida, USA: CRC Press, 2013.
William Stallings, Cryptography and Network Security: Principles and Practice, Fifth Edition. Beijing, China: Publishing House of Electronics Industry, 2011.
Christopher C. Elisan, Malware, Rootkits & Botnets: A Beginner's Guide. USA: McGraw Hill, 2012.
| []
|
[
"Starobinsky Inflation from String Theory?",
"Starobinsky Inflation from String Theory?"
]
| [
"Max Brinkmann [email protected] \nDipartimento di Fisica e Astronomia\nUniversità di Padova\nvia Marzolo 835131PadovaItaly\n\nINFN\nSezione di Padova\nvia Marzolo 835131PadovaItaly\n\nDipartimento di Fisica e Astronomia\nUniversità di Bologna\nvia Irnerio 4640126BolognaItaly\n",
"Michele Cicoli [email protected] \nDipartimento di Fisica e Astronomia\nUniversità di Bologna\nvia Irnerio 4640126BolognaItaly\n\nINFN\nSezione di Bologna\nviale Berti Pichat 6/240127BolognaItaly\n",
"Pietro Zito [email protected] \nDipartimento di Fisica e Astronomia\nUniversità di Bologna\nvia Irnerio 4640126BolognaItaly\n\nInstitute for Quantum Optics and Quantum Information (IQOQI) Vienna\nAustrian Academy of Sciences\nBoltzmanngasse 31090ViennaAustria\n\nFaculty of Physics & Vienna Doctoral School in Physics\nUniversity of Vienna\nBoltzmanngasse 5A-1090ViennaAustria\n"
]
| [
"Dipartimento di Fisica e Astronomia\nUniversità di Padova\nvia Marzolo 835131PadovaItaly",
"INFN\nSezione di Padova\nvia Marzolo 835131PadovaItaly",
"Dipartimento di Fisica e Astronomia\nUniversità di Bologna\nvia Irnerio 4640126BolognaItaly",
"Dipartimento di Fisica e Astronomia\nUniversità di Bologna\nvia Irnerio 4640126BolognaItaly",
"INFN\nSezione di Bologna\nviale Berti Pichat 6/240127BolognaItaly",
"Dipartimento di Fisica e Astronomia\nUniversità di Bologna\nvia Irnerio 4640126BolognaItaly",
"Institute for Quantum Optics and Quantum Information (IQOQI) Vienna\nAustrian Academy of Sciences\nBoltzmanngasse 31090ViennaAustria",
"Faculty of Physics & Vienna Doctoral School in Physics\nUniversity of Vienna\nBoltzmanngasse 5A-1090ViennaAustria"
]
| []
| Starobinsky inflation is currently one of the best models concerning agreement with cosmological data. Despite this observational success, it is still lacking a robust embedding into a UV complete theory. Previous efforts to derive Starobinsky inflation from string theory have been based on the derivation of higher derivative curvature terms from the low-energy limit of ten-dimensional string theory. This approach is however known to fail due to the difficulty to tame the effect of contributions proportional to the Ricci scalar to a power larger than two. In this paper we investigate an alternative attempt which exploits instead the ubiquitous presence of scalar fields in string compactifications combined with the fact that Starobinsky inflation can be recast as Einstein gravity coupled to a scalar field with a precise potential and conformal coupling to matter fermions. We focus in particular on type IIB Kähler moduli since they have shown to lead to exponential potentials with a Starobinsky-like plateau. We consider three classes of moduli with a different topological origin: the volume modulus, bulk fibre moduli, and blow-up modes. The only modulus with the correct coupling to matter is the volume mode but its potential does not feature any plateau at large field values. Fibre moduli admit instead a potential very similar to Starobinsky inflation with a natural suppression of higher curvature corrections, but they cannot reproduce the correct conformal coupling to matter. Blow-up modes have both a wrong potential and a wrong coupling. Our analysis implies therefore that embedding Starobinsky inflation into string theory seems rather hard. Finally, it provides a detailed derivation of the coupling to matter of fibre moduli which could be used as a way to discriminate Starobinsky from fibre inflation. arXiv:2305.05703v1 [hep-th] 9 May 2023 | null | [
"https://export.arxiv.org/pdf/2305.05703v1.pdf"
]
| 258,587,913 | 2305.05703 | 65f47fad678e722332d0de0316ab00d3b09aa8ea |
Starobinsky Inflation from String Theory?
Max Brinkmann [email protected]
Dipartimento di Fisica e Astronomia
Università di Padova
via Marzolo 835131PadovaItaly
INFN
Sezione di Padova
via Marzolo 835131PadovaItaly
Dipartimento di Fisica e Astronomia
Università di Bologna
via Irnerio 4640126BolognaItaly
Michele Cicoli [email protected]
Dipartimento di Fisica e Astronomia
Università di Bologna
via Irnerio 4640126BolognaItaly
INFN
Sezione di Bologna
viale Berti Pichat 6/240127BolognaItaly
Pietro Zito [email protected]
Dipartimento di Fisica e Astronomia
Università di Bologna
via Irnerio 4640126BolognaItaly
Institute for Quantum Optics and Quantum Information (IQOQI) Vienna
Austrian Academy of Sciences
Boltzmanngasse 31090ViennaAustria
Faculty of Physics & Vienna Doctoral School in Physics
University of Vienna
Boltzmanngasse 5A-1090ViennaAustria
Starobinsky Inflation from String Theory?
Prepared for submission to JHEP
Abstract: Starobinsky inflation is currently one of the best models concerning agreement with cosmological data. Despite this observational success, it is still lacking a robust embedding into a UV complete theory. Previous efforts to derive Starobinsky inflation from string theory have been based on the derivation of higher derivative curvature terms from the low-energy limit of ten-dimensional string theory. This approach is however known to fail due to the difficulty to tame the effect of contributions proportional to the Ricci scalar to a power larger than two. In this paper we investigate an alternative attempt which exploits instead the ubiquitous presence of scalar fields in string compactifications combined with the fact that Starobinsky inflation can be recast as Einstein gravity coupled to a scalar field with a precise potential and conformal coupling to matter fermions. We focus in particular on type IIB Kähler moduli since they have shown to lead to exponential potentials with a Starobinsky-like plateau. We consider three classes of moduli with a different topological origin: the volume modulus, bulk fibre moduli, and blow-up modes. The only modulus with the correct coupling to matter is the volume mode but its potential does not feature any plateau at large field values. Fibre moduli admit instead a potential very similar to Starobinsky inflation with a natural suppression of higher curvature corrections, but they cannot reproduce the correct conformal coupling to matter. Blow-up modes have both a wrong potential and a wrong coupling. Our analysis implies therefore that embedding Starobinsky inflation into string theory seems rather hard. Finally, it provides a detailed derivation of the coupling to matter of fibre moduli which could be used as a way to discriminate Starobinsky from fibre inflation.
Introduction
Starobinsky inflation [1] is presently one of the most successful inflationary models in fitting cosmological observations [2]. This predictive model is based on a simple extension of the Einstein-Hilbert action via the introduction of an R 2 contribution. Applying a conformal transformation on the Jordan frame metric, the model can be recast as ordinary gravity in Einstein frame coupled to a scalar field φ with an exponential potential which asymptotes to an inflationary slow-roll plateau at large field values. Moreover, the conformal transformation of the metric fixes the Yukawa coupling of the inflaton to ordinary matter to be y φ = −1/ √ 6. While elegant, this formulation of Starobinsky inflation is still lacking a quantum gravity embedding which is crucial to trust its robustness, especially against higher derivative curvature corrections which are naively expected to arise from the effective field theory point of view and which ruin the flatness of the inflationary potential. In fact, a working UV embedding of the Starobinsky model should explain why corrections to the Einstein-Hilbert action involving the Ricci and Riemann tensors are absent or can be ignored. Moreover, it should be characterised by at least two mass scales: M 10 13 GeV, which controls the R 2 contribution, and a much larger scale, M * M , which suppresses higher curvature terms R n with n > 2.
String theory is at the moment one of the best developed candidate theories of quantum gravity, and so it is natural to try to embed Starobinsky inflation in this framework. In string theory, higher derivative corrections to the four-dimensional Einstein-Hilbert action do in fact arise as the low-energy limit of α′ corrections in ten dimensions. They are naturally suppressed by the string scale, with potentially additional powers of the vacuum expectation values of universal moduli like the internal volume or the dilaton which fixes the string coupling. This observation gives some hope to obtain more than one suppression scale to justify the inclusion of just $R^2$ effects. However, as argued in [3], a detailed analysis of the low-energy limit of both heterotic and type II string theories reveals that letting the first higher derivative term compete with the leading term results in a loss of control over the perturbative α′ expansion. In summary, allowing higher order terms to scale as expected from string theory quickly destroys any inflationary dynamics.
In this paper we will instead explore a different attempt to derive Starobinsky inflation from string theory, based on the dual formulation of the model in terms of Einstein gravity coupled to a scalar field, the inflaton. Since string theory features many scalar fields, called moduli, arising as Kaluza-Klein zero modes of deformations of the internal compactification space, our hope is to reinterpret one of these scalars as the Starobinsky inflaton and recover the Starobinsky model in Einstein frame. This modulus should have two important features: (i) a scalar potential which reproduces the one of Starobinsky inflation, with in particular a plateau at large field values which can sustain at least 50-60 efoldings of slow-roll inflation without being destroyed by higher order corrections (like $R^n$ terms with $n > 2$ in Jordan frame); (ii) the correct conformal Yukawa coupling to matter coming from minimally coupled fermions in Jordan frame.
Several different mechanisms to drive inflation from string theory have been derived, both at single and a multi-field level (see [4] for a recent and updated review). Focusing on single-field models (as in the Starobinsky case), the most popular inflaton candidates are open string moduli, axions and Kähler moduli. The potential of open string moduli and axions is typically power-law, or at most sinusoidal for axions, while Kähler moduli feature exponential potentials [5,6] which arise naturally in the supergravity effective field theory of string compactifications once the potential is expressed in terms of canonically normalised fields. Given that the potential of the Starobinsky model is characterised by an exponential dependence on the inflaton, we will focus on Kähler moduli within type IIB compactification where moduli fixing is better understood. The stabilisation of these modes is crucial, not just to compute the inflationary potential, but also to derive their couplings to fermions living on stacks of either D7-branes wrapping internal four-cycles or D3-branes at singularities. We will focus on three different classes of Kähler moduli in type IIB Calabi-Yau compactifications. From the microscopic point of view, they are different since they control the volume of internal four-cycles with a different topology. Let us summarise our results for each of these classes of Kähler moduli:
• Volume mode: The volume mode has the correct conformal Yukawa coupling to matter fermions living on either D3-branes or D7-branes wrapped around a blow-up mode. It also has the right coupling to open string modes living within the worldvolume of D7-branes wrapping the overall volume. However, the potential of the volume mode cannot mimic the one of Starobinsky inflation since it does not feature any plateau at large field values. On the contrary, the potential of the volume modulus has a steep runaway at large field values, and realisations of volume mode inflation require tuning different terms against each other to induce a near-inflection point [7][8][9].
• Fibre moduli: Fibre inflation is a class of models where inflation is driven by a bulk fibre modulus which is a leading order flat direction lifted by subdominant perturbative corrections to the Kähler potential [10][11][12][13][14][15][16]. These effects generate a potential that is very similar to Starobinsky inflation. In fact, when recast as an $f(R)$ theory, fibre inflation looks effectively like $f(R) \sim R^{1.3} + R^2$ [17]. Its predictions for the main cosmological observables are thus very similar to the predictions of Starobinsky inflation. If the relation between the scalar spectral index $n_s$ and the tensor-to-scalar ratio $r$ is written as $r = 3\alpha(n_s - 1)^2$, Starobinsky inflation corresponds to $\alpha = 1$, while fibre inflation to $\alpha = 2$ [3]. However, fibre moduli can never reproduce the correct Yukawa coupling to matter since they are effectively decoupled from fermions on D3-branes or on D7-branes wrapping blow-up modes. On the other hand, they have a non-zero coupling to fermions if the Standard Model is realised on branes wrapping bulk four-cycles, but the actual value of the Yukawa coupling is always different from $y_\phi = -1/\sqrt{6}$. More precisely, matching this value would require irrational modular weights in the Kähler metric for matter fields, a situation which seems impossible to achieve.
• Blow-up modes: Moduli controlling the volume of local exceptional divisors resolving point-like singularities seem the worst candidates to reproduce Starobinsky inflation since both their potential and coupling take the wrong form. Despite featuring an exponential dependence, the potential of blow-up inflation is much flatter than the one of Starobinsky inflation [18][19][20]. Moreover, blow-up modes are effectively decoupled from fermions on D3-branes or on D7-branes wrapping bulk cycles, while they have a much stronger than Planckian coupling to matter on D7-branes wrapped around the blow-up mode itself (due to the localisation of the interaction in the extra dimensions).
As a result of our analysis, we conclude that no string modulus seems to have the right properties to effectively realise an $R + R^2$ inflationary model, at least within the framework of type IIB Calabi-Yau compactifications at large volume and weak string coupling, where the low-energy effective field theory is under control. Embedding Starobinsky inflation in a UV complete theory like string theory seems therefore to be very challenging. If the agreement with cosmological observations becomes even better after the inclusion of more data, our analysis suggests that the underlying model is more likely to be fibre inflation, which can be discriminated from Starobinsky inflation not just by the slightly different relation between $n_s$ and $r$, but also by the different coupling to matter fermions. For example, we find that for matter at the intersection of D7-branes wrapping bulk four-cycles, the Yukawa coupling of fibre moduli is $y_\phi = 1/\sqrt{3}$.
Basics of Starobinsky inflation
Starobinsky inflation [1] uses higher derivative corrections to the Einstein-Hilbert action, in particular the R 2 correction. Let us first review how this theory is classically equivalent to a standard Einstein gravity coupled to a scalar field with a scalar potential characterised by a large field plateau that can drive inflation. We shall then extend the scenario to more general f (R) theories.
$R + R^2$ gravity
The starting point of Starobinsky inflation [1] is the following action
$$S = \frac{M_P^2}{2}\int d^4x\,\sqrt{-g}\,f(R) + \int d^4x\,\mathcal{L}_{\rm mat}(g_{\mu\nu},\psi)\,, \qquad (2.1)$$
where
$$f(R) = R + \frac{R^2}{M^2}\,, \qquad (2.2)$$
and L mat describes the minimal coupling to matter fields collectively denoted as ψ. Here M P = 1/ √ 8πG and M is a constant with the dimension of a mass. The Einstein-Hilbert linear term, R, is responsible for a deviation from an eternal de Sitter solution: it will cause inflation to end as required in any realistic inflationary model. However, the conformally equivalent form of the model is more useful for our purposes. In order to understand the universal features of the inflaton model, let us describe the conformal transformation in some detail.
A conformal transformation is a point-dependent rescaling of the metric tensor $g_{\mu\nu}$ of the form
$$g_{\mu\nu} \rightarrow \tilde g_{\mu\nu} = \Omega(x)^2\, g_{\mu\nu} = e^{2\omega(x)}\, g_{\mu\nu}\,, \qquad (2.3)$$
with the scale factor $\omega(x) \equiv \ln\Omega(x)$. One of its properties is to leave the light cones unchanged [21]. The Ricci tensor $R_{\mu\nu}$ and the Ricci scalar $R$ constructed out of the metric $g_{\mu\nu}$, and $\tilde R_{\mu\nu}$ and $\tilde R$ obtained from $\tilde g_{\mu\nu}$, are related by [22]
$$R = \Omega^2\left(\tilde R + 6\,\tilde\Box\omega - 6\,\tilde g^{\mu\nu}\partial_\mu\omega\,\partial_\nu\omega\right)\,, \qquad (2.4)$$
where
$$\partial_\mu\omega \equiv \frac{\partial\omega}{\partial x^\mu}\,, \qquad \tilde\Box\omega \equiv \frac{1}{\sqrt{-\tilde g}}\,\partial_\mu\!\left(\sqrt{-\tilde g}\,\tilde g^{\mu\nu}\,\partial_\nu\omega\right)\,. \qquad (2.5)$$
We want to use this transformation to derive an action in Einstein frame from the Starobinsky model. As a preliminary step, it is useful to rewrite the action in the equivalent form [22]
$$S = \int d^4x\,\sqrt{-g}\left(\frac{1}{2}M_P^2\, F R - U\right) + \int d^4x\,\mathcal{L}_{\rm mat}(g_{\mu\nu},\psi)\,, \qquad (2.6)$$
where we defined
$$F \equiv \frac{\partial f}{\partial R} = 1 + \frac{2R}{M^2}\,, \qquad U \equiv \frac{1}{2}M_P^2\left(F R - f\right) = \frac{1}{2}M_P^2\,\frac{R^2}{M^2}\,. \qquad (2.7)$$
Using the relation $\sqrt{-g} = \Omega^{-4}\sqrt{-\tilde g}$ and (2.4), the action becomes
$$S = \int d^4x\,\sqrt{-\tilde g}\left[\frac{1}{2}M_P^2\,\frac{F}{\Omega^2}\left(\tilde R + 6\,\tilde\Box\omega - 6\,\tilde g^{\mu\nu}\partial_\mu\omega\,\partial_\nu\omega\right) - \frac{U}{\Omega^4}\right] + \int d^4x\,\mathcal{L}_{\rm mat}(\Omega^{-2}\tilde g_{\mu\nu},\psi)\,. \qquad (2.8)$$
One can see that for $F > 0$, the conformal factor which brings the action into Einstein frame is $\Omega^2 = e^{2\omega} = F$. Since the scale factor has a kinetic term in this frame, we promote it to a scalar field $\phi$, defined by
$$\phi(x) \equiv \sqrt{6}\,\omega(x)\,M_P\,. \qquad (2.9)$$
Now we have all the necessary ingredients to write the action in the Einstein frame, where it takes the form [22]
$$S = \int d^4x\,\sqrt{-\tilde g}\left[\frac{1}{2}M_P^2\,\tilde R + \frac{1}{2}\,\tilde g^{\mu\nu}\partial_\mu\phi\,\partial_\nu\phi - V(\phi)\right] + \int d^4x\,\mathcal{L}_{\rm mat}(F^{-1}(\phi)\,\tilde g_{\mu\nu},\psi)\,, \qquad (2.10)$$
with
$$V(\phi) = \frac{U}{F^2}\,. \qquad (2.11)$$
Note that the scalar field $\phi$ is canonically normalised. For ease of notation let us name the scalar part of the Lagrangian as $\mathcal{L}_\phi$, so that $\mathcal{L} = \sqrt{-\tilde g}\,M_P^2\tilde R/2 + \mathcal{L}_\phi + \mathcal{L}_{\rm mat}$.
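As a quick cross-check of the normalisation (our own remark, not part of the original derivation): with $\Omega^2 = F$ the prefactor $F/\Omega^2$ in (2.8) equals one, so the term proportional to $\tilde g^{\mu\nu}\partial_\mu\omega\,\partial_\nu\omega$ carries a coefficient of magnitude $3M_P^2$, and substituting $\omega = \phi/(\sqrt{6}\,M_P)$ from (2.9) gives
$$3 M_P^2\,\tilde g^{\mu\nu}\,\partial_\mu\omega\,\partial_\nu\omega = 3 M_P^2\,\tilde g^{\mu\nu}\,\frac{\partial_\mu\phi\,\partial_\nu\phi}{6 M_P^2} = \frac{1}{2}\,\tilde g^{\mu\nu}\,\partial_\mu\phi\,\partial_\nu\phi\,,$$
which is precisely the canonical kinetic term appearing in (2.10).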
Inflationary plateau
To determine the scalar potential, note that the scalar field can also be expressed as
$$\frac{\phi}{M_P} = \sqrt{\frac{3}{2}}\,\ln F = \sqrt{\frac{3}{2}}\,\ln\!\left(1 + \frac{2R}{M^2}\right). \qquad (2.12)$$
Then the scalar potential of the Starobinsky model gives the Starobinsky inflaton potential
$$V(\phi) = \frac{R^2 M_P^2}{2M^2}\left(1 + \frac{2R}{M^2}\right)^{-2} = \frac{1}{8}\,M^2 M_P^2\left(1 - e^{-\sqrt{2/3}\,\phi/M_P}\right)^2. \qquad (2.13)$$
This potential is plotted in Fig. 1. From the form of the potential, we recognise two phases of evolution of the scalar field. Slow-roll inflation is realised for large values of the scalar field, $\phi \gg M_P$, where $V(\phi) \simeq M^2 M_P^2/8$. Here the potential is sufficiently flat to drive an epoch of accelerated expansion of the universe. More precisely, inflation ends around $\phi \simeq M_P$ and 50-60 efoldings of inflation are realised around $\phi \simeq 5$-$6\,M_P$. Note that the WMAP normalisation of the CMB anisotropies constrains $M$ to be $M \simeq 10^{13}$ GeV. After this phase, the regime $\phi \ll M_P$ takes place, where $V(\phi) \simeq 3M^2\phi^2$ and the field oscillates around $\phi = 0$, leading to the reheating process.
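These statements can be checked with a few lines of numerics (our own illustration, using only the potential (2.13) and the standard slow-roll formulas): the sketch below finds that $\epsilon = 1$ is reached near $\phi \simeq 0.94\,M_P$ and that roughly 60 efoldings separate $\phi \simeq 5.5\,M_P$ from the end of inflation.

import numpy as np
from scipy.integrate import quad

a = np.sqrt(2.0/3.0)                       # exponent in (2.13), with M_P = 1
V  = lambda p: (1.0 - np.exp(-a*p))**2     # overall factor M^2 M_P^2/8 drops out of slow-roll quantities
dV = lambda p: 2.0*a*np.exp(-a*p)*(1.0 - np.exp(-a*p))

eps = lambda p: 0.5*(dV(p)/V(p))**2        # first slow-roll parameter, epsilon = (M_P^2/2)(V'/V)^2

# field value where inflation ends (epsilon = 1), found by a simple scan
phis = np.linspace(0.2, 2.0, 100000)
phi_end = phis[np.argmin(np.abs(np.array([eps(p) for p in phis]) - 1.0))]

# number of efoldings between phi_star and the end of inflation
N = lambda phi_star: quad(lambda p: V(p)/dV(p), phi_end, phi_star)[0]

print(round(phi_end, 2))   # ~0.94: inflation ends close to M_P
print(round(N(5.5)))       # ~62 efoldings accumulated from phi ~ 5.5 M_P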
Coupling to matter
Through the conformal transformation of the metric, the scalar field φ is directly coupled to matter in the Einstein frame [22]. We can see this better after obtaining an equation of motion for φ. By taking the variation of (2.10) with respect to φ we obtain
$$-\partial_\mu\,\frac{\partial(\sqrt{-\tilde g}\,\mathcal{L}_\phi)}{\partial(\partial_\mu\phi)} + \frac{\partial(\sqrt{-\tilde g}\,\mathcal{L}_\phi)}{\partial\phi} + \frac{\partial\mathcal{L}_{\rm mat}}{\partial\phi} = 0\,, \qquad (2.14)$$
which simplifies to
$$\tilde\Box\phi - \partial_\phi V + \frac{1}{\sqrt{-\tilde g}}\,\frac{\partial\mathcal{L}_{\rm mat}}{\partial\phi} = 0\,. \qquad (2.15)$$
The matter term ∂L mat /∂φ is given by
$$\frac{\partial\mathcal{L}_{\rm mat}}{\partial\phi} = \frac{\delta\mathcal{L}_{\rm mat}}{\delta g^{\mu\nu}}\,\frac{\partial g^{\mu\nu}}{\partial\phi} = \frac{1}{F(\phi)}\,\frac{\delta\mathcal{L}_{\rm mat}}{\delta \tilde g^{\mu\nu}}\,\frac{\partial\big(F(\phi)\,\tilde g^{\mu\nu}\big)}{\partial\phi} = -\sqrt{-\tilde g}\,\frac{\partial_\phi F}{2F}\,\tilde T^{\rm (mat)}_{\mu\nu}\,\tilde g^{\mu\nu}\,, \qquad (2.16)$$
where $\tilde T^{\rm (mat)}_{\mu\nu}$ is the energy-momentum tensor for matter,
$$\tilde T^{\rm (mat)}_{\mu\nu} \equiv \frac{2}{\sqrt{-\tilde g}}\,\frac{\delta\mathcal{L}_{\rm mat}}{\delta\tilde g^{\mu\nu}}\,. \qquad (2.17)$$
With this we obtain the field equation in the Einstein frame,
$$\tilde\Box\phi - \partial_\phi V + \frac{y_\phi}{M_P}\,\tilde T^{\rm (mat)} = 0\,, \qquad (2.18)$$
with the Yukawa coupling between φ and matter given as
$$y_\phi \equiv -\frac{1}{2}\,M_P\,\partial_\phi \ln F = -\frac{1}{2}\,M_P\,\frac{\partial_\phi\!\left(e^{\sqrt{2/3}\,\phi/M_P}\right)}{e^{\sqrt{2/3}\,\phi/M_P}} = -\frac{1}{\sqrt{6}}\,. \qquad (2.19)$$
This shows that the scalar field φ is directly coupled to matter with a universal coupling constant y φ = −1/ √ 6. Alternatively, the coupling can be read off directly from the action
$$\mathcal{L}_{\rm mat} \supset -y_\phi\,\frac{m_\psi}{M_P}\,\phi\,\bar\psi\psi\,, \qquad (2.20)$$
which gives the same universal result. This is also the way we shall compute the coupling for stringy models in Sec. 3.
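The value quoted in (2.19) can also be checked symbolically; the short sketch below (ours) simply evaluates $-\tfrac{1}{2}M_P\,\partial_\phi \ln F$ with $F = e^{\sqrt{2/3}\,\phi/M_P}$.

import sympy as sp

phi, Mp = sp.symbols('phi M_P', positive=True)
F = sp.exp(sp.sqrt(sp.Rational(2, 3))*phi/Mp)      # conformal factor Omega^2 = F
y = -sp.Rational(1, 2)*Mp*sp.diff(sp.log(F), phi)  # definition of the Yukawa coupling, eq. (2.19)
print(sp.simplify(y))                              # -sqrt(6)/6, i.e. -1/sqrt(6)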
General f (R) expansion
Let us now consider a more general $f(R)$ theory where the function $f(R)$ is analytic around zero, and so can be expanded in a power series in $R/M^2$. This is motivated by the fact that if Starobinsky inflation arises from a more fundamental UV complete theory, we expect it to be an effective field theory obtained after integrating out massive modes below some high scale $M$ (which can still be several orders of magnitude below the Planck scale). In this effective theory we expect all possible terms compatible with the underlying symmetries and particle content. Hence we expect not just terms involving the Ricci scalar, but also contributions which depend on the Ricci and the Riemann tensors. Setting this issue aside, and focusing just on an effective $f(R)$ theory, we generalise (2.2) by a series expansion in $R/M^2$ of the form (see [23][24][25][26] for similar studies):
$$f(R) = R + \frac{R^2}{M^2} + \mu\,\frac{R^3}{M^4} + \lambda\,\frac{R^4}{M^6} + \ldots \qquad (2.21)$$
The resulting scalar potential in Einstein frame would look like
$$V = \frac{1}{8}\,M^2 M_P^2\left[\left(1 - e^{-\sqrt{2/3}\,\phi/M_P}\right)^2 + \lambda\,e^{\sqrt{2/3}\,\phi/M_P} + \ldots\right] \qquad (2.22)$$
where we have set $\mu = 0$ since in the following we shall focus on supersymmetric models where this coefficient vanishes. A working Starobinsky inflation model requires $\lambda \ll 1$ or, in other words, that the $R^4$ term is suppressed by an effective scale $\tilde M = M\,\lambda^{-1/6}$ which is well above $M$. Fig. 2 shows the effect of $R^4$ corrections for different values of $\lambda$. It is easy to infer that these corrections do not ruin 50-60 efoldings of standard Starobinsky inflation only if $|\lambda| < 10^{-4}$.
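The origin of the bound $|\lambda| \lesssim 10^{-4}$ can be illustrated with a short numerical estimate (our own sketch, based on the potential (2.22) as reconstructed above): it compares the slope induced by the $\lambda$ term with the slope of the uncorrected plateau at the field values where the last 50-60 efoldings are generated.

import numpy as np

a = np.sqrt(2.0/3.0)   # exponent sqrt(2/3), with M_P = 1
# slopes of the two pieces of (2.22), in units of M^2 M_P/8
slope_plateau = lambda p: 2.0*a*np.exp(-a*p)*(1.0 - np.exp(-a*p))
slope_lambda  = lambda p, lam: lam*a*np.exp(a*p)

phi_star = 5.5   # roughly where the last 50-60 efoldings are produced
for lam in (1e-3, 1e-4, 1e-5):
    print(lam, round(slope_lambda(phi_star, lam)/slope_plateau(phi_star), 2))
# the correction to the slope becomes of order one around lambda ~ 1e-4,
# which is where the plateau, and hence 50-60 efoldings, gets spoiled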
This has to be the case, more in general, for all higher order terms $R^n$ with $n \geq 5$. Thus, a working Starobinsky inflation model requires the presence of at least two different suppression scales, which might be hard to justify from a low-energy perspective. The idea is that $\tilde M$ could correspond to the Planck scale, so that (2.21) behaves as
$$f(R) = R + \frac{R^2}{M^2}\left(1 + c_1\,\frac{R}{M_P^2} + c_2\,\frac{R^2}{M_P^4} + \ldots\right) \qquad (2.23)$$
where the c i are now O(1) coefficients. While this expression can certainly be postulated from an effective field theory point of view, it definitely deserves an explanation from a more fundamental theory which is valid at higher energy scales.
A stringy embedding?
In this section we will explore the possibility to embed Starobinsky inflation in string theory by either deriving higher curvature corrections from ten dimensions in Jordan frame, or by reproducing the correct potential and Yukawa coupling of the Starobinsky scalar in Einstein frame.
Higher curvatures from ten dimensions
Let us investigate if the structure outlined in (2.23) with two suppression scales can be reproduced from string theory following [3]. The low-energy ten-dimensional action below the string scale takes the schematic form
$$\frac{1}{\alpha'^4}\int d^{10}\xi\,\sqrt{-G}\,e^{-2\Phi}\left(R + k_1\,\alpha' R^2 + k_3\,\alpha'^3 R^4 + \ldots\right) \qquad (3.1)$$
where $k_1 \neq 0$ for heterotic and type I strings, while $k_1 = 0$ for type IIB string theory, and $k_3 \sim \mathcal{O}(1)$ for all string theories. Let us set the dilaton $\Phi$ and the internal volume $\mathcal{V}$ to their vacuum expectation values, given respectively by
$$e^{\Phi_0} = g_s \qquad\text{and}\qquad \int d^6y\,\sqrt{-g^{(6D)}} = \mathcal{V}\,\alpha'^3\,. \qquad (3.2)$$
Dimensional reduction to four dimensions then gives (showing only the terms which involve four-dimensional curvatures)
M 2 P 2 d 4 x √ −g R + R 2 M 2 s k 1 +k 1 V 2/3 + ... + k 3 R 4 M 6 s + .... (3.3)
wherek 1 ∼ O(1) and
M s 1 √ α , and M 2 P M 2 s V g 2 s . (3.4)
This analysis shows that Starobinsky inflation is very unlikely to emerge in string theory from the dimensional reduction of ten-dimesional higher derivative terms for the following reasons:
• Heterotic string: in this case k 1 = 0 and the Starobinsky suppression scale M is identified with the string scale M s , M M s . Due to the relation (3.4), it might not seem difficult to achieve M s 10 13 GeV which is necessary to match the observed amplitude of the density perturbations if g s 1 and V 1, which are the limits where the supergravity approximation is under control. However the string scale is tied to the gauge coupling of the visible sector (generically a GUT theory) since
α −1 GUT V/g 2 s .
To avoid hyperweak couplings, the string scale is therefore around the GUT scale, M s 10 16 GeV, which is too high to reproduce M s 10 13 GeV. Let us stress that, even ignoring this issue, the R 4 term would still be suppressed by the same scale, the string scale, resulting in a positive exponential that spoils the inflationary plateau.
• Type IIB string: In this case k 1 = 0, and so the Starobinsky suppression scale M has to be identified with
M M s V 1/3 = g s M P V 1/6 , (3.5)
which could again reproduce M 10 13 GeV for g s 1 and V 1. In type IIB the overall internal volume is decoupled from the visible sector gauge coupling since the Standard Model (or genelisations thereof) is realised on stacks of D3 or D7branes which are localised in the extra-dimensions. However the mass scale which suppresses the R 4 terms would be given simply by the string scale, and so it would be even smaller than the one suppressing the R 2 term. This implies that the positive exponential arising from the R 4 term would destroy inflation very quickly.
Moduli inflation and Yukawa couplings
String theory proposes various candidate scalar fields that could drive inflation. Many interesting scenarios have been explored in the past years (see [4]). Inflationary solutions have been developed by employing open string fields controlling the position of either warped, unwarped or relativistic branes. Alternatively axions have also been proposed as candidates to drive inflation thanks to their perturbative shift symmetry. Most importantly for us, the possibility that Kähler moduli, naturally arising in string compactifications, could be the inflaton field has also been considered in a vast literature [7-20, 27, 28] since these fields (particularly the modes orthogonal to the overall volume) enjoy an effective and approximate non-compact shift symmetry [3,6]. In what follows we will focus on Kähler moduli since in supergravity effective theories they naturally admit an exponential potential that is the typical feature of Starobinsky inflation.
Our goal here is to explore the possibility that any of these Kähler moduli can resemble the Starobinsky field and therefore be eligible as a candidate for driving Starobinsky inflation. The first step is to verify that the coupling of this candidate modulus to matter coincides with the coupling to matter of the Starobinsky field, which is equal to y φ = −1/ √ 6. This will be our primary objective in the following discussion, where we will first consider as possible candidates the volume modulus and blow-up modes. Subsequently, we analyse the form of the scalar potential to check if it resembles the Starobinsky potential. We will also examine a particular class of constructions known as fibre inflation. Again, we will focus on the computation of the couplings to matter and of the scalar potential of bulk moduli appearing in fibred Calabi-Yau models of string inflation.
Volume modulus and blow-up modes
Due to the presence of the compactification volume in all the four-dimensional effective theories of string compactifications, it is natural to ask if it could be the field driving slow-roll inflation. The most concrete models have been constructed in the Large Volume Scenario (LVS) [29]. Following [30], we will consider compactifications on P 4 [1,1,1,6,9] . This is the single-hole Swiss cheese geometry with two Kähler moduli, T b = τ b +iθ b and T s = τ s +iθ s , and Calabi-Yau volume given by (up to an irrelevant overall 1/(9 √ 2) factor)
V = τ 3/2 b − τ 3/2 s . (3.6)
The big modulus τ b controls the size of the overall volume, while τ s controls the size of the blow-up of a point-like singularity. The Kähler potential and the superpotential are given by (setting from now on M P = 1) We will assume that the procedure of moduli stabilisation has been performed, giving the moduli a potential and a mass. Expanding the fields in τ i = τ i + δτ i , we can write, around the minimum of the moduli potential, the following Lagrangian [30]
K = −2 ln τ 3/2 b − τ 3/2 s +ξ 2 (3.7) W = W 0 + A s e −asTs ,(3.L = K ij ∂ µ (δτ i )∂ µ (δτ j ) − V 0 − (M 2 ) ij (δτ i )(δτ j ) − O(δτ 3 ) − 1 4g 2 SM F µν F µν ,(3.9)
where the Standard Model gauge coupling g SM can take three different expressions depending on the brane realisation of the visible sector: (i) g −2 SM = τ s for D7-branes wrapped around the blow-up mode τ s ; (ii) g −2 SM = τ b for D7-branes wrapped around the big four-cycle τ b (in this case the overall volume cannot be too large in order to avoid a hyperweak Standard Model gauge coupling); (iii) g −2 SM = s for D3-branes at singularities (where s is the dilaton with s = g −1 s ). It is useful to rewrite the Lagrangian in terms of mass eigenstates σ and φ (the heavy and the light field, respectively), defined by
δτ b δτ s = v σ σ √ 2 + v φ φ √ 2 ,(3.10)
where v σ and v φ are the normalised eigenvectors of (K −1 ) ij (M 2 )j k . In the following, we will need an expression for the original fields δτ b and δτ s in terms of σ and φ. These relations are given at leading order by [30]
δτ b = √ 6 τ b 1/4 τ s 3/4 σ √ 2 + 4 3 τ b φ √ 2 ∼ O V 1/6 0 σ + O V 2/3 0 φ (3.11) δτ s = 2 √ 6 3 τ b 3/4 τ s 1/4 σ √ 2 + √ 3 a s φ √ 2 ∼ O V 1/2 0 σ + O (1) φ , (3.12) with V 0 = V τ b 3/2 1.
Couplings to matter
Let us now compute the couplings between moduli and chiral matter fields. We consider the following relevant terms of the supergravity Lagrangian [30] L = Kψ ψ iψγ µ ∂ µ ψ + K HH ∂ µ H∂ µH − e K/2 y H Hψψ .
(3.13)
The Kähler metric for the chiral matter fields is given by
Kψ ψ ∼ KH H ∼ τ λs s τ λ b b = K 0 1 + λ s δτ s τ s − λ b δτ b τ b + . . . ,(3.14)
with K 0 ≡ τ s λs / τ b λ b and, depending on the detailed origin of the matter fields as open strings on branes, one can have [32]
λ s = 1, 1/2, 0 , λ b = 1 if g −2 SM = τ s , (3.15) λ s = 0 , λ b = 1, 1/2, 0 if g −2 SM = τ b , (3.16) λ s = 0 , λ b = 1 if g −2 SM = s .e K/2 = 1 V 1 τ 3/2 b − τ 3/2 s = 1 V 0 1 − 3 2 δτ b τ b + . . . . (3.18)
In terms of these expressions, we can now write the Lagrangian (3.13) as
L = K 0 iψγ µ ∂ µ ψ + K 0 ∂ µ H∂ µH − y H V 0 Hψψ + λ s δτ s τ s − λ b δτ b τ b K 0 iψγ µ ∂ µ ψ + λ s δτ s τ s − λ b δτ b τ b K 0 ∂ µ H∂ µH + 3 2 δτ b τ b y H V 0 Hψψ . (3.19)
The next step is to canonically normalise the fields and impose electroweak symmetry breaking. The Higgs will acquire a vacuum expectation value and the following fermion mass will be generated y H H c = m ψ .
(3.20)
Therefore the Lagrangian describing the cubic couplings between matter fields and moduli looks like
L cubic = λ s δτ s τ s − λ b δτ b τ b ψ (iγ µ ∂ µ − m ψ ) ψ + λ s δτ s τ s + 3 2 − λ b δτ b τ b m ψψ ψ , (3.21)
where only the last term gives a non-zero contribution after the equations of motion are taken into account. Our first goal is to prove that the volume modulus τ b has the right coupling to matter. Let us first remark that from (3.11), it is possible to notice that τ b is mostly given by φ while τ s is mostly σ. To get the coupling, let us now consider the last term of (3.21), in particular focussing on the τ b part
L φψψ 3 2 − λ b δτ b τ b m ψψ ψ . (3.22)
Given that the light field φ is dominantly the volume modulus, we can reduce (3.11) to
δτ b τ b 2 3 φ . (3.23)
Using this, (3.22) can be written as
L φψψ − 2 3 λ b − 3 2 m ψ φψψ ,(3.24)
from which we can easily read off the coupling between the light field φ and matter, which is equal to
y φ = 2 3 λ b − 3 2 = − 1 √ 6 if λ b = 1 . (3.25)
As shown above, (3.25) reproduces the correct conformal coupling to matter of Starobinsky inflation, y φ = −1/ √ 6, if λ b = 1. As can be seen from (3.15), (3.16) and (3.17), this is indeed very often the case since λ b = 1 if the Standard Model lives on D7-branes wrapped around the blow-up mode, on D3-branes but also on D7-branes wrapped around the big four-cycle τ b provided the matter fields originate from the reduction of gauge fields within the D7-worldvolume [32]. In our limit, the light field φ is basically the volume modulus τ b . The fact that the volume modulus has the same coupling to matter as the Starobinsky field represents a first step towards a possible identification between the two.
Let us now present the coupling to matter of the heavy field σ, showing that it is different from −1/ √ 6, therefore excluding the possibility for the τ s modulus to be the scalar field responsible for Starobinksy inflation. To proceed, let us consider the τ s part in the last term of (3.21)
L σψψ λ s δτ s τ s m ψψ ψ . (3.26)
From the expansion (3.11), we can see that it is possible to make the following approximation δτ s τ s V 0 σ . (3.27) We conclude that L σψψ λ s V 0 m ψ σψψ .
(3.28)
The general classification (3.15), (3.16) and (3.17) shows that λ s is non-zero only when the Standard Model lives on D7-branes wrapped around τ s . However in this case the absolute value of the Yukawa coupling to matter of the heavy field σ (mainly the small modulus τ s ) is much larger than 1/ √ 6 since V 0 1. Note also that σ is approximately decoupled from ordinary matter when the Standard Model is realised via D7-branes on τ b or via D3branes at singularities since these cases are characterised by λ s = 0. A non-zero Yukawa coupling would be generated by considering the term in (3.21) proportional to δτ b and then exploiting the dependence of δτ b on σ in (3.11). However it is easy to check that this would induce a very suppressed coupling proportional to V −1/2 0 1. We therefore conclude that σ cannot be identified with the Starobinksy field.
Scalar potential
Although the Kähler modulus τ b controlling the overall volume of the Calabi-Yau manifold has the right coupling to matter, it is well known that its scalar potential is not suitable for a slow-roll inflationary behaviour. Recall that the Starobinsky potential has the form (2.13) with a slow-roll plateau at large values of φ.
Such a constant behaviour of the inflationary potential can never be reproduced for the volume mode since any contribution to the scalar potential has to depend on V due to the Weyl rescaling to express the four-dimensional action in Einstein frame. Alternatively, this can also be seen from the general expression of the supergravity scalar potential that is proportional to e K = V −2 . This has to be the case since the potential has to go to zero in the infinity volume limit, V → ∞, in order to recover a ten-dimensional Minkowski solution in the decompactification regime.
Due to these very general considerations, at large field values, the potential of the volume modulus will feature a runaway behaviour, instead of a Starobinsky-like plateau, regardless of the effects (perturbative or non-perturbative) which break the no-scale structure and generate the potential for V. Without loss of generality, we therefore illustrate this claim by focusing on the simplest LVS scenario where the (uplifted) scalar potential reads
V = 8 √ τ s (a s A s ) 2 e −2asτs 3V − 4W 0 a s A s τ s e −asτs V 2 + 3ξW 2 0 4V 3 + V up , (3.29)
where, again without loss of generality, we parameterised the uplift potential as V up = δ/V α with α < 3. A single-field potential for the volume modulus can be obtained upon
integrating out τ s V =ξ W 2 0 2V 3 1 − 8 (ln V) 3/2 3ξa 3/2 s + δ V α . (3.30)
This expression depends only on the volume V, which is related to the modulus τ b and to the canonically normalised field φ through In terms of the canonical field φ, the volume mode potential reads
V 2/3 τ b = eV (φ) =ξ W 2 0 2 1 − 8 3 2 φ 3/2 3ξa 3/2 s e − 27 2 φ + δ e −α 3 2 φ . (3.32)
This potential is shown in Fig. 3. As anticipated, due to the volume dependence of the uplift term, the potential does not have the necessary plateau of the Starobinsky potential, featuring instead a typical runaway at large field values. Thus the volume modulus τ b cannot drive Starobinksy inflation.
On the upside, since the uplift depends on the volume only, the uplifting contribution is constant in other directions of moduli space. If we can find another modulus with the correct coupling to the matter sector, this could provide the plateau we need.
Fibre moduli
In the context of LVS moduli stabilisation in type IIB Calabi-Yau orientifold compactifications, another particular realisation of inflation is given by the class of constructions known as fibre inflation [3,[10][11][12][13][14][15][16]. We now briefly discuss its setup following [3,10,33].
The main idea behind fibre inflation is to use as the inflaton one of the Kähler moduli whose potential is generated by perturbative (in α or g s ) corrections to the Kähler potential [34][35][36][37][38][39] which are subdominant with respect to the leading O(α 3 ) term [31] that we have considered to stabilise the volume modulus. The simplest construction uses a Calabi-Yau manifold which is a K3 fibration over a P 1 base. For concreteness, let us consider the P 4 [1,1,2,2,6] geometry. The model has two Kähler moduli τ 1 and τ 2 . We require also the existence of a third Kähler modulus which is implemented by the presence of an additional blow-up cycle, whose volume we denote by τ s [40,41]. To summarise, this model (and also fibre inflation models in general) involves a Calabi-Yau manifold with at least three Kähler moduli (denoting them by T i = τ i + iθ i , with τ i the geometric modulus and θ i its axionic partner):
• T 1 = τ 1 + iθ 1 . The geometric modulus τ 1 corresponds to the volume of a K3 fibre over a P 1 base.
• T 2 = τ 2 + iθ 2 , where τ 2 is the volume of a divisor which contains the P 1 base.
• T s = τ s + iθ s . As in the Swiss cheese example, τ s corresponds to the volume of a blow-up cycle (the 'hole' of the Swiss cheese).
There are some differences with respect to the previously considered LVS model. Here, τ 1 is stabilised at subleading order due to string loop or higher α corrections to the Kähler potential. The overall volume V is mainly controlled by τ 2 which is stabilised at leading order by O(α 3 ) corrections. The most important difference is that the volume modulus V acts only as a spectator during inflation and it is heavier than the inflaton. For what concerns τ s , it is fixed by non-perturbative corrections to the superpotential W and it is heavier than the inflaton τ 1 during inflation. In terms of these moduli, the compactification volume is given by
V = κ 0 √ τ 1 τ 2 − κ s τ 3/2 s ,(3.33)
with κ 0 and κ s two model-dependent constants of order one, which are determined by the Calabi-Yau intersection numbers. Without loss of generality, we shall set κ 0 = κ s = 1.
Scalar potential
The idea behind fibre inflation is to rely on perturbative corrections to the effective action. Before including these, it may be useful to summarise the procedure for obtaining the leading order scalar potential. We compute the scalar potential by considering the leading α corrections to the Kähler potential K, and non-perturbative corrections to the superpotential W
K = K 0 + δK α = −2 ln V +ξ 2 and W = W 0 + i A i e −a i T i . (3.34)
Since we are interested in the large volume regime where
√ τ 1 τ 2 τ 3/2 s ,(3.35)
we can actually neglect non-perturbative effects involving τ 1 and τ 2 (i.e. the T 1 and T 2 dependence in W ) and keep only the T s dependence:
W W 0 + A s e −asTs .(3.36)
Including an uplifting sector, the leading order scalar potential takes the form (3.29). The key point of this construction is that this potential depends only on τ s and V. These get stabilised at (in the a s τ s 1 limit)
τ s = ξ 2 2/3 and V = 3 4a s A s W 0 τ s e as τs .(3.37)
Most importantly, we have a completely flat direction in the (τ 1 , τ 2 )-plane, along which V is constant. This flat direction is lifted by subdominant contributions to the Kähler potential. In fibre inflation these are the corrections providing the leading terms in the inflaton potential.
Focusing on the version of fibre inflation where the inflaton potential arises from string loop effects [34][35][36][37], we have 1
δV gs = W 2 0 V 2 A g 2 s τ 2 1 − B √ τ 1 V + C g 2 s τ 1 V 2 ,(3.38)
where A, B and C are functions of the complex structure moduli which are expected to become O(1) constants after these moduli are frozen by background fluxes. Taking this contribution into account, the fibre modulus τ 1 gets fixed at
τ 1 g 4/3 s V 2/3 . (3.39)
So, how can we achieve inflation? We have established the existence of an LVS minimum of the potential. The idea is to displace a field away from its minimum and explore the possibility of having an inflationary dynamics. Due to the fact that the potential for τ 1 is flat if we do not include string loop corrections, this is the field we displace to drive inflation. Displacing τ 1 from its minimum is equivalent to increasing the size of the K3 fibre while shrinking the base. If we consider τ s and V stabilised at their minima, we can integrate them out to obtain a single-field potential.
Instead of working with τ 1 and τ 2 , we will define two new fields, ρ and φ. These are the physical mass eigenstates which diagonalise the mass matrix. They are related to τ 1 and τ 2 through [42] ln
τ 1 = 2 3 ρ + 2 √ 3 φ , (3.40) ln τ 2 = 2 3 ρ − 1 √ 3 φ . (3.41)
Notice that ρ corresponds to the volume, while φ gives the ratio u between τ 1 and τ 2
V = √ τ 1 τ 2 = e 3 2 ρ , u ≡ τ 1 τ 2 = e √ 3φ .(3.42)
Let us consider the field φ and shift it from its minimum φ = φ +φ. The potential takes the form
V (φ) = V 0 3 − 4 e −φ/ √ 3 + e −4φ/ √ 3 + λ e 2φ/ √ 3 , (3.43) where V 0 ≡ O(1) × V −10/3 0
is the inflaton-independent uplifting contribution and λ ≡ 16g 4 s AC/B 2 ∝ g 4 s 1. This potential is shown in Fig. 4 and it resembles Starobinsky 1 Recall that string loops are perturbative corrections, and so we work in the regime W0 O(1), where they become relevant. inflation very closely. The similarity between the two is clearer if we focus on the slow-roll region where the fibre inflation potential can be very well approximated as
V (φ) V 0 1 − 4 3 e −φ/ √ 3 . (3.44)
In fact, ref. [17] tried to apply the standard f (R) duality to fibre inflation with approximated potential (3.44) finding that its version in Jordan frame would be an f (R) theory with f (R) = R 2−1/ √ 2 + R 2 R 1.3 + R 2 which is very similar to the Starobinsky model f (R) = R+R 2 . Moreover, fibre inflation predicts a relation among the scalar spectral index n s and the tensor-to-scalar ratio r of the form r = 6(n s − 1) 2 which is almost analogous to the one of Starobinsky inflation r = 3(n s − 1) 2 [3].
As can be clearly seen in Fig. 4, the fibre inflation potential (3.43) features a rising behaviour for large field values. Again this is in complete analogy with the case of Starobinsky inflation, where higher derivative corrections make the potential too steep to drive inflation at large field values, as demonstrated in Fig. 2 for the case of R 4 corrections. There the coefficient λ has to satisfy |λ| 1 to preserve inflation. However, as explained in Sec. 2.4, λ is expected to be of O(1) from effective field theory arguments, as well as from explicit dimensional reduction of ten-dimensional higher derivative effects in string theory. In fibre inflation, on the other hand, the situation is improved since the smallness of λ is now guaranteed by the fact that it turns out to be proportional to the string coupling, which is a priori required to be small in order to trust perturbation theory, λ ∝ g 4 s 10 −4 for g s 0.1.
Couplings to matter
We now proceed to compute the couplings to matter of the two moduli τ 1 and τ 2 . Working in the large volume regime, we have
√ τ 1 τ 2 τ 3/2 s ⇒ V √ τ 1 τ 2 . (3.45)
The tree-level Kähler potential is given by
K = −2 ln V − ln τ 1 − 2 ln τ 2 . (3.46)
The kinetic Lagrangian for the moduli becomes (at leading order)
L kin = K ij ∂ µ T i ∂ µTj = 1 4τ 2 1 ∂ µ τ 1 ∂ µ τ 1 + 1 2τ 2 2 ∂ µ τ 2 ∂ µ τ 2 .
(3.47)
After including matter fields, the relevant Lagrangian takes the form
L = 1 4 τ 2 1 ∂ µ τ 1 ∂ µ τ 1 + 1 2 τ 2 2 ∂ µ τ 2 ∂ µ τ 2 + Kψ ψ iψγ µ ∂ µ ψ − e K/2mψ ψ .
(3.48)
Before proceeding, we have to specify the form of the Kähler metric for the matter fields. We start by considering the case where the Standard Model is localised in the extra dimensions since it is realised by either D7-branes wrapped around a shrinkable four-cycle τ s or by D3branes at singularities. As listed in (3.15) and (3.17) for a simple Swiss-cheese geometry, in this case the matter Kähler metric would scale as Kψ ψ ∼ τ −1 b . This expression can be used also in our case after substituting τ b ∼ V 2/3 , and then generalising to
Kψ ψ 1 V 2/3 1 τ 1/3 1 τ 2/3 2 , (3.49)
where we ignored the dependence of the Kähler matter metric on τ s since it would induce just a subdominant contribution to the couplings of τ 1 and τ 2 . Moreover, we have
e K/2 1 √ τ 1 τ 2 . (3.50)
Using these expressions, we arrive at the following Lagrangian
L = 1 4τ 2 1 ∂ µ τ 1 ∂ µ τ 1 + 1 2τ 2 2 ∂ µ τ 2 ∂ µ τ 2 + 1 τ 1/3 1 τ 2/3 2 iψγ µ ∂ µ ψ − 1 √ τ 1 τ 2mψ ψ . (3.51)
We can now canonically normalise the fields as follows
τ 1 = e √ 2φ 1 , τ 2 = e φ 2 , ψ τ 1/6 1 τ 1/3 2 = ψ c ,(3.52)
The Lagrangian in terms of the canonical fields looks like
L = 1 2 ∂ µ φ 1 ∂ µ φ 1 + 1 2 ∂ µ φ 2 ∂ µ φ 2 + iψ c γ µ ∂ µ ψ c − e − √ 2 6 φ 1 e − 1 3 φ 2mψ c ψ c . (3.53)
We can now read off the fermionic mass which is given by
m = e − √ 2 6 φ 1 e − 1 3 φ 2 m . (3.54)
Let us now focus on the last term of (3.53), shift the fields as φ i = φ i +φ i , and expand the exponentials up to linear order to get
L ⊃ − 1 − √ 2 6φ 1 1 −φ 2 3 mψ c ψ c −mψ c ψ c + √ 2 6 mφ 1ψc ψ c + 1 3 mφ 2ψc ψ c . (3.55)
Recall that we are interested in the mass eigenstates ρ and φ. Out of (3.40), it is straightforward to obtain a relation between ρ, φ and φ 1 and φ 2
φ 1 = 1 √ 3 ρ + 2 3 φ , (3.56) φ 2 = 2 3 ρ − 1 √ 3 φ (3.57)
Substituting these expressions into (3.55) we get
L ⊃ −mψ c ψ c + √ 2 6 m 1 √ 3ρ + 2 3φ ψ c ψ c + 1 3 m 2 3ρ − 1 √ 3φ ψ c ψ c = −mψ c ψ c + 1 √ 6 mρψ c ψ c ,(3.58)
from which we can easily read off the couplings to matter of the physical fields
y ρ = − 1 √ 6 , y φ = 0 . (3.59)
We see that the field ρ has the right Yukawa coupling to matter. This is not surprising since ρ corresponds to the overall volume mode which we have already shown to feature the same coupling to matter as the Starobinsky field, even if its potential is just a runaway at large field values. On the contrary, φ is effectively decoupled at this level of approximation. A non-zero y φ would be induced by any perturbative corrections to the matter Kähler metric which depends explicitly on the mode u = τ 1 /τ 2 orthogonal to V. However this coupling would be much weak than Planckian due to volume suppression factors. However, we have seen that a suitable scalar potential with the desired plateau can be constructed for φ (or equivalently u). This motivates the search for other D-brane configurations leading to different choices for the Kähler metric of matter fields. Since φ features a promising inflationary potential, the hope is to find a configuration for which it also couples to matter appropriately.
Let us therefore consider the situation where the Standard Model lives on D7-branes wrapped around the two bulk cycles τ 1 and τ 2 which control the Calabi-Yau volume. In analogy with the Swiss-cheese case summarised in (3.16), we expect modular weights which can take values in {0, 1/2, 1}. To be as general as possible, we consider however a Kähler matter metric with arbitrary modular weights
Kψ ψ 1 τ λ 1 1 τ λ 2 2 .
(3.60)
We determine now the values of the modular weights for which the coupling of φ to matter reproduces y φ = −1/ √ 6. The starting point is the following Lagrangian
L = 1 4τ 2 1 ∂ µ τ 1 ∂ µ τ 1 + 1 2τ 2 2 ∂ µ τ 2 ∂ µ τ 2 + 1 τ λ 1 1 τ λ 2 2 iψγ µ ∂ µ ψ − 1 √ τ 1 τ 2mψ ψ . (3.61)
The canonically normalised versions of τ 1 and τ 2 are still given by φ 1 and φ 2 as in (3.52). The matter fields, instead, are normalised according to
ψ τ λ 1 /2 1 τ λ 2 /2 2 = ψ c . (3.62)
Having canonically normalised the fields, let us focus on the relevant term in the Lagrangian
L ⊃ −τ (λ 1 − 1 2 ) 1 τ (λ 2 −1) 2mψ c ψ c .
(3.63)
Using
τ 1 = e √ 2φ 1 , τ 2 = e φ 2 ,(3.64)
substituting this result in the Lagrangian (3.63), and expanding to first order, we obtain the following cubic couplings
L ⊃ − 1 + √ 2 λ 1 − 1 2 φ 1 1 + (λ 2 − 1)φ 2 mψ c ψ c ⊃ −mψ c ψ c − √ 2 λ 1 − 1 2 φ 1 + (λ 2 − 1)φ 2 mψ c ψ c .
(3.65)
At this point, we substitute the expressions (3.56) giving φ 1 and φ 2 in terms of ρ and φ and obtain
L ⊃ −mψ c ψ c 2 3 λ 1 + λ 2 − 3 2 ρ + 1 √ 3 (2λ 1 − λ 2 )φ . (3.66)
The Yukawa coupling to matter of the inflaton fieldφ of fibre inflation models takes therefore the generic form
y φ = 1 √ 3 (2λ 1 − λ 2 ) .2λ 1 − λ 2 = − 1 √ 2 . (3.68)
This equation can never be satisfied if the modular weights are rational numbers, implying that fibre inflation cannot reproduce the conformal coupling to matter typical of Starobinsky inflation. As already pointed out, the Kähler metrics for chiral matter fields on different brane setups in toroidal/orbifold orientifolds have been studied in [32], noting that the modular weights are generally rational numbers. For the toroidal case, the only three possibilities are λ i ∈ {0, 1/2, 1}. In more general setups, it is hard to determine the modular weights directly. Fortunately they normalise the physical Yukawa couplings, which are also given by the triple overlap of wave functions in the extra dimensions [43]. Thereby one can extract the sum of three modular weights, 2 which is related to the scaling of the wave functions with the cycle volumes given by the moduli. It is clear that there are no irrational numbers involved here, so also the individual modular weights must be rational. Hence (3.68) does not seem to be achievable. 2 Unless the unnormalised coupling vanishes, i.e. there is no trilinear coupling between the fields.
For rational modular weights, the Yukawa couplings of fibre moduli to fermions are expected to be of O(1) with exact numerical values which depend on the D-brane origin of matter fields. As illustrative examples, we consider two cases following again [32]. Fermions at the intersection between a D7-stack wrapping τ 1 and another D7-stack wrapping τ 2 would be characterised by λ 1 = 1/2 and λ 2 = 0, resulting in y φ = 1/ √ 3. On the other hand, open string modes living within the worldvolume of a D7-stack wrapping τ 1 would have λ 1 = 0 and λ 2 = 1, resulting in y φ = −1/ √ 3. Note finally that the couplings of fibre moduli to gauge bosons and Higsses on τ 1 , as well as to ultralight bulk axions, play a relevant role for reheating and have been computed in [33,44].
Conclusions
In this paper we have investigated the possibility to embed Starobinsky inflation in a UV complete theory as string theory. Previous studies [3] have already shed light on the behaviour of higher curvature terms arising as the low-energy limit of ten-dimensional αcorrections. In particular, when the R 2 term competes with the standard Einstein-Hilbert action, R n terms with n > 2 can never be neglected, and tend to spoil the flatness of the inflationary plateau.
We instead took a different approach which exploits the fact that R + R 2 inflation is conformally dual to a standard Einstein gravity coupled to a scalar field with a flat exponential potential and a precise Yukawa coupling to matter fermions which are minimally coupled in Jordan frame. We tried therefore to search for string moduli which can effectively reproduce these features within the four-dimensional low-energy theory of Calabi-Yau compactifications at large volume and weak string coupling. We focused on type IIB Kähler moduli which naturally admit an exponential potential and tend to couple to matter with Planckian strength.
We investigated three different classes of stringy inflaton candidates: the volume modulus, blow-up modes and fibre moduli. While the overall volume modulus generically features the correct Yukawa coupling to matter, the typical runaway behaviour of its potential does not allow for the plateau necessary for Starobinsky-like inflation. Blow-up moduli couple instead too strongly to matter fermions localised on D7-branes wrapping these four-cycles, or are effectively decoupled in other Standard Model realisations. Moreover, their potential is too shallow to reproduce Starobinsky inflation. Finally, the moduli of fibre inflation models feature a promising scalar potential which leads to cosmological predictions very similar to the ones of Starobinsky inflation, for example for the scalar spectral index and the tensor-to-scalar ratio. Moreover, fibre inflation features a nice mechanism to tame higher order corrections since they are proportional to a small suppression factor of order g 4 s 1. However fibre moduli cannot reproduce the correct Yukawa coupling to fermions in any D-brane setup which realises the visible sector. In particular, we found that to produce the universal coupling that would allow us to identify a fibre modulus as the Starobinsky inflaton, its modular weight would need to be irrational. Given that modular weights are expected to be rational [32], engineering the correct coupling in this way seems impossible.
We conclude therefore that providing a UV complete justification of Starobinsky inflation remains a big challenge. Our detailed computation of moduli couplings to fermions, provides also a way to discriminate between Starobinsky inflation (setting aside the issue of UV consistency) and fibre inflation.
Figure 1 :
1Starobinsky potential (2.13). Inflation occurs for trans-Planckian values of the field φ.
Figure 2 :
2Starobinsky potential with R 4 corrections given by(2.22).
s
, with ξ = ζ(3)χ(M )/(2π) 3 a constant controlling the size of O(α 3 ) corrections [31]. Here ζ(3) 1.20 is Apéry's constant and χ(M ) is the Euler number of the Calabi-Yau manifold.
, when the Standard Model is realised with D7-branes wrapping the four-cycle τ i , the modular weight λ i = 1 corresponds to open strings living within the D7-worldvolume, λ i = 1/2 to matter fields at the intersection between two stacks of D7-branes, and λ i = 0 to open string moduli controlling the position of D7-branes.Moreover we will use the expansion[30]
Figure 3 :
3The volume modulus potential (3.32) for a parameter choice that realises an uplift to Minkowski, featuring the typical runaway towards large volume.
Figure 4 :
4The inflationary potential (3.43) for fibre inflation with λ = 10 −6 features a plateau-like region very similar to Starobinsky inflation.
coupling can match the conformal coupling of the Starobinsky scalar y φ = −1/ √ 6 only if
arXiv:2305.05703v1 [hep-th] 9 May 2023Contents
1 Introduction
1
2 Basics of Starobinsky inflation
4
2.1 R + R 2 gravity
4
2.2 Inflationary plateau
5
2.3 Coupling to matter
5
2.4 General f (R) expansion
7
3 A stringy embedding?
8
3.1 Higher curvatures from ten dimensions
8
3.2 Moduli inflation and Yukawa couplings
9
3.2.1 Volume modulus and blow-up modes
10
3.2.2 Fibre moduli
14
4 Conclusions
21
AcknowledgementsWe would like to thank Cliff Burgess and Fernando Quevedo for useful conversations.
A New Type of Isotropic Cosmological Models Without Singularity. A A Starobinsky, 10.1016/0370-2693(80)90670-XPhys. Lett. B. 9199A.A. Starobinsky, A New Type of Isotropic Cosmological Models Without Singularity, Phys. Lett. B 91 (1980) 99.
10.1051/0004-6361/201833887A10 [1807.06211Planck 2018 results. X. Constraints on inflation. 641Planck collaboration, Planck 2018 results. X. Constraints on inflation, Astron. Astrophys. 641 (2020) A10 [1807.06211].
C P Burgess, M Cicoli, S De Alwis, F Quevedo, 10.1088/1475-7516/2016/05/0321603.06789Robust Inflation from Fibrous Strings. 0532C.P. Burgess, M. Cicoli, S. de Alwis and F. Quevedo, Robust Inflation from Fibrous Strings, JCAP 05 (2016) 032 [1603.06789].
M Cicoli, J P Conlon, A Maharana, S Parameswaran, F Quevedo, I Zavala, 2303.04819String Cosmology: from the Early Universe to Today. M. Cicoli, J.P. Conlon, A. Maharana, S. Parameswaran, F. Quevedo and I. Zavala, String Cosmology: from the Early Universe to Today, 2303.04819.
String moduli inflation: An overview. M Cicoli, F Quevedo, 10.1088/0264-9381/28/20/204001Class. Quant. Grav. 282040011108.2659M. Cicoli and F. Quevedo, String moduli inflation: An overview, Class. Quant. Grav. 28 (2011) 204001 [1108.2659].
Inflating with Large Effective Fields. C P Burgess, M Cicoli, F Quevedo, M Williams, 10.1088/1475-7516/2014/11/045JCAP. 11451404.6236C.P. Burgess, M. Cicoli, F. Quevedo and M. Williams, Inflating with Large Effective Fields, JCAP 11 (2014) 045 [1404.6236].
Volume Modulus Inflation and the Gravitino Mass Problem. J P Conlon, R Kallosh, A D Linde, F Quevedo, 10.1088/1475-7516/2008/09/011JCAP. 09110806.0809J.P. Conlon, R. Kallosh, A.D. Linde and F. Quevedo, Volume Modulus Inflation and the Gravitino Mass Problem, JCAP 09 (2008) 011 [0806.0809].
Microscopic Origin of Volume Modulus Inflation. M Cicoli, F Muia, F G Pedro, 10.1088/1475-7516/2015/12/0401509.07748JCAP. 1240M. Cicoli, F. Muia and F.G. Pedro, Microscopic Origin of Volume Modulus Inflation, JCAP 12 (2015) 040 [1509.07748].
Inflation near a metastable de Sitter vacuum from moduli stabilisation. I Antoniadis, O Lacombe, G K Leontaris, 10.1140/epjc/s10052-020-08581-9Eur. Phys. J. C. 801014I. Antoniadis, O. Lacombe and G.K. Leontaris, Inflation near a metastable de Sitter vacuum from moduli stabilisation, Eur. Phys. J. C 80 (2020) 1014 [2007.10362].
M Cicoli, C P Burgess, F Quevedo, 10.1088/1475-7516/2009/03/013Fibre Inflation: Observable Gravity Waves from IIB String Compactifications. 03130808.0691M. Cicoli, C.P. Burgess and F. Quevedo, Fibre Inflation: Observable Gravity Waves from IIB String Compactifications, JCAP 03 (2009) 013 [0808.0691].
Starobinsky-Type Inflation from α -Corrections. B J Broy, D Ciupke, F G Pedro, A Westphal, 10.1088/1475-7516/2016/01/0011509.00024JCAP. 011B.J. Broy, D. Ciupke, F.G. Pedro and A. Westphal, Starobinsky-Type Inflation from α -Corrections, JCAP 01 (2016) 001 [1509.00024].
Inflation: moduli stabilisation and observable tensors from higher derivatives. M Cicoli, D Ciupke, S De Alwis, F Muia, 10.1007/JHEP09(2016)0261607.01395JHEP. 0926M. Cicoli, D. Ciupke, S. de Alwis and F. Muia, α Inflation: moduli stabilisation and observable tensors from higher derivatives, JHEP 09 (2016) 026 [1607.01395].
Global Embedding of Fibre Inflation Models. M Cicoli, F Muia, P Shukla, 10.1007/JHEP11(2016)1821611.04612JHEP. 11182M. Cicoli, F. Muia and P. Shukla, Global Embedding of Fibre Inflation Models, JHEP 11 (2016) 182 [1611.04612].
Chiral Global Embedding of Fibre Inflation Models. M Cicoli, D Ciupke, V A Diaz, V Guidetti, F Muia, P Shukla, 10.1007/JHEP11(2017)2071709.01518JHEP. 11207M. Cicoli, D. Ciupke, V.A. Diaz, V. Guidetti, F. Muia and P. Shukla, Chiral Global Embedding of Fibre Inflation Models, JHEP 11 (2017) 207 [1709.01518].
Fitting string inflation to real cosmological data: The fiber inflation case. M Cicoli, E Di Valentino, 10.1103/PhysRevD.102.043521Phys. Rev. D. 10243521M. Cicoli and E. Di Valentino, Fitting string inflation to real cosmological data: The fiber inflation case, Phys. Rev. D 102 (2020) 043521 [2004.01210].
Fibre Inflation and Precision CMB Data. S Bhattacharya, K Dutta, M R Gangopadhyay, A Maharana, K Singh, 10.1103/PhysRevD.102.123531Phys. Rev. D. 1021235312003.05969S. Bhattacharya, K. Dutta, M.R. Gangopadhyay, A. Maharana and K. Singh, Fibre Inflation and Precision CMB Data, Phys. Rev. D 102 (2020) 123531 [2003.05969].
Disentangling the f (R) -Duality. B J Broy, F G Pedro, A Westphal, 10.1088/1475-7516/2015/03/0291411.6010JCAP. 0329B.J. Broy, F.G. Pedro and A. Westphal, Disentangling the f (R) -Duality, JCAP 03 (2015) 029 [1411.6010].
Kahler moduli inflation. J P Conlon, F Quevedo, 10.1088/1126-6708/2006/01/146hep-th/0509012JHEP. 01146J.P. Conlon and F. Quevedo, Kahler moduli inflation, JHEP 01 (2006) 146 [hep-th/0509012].
Roulette inflation with Kahler moduli and their axions. J R Bond, L Kofman, S Prokushkin, P M Vaudrevange, 10.1103/PhysRevD.75.123511hep-th/0612197Phys. Rev. D. 75123511J.R. Bond, L. Kofman, S. Prokushkin and P.M. Vaudrevange, Roulette inflation with Kahler moduli and their axions, Phys. Rev. D 75 (2007) 123511 [hep-th/0612197].
Kahler Moduli Inflation Revisited. J J Blanco-Pillado, D Buck, E J Copeland, M Gomez-Reino, N J Nunes, 10.1007/JHEP01(2010)081JHEP. 01810906.3711J.J. Blanco-Pillado, D. Buck, E.J. Copeland, M. Gomez-Reino and N.J. Nunes, Kahler Moduli Inflation Revisited, JHEP 01 (2010) 081 [0906.3711].
V Faraoni, E Gunzig, P Nardone, gr-qc/9811047Conformal transformations in classical gravitational theories and in cosmology. 20121V. Faraoni, E. Gunzig and P. Nardone, Conformal transformations in classical gravitational theories and in cosmology, Fund. Cosmic Phys. 20 (1999) 121 [gr-qc/9811047].
f(R) theories, Living Rev. A De Felice, S Tsujikawa, 10.12942/lrr-2010-31002.4928Rel. 133A. De Felice and S. Tsujikawa, f(R) theories, Living Rev. Rel. 13 (2010) 3 [1002.4928].
A polynomial f(R) inflation model. Q.-G Huang, 10.1088/1475-7516/2014/02/0351309.3514JCAP. 0235Q.-G. Huang, A polynomial f(R) inflation model, JCAP 02 (2014) 035 [1309.3514].
Beyond the Starobinsky model for inflation. D Y Cheong, H M Lee, S C Park, 10.1016/j.physletb.2020.135453Phys. Lett. B. 8051354532002.07981D.Y. Cheong, H.M. Lee and S.C. Park, Beyond the Starobinsky model for inflation, Phys. Lett. B 805 (2020) 135453 [2002.07981].
Analytic extensions of Starobinsky model of inflation. V R Ivanov, S V Ketov, E O Pozdeeva, S Y Vernov, 10.1088/1475-7516/2022/03/058JCAP 03 (2022) 058 [2111.09058V.R. Ivanov, S.V. Ketov, E.O. Pozdeeva and S.Y. Vernov, Analytic extensions of Starobinsky model of inflation, JCAP 03 (2022) 058 [2111.09058].
S M Lee, T Modak, K Oda, T Takahashi, 2303.09866Ultraviolet Sensitivity in Higgs-Starobinsky Inflation. S.M. Lee, T. Modak, K.-y. Oda and T. Takahashi, Ultraviolet Sensitivity in Higgs-Starobinsky Inflation, 2303.09866.
Poly-instanton Inflation. M Cicoli, F G Pedro, G Tasinato, 10.1088/1475-7516/2011/12/0221110.6182JCAP. 1222M. Cicoli, F.G. Pedro and G. Tasinato, Poly-instanton Inflation, JCAP 12 (2011) 022 [1110.6182].
Inflation from the internal volume in type IIB/F-theory compactification. I Antoniadis, Y Chen, G K Leontaris, 10.1142/S0217751X195004281810.05060Int. J. Mod. Phys. A. 341950042I. Antoniadis, Y. Chen and G.K. Leontaris, Inflation from the internal volume in type IIB/F-theory compactification, Int. J. Mod. Phys. A 34 (2019) 1950042 [1810.05060].
Systematics of moduli stabilisation in Calabi-Yau flux compactifications. V Balasubramanian, P Berglund, J P Conlon, F Quevedo, 10.1088/1126-6708/2005/03/007hep-th/0502058JHEP. 037V. Balasubramanian, P. Berglund, J.P. Conlon and F. Quevedo, Systematics of moduli stabilisation in Calabi-Yau flux compactifications, JHEP 03 (2005) 007 [hep-th/0502058].
Astrophysical and cosmological implications of large volume string compactifications. J P Conlon, F Quevedo, 10.1088/1475-7516/2007/08/019JCAP. 08190705.3460J.P. Conlon and F. Quevedo, Astrophysical and cosmological implications of large volume string compactifications, JCAP 08 (2007) 019 [0705.3460].
Supersymmetry breaking and alpha-prime corrections to flux induced potentials. K Becker, M Becker, M Haack, J Louis, 10.1088/1126-6708/2002/06/060hep-th/0204254JHEP. 0660K. Becker, M. Becker, M. Haack and J. Louis, Supersymmetry breaking and alpha-prime corrections to flux induced potentials, JHEP 06 (2002) 060 [hep-th/0204254].
Modulus-dominated SUSY-breaking soft terms in F-theory and their test at LHC. L Aparicio, D G Cerdeno, L E Ibanez, 10.1088/1126-6708/2008/07/099JHEP. 07990805.2943L. Aparicio, D.G. Cerdeno and L.E. Ibanez, Modulus-dominated SUSY-breaking soft terms in F-theory and their test at LHC, JHEP 07 (2008) 099 [0805.2943].
Reheating and Dark Radiation after Fibre Inflation. M Cicoli, G A Piovano, 10.1088/1475-7516/2019/02/0481809.01159JCAP. 0248M. Cicoli and G.A. Piovano, Reheating and Dark Radiation after Fibre Inflation, JCAP 02 (2019) 048 [1809.01159].
String loop corrections to Kahler potentials in orientifolds. M Berg, M Haack, B Kors, 10.1088/1126-6708/2005/11/030hep-th/0508043JHEP. 1130M. Berg, M. Haack and B. Kors, String loop corrections to Kahler potentials in orientifolds, JHEP 11 (2005) 030 [hep-th/0508043].
Kahler corrections for the volume modulus of flux compactifications. G Gersdorff, A Hebecker, 10.1016/j.physletb.2005.08.024hep-th/0507131Phys. Lett. B. 624270G. von Gersdorff and A. Hebecker, Kahler corrections for the volume modulus of flux compactifications, Phys. Lett. B 624 (2005) 270 [hep-th/0507131].
Jumping Through Loops: On Soft Terms from Large Volume Compactifications. M Berg, M Haack, E Pajer, 10.1088/1126-6708/2007/09/031JHEP. 09310704.0737M. Berg, M. Haack and E. Pajer, Jumping Through Loops: On Soft Terms from Large Volume Compactifications, JHEP 09 (2007) 031 [0704.0737].
M Cicoli, J P Conlon, F Quevedo, 10.1088/1126-6708/2008/01/052Systematics of String Loop Corrections in Type IIB Calabi-Yau Flux Compactifications. 520708.1873M. Cicoli, J.P. Conlon and F. Quevedo, Systematics of String Loop Corrections in Type IIB Calabi-Yau Flux Compactifications, JHEP 01 (2008) 052 [0708.1873].
Higher-Derivative Supergravity and Moduli Stabilization. D Ciupke, J Louis, A Westphal, 10.1007/JHEP10(2015)0941505.03092JHEP. 1094D. Ciupke, J. Louis and A. Westphal, Higher-Derivative Supergravity and Moduli Stabilization, JHEP 10 (2015) 094 [1505.03092].
Systematics of the α' expansion in F-theory. M Cicoli, F Quevedo, R Savelli, A Schachner, R Valandro, 10.1007/JHEP08(2021)0992106.04592JHEP. 0899M. Cicoli, F. Quevedo, R. Savelli, A. Schachner and R. Valandro, Systematics of the α' expansion in F-theory, JHEP 08 (2021) 099 [2106.04592].
Toric K3-Fibred Calabi-Yau Manifolds with del Pezzo Divisors for String Compactifications. M Cicoli, M Kreuzer, C Mayrhofer, 10.1007/JHEP02(2012)0021107.0383JHEP. 022M. Cicoli, M. Kreuzer and C. Mayrhofer, Toric K3-Fibred Calabi-Yau Manifolds with del Pezzo Divisors for String Compactifications, JHEP 02 (2012) 002 [1107.0383].
A Geometrical Upper Bound on the Inflaton Range. M Cicoli, D Ciupke, C Mayrhofer, P Shukla, 10.1007/JHEP05(2018)0011801.05434JHEP. 051M. Cicoli, D. Ciupke, C. Mayrhofer and P. Shukla, A Geometrical Upper Bound on the Inflaton Range, JHEP 05 (2018) 001 [1801.05434].
The Numerically Controlled Regime of Moduli Space. M Cicoli, F Cunillera, A Padilla, F G Pedro, Quintessence Swampland, 10.1002/prop.202200008Fortsch. Phys. 7022000082112.10783M. Cicoli, F. Cunillera, A. Padilla and F.G. Pedro, Quintessence and the Swampland: The Numerically Controlled Regime of Moduli Space, Fortsch. Phys. 70 (2022) 2200008 [2112.10783].
Kahler potentials of chiral matter fields for Calabi-Yau string compactifications. J P Conlon, D Cremades, F Quevedo, 10.1088/1126-6708/2007/01/022hep-th/0609180JHEP. 0122J.P. Conlon, D. Cremades and F. Quevedo, Kahler potentials of chiral matter fields for Calabi-Yau string compactifications, JHEP 01 (2007) 022 [hep-th/0609180].
The dark universe after reheating in string inflation. M Cicoli, K Sinha, R Wiley Deal, 10.1007/JHEP12(2022)0682208.01017JHEP. 1268M. Cicoli, K. Sinha and R. Wiley Deal, The dark universe after reheating in string inflation, JHEP 12 (2022) 068 [2208.01017].
| []
|
[
"Dark axisymmetric plasma modes in partially gated two-dimensional electron gas disk",
"Dark axisymmetric plasma modes in partially gated two-dimensional electron gas disk"
]
| [
"M V Cheremisin \nIoffe Physical-Technical Institute\nSt.PetersburgRussia\n",
"A F \nIoffe Physical-Technical Institute\nSt.PetersburgRussia\n"
]
| [
"Ioffe Physical-Technical Institute\nSt.PetersburgRussia",
"Ioffe Physical-Technical Institute\nSt.PetersburgRussia"
]
| []
| The transition from ungated to completely gated disk-shaped two-dimensional gas is studied under extension of the central gate spot. We investigate axisymmetric plasmon excitations spectra which show interchange between neighboring modes caused by abrupt change of carriers screening at the gate boarder. This behavior is totally unexpected within simple scenario of sub-gate gap varying [A.L.Fetter, Phys.Rev.B 33, 5221 (1986)]. Our results provide the accurate identification of axisymmetric plasmon modes recently observed in experiment.PACS numbers:Plasma oscillations in two-dimensional(2D) electron gas were first predicted in the 70th by F.Stern[1] for ungated and, then analyzed for gated systems[2]. Over the next decade the plasmon assisted infrared absorbtion[3][4][5][6]and emission[7]has been reported. Since Stern's pioneering discovery the enormous efforts were done to clarify the plasmon behavior found to be influenced by magnetic field[8-10], retardation effects[11], sample geometry[12][13][14]and quality[15][16].Recently, it has been shown[17][18][19]that the plasma oscillations for two-dimensional systems with arbitrary attached gates and contacts can be elegantly described in terms of classical theory of electrical circuits with distributed parameters. The proposed LC-approach is extremely powerful for description of plasmon excitations in narrow stripes of 2D gas with periodic grating[20].In the present paper we will consider two-dimensional electron system of disk geometry shown inFig.1, inset. The 2D gas mesa of radius R is grounded peripherally and, moreover, assumed to be embedded into an environment of dielectric constant ǫ. The cylindrical slab of the height h is covered atop by the gate of radius r 0 . For clarity, we further assume a fixed 2D mesa radius R and, then a variable sizes h, r 0 . We will be interested in study of axisymmetric plasmon modes known to have zero angular momentum[12]. Arguing that axisymmetric plasmon modes are purely radial we makes use of LCapproach[19]in order to find the spectra. Notably the pioneering observation of axisymmetric plasmon was reported recently[21,22].Within quasistatic approximation the collective plasma excitations in 2DEG can be described by hydrodynamic model[12]which includes Euler and continuity equations. The electric potential of in-plane 2D gas is given by solution of Poisson equation accounting overall the dielectric environment. The set of equations constituent the hydrodynamic model[12] could be linearized with respect to small amplitude of plasma wave excitations. Fortunately, the model could be further simplified[19]down to so-called telegrapher's equations for radial components of in-plane potential U and the current I:0 5 10 15 20 0 1 2 3 4 n=3 n=2 R r 0 h z FIG. 1: Inset: the experimental setup. Main panel: the dimensionless frequency Ω = ω/ω0 vs screening parameter κ = R/2h for lowest m = 2, 3 plasma modes with α0m = 3.831 and 7.016 respectively for totally covered atop 2DEG, i.e. when r0 = R. Dashed curves represent the asymptotes Ω = α 0m κ for highly screened 2D gas.∂I ∂rHere, L, C are the inductance and capacitance per unit length respectively, m * and n are the effective mass and density of 2D carriers. Then, q is the plasmon wave vector originating from solution of the Poisson equation accounting the actual electrostatic environment. We now search the solution of Eqs.(1,2) separating the temporal and spatial components for potential U = U (r) exp(iωt) and current I = I(r) exp(iωt). 
Finally, the equation for radial part of the potential yields:Here, we introduced the dimensionless radius ρ = r/λ, where λ = 1 ω √ LC is the length scale of the problem. The | null | [
"https://export.arxiv.org/pdf/2305.06091v1.pdf"
]
| 258,588,038 | 2305.06091 | 29472262ac255e15bd9a62737b01f5bd3f9e88cb |
Dark axisymmetric plasma modes in partially gated two-dimensional electron gas disk
10 May 2023
M V Cheremisin
Ioffe Physical-Technical Institute
St.PetersburgRussia
A F
Ioffe Physical-Technical Institute
St.PetersburgRussia
Dark axisymmetric plasma modes in partially gated two-dimensional electron gas disk
10 May 2023(Dated: May 11, 2023)arXiv:2305.06091v1 [cond-mat.mes-hall]
The transition from ungated to completely gated disk-shaped two-dimensional gas is studied under extension of the central gate spot. We investigate axisymmetric plasmon excitations spectra which show interchange between neighboring modes caused by abrupt change of carriers screening at the gate boarder. This behavior is totally unexpected within simple scenario of sub-gate gap varying [A.L.Fetter, Phys.Rev.B 33, 5221 (1986)]. Our results provide the accurate identification of axisymmetric plasmon modes recently observed in experiment.PACS numbers:Plasma oscillations in two-dimensional(2D) electron gas were first predicted in the 70th by F.Stern[1] for ungated and, then analyzed for gated systems[2]. Over the next decade the plasmon assisted infrared absorbtion[3][4][5][6]and emission[7]has been reported. Since Stern's pioneering discovery the enormous efforts were done to clarify the plasmon behavior found to be influenced by magnetic field[8-10], retardation effects[11], sample geometry[12][13][14]and quality[15][16].Recently, it has been shown[17][18][19]that the plasma oscillations for two-dimensional systems with arbitrary attached gates and contacts can be elegantly described in terms of classical theory of electrical circuits with distributed parameters. The proposed LC-approach is extremely powerful for description of plasmon excitations in narrow stripes of 2D gas with periodic grating[20].In the present paper we will consider two-dimensional electron system of disk geometry shown inFig.1, inset. The 2D gas mesa of radius R is grounded peripherally and, moreover, assumed to be embedded into an environment of dielectric constant ǫ. The cylindrical slab of the height h is covered atop by the gate of radius r 0 . For clarity, we further assume a fixed 2D mesa radius R and, then a variable sizes h, r 0 . We will be interested in study of axisymmetric plasmon modes known to have zero angular momentum[12]. Arguing that axisymmetric plasmon modes are purely radial we makes use of LCapproach[19]in order to find the spectra. Notably the pioneering observation of axisymmetric plasmon was reported recently[21,22].Within quasistatic approximation the collective plasma excitations in 2DEG can be described by hydrodynamic model[12]which includes Euler and continuity equations. The electric potential of in-plane 2D gas is given by solution of Poisson equation accounting overall the dielectric environment. The set of equations constituent the hydrodynamic model[12] could be linearized with respect to small amplitude of plasma wave excitations. Fortunately, the model could be further simplified[19]down to so-called telegrapher's equations for radial components of in-plane potential U and the current I:0 5 10 15 20 0 1 2 3 4 n=3 n=2 R r 0 h z FIG. 1: Inset: the experimental setup. Main panel: the dimensionless frequency Ω = ω/ω0 vs screening parameter κ = R/2h for lowest m = 2, 3 plasma modes with α0m = 3.831 and 7.016 respectively for totally covered atop 2DEG, i.e. when r0 = R. Dashed curves represent the asymptotes Ω = α 0m κ for highly screened 2D gas.∂I ∂rHere, L, C are the inductance and capacitance per unit length respectively, m * and n are the effective mass and density of 2D carriers. Then, q is the plasmon wave vector originating from solution of the Poisson equation accounting the actual electrostatic environment. We now search the solution of Eqs.(1,2) separating the temporal and spatial components for potential U = U (r) exp(iωt) and current I = I(r) exp(iωt). 
Finally, the equation for radial part of the potential yields:Here, we introduced the dimensionless radius ρ = r/λ, where λ = 1 ω √ LC is the length scale of the problem. The
The transition from ungated to completely gated disk-shaped two-dimensional gas is studied under extension of the central gate spot. We investigate axisymmetric plasmon excitations spectra which show interchange between neighboring modes caused by abrupt change of carriers screening at the gate boarder. This behavior is totally unexpected within simple scenario of sub-gate gap varying [A.L. Fetter, Phys.Rev.B 33, 5221 (1986)]. Our results provide the accurate identification of axisymmetric plasmon modes recently observed in experiment.
PACS numbers:
Plasma oscillations in two-dimensional(2D) electron gas were first predicted in the 70th by F.Stern [1] for ungated and, then analyzed for gated systems [2]. Over the next decade the plasmon assisted infrared absorbtion [3][4][5][6] and emission [7] has been reported. Since Stern's pioneering discovery the enormous efforts were done to clarify the plasmon behavior found to be influenced by magnetic field [8][9][10], retardation effects [11], sample geometry [12][13][14] and quality [15][16].
Recently, it has been shown [17][18][19] that the plasma oscillations for two-dimensional systems with arbitrary attached gates and contacts can be elegantly described in terms of classical theory of electrical circuits with distributed parameters. The proposed LC-approach is extremely powerful for description of plasmon excitations in narrow stripes of 2D gas with periodic grating [20].
In the present paper we will consider two-dimensional electron system of disk geometry shown in Fig.1, inset. The 2D gas mesa of radius R is grounded peripherally and, moreover, assumed to be embedded into an environment of dielectric constant ǫ. The cylindrical slab of the height h is covered atop by the gate of radius r 0 . For clarity, we further assume a fixed 2D mesa radius R and, then a variable sizes h, r 0 . We will be interested in study of axisymmetric plasmon modes known to have zero angular momentum [12]. Arguing that axisymmetric plasmon modes are purely radial we makes use of LCapproach [19] in order to find the spectra. Notably the pioneering observation of axisymmetric plasmon was reported recently [21,22].
Within quasistatic approximation the collective plasma excitations in 2DEG can be described by hydrodynamic model [12] which includes Euler and continuity equations. The electric potential of in-plane 2D gas is given by solution of Poisson equation accounting overall the dielectric environment. The set of equations constituent the hydrodynamic model [12] could be linearized with respect to small amplitude of plasma wave excitations. Fortunately, the model could be further simplified [19] down to so-called telegrapher's equations for radial components of in-plane potential U and the current I:
∂U ∂r = −L ∂I ∂t ,(1)∂I ∂r = −C ∂U ∂t ,(2)L(r) = m * e 2 n 1 2πr , C(r) = (1 + coth(qh))ǫqr 2 .(3)
Here, L, C are the inductance and capacitance per unit length respectively, m * and n are the effective mass and density of 2D carriers. Then, q is the plasmon wave vector originating from solution of the Poisson equation accounting the actual electrostatic environment. We now search the solution of Eqs.(1,2) separating the temporal and spatial components for potential U = U (r) exp(iωt) and current I = I(r) exp(iωt). Finally, the equation for radial part of the potential yields:
d 2 U dρ 2 + 1 ρ dU dρ + U = 0.(4)
Here, we introduced the dimensionless radius ρ = r/λ,
where λ = 1 ω √
LC is the length scale of the problem. The general solution of Eq.(4) is given by the sum of zero order Bessel functions of first(second) kind, namely
U (ρ) = AJ 0 (ρ) + BY 0 (ρ).(5)
The arbitrary constants in Eq.(5) can be found by use of correct boundary conditions. Let us first consider the simple case of totally gated 2D gas, i.e when r 0 = R and variable height 0 < h < ∞ of the slab. Arguing the potential is finite in the disk center we put B = 0 to avoid divergent behavior of the second term in Eq.(5) at ρ → 0. Noticing that the peripheral current is absent, i.e. I(R) ∼ ρ dU dρ = 0, we finally find out the dispersion equation for plasmon excitations
ω √ LCR = α 0m ,(6)
where α 0m is the mth zero of the function J ′ 0 (x). Using Eq.(3) we re-write Eq.(6) as it follows
ω = v_p (α_{0m}/R) [(1 − e^{−2hq})/(2hq)]^{1/2} , (7)
where v_p = √(4πe²nh/(m*ǫ)) is the plasma wave velocity. For the strongly screened case qh ≪ 1 we obtain the relationship
ω = v_p α_{0m}/R , (8)

which is exactly the linear dispersion law [2,12] if one defines the wave vector

q = α_{0m}/R (9)

for the present case of axisymmetric plasmon excitations.
In the opposite, ungated case qh ≫ 1, Eq. (7) provides the familiar long-wavelength dispersion law [1]

ω = √(2πe²n q/(m*ǫ)) (10)
with the wave vector specified by Eq.(9) as well.
Let us visualize the transition from the ungated to the screened 2D system by fixing the disk radius and varying the slab height. It is convenient to introduce the reference frequency ω_0 = √(2πe²n/(m*ǫR)) and the screening parameter κ = √(R/2h). Using Eq. (7) we plot in Fig. 1 the dimensionless frequency Ω = ω/ω_0 vs κ for the lowest axisymmetric modes m = 2, 3. For the highly screened case κ ≫ 1 one obtains Ω = α_{0m}/κ, a simple replica of Eq. (8). For the ungated 2D gas κ ≪ 1 we recover the result specified by Eq. (10), Ω = √α_{0m}. The change in the axisymmetric mode behavior occurs at κ ∼ 1, which was not clearly indicated in the longstanding classical study [12].
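As a quick numerical illustration of this transition (a sketch of my own, not code from the paper), Eq. (7) can be evaluated in the dimensionless variables introduced above: with q = α_{0m}/R and κ = √(R/2h), it reduces to Ω = √(α_{0m}(1 − e^{−α_{0m}/κ²})), which interpolates between the two limits quoted here.

import numpy as np
from scipy.special import jnp_zeros

alpha = jnp_zeros(0, 2)             # nonzero roots of J_0'(x): 3.8317 (m=2), 7.0156 (m=3)
kappa = np.logspace(-2, 2, 200)     # screening parameter sqrt(R/2h)
for m, a in zip((2, 3), alpha):
    Omega = np.sqrt(a * (1.0 - np.exp(-a / kappa**2)))   # Eq. (7) in dimensionless form
    print(f"m={m}: ungated limit {Omega[0]:.3f} ~ sqrt(alpha)={np.sqrt(a):.3f}, "
          f"screened limit {Omega[-1]*kappa[-1]:.3f} ~ alpha={a:.3f}")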
The second part of the present paper concerns the more interesting case of a partially gated 2D electron system shown in Fig. 1, inset. The extension of the central gate spot results in a transition from an ungated to a fully gated 2D gas as well. We now find the spectra of axisymmetric excitations by varying the central gate radius 0 ≤ r_0 ≤ R while keeping h, R fixed. For the gated disk (index "1") and the ungated hoop (index "2") the in-plane potential can be written as follows:
U_1 = A_1 J_0(ρ_1) , (11)
U_2 = A_2 J_0(ρ_2) + B_2 Y_0(ρ_2) , (12)
U_1 = U_2 |_{r=r_0} , U_1' = U_2' |_{r=r_0} , U_2' |_{r=R} = 0 , (13)
where the dimensionless variables ρ_{1,2} = r/λ_{1,2} are different for the gated disk and the ungated hoop, since λ_{1,2} = 1/(ω√(LC_{1,2})).
Equations (11), (12) have to be solved jointly with the boundary conditions specified by Eq. (13), which express the continuity of the potential and current in the gated/ungated parts respectively and the absence of outgoing current at the slab perimeter. Finally, we obtain a cumbersome transcendental dispersion equation
J_0(Ωκν) [J_1(Ω²ν) Y_1(Ω²) − J_1(Ω²) Y_1(Ω²ν)] + (κ/Ω) J_1(Ωκν) [J_1(Ω²) Y_0(Ω²ν) − J_0(Ω²ν) Y_1(Ω²)] = 0 , (14)
which can be written in the compact form
D_U · H_I + (κ/Ω) D_I · H_U = 0 . (15)
Eq. (15) allows one to obtain the plasmon frequency Ω vs the dimensionless radii ratio ν = r_0/R at a certain value of the screening parameter κ. Here, we use the notations D_I = J_1(Ωκν) and D_U = J_0(Ωκν) for the gated disk (D) part. The solutions D_{I(U)} = 0 correspond to zero current at the disk center and, in addition, to zero current (voltage), respectively, at the gated disk circumference. The remaining multipliers, namely H_I = J_1(Ω²ν)Y_1(Ω²) − J_1(Ω²)Y_1(Ω²ν) and H_U = J_1(Ω²)Y_0(Ω²ν) − J_0(Ω²ν)Y_1(Ω²), correspond to the ungated hoop (H) part seen in Fig. 1, inset. The transcendental equations H_{I(U)} = 0 define the solutions of the Bessel equation (4) with zero current (voltage), respectively, at the internal border, together with the absence of current at the outer border of the hoop. For real systems [22] the typical disk size R = 0.25 mm is much greater than the gate-to-2DEG separation h = 370 nm, hence we find κ = 18 ≫ 1. Assuming that Ω ∼ 1 we conclude that the second summand in Eq. (15) prevails. Moreover, the zeros of the transcendental equation D_I = 0 define, in general, the solution of Eq. (15) as a whole. The latter condition corresponds to axisymmetric plasmon modes excited in the gated part alone. It is easy to check that the plasmon mode frequencies are given by the asymptotes Ω = α_{0m}/(κν), shown by dotted lines in Fig. 2. We emphasize that this simple scenario fails when the multipliers from the first (second) summand in Eq. (15) vanish simultaneously, i.e., when D_U, H_U = 0, so that the dispersion Eq. (15) is fulfilled as well. The solutions of the equations D_U = 0 and H_U = 0 are shown in Fig. 2 by thin and dashed lines respectively. This condition, D_U, H_U = 0, is exactly the one that defines the transition between neighboring axisymmetric modes.
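The roots of Eq. (15) are easily located numerically. The following sketch (my own implementation, not the authors' code) uses the notation D_I, D_U, H_I, H_U defined above, with ν = 0.2 chosen purely for illustration, and refines sign changes of the dispersion function with a standard root finder.

import numpy as np
from scipy.special import j0, j1, y0, y1
from scipy.optimize import brentq

def dispersion(Om, kappa, nu):
    D_I, D_U = j1(Om*kappa*nu), j0(Om*kappa*nu)
    H_I = j1(Om**2*nu)*y1(Om**2) - j1(Om**2)*y1(Om**2*nu)
    H_U = j1(Om**2)*y0(Om**2*nu) - j0(Om**2*nu)*y1(Om**2)
    return D_U*H_I + (kappa/Om)*D_I*H_U          # Eq. (15)

kappa, nu = 18.0, 0.2
grid = np.linspace(0.05, 4.0, 4000)
vals = np.array([dispersion(x, kappa, nu) for x in grid])
roots = [brentq(dispersion, a, b, args=(kappa, nu))
         for a, b, fa, fb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]) if fa*fb < 0]
print("lowest axisymmetric modes at kappa=18, nu=0.2:", np.round(roots[:5], 3))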
To support our reasoning we plot schematically in Fig. 2, inset A, the spatial distribution of the current for the neighboring plasma modes m = 2, 3. The transition 3 → 2 consists of a half-wavelength shrinking that occurs at point A in Fig. 2. For small gate sizes, when most of the area atop the slab is occupied by the ungated hoop, the lowest mode m = 2 undergoes a further transformation. The point of interest B in Fig. 2 is still defined by the condition D_U, H_U = 0. However, in the present case the plasmon mode m = 2 is first confined within the ultra-narrow gated part and is then modified entirely by occupying the ungated part as a whole. Let us refer to this mode as the "soft" mode in what follows. In Fig. 2, inset B, we represent schematically the current spatial distribution for the initial m = 2 mode and, finally, the "soft" mode. We verify that the dispersion relation for the soft mode follows the asymptote Ω = 1/√(κν). In fact, the "soft" mode is a precursor of the conventional plasmon in a totally ungated 2D system, i.e., Ω = √α_{0m}.
It is worthwhile to mention with respect to Fig. 2 that upon a change of the gated part radius r_0 the plasmon frequency follows either the ω ∼ 1/√r_0 or the ω ∼ 1/√R dependence, depending on the actual ν-range of interest. Evidently, this ambiguity may lead to misunderstandings regarding plasma mode identification. Let us compare our results with the experimental findings. The data are collected in Table I for the AlGaAs/GaAs samples [21,22] that demonstrated axisymmetric plasmon excitations. Fortunately, the use of dimensionless units allows one to put the different data points on the same plasmon spectra graph in Fig. 2. Indeed, for each sample one may calculate the respective reference frequency f_0 = ω_0/2π and then find the ratio Ω_exp = f_exp/f_0. The latter can be added to the plasmon spectra graph in Fig. 2. Note the overall agreement between the experimental data and the theory predictions. However, the data shown by circles in Fig. 2 demonstrate a puzzling series of high-order plasmon excitations for even numbers m = 4, 6, 8 only.
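The reference frequencies in Table I can be reproduced directly from ω_0 = √(2πe²n/(m*ǫR)). The short sketch below does this for the ungated sample of Ref. [22]; the material parameters m* = 0.067 m_e and ǫ = 12.8 (GaAs) are my assumptions, as they are not quoted in the text.

import numpy as np

e, eps0, m_e = 1.602e-19, 8.854e-12, 9.109e-31
n, R = 2.6e15, 0.25e-3                 # carrier density (m^-2) and disk radius (m)
m_eff, eps = 0.067 * m_e, 12.8         # assumed GaAs values
# SI form of omega_0 = sqrt(2*pi*e^2*n/(m*eps*R)) (Gaussian units in the text)
omega0 = np.sqrt(n * e**2 / (2 * eps0 * eps * m_eff * R))
print(f"f_0 = {omega0/(2*np.pi)/1e9:.1f} GHz   (Table I quotes 22.1 GHz)")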
In conclusion, we investigated the transition from an ungated to a gated two-dimensional system of disk geometry by means of the expansion of a central spot gate. The discontinuity between the gated and ungated parts provides the interchange between neighboring axisymmetric modes. The experimental data for different sample sizes and carrier densities are found to agree well with the theory predictions. Our study allows one to classify the observed axisymmetric modes.
FIG. 1: Inset: the experimental setup. Main panel: the dimensionless frequency Ω = ω/ω_0 vs the screening parameter κ = √(R/2h) for the lowest m = 2, 3 plasma modes with α_{0m} = 3.831 and 7.016 respectively, for the 2DEG totally covered atop, i.e., when r_0 = R. Dashed curves represent the asymptotes Ω = α_{0m}/κ for the highly screened 2D gas.
FIG. 2: The dimensionless frequency Ω = ω/ω_0 vs the ratio ν = r_0/R for the lowest m = 2..5 and, partially, m = 6, 8 plasma modes, plotted for fixed κ = 18. Dotted lines represent the asymptotes Ω = α_{0m}/(κν) expected for the plasmon in the gated part alone. Dashed (thin) lines depict the D_U = 0 and H_U = 0 solutions respectively. The bold dashed line specifies the "soft" mode asymptote Ω = 1/√(κν). Inset A(B): radial distribution of the current density for the intermode transition A(B) shown in the main panel. The experimental data from Table I are indicated for the ungated [22] and partially gated (△ and other symbols) [21] 2D systems.
TABLE I: Axisymmetric plasmon: theory vs experiment

n (10^11 cm^-2) | h (nm) | r_0 (mm) | R (mm) | f_0 (GHz) | f_exp (GHz) | Ω_exp | Ref.
2.6 | -   | 0    | 0.25 | 22.1 | 60.0 | 2.71 | [22]
1.0 | 370 | 0.05 | 0.25 | 13.7 | 26.5 | 1.93 | [21]
1.0 | 370 | 0.05 | 0.25 | 13.7 | 39.4 | 2.87 | [21]
1.0 | 370 | 0.05 | 0.25 | 13.7 | 46.4 | 3.38 | [21]
1.0 | 370 | 0.05 | 0.50 |  9.7 | 19.4 | 2.0  | △[21]
1.0 | 370 | 0.05 | 1.0  |  6.9 | 14.4 | 2.1  | △[21]
[1] F. Stern, Phys. Rev. Lett. 18, 546 (1967).
[2] A.V. Chaplik, Zh. Eksp. Teor. Fiz. 62, 746 (1972) [Sov. Phys. JETP 35, 395 (1972)].
[3] C.C. Grimes and G. Adams, Phys. Rev. Lett. 36, 145 (1976).
[4] S.J. Allen, D.C. Tsui and R.A. Logan, Phys. Rev. Lett. 38, 98 (1977).
[5] T.N. Theis, J.P. Kotthaus and P.J. Stiles, Solid State Commun. 26, 603 (1978).
[6] D. Heitmann, J.P. Kotthaus and E.G. Mohr, Solid State Commun. 44, 715 (1982).
[7] D.C. Tsui, E. Gornik and R.A. Logan, Solid State Commun. 35, 875 (1980).
[8] K.W. Chiu and J.J. Quinn, Phys. Rev. B 9, 4724 (1974).
[9] M. Nakayama, J. Phys. Soc. Jpn. 36, 393 (1974).
[10] V.A. Volkov and S.A. Mikhailov, Sov. Phys. JETP 67, 1639 (1988).
[11] I.V. Kukushkin, J.H. Smet, S.A. Mikhailov, D.V. Kulakovskii, K. von Klitzing and W. Wegscheider, Phys. Rev. Lett. 90, 156801 (2003).
[12] A.L. Fetter, Phys. Rev. B 33, 5221 (1986).
[13] Yu.A. Kosevich, A.M. Kosevich and J.C. Granada, Phys. Lett. A 127, 52 (1988).
[14] A.A. Zabolotnykh and V.A. Volkov, JETP Lett. 115, 141 (2022) [Pis'ma Zh. Eksp. Teor. Fiz. 115, 163 (2022)].
[15] V.I. Falko and D.E. Khmel'nitskii, Zh. Eksp. Teor. Fiz. 95, 1988 (1989) [Sov. Phys. JETP 68, 1150 (1989)].
[16] M.V. Cheremisin, Solid State Commun. 268, 7 (2017).
[17] P.J. Burke, I.B. Spielman and J.P. Eisenstein, Appl. Phys. Lett. 76, 745 (2000).
[18] F. Rana, IEEE Trans. Nanotech. 7, 91 (2008).
[19] G.C. Dyer, G.R. Aizin, S. Preu, N.Q. Vinh, S.J. Allen, J.L. Reno and E.A. Shaner, Phys. Rev. Lett. 109, 126803 (2012).
[20] G.C. Dyer, G.R. Aizin, S.J. Allen, A.D. Grine, D. Bethke, J.L. Reno and E.A. Shaner, Nature Photon. 7, 925 (2013).
[21] V.M. Muravev, I.V. Andreev, V.N. Belyanin, S.I. Gubarev and I.V. Kukushkin, Phys. Rev. B 96, 045421 (2017).
[22] A.A. Zagitova, V.M. Muravev, P.A. Gusikhin, A.A. Fortunatov and I.V. Kukushkin, JETP Lett. 108, 446 (2018) [Pis'ma Zh. Eksp. Teor. Fiz. 108, 478 (2018)].
| []
|
[
"Stationary cosmology in group field theory",
"Stationary cosmology in group field theory"
]
| [
"Steffen Gielen ",
"Robert Santacruz ",
"\nSchool of Mathematics and Statistics\nUniversity of Sheffield\nHicks Building, Hounsfield RoadS3 7RHSheffieldUnited Kingdom\n",
"\nDepartment of Mathematics and Statistics\nE3B 5A3 † and School of Mathematics and Statistics\nUniversity of New Brunswick\nFrederictonNew BrunswickCanada\n",
"\nUniversity of Sheffield\nHicks Building, Hounsfield RoadS3 7RHSheffieldUnited Kingdom\n"
]
| [
"School of Mathematics and Statistics\nUniversity of Sheffield\nHicks Building, Hounsfield RoadS3 7RHSheffieldUnited Kingdom",
"Department of Mathematics and Statistics\nE3B 5A3 † and School of Mathematics and Statistics\nUniversity of New Brunswick\nFrederictonNew BrunswickCanada",
"University of Sheffield\nHicks Building, Hounsfield RoadS3 7RHSheffieldUnited Kingdom"
]
| []
| which is the Hamiltonian of a free particle in one dimension whose quantum theory is, of course, well known. However, here we are interested in the interpretation of the corresponding GFT cosmology, which requires defining a number operatorN J =â † Jâ J in terms of some suit-Stationary solutions of the resulting equations of motion correspond to extrema of this Hamiltonian in φ J , for p J = 0, given by φ J = 0 andThis mean field model is equivalent to a classical systemJ , usually referred to as double well potential (seeFig. 1). The values of the field at the bottom of the potential imply a minimum value for the energy and volume | null | [
"https://export.arxiv.org/pdf/2303.16942v2.pdf"
]
| 257,833,978 | 2303.16942 | a1e846c184119c733ca1a20f1507e1c832fe4378 |
Stationary cosmology in group field theory
Steffen Gielen
Robert Santacruz
School of Mathematics and Statistics
University of Sheffield
Hicks Building, Hounsfield RoadS3 7RHSheffieldUnited Kingdom
Department of Mathematics and Statistics
E3B 5A3 † and School of Mathematics and Statistics
University of New Brunswick
FrederictonNew BrunswickCanada
University of Sheffield
Hicks Building, Hounsfield RoadS3 7RHSheffieldUnited Kingdom
Stationary cosmology in group field theory
(Dated: April 18, 2023)
Group field theory (GFT) models for quantum gravity coupled to a massless scalar field give rise to cosmological models that reproduce the (expanding or contracting) dynamics of homogeneous and isotropic spacetimes in general relativity at low energies, while including high-energy corrections that lead to singularity resolution by a "bounce." Here we investigate two possibilities for obtaining stationary solutions in GFT cosmology, which could be useful as an analogue of Minkowski spacetime. We first focus on a limit in which interactions are neglected and the effective Newton's constant in GFT cosmology is taken to zero. In this limit, we derive an effective Friedmann equation that shows no stationary solutions, but departures from the trivial classical dynamics falling off rapidly, similar to the usual correction terms responsible for the bounce. Since the effective Newton's constant needs to be exactly zero, the scenario is fine-tuned. A more satisfactory approach is obtained in a weakly interacting model: we find bound states with sharply peaked volume, representing a stationary semiclassical cosmology, and show that coherent states peaked around the minimum of the potential remain stable with small quantum fluctuations, and only small oscillations around a nearly constant volume. These coherent states realise the idea of a "quantum gravity condensate."
I. INTRODUCTION
Many approaches to quantum gravity entertain the idea that space and time are not fundamental structures that all of physics is built on, but themselves "emergent" from other quantum or discrete degrees of freedom with no initial spacetime continuum [1]. A fundamental challenge is then to show how the usual classical, continuum nature of space and time might be recovered, at least in an approximation or perhaps in one out of different possible phases of a statistical description (see, e.g., Ref. [2] for the example of causal dynamical triangulations). One might look at other examples of emergence in physics such as a macroscopic electromagnetic field defined as a coherent state in quantum electrodynamics, or an effective continuum superfluid description of Bose-Einstein condensates in a quantum field theory of atoms. The latter in particular has served as an inspiration for looking for spacetime as a kind of Bose-Einstein condensate of quantum gravity "atoms," i.e., a nonperturbative ground state away from the usual (Fock) vacuum [3]. In the group field theory (GFT) approach to quantum gravity [4], following this approach seems rather natural since the fundamental degrees of freedom of the "group field" are directly interpreted as quanta of spacetime, or elementary building blocks of spin networks in the language of loop quantum gravity [5]. The initial (perturbative) GFT vacuum contains no quanta, and hence no spacetime, as is manifest by the fact that quantities like areas or volumes vanish; a macroscopic geometry must be built up from many excitations over this initial vacuum.
The idea that continuum spacetime could emerge from a phase transition to GFT condensate was proposed in earlier papers [6] and then implemented concretely by building on a particular prescription for canonical quantisation [7]. One basic question in an emergent spacetime scenario is how to define dynamics in a system without any fundamental notion of time. Here one can follow ideas from canonical quantum gravity and quantum cosmology [8] and introduce matter degrees of freedom that can play the role of a (relational) clock. Following this idea and coupling a massless "clock" scalar field to gravity in GFT, emergent Friedmann equations for the relational volume (i.e., for a volume of space given as a function of the scalar field) can be derived, showing agreement with general relativity at large volumes and singularity resolution by a bounce [9]. The resulting cosmology resembles very closely that of loop quantum cosmology (LQC), raising the hope that GFT could provide an embedding of LQC into full quantum gravity.
The results of Ref. [9] were obtained using a number of simplifying assumptions; in particular, one usually restricts to a single mode in the expansion of the group field into Peter-Weyl modes, and interactions are (initially) neglected. One also works in a mean-field approximation, assuming a type of coherent state. The last approximation is inspired by the idea of a quantum gravity condensate and by the requirement that any cosmology emergent from quantum gravity should be semiclassical, with small fluctuations over expectation values for geometric observables. However, if interactions are neglected, what leads to the emergence of a macroscopic geometry is not so much a process of condensation but rather an instability in the free (linear) theory: in this approximation the dynamics of a single field mode resembles that of an upside-down harmonic oscillator, whose classical solutions grow or decay exponentially, just as the volume of the corresponding classical cosmology. This exponential behaviour of solutions was studied more explicitly, e.g., in Ref. [10], and a more general analysis of the quantum theory in a deparametrised approach was given in Ref. [11]. Here, by choosing the scalar matter field as a clock before quantisation, one obtains a standard Hamiltonian acting on a Fock space generated from creation and annihilation operators associated to the upside-down harmonic oscillator. The Hamiltonian is quadratic in these, and corresponds to a squeezing operator (realising the proposal of Ref. [12]). The Fock "vacuum" defined byâ J |0 = 0 for a mode J is unstable and the number of quanta with respect to it grows exponentially under time evolution. It is then not surprising that almost any quantum state in this truncation leads to an effective Friedmann equation for the expectation value of the volume that reduces to that of general relativity at low energies and includes a bounce [13]. The requirement that the state be semiclassical at late times is nontrivial and still suggests that one should work, e.g., with Fock coherent states.
To extend the results of Ref. [9] beyond the approximation of negligible interactions, the effect of certain interaction terms was included into the derivation of effective Friedmann equations in Ref. [14]. Other assumptions, in particular the mean-field approximation, were maintained. One finds that any monomial interaction term in GFT can be mapped to an additional term in the effective Friedmann equation, analogous to a perfect fluid contribution whose equation of state is related to the field power in the interaction. In this way, effective contributions corresponding to dust, dark energy, or other matter may in principle be obtained. However, given that these new terms become relevant when interactions are strong, one expects the mean-field approximation to break down, as explained in Ref. [9] and shown explicitly in Ref. [13].
The recovery of expanding solutions that mimic the dynamics of classical general relativity coupled to a massless scalar field is an important result, but one might be interested in stationary solutions as well. Given that the expansion in usual GFT cosmology is driven by the energy density in the scalar field, is there a way to switch it off? Here we consider two approaches towards addressing this question. The first corresponds to the idea of taking a "G → 0" limit in GFT cosmology, as is sometimes considered in other approaches to quantum gravity [15]. Newton's constant G appears emergent from a combination of fundamental couplings in the GFT action [9,10], so this is the limit of a vanishing coupling in the GFT action. We find that this procedure does indeed modify the late-time behaviour of GFT cosmology in the expected way, leading to an asymptotically stationary geometry. However, there are still high-energy corrections similar to the ones causing the bounce in usual GFT cosmology, so that we do not obtain a truly stationary solution. These results are perhaps expected, but involve some interesting subtleties. In particular, we now need to introduce creation and annihilation operators for a system analogous to a free particle in quantum mechanics, which requires using an arbitrary length scale (see, e.g., Ref. [16]). This length scale enters geometric observables in GFT, whose meaning is hence ambiguous. The assumption of an exactly vanishing coupling also constitutes fine-tuning.
A different approach is to include interactions and look for dynamically stationary solutions. Our approach follows the one of Ref. [14] without relying on a mean-field approximation: we are looking for exact solutions of the interacting theory. Finding such solutions requires numerical methods, but can be done to arbitrary precision. We can identify bound states in which all expectation values remain constant, so that they might be seen as representing a stationary cosmology. While these states are not peaked on a particular value of the group field, they have small fluctuations in the cosmologically more relevant volume (or number of quanta) and can hence be seen as semiclassical. We also turn to the familiar proposal of coherent states, peaked around the minimum of the potential. We show that these states are stable with small quantum fluctuations, even though they are not exactly stationary and show small oscillations in quantities like the volume. Both the exact bound state solutions and the coherent states we have constructed are promising candidates for an emergent semiclassical and stationary or almost stationary spacetime; they represent a cosmology in which the contribution to the effective energy density coming from GFT interactions cancels the terms usually responsible for expansion. This proposal could be argued to be the most explicit realisation of a "quantum gravity condensate" achieved so far, albeit in a relatively simple toy model.
II. BRIEF OVERVIEW OF GFT COSMOLOGY
Here we review the derivation of effective cosmological dynamics from GFT in the deparametrised approach of Ref. [11]. Although this formalism differs in its assumptions and motivations from the "algebraic" canonical quantisation first proposed in Refs. [5,7], at the level of effective cosmology the two approaches lead to rather similar results; see, e.g., Ref. [17] for a recent review comparing the two. What is important to obtain a particular cosmology is whether interactions or multiple field modes are included, as well as whether one assumes a coherent state. We will restrict to a single field mode throughout.
For GFT models for quantum gravity coupled to a massless scalar field, a common starting point is a (real) field ϕ whose arguments are four SU(2) group elements, corresponding to parallel transport variables of discrete gravity in the Ashtekar-Barbero formalism, and a realvalued argument χ corresponding to the matter scalar field. The field is usually assumed to satisfy the "gauge invariance" property
ϕ(g_1, ..., g_4, χ) = ϕ(g_1 h, ..., g_4 h, χ) (1)
for any h ∈ SU(2). If we picture the elementary excitations of this quantum field as spin-network vertices (labelled by χ) with four open links labelled by the g_I, this property ensures that GFT states are invariant with respect to discrete SU(2) gauge transformations acting on these vertices. Assuming that ϕ is square integrable on SU(2)^4, we can define a Peter-Weyl decomposition as

ϕ(g_I, χ) = Σ_{j_I, m_I, n_I, ι} ϕ^{j_I,ι}_{m_I}(χ) I^{j_I,ι}_{n_I} Π_{a=1}^{4} √(2j_a + 1) D^{j_a}_{m_a n_a}(g_a) , (2)

where j_I ∈ {0, 1/2, 1, ...} are irreducible representations of SU(2), m_I and n_I are magnetic indices taking values between −j_I and j_I, and I^{j_I,ι} are a basis of intertwiners (indexed by ι) compatible with the chosen j_I, which are needed in order to satisfy Eq. (1). D^j_{mn}(g) are the Wigner D-matrices in the representation j. It is very convenient to introduce a multi-index J ≡ (j_I, m_I, ι) so that the field modes in Eq. (2) become more simply ϕ_J(χ). Following Ref. [11] we will assume an action

S = (1/2) ∫ d⁴g dχ ϕ(g_I, χ) [K^(0) + K^(2) ∂²_χ] ϕ(g_I, χ) − V[ϕ]
  = (1/2) Σ_J ∫ dχ ϕ_{−J}(χ) [K^(0)_J + K^(2)_J ∂²_χ] ϕ_J(χ) − V[ϕ] , (3)
where in the second line we have used the Peter-Weyl decomposition and −J ≡ (j_I, −m_I, ι) denotes flipping of the magnetic indices (needed to ensure a real Lagrangian). K^(0) and K^(2) can contain derivative operators with respect to the SU(2) variables, in particular Laplace-Beltrami operators, which become diagonal in the second line, so that K^(0)_J and K^(2)_J are just J-dependent numbers. More generally, higher order derivatives in χ could be present, but one can see Eq. (3) as a truncation in derivatives (as proposed in Ref. [9]) or as a definition of the fundamental theory. A first derivative term is forbidden by the symmetry of the action under χ → −χ, which is required as a symmetry for relativistic matter fields. V[ϕ] includes all interactions, i.e., terms higher than second order in ϕ, whose structure is model-dependent.
One can now proceed with canonical quantisation based on promoting ϕ J and its conjugate momentum
π_J := ∂L/∂(∂_χ ϕ_J) = −K^(2)_J ∂_χ ϕ_J (4)

to operators satisfying the usual [φ̂_J(χ), π̂_{J'}(χ)] = i δ_{J,J'}. In other words, the scalar field variable χ is now treated as a conventional time variable. The quadratic part of the Hamiltonian is a sum of single-mode Hamiltonians,

Ĥ = −(1/2) Σ_J [π̂_J π̂_{−J}/K^(2)_J + K^(0)_J φ̂_J φ̂_{−J}] + V[φ̂] =: Σ_J Ĥ_J + V[φ̂] . (5)

For modes for which K^(0)_J and K^(2)_J have the same sign, this quadratic part is the Hamiltonian of a harmonic oscillator (potentially with an unusual minus sign), whereas for opposite signs the Hamiltonian is that of an upside-down harmonic oscillator with negative quadratic potential. It is the second case which is relevant for cosmology since it has exponentially growing solutions, and most of the literature is focused on this case. Introducing for each J an annihilation operator

â_J = (1/√(2ω_J)) (ω_J φ̂_J + i π̂†_J) (6)

and its conjugate (creation operator) â†_J, where ω_J := √|K^(0)_J K^(2)_J|, the quadratic Hamiltonian for one of the unstable modes becomes

Ĥ_J = (1/2) M_J (â†_J â†_{−J} + â_J â_{−J}) (7)

with M_J := −sgn(K^(0)_J) √|K^(0)_J/K^(2)_J| (see Ref. [11] for details). If we neglect the effect of interactions contained in V[φ̂] for now, we see that this Hamiltonian takes the form of a squeezing operator, in that time evolution with respect to Ĥ_J transforms the vacuum into a squeezed state (or more general initial states into generalised squeezed states). If one works in the Heisenberg picture, the operators â_J and â†_J are time-dependent with
â_J(χ) = â_J(0) cosh(M_J χ) − i â†_{−J}(0) sinh(M_J χ) ,
â†_J(χ) = â†_J(0) cosh(M_J χ) + i â_{−J}(0) sinh(M_J χ) . (8)
Likewise, the number operator N̂_J := â†_J â_J can be written as

N̂_J(χ) = (1/2) [N̂_J(0) + N̂_{−J}(0) + 1] cosh(2M_J χ) + (1/2) [N̂_J(0) − N̂_{−J}(0) − 1]
        + (i/2) [â_J(0) â_{−J}(0) − â†_J(0) â†_{−J}(0)] sinh(2M_J χ) , (9)
showing explicitly that the particle number grows exponentially in χ for arbitrary initial states. The quanta generated by actions ofâ † J are interpreted as "atoms" of geometry in the sense of loop quantum gravity [5], which are assigned a volume V J dependent on the representation labels J. If one assumes that only a single field mode is excited, the total volume is simply proportional to the number of quanta, V = V J N J . This assumption is most easily made self-consistent by focusing on a mode for which all magnetic indices vanish, so that J = −J; Eq. (9) then refers to operators for a single mode only.
For this case, one can easily show that V(χ) := ⟨V̂(χ)⟩ satisfies the differential equation [13]

(V'(χ)/V(χ))² = 4M_J² [1 + V_J/V(χ) + (K_0² − V_J V(0) − V(0)²)/V(χ)²] , (10)

with K_0 := (i/2) V_J [⟨â_J(0) â_{−J}(0)⟩ − ⟨â†_J(0) â†_{−J}(0)⟩].
. This is the analogue of the Friedmann equation, and can be used to interpret the expectation value V (χ) of the volume in cosmological terms.
Comparing Eq. (10) with its general relativity analogue (V'/V)² = 12πG, the first observation is that Eq. (10) reduces to general relativity at large volume, provided that M_J² = 3πG where G is Newton's constant. In this sense, we can say that Newton's constant is emergent from fundamental GFT couplings. At smaller volumes, there are corrections to general relativity, in particular a 1/V² term which is almost always repulsive (there are fine-tuned initial conditions for which K_0² − V_J V(0) − V(0)² can vanish, otherwise its sign is negative). When it is repulsive, it will dominate at small volume, leading to a bounce that resolves the classical singularity; in other words, V(χ) never reaches zero.
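As a consistency check (a sketch of my own, not taken from the paper), one can evaluate V(χ) for a single mode in a Fock coherent state using the closed-form solution (9) and verify that it satisfies Eq. (10) while never reaching zero; the values of α, M_J and V_J below are arbitrary choices.

import numpy as np

M, V_J, alpha = 1.0, 1.0, 2.0 + 1.0j
N0, B = abs(alpha)**2, -np.imag(alpha**2)      # <N(0)> and (i/2)(<a a> - <a^dag a^dag>)
K0c = V_J * B                                  # the constant K_0 defined below Eq. (10)

chi = np.linspace(-2.0, 2.0, 801)
N = (N0 + 0.5)*np.cosh(2*M*chi) + B*np.sinh(2*M*chi) - 0.5        # Eq. (9), single mode
V = V_J * N
dV = V_J * 2*M*((N0 + 0.5)*np.sinh(2*M*chi) + B*np.cosh(2*M*chi))
V0 = V_J * N0                                                      # V at chi = 0

lhs = (dV/V)**2
rhs = 4*M**2 * (1 + V_J/V + (K0c**2 - V_J*V0 - V0**2)/V**2)        # Eq. (10)
print("Eq. (10) holds:", np.allclose(lhs, rhs), "  minimal volume:", round(V.min(), 3))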
These conclusions do not depend on a choice of state; however, for a semiclassical interpretation one should also require that fluctuations in quantities like the volume are small at late times, ΔV ≪ V, which suggests that one should choose, e.g., a Fock coherent state satisfying (again in the Heisenberg picture) [13]

â_J(0)|α⟩ = α|α⟩ . (11)
If either multiple field modes or interactions in V [φ] are included, the situation is more complicated; in the first case and without interactions, one still has Eq. (9) for each mode and while deriving an equation for (V /V ) 2 is still straightforward, the right-hand side is complicated and does not admit a simple cosmological interpretation as in Eq. (10). Interactions would generally couple different modes and spoil the property of independent evolution. As a first step, one can study toy models with a single self-interacting mode as done, e.g., in Refs. [13,14]; we will study such a toy model below. In this case, one deals with the quantum theory of an upside-down harmonic oscillator with a higher-order potential, for which there are generally no analytic solutions. One can still propose a mean-field approximation to solve essentially classical equations as in Ref. [14], although such an approximation will break down once interactions become important. Numerical studies as in Ref. [13] are an alternative possibility.
III. VANISHING NEWTON'S CONSTANT
An interesting question which has so far escaped detailed attention is what happens in the case that M J vanishes, i.e., the case where K (0) J is zero for a particular mode J. Given that the indices contained in J are discrete, there is no particular reason to expect that such a J exists, but one might assume that it does. In this case, while the Legendre transform leading to a Hamiltonian (5) can be defined as before, the creation and annihilation operators used above become ill-defined since ω J → 0. Indeed, one now faces the problem of defining creation and annihilation operators for a system equivalent to a free particle in quantum mechanics, rather than a (regular or upside-down) harmonic oscillator.
For simplicity, we restrict to a single mode with J = −J, and assume that K (0) J vanishes. We will, for now, also neglect interactions. With all these approximations we obtain a quadratic Hamiltonian
Ĥ_J = −(1/2) π̂_J²/K^(2)_J , (12)

which is the Hamiltonian of a free particle in one dimension whose quantum theory is, of course, well known. However, here we are interested in the interpretation of the corresponding GFT cosmology, which requires defining a number operator N̂_J = â†_J â_J in terms of some suitable ladder operators â_J and â†_J. The definition (6) cannot be applied in this case, but one can define
â_J = (1/√(2ω_0)) (ω_0 φ̂_J + i π̂_J) , (13)
where ω_0 is now an arbitrary scale rather than derived from the Hamiltonian. We then have

Ĥ_J = (ω_0/(4K^(2)_J)) (â†_J − â_J)² , (14)
which decomposes into the difference of a squeezing operator similar to Eq. (7) and a standard harmonic oscillator Hamiltonian ∝ (â † Jâ J +â Jâ † J ), i.e., the difference of an operator with continuous and one with discrete spectrum. The overall spectrum is of course continuous, but the number operatorâ † Jâ J has the usual spectrum given by the non-negative integers, since that simply derives from the algebraic relation [â J ,â † J ] = 1. We can then go ahead and define an effective volume operatorV = V JNJ as in usual GFT cosmology.
The Heisenberg equations of motion are now
dâ_J/dχ = −i (ω_0/(2K^(2)_J)) (â†_J − â_J) (15)
and its Hermitian conjugate, with solution
â_J(χ) = â_J(0) − √(ω_0/2) (π̂/K^(2)_J) χ (16)
and Hermitian conjugate;π is time-independent since it commutes with the Hamiltonian. This solution of course represents the linear relation between "position" and "time" expected for the free particle. For the number operatorN J =â † Jâ J we then find
N̂_J(χ) = N̂_J(0) − √(ω_0/2) (1/K^(2)_J) [â†_J(0) π̂ + π̂ â_J(0)] χ − (ω_0/K^(2)_J) Ĥ_J χ² (17)
and hence quadratic growth in the volume with respect to χ. Since there are no states of zero energy (a putative eigenstate of zero momentum would not be normalisable), this general behaviour applies to all states and there are no exactly stationary solutions. On the other hand, one can derive an effective Friedmann equation

(V'(χ)/V(χ))² = −(4ω_0 E/K^(2)_J) V_J/V(χ) + (ω_0/K^(2)_J) A V_J²/V(χ)² , (18)
A = C_0²/(2K^(2)_J) + 4N_0 E , (19)

where V(χ) = ⟨V̂(χ)⟩ as before, E = ⟨Ĥ_J⟩ is the expectation value of the Hamiltonian, N_0 = ⟨N̂_J(0)⟩ is the average initial particle number, and C_0 = ⟨â†_J(0) π̂ + π̂ â_J(0)⟩.
Since the inequality N J (χ) ≥ 0 for all χ implies A ≤ 0, the 1/V 2 term in the effective Friedmann equation is repulsive for small volumes and generically (for A < 0) guarantees that the volume never reaches zero. At late (or very early) times when the volume is large, the right-hand side of Eq. (18) goes to zero and the emergent spacetime geometry becomes approximately flat: the terms on the right-hand side of Eq. (18) are of the same form as the subleading corrections in Eq. (10). In both cases, these can be seen as quantum gravity corrections to the correct classical limit. In this sense, the general structure of Eq. (18) might be expected: while the emergent Newton's constant could be fine-tuned to zero, there is not a single limit in the quantum gravity framework of GFT that would also make all the subleading corrections vanish. These subleading corrections are suppressed by inverse powers in the number of GFT quanta, which we expect to be large for a semiclassical interpretation.
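A short consistency check (again my own sketch): for a coherent state of the ladder operators (13) one can evaluate N(χ) from Eq. (17) and confirm Eq. (18) with A given by Eq. (19). The values of ω_0, K^(2)_J and the coherent-state label β below are arbitrary choices.

import numpy as np

omega0, K2, V_J, beta = 1.0, 1.0, 1.0, 1.5 + 0.5j
N0 = abs(beta)**2
C0 = np.sqrt(2*omega0) * np.imag(beta**2)              # <a^dag(0) pi + pi a(0)> for |beta>
E = (omega0/(4*K2)) * (2*np.real(beta**2) - 2*N0 - 1)  # <H_J> from Eq. (14)

chi = np.linspace(0.0, 20.0, 501)
N = N0 - np.sqrt(omega0/2)*(C0/K2)*chi - (omega0/K2)*E*chi**2      # Eq. (17)
dN = -np.sqrt(omega0/2)*(C0/K2) - 2*(omega0/K2)*E*chi
V = V_J * N

A = C0**2/(2*K2) + 4*N0*E                                          # Eq. (19)
lhs = (V_J*dN/V)**2
rhs = -4*omega0*E/K2 * V_J/V + (omega0/K2)*A*V_J**2/V**2           # Eq. (18)
print("Eq. (18) holds:", np.allclose(lhs, rhs))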
From Eq. (17) we see that at late (or very early) times, the relative uncertainty in the volume asymptotes to
(Δ_V)²/V(χ)² = (⟨V̂²(χ)⟩ − ⟨V̂(χ)⟩²)/⟨V̂(χ)⟩² → (Δ_H)²/E² (20)
(where we use the notation (∆ O ) 2 := Ô 2 − Ô 2 ), which can be made arbitrarily small by choosing states sharply peaked around an average energy value E. Hence there exists a large class of states that evolve into semiclassical, asymptotically flat effective geometries. This notion of semiclassicality, based on relative uncertainty in the volume, does not mean that states remain sharply peaked in quantities such as the field ϕ J or momentum π J . For instance, if we define coherent states as proposed in Ref. [16], we can see that uncertainties grow as we move away from the initial time χ = 0,
(Δ_{φ_J})² = (1/2) [1/ω_0 + ω_0 χ²/(K^(2)_J)²] , (21)
(Δ_{π_J})² = ω_0/2 . (22)
From these expressions we can readily see that Fock coherent states do not stay coherent, as ∆ ϕ J ∆ π J = 1 2 only at the initial time. This behaviour seems to be general for Hamiltonians that do not commute withN J . Due to Eq. (20), such states can still be made sharply peaked around a given volume for early and late times.
Given the use of χ as a clock, the energy E is usually interpreted as representing the momentum conjugate to the scalar matter field [13]. It seems puzzling that in this model the energy is restricted to be negative, so that this momentum would have a preferred sign in contrast with classical cosmology, where it is simply related to the time derivative in the scalar field which can take either sign. Moreover, Eq. (18) also depends explicitly on the arbitrary scale ω 0 , since the number operator itself required this scale for its definition. In this sense, the meaning of GFT geometric observables in this scenario seems ambiguous, so that it would seem difficult to extract any phenomenology from it. This is in contrast to the usual case Eq. (10) which involves no additional arbitrary scales.
Perhaps the most unphysical aspect of this scenario is the fine-tuning in setting K (0) J to zero. As we mentioned, there will generically be no J which satisfies this property; even if there is such a J, K (0) J will be non-zero for other modes and there will generally still be modes satisfying Eq. (9) and growing exponentially. The model has to be set up in a specific way for no such modes to exist, and would be unstable under inclusion of other modes.
IV. INTERACTING GFT MODEL
To address some of the issues with the GFT cosmology scenario obtained from tuning K (0) J to zero, we turn to a second approach, in which the quadratic Hamiltonian is unchanged, but one now includes interaction terms as well. The idea is that the exponential instability seen in Eq. (9), which arises from a quadratic Hamiltonian unbounded from below, is an artefact of neglecting interactions; the full theory should have a Hamiltonian that is bounded from below. This viewpoint was advocated in Ref. [14], in the context of a mean-field approximation, and used to derive an effective GFT cosmology for a simple interacting toy model. Here we will present numerical evolution of the quantum theory, which can help to understand the validity of the mean-field approximation.
As before, we restrict the analysis to a single Peter-Weyl mode with J = −J. We then add a ϕ 4 interaction term to the Hamiltonian (7) to obtain
Ĥ_J = (1/2) M_J (â†_J â†_J + â_J â_J) + (g/(4|M_J|)) (â_J + â†_J)⁴ , (23)
where 0 < g ≪ 1, and we now assume K^(0)_J > 0 and K^(2)_J < 0 (the opposite sign choice can be treated analogously). In most GFT models for quantum gravity, interactions couple different modes, e.g., to encode matching conditions expected from gluing tetrahedra to higher-dimensional structures [4]. We take this "local" interaction in J as a general toy model for quantum behaviour of the GFT field, keeping in mind that choosing particularly symmetric GFT states can reduce more general interactions to local ones [9].
The previous interacting Hamiltonian is equivalent to a quantum mechanical system in terms of φ̂_J and π̂_J,

Ĥ_J = −(1/2) [π̂_J²/K^(2)_J + K^(0)_J φ̂_J²] + (g̃/4) φ̂_J⁴ , (24)
g̃ = 4g √|(K^(0)_J)³ K^(2)_J| . (25)
In a mean-field approximation, we would replace π̂_J and φ̂_J by their respective expectation values p_J and φ_J. We then obtain an effectively classical Hamiltonian

H_J = −(1/2) [p_J²/K^(2)_J + K^(0)_J φ_J²] + (g̃/4) φ_J⁴ . (26)

Stationary solutions of the resulting equations of motion correspond to extrema of this Hamiltonian in φ_J, for p_J = 0, given by φ_J = 0 and

φ_J^(0) = ±√(K^(0)_J/g̃) . (27)

This mean-field model is equivalent to a classical system in the potential −(1/2)K^(0)_J φ_J² + (g̃/4)φ_J⁴, usually referred to as a double-well potential (see Fig. 1). The values of the field at the bottom of the potential imply a minimum value for the energy and volume,

E_min = −|M_J|/(16g) , (28)
N̄_J = 1/(8g) , V̄ = V_J N̄_J = V_J/(8g) . (29)

We can attempt an interpretation of this stabilising behaviour in terms of GFT cosmology, as was done in Ref. [13] using a classical analogue system in which one treats the independent quadratic combinations â²_J, (â†_J)² and â†_J â_J as classical variables, ignoring higher order corrections coming from commutators of such variables. In such an approximation, one can derive an effective Friedmann equation

V (χ) V (χ) 2 = − 2M 2 J V 2 J g 2 V (χ) 2 1 + 4g E |M J | − V (χ) V J × 1 − 4g V (χ) V J − 1 − 2g V (χ) V J 1 + 4g E |M J | − V (χ) V J +2g − 3 4 g + E |M J | , (30)
where V(χ) and E are now effectively classical quantities, derived from the respective combinations of the fundamental variables. While this approximation differs from a simpler mean-field approximation, both can be seen as neglecting quantum corrections beyond a certain order. We can substitute the classical minimum values (28) and (29) into Eq. (30), resulting in (V'/V)² = 48g²M_J², with the only contribution coming from the last term, which arises from a non-vanishing Casimir in the su(1,1) algebra spanned by the basic operators [13], and may be seen as a quantum correction to the stationary classical dynamics. In simpler truncations where the right-hand side of Eq. (30) is only linear in all interaction couplings, as in Ref. [14], this higher-order term would not be visible and a classical minimum would automatically be interpreted as a stationary cosmology. In either case, the effective Friedmann equation contains terms of both signs, so that cancellations can lead to a stationary solution. This can be compared to a classical cosmology for which, in the Friedmann equation
ȧ²/a² = (8πG/3) ρ + Λ/3 , (31)
one chooses the energy density ρ at a given time to exactly balance the contribution of a negative Λ; such a cosmology would however be unstable under perturbations. In GFT, it seems we can obtain a stationary cosmology by balancing the usual matter energy density with new contributions that appear to have effectively negative energy density, similar to a negative Λ.
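Before turning to the full quantum treatment, the classical minimum values quoted in Eqs. (27)-(29) can be checked with a few lines of code (my own sketch; it uses the parameter values K^(0)_J = 1, K^(2)_J = −1, g = 10^{-3} adopted in the next subsection and g̃ as reconstructed in Eq. (25)).

import numpy as np
from scipy.optimize import minimize_scalar

K0, K2, g = 1.0, -1.0, 1e-3
gt = 4*g*np.sqrt(abs(K0**3*K2))                      # tilde-g of Eq. (25)
H_cl = lambda phi: -0.5*K0*phi**2 + 0.25*gt*phi**4   # Eq. (26) at p_J = 0

res = minimize_scalar(H_cl, bracket=(1.0, 10.0, 30.0))
phi0, Emin = res.x, res.fun
omega = np.sqrt(abs(K0*K2))
N_bar = 0.5*omega*phi0**2 - 0.5                      # mean particle number at the minimum
print(f"phi0 = {phi0:.2f} (15.81),  E_min = {Emin:.1f} (-62.5),  "
      f"N_bar = {N_bar:.1f} (compare 1/(8g) = {1/(8*g):.0f})")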
A. Full quantum analysis
Turning to the full quantum theory, we return to the Schrödinger picture and aim to find stationary (or almost stationary) solutions to the Schrödinger equation for the model. The most obvious candidates for such states are eigenstates of the Hamiltonian (23), but we will also follow the more traditional approach in GFT cosmology to identify coherent states with suitable initial conditions that can have a good semiclassical interpretation.
Since this Hamiltonian is quartic in the basic ladder operators, there are no good methods for analytically deriving its spectrum and eigenstates. However, one can work numerically by representing the ladder operators as infinite matrices written in the basis of eigenstates of the number operator [18],
â_J =
( 0  1   0   0   0  … )
( 0  0  √2   0   0  … )
( 0  0   0  √3   0  … )
( 0  0   0   0   2  … )
( …  …   …   …   …  … ) , (32)

â†_J =
( 0   0   0   0  0  … )
( 1   0   0   0  0  … )
( 0  √2   0   0  0  … )
( 0   0  √3   0  0  … )
( …   …   …   …  …  … ) . (33)

We may represent the basis states as

|0⟩ = (1, 0, 0, …)ᵀ , |1⟩ = (0, 1, 0, …)ᵀ , |2⟩ = (0, 0, 1, …)ᵀ , … (34)
One can express any operator as a matrix by writing it in terms of the ladder operators â_J and â†_J, provided that a truncation is used. This truncation sets a finite dimension for the matrices, determines the accuracy of the calculations, and can be extended for higher numerical accuracy. By representing the Hamiltonian as a truncated matrix, we can find its eigenvalues and eigenstates, and determine the dynamics by expressing the Schrödinger equation as a matrix differential equation.
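The following self-contained sketch (my re-implementation, not the authors' code) builds the truncated matrices of Eqs. (32)-(33), diagonalises the Hamiltonian (23), and evaluates the particle number in the two lowest states for the parameter values used below.

import numpy as np

dim, K0, K2, g = 500, 1.0, -1.0, 1e-3
M = -np.sign(K0) * np.sqrt(abs(K0/K2))               # M_J for these couplings (= -1)

a = np.diag(np.sqrt(np.arange(1, dim)), k=1)         # annihilation operator, Eq. (32)
ad = a.T                                             # creation operator, Eq. (33)
x = a + ad
H = 0.5*M*(ad@ad + a@a) + g/(4*abs(M)) * (x@x@x@x)   # Hamiltonian, Eq. (23)

evals, evecs = np.linalg.eigh(H)
Nop = ad @ a
for k in range(2):                                   # two lowest (nearly degenerate) states
    psi = evecs[:, k]
    print(f"E_{k} = {evals[k]:.2f},  <N> = {psi @ Nop @ psi:.1f}")
# compare with the values quoted in Eqs. (35)-(39): E ~ -61.8, <N> ~ 124.5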
For a small coupling constant g, there are a large number of bound states, with the ground state (lowest energy) close to the classical minimum, ⟨Ĥ⟩ ∼ E_min = −|M_J|/(16g). Nonetheless, the expectation value of the field φ̂_J = (â_J + â†_J)/√(2ω_J) is not near either of the classical minima (at the bottom of the double well) but instead is close to zero, with large fluctuations.
As an illustrative example, we fix the parameters to K (0) J = 1, K (2) J = −1, g = 10 −3 ; a matrix size of 500 gives good numerical accuracy. We then find a two-fold degenerate ground state |G ± with
⟨G±|Ĥ_J|G±⟩ ≈ −61.79 , (35)
⟨G±|φ̂_J|G±⟩ ≈ 0 , (36)
⟨G±|(φ̂_J)²|G±⟩ ≈ 249 , (37)
⟨G±|N̂_J|G±⟩ ≈ 124.5 , (38)
⟨G±|(N̂_J)²|G±⟩ ≈ (124.85)² , (39)
matching well with the classical E_min = −62.5 but not with φ^(0)_J ≈ 15.81. We can see that, while such a state is not semiclassical in the group field ϕ_J, the state is sharply peaked around its expectation value of the volume V = V_J N_J. Somewhat surprisingly, such a bound state then already represents a semiclassical, stationary cosmology. In Fig. 2 we show the expectation value ⟨G±|N̂_J|G±⟩ in the ground state(s) as a function of the coupling g; it follows closely the classical result given in Eq. (29), N̄_J = 1/(8g). Fig. 3 shows that the relative variance (Δ_{N_J})²/⟨N̂_J⟩² monotonically increases with g, so that the semiclassical interpretation of the ground states breaks down at larger values of the coupling constant. For small g, this relative variance grows linearly in g and hence scales as the inverse particle number; the relation becomes nonlinear at larger g. Moreover, the first higher energy states above the ground state show similar expectation values for the volume but with rapidly growing fluctuations, making those states less suitable for a semiclassical interpretation than the ground state.
Going beyond this simplest proposal, one can try to define some kind of coherent states from the eigenstates of this bounded Hamiltonian. These are called Gazeau-Klauder coherent states and have been studied for the double well-potential in Ref. [19]. Note then that, for an expectation value of the energyĤ close to the minimum of the classical potential, this coherent state can be approximated by the two first eigenstates, and produces essentially the same expectation values as Eqs. (35)-(39).
Focusing on bound states is quite different from the traditional approach in GFT cosmology, in which semiclassical cosmology is represented via coherent Fock states [7]. In our setting, we could define a (normalised) approximate coherent state at some initial time by
|α⟩ = e^{−|α|²/2} Σ_{i=0}^{M} (α^i/√(i!)) |i⟩ , (40)
where M is the cutoff on the truncation that one has applied to make the matrix representations (32)-(33) finite. M must be chosen large enough so that ⟨α|α⟩ = 1 up to a small error below the accuracy one wants to work at. If we are interested in a state that represents the classically stationary configuration at the minimum of the potential, we can choose α such that the field is set at one of the classical minima, α = ±1/√(8g). The value of M then depends on g; for g ∼ 10^{-3} a value between 600 and 1000 is sufficient for very small errors (depending on the time of evolution).
The mean-field approximation, which assumes that such a state remains coherent at all times, would imply that it is also stationary. This approximation is not exact, and since this is not an energy eigenstate we expect nontrivial time evolution. Nevertheless, for small g the evolution of these states is almost stationary (see Figs. 4 and 5). In particular, relative fluctuations stay very close to the initially small value, so that these states remain semiclassical under time evolution. In this sense, these quantum states behave classically enough to use them in the mean-field approximation. In terms of the physically relevant evolution of the volume, their properties are very similar to those of the exact ground state. It is important to stress that this semiclassical behaviour of initially coherent states will not hold for arbitrary initial conditions. If we start with a coherent state with an expectation value of φ̂_J far from the minimum of the potential, it evolves in a non-trivial way; the relative variance of N̂_J (as well as of the fields φ̂_J and π̂_J) increases, deviating from the classical behaviour. The mean-field approximation is then not applicable. We give an example of this in the Appendix. This is then the generic case in which the mean-field approximation breaks down in an interacting GFT, as previously discussed in Refs. [9,13].
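The kind of evolution shown in Figs. 4 and 5 can be reproduced with the sketch below (my own code; the truncation size, step size, evolution length and the use of a dense matrix exponential are implementation choices, not taken from the paper).

import numpy as np
from scipy.linalg import expm
from scipy.special import gammaln

dim, K0, K2, g = 600, 1.0, -1.0, 1e-3
M = -np.sign(K0)*np.sqrt(abs(K0/K2))
a = np.diag(np.sqrt(np.arange(1, dim)), k=1); ad = a.T; x = a + ad
H = 0.5*M*(ad@ad + a@a) + g/(4*abs(M))*(x@x@x@x)     # Hamiltonian, Eq. (23)
Nop = ad @ a

alpha = 1/np.sqrt(8*g)                               # field set at the classical minimum
n = np.arange(dim)
psi = np.exp(-alpha**2/2 + n*np.log(alpha) - 0.5*gammaln(n + 1)).astype(complex)
psi /= np.linalg.norm(psi)                           # truncated coherent state, Eq. (40)

U = expm(-1j*H*0.01)                                 # one step of relational time, dchi = 0.01
for step in range(301):
    if step % 100 == 0:
        N1 = np.real(psi.conj() @ Nop @ psi)
        N2 = np.real(psi.conj() @ Nop @ Nop @ psi)
        print(f"chi = {step*0.01:.1f}: <N> = {N1:.2f}, (Delta N)^2/<N>^2 = {(N2 - N1**2)/N1**2:.5f}")
    psi = U @ psi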
V. CONCLUSIONS
We have discussed several approaches for finding stationary cosmologies that could tentatively be associated with a Minkowski spacetime in the cosmological interpretation of GFT. First, we looked at a model in which the effective Newton's constant, related to the "mass" parameter K (0) J in GFT, is taken to zero. This theory could be seen as a potential starting point for a standard model limit of GFT, in which matter propagates on an emergent spacetime but does not affect its structure, akin to setting G = 0 in the classical Einstein equations G ab = 8πGT ab . We saw that this model suffers from fine-tuning in the parameters, such that any small deviation turning the GFT dynamics into those of an inverted oscillator will develop instabilities generating an expanding Universe. We also found that the effective Friedmann equation does not imply an exactly stationary cosmology, but includes high-energy corrections similar to those responsible for a bounce in standard GFT cosmology.
We then studied a second approach which uses a GFT toy model including a quartic interaction term, equivalent to the dynamics of a double-well potential in usual quantum mechanics. By applying a numerical approximation technique in which the ladder operators and the Hamiltonian are represented as matrices, we found bound states starting from a ground state whose energy is very close to the classical minimum of the potential. We saw that such bound states are sharply peaked in the number of GFT quanta and hence in the volume, making them suitable candidates for semiclassical stationary cosmologies. In addition, the numerics show an approximately exponential relation of the relative variance ofN and the coupling constant g: for theories with lower g the ground state is even more sharply peaked around a given volume. Somewhat similar properties are found for the traditionally used Fock coherent states, set up in such a way that the expectation value of the group field sits in the classical minimum of the potential. These states evolve in time but only show small oscillations around the initial volume expectation value, with small fluctuations. While this nontrivial time evolution deviates from the meanfield approximation which would give exactly stationary expectation values, this deviation is small and the use of the mean-field approximation is justified. However, this only works for such a "quantum gravity condensate" in the minimum of the potential, and more generic initial conditions would lead to very non-semiclassical behaviour even for an initially coherent state.
The conclusion that bound states represent good candidates for a semiclassical cosmology in GFT is somewhat at odds with the traditional idea of using Fock coherent states for which uncertainties in the group fieldφ J and momentumπ J can be made small. Sinceφ J andπ J do not correspond to observables it seems more meaningful to demand that cosmologically relevant quantities such as the total volume or the energy (associated to a conjugate momentum of the matter scalar field), or more generally the su(1, 1) variables discussed in Ref. [13], remain semiclassical. In this sense, our work suggests that more general classes of states may be considered to be viable candidates for GFT cosmology.
The numerical techniques applied here could be extended to more general and physically more interesting models for GFT cosmology, such as models with more general interaction terms or models coupling different Peter-Weyl modes. The only limitations come from computational cost, but since the calculations presented here were easy to implement there certainly seems to be scope for studying more involved cases.
Acknowledgements. -This work was supported by the Royal Society through the University Research Fellowship UF160622 and University Research Fellowship Renewal URF\R\221005 (both SG). In addition, this work was supported in part by the Natural Sciences and Engineering Research Council of Canada and by Mitacs through the Mitacs Globalink Research Award (RS).
Appendix A: Coherent states with generic initial conditions
Here we consider the previous interacting case from Eq. (23) and initial coherent state |α associated with an expectation value of the field α|φ J |α located far from the classical minimum of the potential (see Fig. 6). We choose α = 10/ √ 2 ∼ 7.07 as an example for an initial field value away from the minimum of the potential, but the behaviour we observe here appears to be generic. The quantum evolution is given by the Schrödinger equation in the truncation described around Eq. (34). With a dimension of 900 we can calculate the numerical solution for the aforementioned initial condition with sufficient speed and precision. The relative variance of the particle number increases in time and seems to converge to a large value, meaning that the state is not close to a coherent state and the semiclassical (or mean-field) interpretation is lost.
Classically, we would expect the field to oscillate between φ J = 10 and φ J = 20, corresponding to particle numbers of 50 to 200. In Fig. 7 we see that this is indeed what happens initially, but after a short time the behaviour of the particle number (and therefore the volume) differs from this classical expectation: the oscillations become damped leading to an asymptotic value around the minimum of the potential. At the same time, the state no longer remains peaked in the volume, and acquires large fluctuations (Fig. 8). These results demonstrate the breakdown of the mean-field approximation for this kind of states, unlike what we found for states peaked initially at the minimum of the potential.
FIG. 1. Schematic plot of the potential for our GFT toy model Hamiltonian.

FIG. 2. Expectation value of the number of particles in the ground state as a function of the coupling constant g, compared with the classical result 1/8g.

FIG. 3. Relative variance of the particle number N_J for the ground states |G±⟩. At low values of g, quantum fluctuations are still small. The relation is almost linear at small g, but becomes nonlinear as g is increased further.

FIG. 4. Oscillatory evolution in relational time χ of the particle number expectation value in the coherent state representing the classically stationary state, for the set of parameters K^(0)_J = 1, K^(2)_J = −1, g = 10^{-3}.

FIG. 5. Relative uncertainty of the number of particles in the coherent state representing the classically stationary state, for K^(0)_J = 1, K^(2)_J = −1, g = 10^{-3}. As expected for a coherent state, initially (ΔN_J)²/⟨N_J⟩² = 1/⟨N_J⟩; this initial value is actually an approximate upper bound for all times.

FIG. 6. Classical potential and initial expectation value of the field (red point). We set the parameters of the Hamiltonian to K^(0)_J = 1 and g = 10^{-3}.

FIG. 7. Evolution of the particle number in the unstable case. Due to quantum effects of the quartic potential, the evolution results in a damping of the initial oscillations.
FIG. 8. The relative variance of the particle number increases in time and seems to converge to a large value, meaning that the state is not close to a coherent state and the semiclassical (or mean-field) interpretation is lost.
[1] D. Oriti, "Disappearance and emergence of space and time in quantum gravity," Stud. Hist. Phil. Sci. B 46 (2014), 186-199, arXiv:1302.2849.
[2] R. Loll, "Quantum gravity from causal dynamical triangulations: a review," Class. Quant. Grav. 37 (2020), 013002, arXiv:1905.08669.
[3] B. L. Hu, "Can spacetime be a condensate?," Int. J. Theor. Phys. 44 (2005), 1785-1806, gr-qc/0503067; T. Koslowski and H. Sahlmann, "Loop Quantum Gravity Vacuum with Nondegenerate Geometry," SIGMA 8 (2012), 026, arXiv:1109.4688.
[4] L. Freidel, "Group Field Theory: An Overview," Int. J. Theor. Phys. 44 (2005), 1769-1783, hep-th/0505016; D. Oriti, "The microscopic dynamics of quantum space as a group field theory," in Foundations of Space and Time: Reflections on Quantum Gravity, eds. J. Murugan, A. Weltman and G. F. R. Ellis (Cambridge University Press, 2012), pp. 257-320, arXiv:1110.5606.
[5] D. Oriti, "Group field theory as the second quantization of loop quantum gravity," Class. Quant. Grav. 33 (2016), 085005, arXiv:1310.7786.
[6] D. Oriti, "Group field theory as the microscopic description of the quantum spacetime fluid: a new perspective on the continuum in quantum gravity," PoS QG-PH (2007), 030, arXiv:0710.3276; D. Oriti and L. Sindoni, "Toward classical geometrodynamics from the group field theory hydrodynamics," New J. Phys. 13 (2011), 025006, arXiv:1010.5149.
[7] S. Gielen, D. Oriti and L. Sindoni, "Cosmology from Group Field Theory Formalism for Quantum Gravity," Phys. Rev. Lett. 111 (2013), 031301, arXiv:1303.3576; S. Gielen, D. Oriti and L. Sindoni, "Homogeneous cosmologies as group field theory condensates," JHEP 06 (2014), 013, arXiv:1311.1238.
[8] J. D. Brown and K. V. Kuchař, "Dust as a standard of space and time in canonical quantum gravity," Phys. Rev. D 51 (1995), 5600-5629, gr-qc/9409001; S. B. Giddings, D. Marolf and J. B. Hartle, "Observables in effective gravity," Phys. Rev. D 74 (2006), 064018, hep-th/0512200; K. Giesel and T. Thiemann, "Scalar material reference systems and loop quantum gravity," Class. Quant. Grav. 32 (2015), 135015, arXiv:1206.3807.
[9] D. Oriti, L. Sindoni and E. Wilson-Ewing, "Emergent Friedmann dynamics with a quantum bounce from quantum gravity condensates," Class. Quant. Grav. 33 (2016), 224001, arXiv:1602.05881; D. Oriti, L. Sindoni and E. Wilson-Ewing, "Bouncing cosmologies from quantum gravity condensates," Class. Quant. Grav. 34 (2017), 04LT01, arXiv:1602.08271.
[10] S. Gielen, "Emergence of a low spin phase in group field theory condensates," Class. Quant. Grav. 33 (2016), 224002, arXiv:1604.06023.
[11] E. Wilson-Ewing, "Relational Hamiltonian for group field theory," Phys. Rev. D 99 (2019), 086017, arXiv:1810.01259; S. Gielen, A. Polaczek and E. Wilson-Ewing, "Addendum to 'Relational Hamiltonian for group field theory'," Phys. Rev. D 100 (2019), 106002, arXiv:1908.09850.
Cosmological evolution as squeezing: a toy model for group field cosmology. E Adjei, S Gielen, W Wieland, 10.1088/1361-6382/aaba11arXiv:1712.07266Class. Quant. Grav. 35105016E. Adjei, S. Gielen and W. Wieland, "Cosmological evolution as squeezing: a toy model for group field cosmology," Class. Quant. Grav. 35 (2018), 105016, arXiv:1712.07266.
Generalised effective cosmology from group field theory. S Gielen, A Polaczek, 10.1088/1361-6382/ab8f67arXiv:1912.06143Class. Quant. Grav. 37165004S. Gielen and A. Polaczek, "Generalised effective cos- mology from group field theory," Class. Quant. Grav. 37 (2020), 165004, arXiv:1912.06143.
Cosmological implications of interacting group field theory models: Cyclic Universe and accelerated expansion. M De Cesare, A G A Pithis, M Sakellariadou, 10.1103/PhysRevD.94.064051arXiv:1606.00352Phys. Rev. D. 9464051M. de Cesare, A. G. A. Pithis and M. Sakellariadou, "Cosmological implications of interacting group field the- ory models: Cyclic Universe and accelerated expansion," Phys. Rev. D 94 (2016), 064051, arXiv:1606.00352.
The GNewton to 0 limit of Euclidean quantum gravity. L Smolin, 10.1088/0264-9381/9/4/007hep-th/9202076Class. Quant. Grav. 9L. Smolin, "The GNewton to 0 limit of Euclidean quan- tum gravity," Class. Quant. Grav. 9 (1992), 883-894, hep-th/9202076
Coherent states for free particles. A De La Torre, D Goyeneche, arXiv:1004.2620A. de la Torre and D. Goyeneche, "Coherent states for free particles," arXiv:1004.2620.
Towards anisotropic cosmology in group field theory. A Calcinari, S Gielen, 10.1088/1361-6382/acc1dbarXiv:2210.03149Class. Quant. Grav. 4085004A. Calcinari and S. Gielen, "Towards anisotropic cosmol- ogy in group field theory," Class. Quant. Grav. 40 (2023), 085004, arXiv:2210.03149.
Computing quantum eigenvalues made easy. H J Korsch, M Glück, 10.1088/0143-0807/23/4/305Eur. J. Phys. 23H. J. Korsch and M. Glück, "Computing quantum eigen- values made easy," Eur. J. Phys. 23 (2002), 413-426.
Generalized coherent states for the double-well potential. M Novaes, M A M De Aguiar, J E M Hornos, 10.1088/0305-4470/36/21/307J. Phys. A.: Math. Gen. 36M. Novaes, M. A. M. de Aguiar, and J. E. M. Hornos, "Generalized coherent states for the double-well poten- tial," J. Phys. A.: Math. Gen. 36 (2003), 5773-5786.
| []
|
[]
| [
"Rangel Hernández-Ortiz ",
"Kolja Knauer ",
"ANDLuis Pedro Montejano ",
"Manfred Scheucher "
]
| []
| [
"Mathematics Subject Classification. 52C40, 05C35, 05Dxx"
]
| J.-P. Roudneff conjectured in 1991 that every arrangement of n ≥ 2d+1 ≥ 5 pseudohyperplanes in the real projective space P d has at most d−2 i=0 n−1 i complete cells (i.e., cells bounded by each hyperplane). The conjecture is true for d = 2, 3 and for arrangements arising from Lawrence oriented matroids. The main result of this manuscript is to show the validity of Roudneff's conjecture for d = 4. Moreover, based on computational data we conjecture that the maximum number of complete cells is only obtained by cyclic arrangements. | 10.48550/arxiv.2303.14212 | [
"https://export.arxiv.org/pdf/2303.14212v1.pdf"
]
| 257,766,397 | 2303.14212 | 3b9cd591d72959dcc02ffa6f36289291a7ba0bfa |
March 28. 2023. 2010
Rangel Hernández-Ortiz
Kolja Knauer
ANDLuis Pedro Montejano
Manfred Scheucher
Mathematics Subject Classification. 52C40, 05C35, 05Dxx
ROUDNEFF'S CONJECTURE IN DIMENSION 4. March 28, 2023. Key words and phrases: Roudneff's conjecture, oriented matroid, arrangement of hyperplanes.
J.-P. Roudneff conjectured in 1991 that every arrangement of n ≥ 2d+1 ≥ 5 pseudohyperplanes in the real projective space P d has at most d−2 i=0 n−1 i complete cells (i.e., cells bounded by each hyperplane). The conjecture is true for d = 2, 3 and for arrangements arising from Lawrence oriented matroids. The main result of this manuscript is to show the validity of Roudneff's conjecture for d = 4. Moreover, based on computational data we conjecture that the maximum number of complete cells is only obtained by cyclic arrangements.
Introduction
A projective arrangement of n pseudohyperplanes H(d, n) in the real projective space P d is a finite collection of mildly deformed linear hyperplanes with several combinatorial properties, see Section 2.1 for the definition in terms of oriented matroids. In particular, no point belongs to every pseudohyperplane of H(d, n). Any arrangement H(d, n) decomposes P d into a d-dimensional cell complex and any d-cell c of H(d, n) has at most n facets (that is, (d − 1)-cells). We say that a d-cell c is a complete cell of H(d, n) if c has exactly n facets, i.e., c is bounded by each pseudohyperplane of H(d, n).
The cyclic polytope of dimension d with n vertices, discovered by Carathéodory [3], is the convex hull in R d of n ≥ d + 1 ≥ 3 different points x(t 1 ), . . . , x(t n ) on the moment curve x : R → R d , t → (t, t 2 , . . . , t d ). Cyclic polytopes play an important role in combinatorial convex geometry due to their connection with certain extremal problems. See for example, the upper bound theorem due to McMullen [10]. Cyclic arrangements are defined as the dual of the cyclic polytopes. As for cyclic polytopes, cyclic arrangements also have extremal properties, see Section 2.1 for the definition in terms of oriented matroids. For instance, Shannon [14] introduced cyclic arrangements as examples of projective arrangements in dimension d which minimize the number of cells with (d + 1) facets.
Denote by $C_d(n)$ the number of complete cells of the cyclic arrangement of dimension $d$ with $n$ hyperplanes. Roudneff [13] proved that $C_d(n) \geq \sum_{i=0}^{d-2}\binom{n-1}{i}$ holds for $d \geq 2$, that this bound is tight for all $n \geq 2d+1$, and conjectured that in that case cyclic arrangements maximize the number of complete cells (Conjecture 1.1). The conjecture is true for $d = 2$ (that is, any arrangement of $n$ pseudolines in $P^2$ contains at most one complete cell), Ramírez Alfonsín [12] proved the case $d = 3$, and in [11] the authors proved it for arrangements corresponding to Lawrence oriented matroids.
In [8] the exact number of complete cells of cyclic arrangements was calculated for any positive integers d and n with n ≥ d + 1, namely,
$C_d(n) = \binom{d}{n-d} + \binom{d-1}{n-d-1} + \sum_{i=0}^{d-2}\binom{n-1}{i}.$
Thus, in view of Roudneff's conjecture, the following question was asked in [11].
Question 1.2. Is it true that every arrangement of n ≥ d + 1 ≥ 3 pseudohyperplanes in P d has at most C d (n) complete cells?
Notice that there is a unique arrangement of 3 (resp. 4) lines in P 2 with C 2 (3) = 4 (resp. C 2 (4) = 3) complete cells. Since Conjecture 1.1 is true for d = 2 and n ≥ 5, Question 1.2 is answered affirmatively for d = 2.
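As a quick sanity check of the closed formula displayed above (as reconstructed here), the short Python sketch below, with a function name of our own choosing, evaluates $C_d(n)$ and reproduces the small values quoted in this paragraph, $C_2(3) = 4$ and $C_2(4) = 3$, as well as the fact that the bound of Conjecture 1.1 equals 1 for $d = 2$ and $n \geq 5$.

```python
from math import comb

def C(d, n):
    # Closed formula for the number of complete cells of the cyclic arrangement, quoted from [8].
    return comb(d, n - d) + comb(d - 1, n - d - 1) + sum(comb(n - 1, i) for i in range(d - 1))

assert C(2, 3) == 4 and C(2, 4) == 3            # the unique extremal line arrangements mentioned above
assert all(C(2, n) == 1 for n in range(5, 12))  # Roudneff's bound is tight for n >= 2d + 1 when d = 2
print(C(4, 8), C(4, 9))                          # C_4(8) and C_4(9), the values relevant to Theorems 4.1-4.3
```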
As the main result of this paper, we give an affirmative answer to Question 1.2 for d = 4 and therefore prove Roudneff's conjecture for dimension 4, further supporting the general conjecture. In addition, with a few simple observations, we answer Question 1.2 for d = 3 and further strengthen Roudneff's conjecture.
Oriented matroids
Let us give some basic notions and definitions in oriented matroid theory. We assume some knowledge and standard notation of the theory of oriented matroids, for further reference the reader can consult the textbook [2]. A signed set or signed vector X on ground set E is a set X ⊆ E together with a partition (X + , X − ) of X into two distinguished subsets: X + , the set of positive elements of X, and X − , its set of negative elements. The set X = X + ∪ X − is the support of X. We denote by −X the sign-vector such that −X + = X − and −X − = X + . An oriented matroid M = (E, C) is a pair of a finite ground set E and a collection of signed sets on E called circuits, satisfying the following axioms:
• $\emptyset \notin C$,
• if $X \in C$ then $-X \in C$,
• if $X, Y \in C$ and the support of $X$ is contained in the support of $Y$, then $X = \pm Y$,
• if $X, Y \in C$, $X \neq -Y$, and $e \in X^+ \cap Y^-$, then there is $Z \in C$ with $e \notin Z$, $Z^+ \subseteq X^+ \cup Y^+$ and $Z^- \subseteq X^- \cup Y^-$.
We say that X ∈ C is a positive circuit if X − = ∅. We call the set of all reorientations of M its reorientation class. We say that M is acyclic if it does not contain positive circuits (otherwise, M is called cyclic). A reorientation of M on R ⊆ E is performed by changing the signs of the elements in R in all the circuits of M. It is easy to check that the new set of signed circuits is also the set of circuits of an oriented matroid, usually denoted by M R . A reorientation is acyclic if M R is acyclic. Recall that oriented matroid on n elements is uniform of rank r if the set of supports of its circuits consists of all (r + 1)-element subsets of E. Given a uniform oriented matroid M of rank r on n = |E| elements, we denote its dual by M * , which is another uniform oriented matroid of rank n − r on n elements. A characterization of oriented matroids in terms of basis orientations (that we will not make explicit here) was given by Lawrence [9]. Let r ≥ 1 be an integer and E = {1, . . . , n} be a set. A mapping χ : E r → {−1, 0, 1} (where we will abbreviate it by {−, 0, +}) is a basis orientation of an oriented matroid of rank r on E if and only if χ is a chirotope, that is, a special alternating mapping not identically zero. It is known that χ : E r → {−, +} is a chirotope if and only if χ is a basis orientation of a rank r uniform oriented matroid on E. Moreover, if χ(B) = + for any ordered basis B = (b 1 , . . . , b r ) of M with b 1 < . . . < b r , then the uniform matroid M is known to be the alternating oriented matroid of rank r on n elements. In that case, the signs of each circuit alternate along the ordering of E.
Given two sign-vectors X, Y ∈ {+, −, 0} E , their separation is the set S(X, Y ) = {e ∈ E | X e · Y e = −},
where $X_e$ and $Y_e$ are the signs of the element $e$ in $X$ and $Y$, respectively. We denote by $X \perp Y$ and say that $X$ and $Y$ are orthogonal if the sets $S(X, Y)$ and $S(X, -Y)$ are either both empty or both non-empty. Maximal covectors of an oriented matroid $M$ are usually called topes. It is known that a sign-vector $T \in \{+, -\}^E$ is a tope of $M$ if and only if $T \perp X$ for every circuit $X \in C$ (see [2, Section 1.2, page 14]). Moreover, $T$ is a tope of $M$ if and only if $S(T, X)$ and $S(T, -X)$ are both non-empty, for every circuit $X \in C$.
2.1. Topological Representation Theorem. The combinatorial properties of arrangements of pseudohyperplanes can be studied in the language of oriented matroids. The Folkman-Lawrence topological representation theorem [7] states that the reorientation classes of oriented matroids on n elements and rank r (without loops or parallel elements) are in one-to-one correspondence with the classes of isomorphism of arrangements of n pseudospheres in S r−1 (see [2,Theorem 1.4.1]). There is a natural identification between pseudospheres and pseudohyperplanes as follows. Recall that P r−1 is the topological space obtained from S d by identifying all pairs of antipodal points. The double covering map π : S r−1 → P r−1 , given by π(x) = {x, −x}, gives an identification of centrally symmetric subsets of S r−1 and general subsets of P r−1 . This way centrally symmetric pseudospheres in S r−1 correspond to pseudohyperplanes in P r−1 . Hence, the topological representation theorem can also be stated in terms of pseudohyperplanes in P r−1 , i.e., the reorientation classes of oriented matroids on n elements and rank r (without loops or parallel elements) are in one-to-one correspondence with the classes of isomorphism of arrangements of n pseudohyperplanes in P r−1 (see [2, Section 5, exercise 5.8]). An arrangement H(d, n) is called simple if every intersection of d pseudohyperplanes is a unique distinct point. Simple arrangements correspond to uniform oriented matroids. The d-cells of any arrangement H(d, n) are usually called topes since they are in one-toone correspondence with the topes of each of the oriented matroids M of rank r = d + 1 on n elements of its corresponding reorientation class. It is known that a tope of M (i.e, a d-cell of its corresponding arrangement) corresponds to an acyclic reorientation of M having as interior elements precisely those pseudohyperplanes not bordering the tope. Moreover, a tope T of M is a complete cell if reorienting any single element of T , the resulting sign-vector is also a tope of M. Cyclic arrangements of n hyperplanes in P d are equivalent to alternating oriented matroids of rank r = d + 1 on n elements, which hence have exactly 2C r−1 (n) complete cells. Summarizing, Question 1.2 (and hence Roudneff's conjecture) can be stated in the following form:
Every rank r oriented matroid M on n ≥ r + 1 elements has at most 2C r−1 (n) complete cells. We summarize for later usage: Given a rank r oriented matroid M = (E, C), the following three conditions hold. (a) A tope of M is a sign-vector T ∈ {+, −} E such that T ⊥ X for all circuit X ∈ C.
Previous results
We will use the following result due to Roudneff. From the proof of the above proposition, it can be seen that even for any arrangement H with n ≤ 2d pseudohyperplanes in P d , we may also perturb each hyperplane of H a bit in order to obtain a simple arrangement H ′ with at least the same number of complete cells as H (see Proposition 2.3 of [13]). This shows that also for Question 1.2, we can restrict ourselves to simple arrangements.
Remark 3.2.
To answer Question 1.2 for dimension d in the affirmative, it suffices to verify it for simple arrangements of pseudohyperplanes in P d .
Thus, by condition (c) and by Remark 3.2, it is sufficient to prove Question 1.2 for uniform oriented matroids. The following observation will be useful in this work.
Remark 3.3.
There is only one reorientation class of uniform rank r oriented matroids on n ≤ r + 2 elements.
Proof. The number of reorientation classes of a uniform oriented matroid M of rank r on n elements is equal to the number of reorientation classes of its dual M * . Now, if M has rank r and n ≤ r + 2 elements, then M * has rank at most 2. Hence, M * and therefore M has only one reorientation class.
Thus, every acyclic uniform oriented matroid on at most r + 2 elements is in the reorientation class of the alternating oriented matroid and hence, they all have the same number of complete cells. As a consequence of Remarks 3.2 and 3.3, we can answer affirmatively Question 1.2 for n ≤ r + 2. In particular, as for r = 4 (dimension d = 3) Conjecture 1.1 is true for n ≥ 7, we obtain the following.
Corollary 3.4. Every arrangement of n ≥ 4 pseudohyperplanes in P 3 has at most C 3 (n) complete cells.
Main result
Given a uniform rank r oriented matroid M = (E, C) on n = |E| elements, we explain the procedure to obtain the set of all complete cells of its corresponding arrangement of n pseudohyperplanes in P d via the signed bases of M. We start with the signature of all the bases of M and then, we obtain all its signed circuits. After that, we get the set of topes of M and finally, we obtain the set of all complete cells of M as follows:
Bases → Circuits: From the chirotope, we may obtain that
$\chi(B) = -X_{b_i} \cdot X_{b_{i+1}} \cdot \chi(B'),$
where $X = \{b_1, \ldots, b_{r+1}\}$ is the support of an ordered circuit of $M$ and $B = X - b_i$ and $B' = X - b_{i+1}$ are two bases of $M$ (see [2, Section 3.5]). Hence, given $\chi(B)$ for any basis $B$ of $M$, we obtain the signed circuit $X$, and since $M$ is uniform, we can proceed to obtain all the signed circuits of $M$.
Circuits → Topes: For any sign-vector $T \in \{+, -\}^n$, we verify condition (a) to confirm that $T$ is a tope of $M$, i.e., we check for every circuit $X \in C$ of $M$ whether $S(T, X)$ and $S(T, -X)$ are both non-empty (see [2, Section 1.2, page 14]).
Topes → Complete cells: For any tope $T$, we verify condition (b) to confirm that $T$ is a complete cell of $M$. That is, we reorient any single element of $T$, check whether the resulting sign-vector is also a tope of $M$, and verify this for each of the $n$ entries of $T$.
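The following Python sketch is our own illustration of the last two steps, not the program available at [1]. It assumes signed circuits are given as dictionaries mapping ground-set elements $0, \ldots, n-1$ to signs $\pm 1$ on their support; it checks condition (a) by brute force over all sign-vectors and then condition (b) by flipping single coordinates.

```python
from itertools import product

def separation(T, X):
    """S(T, X): the elements on which the sign-vectors T and X disagree."""
    return {e for e, s in X.items() if s != 0 and T[e] == -s}

def is_tope(T, circuits):
    # Condition (a): S(T, X) and S(T, -X) are both non-empty for every circuit X.
    return all(separation(T, X) and separation(T, {e: -s for e, s in X.items()})
               for X in circuits)

def complete_cells(n, circuits):
    """List the topes bounded by every pseudohyperplane (condition (b))."""
    topes = []
    for signs in product((1, -1), repeat=n):
        T = dict(enumerate(signs))
        if is_tope(T, circuits):
            topes.append(T)
    tope_keys = {tuple(sorted(T.items())) for T in topes}
    complete = []
    for T in topes:
        # Flip each single coordinate and require the result to still be a tope.
        if all(tuple(sorted({**T, e: -T[e]}.items())) in tope_keys for e in range(n)):
            complete.append(T)
    return complete
```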
Finschi and Fukuda [5,6] enumerated the signed bases of all the reorientation classes of uniform rank 5 oriented matroids on 8 and 9 elements. While the data for 8 elements is available on the website [4], the data for 9 elements and also their source code for the enumeration is available upon request from Lukas Finschi. We follow the procedure explained above with a computer program (available at [1]) which gives us the number of complete cells of each acyclic reorientation class. After about 26 CPU days of computing time (i.e., few days with parallelization), we obtain the following. Theorem 4.2. Each of the 9276595 reorientation classes of uniform rank 5 oriented matroids on 9 elements has at most 2C 4 (9) complete cells. Moreover, the class of the alternating oriented matroid is the only one with exactly 2C 4 (9) complete cells.
We can now prove our main result:
Theorem 4.3.
Every arrangement of n ≥ 5 pseudohyperplanes in P 4 has at most C 4 (n) complete cells.
Proof. By Remark 3.2, it is sufficient to prove the theorem for simple arrangements, that is, for uniform oriented matroids (see condition (c)). Thus, by Remark 3.3 and Theorem 4.1, the result holds for n = 5, 6, 7 and 8. Finally, by Proposition 3.1 it suffices to verify it for n = 9. Therefore, the result holds by Theorem 4.2.
Finally, we have used our computer program to verify that the cyclic arrangement is the unique example which maximizes the number of complete cells for d = 2 and n ≤ 10, for d = 3 and n ≤ 7, and for d = 4 and n ≤ 9. Based on our computational evidence, we conclude this article with the following strengthening of Roudneff's conjecture and Question 1.2:
Conjecture 4.4. Every arrangement of n ≥ d + 1 ≥ 3 pseudohyperplanes in P d has at most C d (n) complete cells. Moreover, among all arrangements of n pseudohyperplanes in P d the cyclic arrangement is (up to isomorphism) the only one with C d (n) complete cells.
Last but not least, as the proof of Proposition 3.1, it suffices to verify Conjecture 4.4 for simple arrangements of pseudohyperplanes in P d . However, we do not know whether the setting can also be restricted to n ≤ 2d + 1 without loss of generality.
(b) A tope T of M is a complete cell if reorienting any single element of T , the resulting sign-vector is also a tope of M. (c) If the corresponding arrangement of n pseudohyperplanes in P r−1 of M is simple, then M is uniform.
Theorem 4.1. Each of the 135 reorientation classes of uniform rank 5 oriented matroids on 8 elements has at most $2C_4(8)$ complete cells. Moreover, the class of the alternating oriented matroid is the only one with exactly $2C_4(8)$ complete cells.
Roudneff [13] proved that $C_d(n) \geq \sum_{i=0}^{d-2}\binom{n-1}{i}$ holds for $d \geq 2$ and that this bound is tight for all $n \geq 2d+1$. Moreover, he conjectured that in that case, cyclic arrangements maximize the number of complete cells.
Conjecture 1.1 ([13, Conjecture 2.2]). Every arrangement of $n \geq 2d+1 \geq 5$ pseudohyperplanes in $P^d$ has at most $\sum_{i=0}^{d-2}\binom{n-1}{i}$ complete cells.
Proposition 3.1 ([13]). To prove Conjecture 1.1 for dimension d, it suffices to verify it for all simple arrangements of n = 2d + 1 pseudohyperplanes in P d .
Supplemental source code and data. Supplemental source code and data. https://github.com/manfredscheucher/supplemental-roudneff4.
A Björner, M Vergnas, B Sturmfels, N White, G M Ziegler, of Encyclopedia of Mathematics and its Applications. Cambridge University Press462nd edition. 2 editionA. Björner, M. Las Vergnas, B. Sturmfels, N. White, and G. M. Ziegler. Oriented Matroids, 2nd edition, volume 46 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, 2 edition, 1999.
Über den Variabilitätsbereich der Koeffizienten von Potenzreihen, die gegebene Werte nicht annehmen. C Carathéodory, Mathematische Annalen. 64C. Carathéodory.Über den Variabilitätsbereich der Koeffizienten von Potenzreihen, die gegebene Werte nicht annehmen. Mathematische Annalen, 64:95-115, 1907.
Webpage: Homepage of oriented matroids. L Finschi, L. Finschi. Webpage: Homepage of oriented matroids. https://finschi.com/math/om/?p=catom&filter=nondeg.
A graph theoretical approach for reconstruction and generation of oriented matroids. L Finschi, 10.3929/ethz-a-004255224ETH Zürich, SwitzerlandPhD thesisL. Finschi. A graph theoretical approach for reconstruction and generation of oriented matroids. PhD thesis, ETH Zürich, Switzerland, 2001.
Generation of oriented matroids-a graph theoretical approach. L Finschi, K Fukuda, 10.1007/s00454-001-0056-5Discrete & Computational Geometry. 271L. Finschi and K. Fukuda. Generation of oriented matroids-a graph theoretical approach. Discrete & Computational Geometry, 27(1):117-136, 2002.
Oriented matroids. J Folkman, J Lawrence, 10.1016/0095-8956(78)90039-4Journal of Combinatorial Theory, Series B. 252J. Folkman and J. Lawrence. Oriented matroids. Journal of Combinatorial Theory, Series B, 25(2):199-236, 1978.
On counting the k-face cells of cyclic arrangements. D Forge, J L Ramírez Alfonsín, 10.1006/eujc.2000.0462European Journal of Combinatorics. 223D. Forge and J. L. Ramírez Alfonsín. On counting the k-face cells of cyclic arrangements. European Journal of Combinatorics, 22(3):307-312, 2001.
Oriented matroids and multiply ordered sets. J Lawrence, 10.1016/0024-3795(82)90094-5Linear Algebra and its Applications. 48J. Lawrence. Oriented matroids and multiply ordered sets. Linear Algebra and its Applications, 48:1-12, 1982.
The maximum numbers of faces of a convex polytope. P Mcmullen, 10.1112/S0025579300002850Mathematika. 172P. McMullen. The maximum numbers of faces of a convex polytope. Mathematika, 17(2):179-184, 1970.
Roudneff's Conjecture for Lawrence Oriented Matroids. L P Montejano, J L Ramírez Alfonsín, 10.37236/4811Electronic Journal of Combinatorics. 222Paper #P2.3, 4 pagesL. P. Montejano and J. L. Ramírez Alfonsín. Roudneff's Conjecture for Lawrence Oriented Matroids. Electronic Journal of Combinatorics, 22(2):Paper #P2.3, 4 pages, 2015.
Cyclic arrangements and Roudneff's conjecture in the space. J L Ramírez Alfonsín, 10.1016/S0020-0190(99)00115-5Information Processing Letters. 715J. L. Ramírez Alfonsín. Cyclic arrangements and Roudneff's conjecture in the space. Information Processing Letters, 71(5):179-182, 1999.
Cells with many facets in arrangements of hyperplanes. J.-P Roudneff, 10.1016/0012-365X(91)90375-CDiscrete Mathematics. 983J.-P. Roudneff. Cells with many facets in arrangements of hyperplanes. Discrete Mathematics, 98(3):185-191, 1991.
Simplicial cells in arrangements of hyperplanes. R W Shannon, 10.1007/BF00181486Geometriae Dedicata. 82R. W. Shannon. Simplicial cells in arrangements of hyperplanes. Geometriae Dedicata, 8(2):179-187, 1980.
. Av. Països Catalans. 2643007Universitat Rovira i Virgili, Departament d'Enginyeria Informàtica i MatemàtiquesEmail address: [email protected] Rovira i Virgili, Departament d'Enginyeria Informàtica i Matemàtiques, Av. Països Catalans 26, 43007 Tarragona, Spain. Email address: [email protected]
Email address: luispedro.montejano@urv. Serra Húnter Fellow, Av. Països Catalans. 2643007deUniversitat Rovira i Virgili, Departament d'Enginyeria Informàtica i Matemàtiques ; cat Institut für Mathematik, Technische Universität BerlinGermany Email address: [email protected] Húnter Fellow, Universitat Rovira i Virgili, Departament d'Enginyeria Informàtica i Matemàtiques, Av. Països Catalans 26, 43007 Tarragona, Spain. Email address: [email protected] Institut für Mathematik, Technische Universität Berlin, Germany Email address: [email protected]
| [
"https://github.com/manfredscheucher/supplemental-roudneff4."
]
|
[
"Estimating the prevalence of anemia rates among children under five in Peruvian districts with a small sample size",
"Estimating the prevalence of anemia rates among children under five in Peruvian districts with a small sample size"
]
| [
"Anna Sikov [email protected] \nDepartment of Engineering Statistics\nNational Engineering University\n\n\nEconometric Modelling and Data Science Research Group -UNI\n\n",
"José Cerda-Hernández \nDepartment of Engineering Economics\nNational Engineering University\n\n\nEconometric Modelling and Data Science Research Group -UNI\n\n"
]
| [
"Department of Engineering Statistics\nNational Engineering University\n",
"Econometric Modelling and Data Science Research Group -UNI\n",
"Department of Engineering Economics\nNational Engineering University\n",
"Econometric Modelling and Data Science Research Group -UNI\n"
]
| []
| In this paper we attempt to answer the following question: "Is it possible to obtain reliable estimates for the prevalence of anemia rates in children under five years in the districts of Peru?" Specifically, the interest of the present paper is to understand to which extent employing the basic and the spatial Fay-Herriot models can compensate for inadequate sample size in most of the sampled districts, and whether the way of choosing the spatial neighbors has an impact on the resulting inference. Furthermore, it is raised the question of how to choose an optimal way to define the neighbours. We present an illustrative analysis using the data from the Demographic and Family Health Survey of the year 2019, and the National Census carried out in 2017. | 10.1007/s10260-023-00698-x | [
"https://export.arxiv.org/pdf/2208.01593v1.pdf"
]
| 251,253,047 | 2208.01593 | 35312a4bc8cb246dea5f97825fe90ef7b9c68628 |
Estimating the prevalence of anemia rates among children under five in Peruvian districts with a small sample size
Anna Sikov [email protected]
Department of Engineering Statistics
National Engineering University
Econometric Modelling and Data Science Research Group -UNI
José Cerda-Hernández
Department of Engineering Economics
National Engineering University
Econometric Modelling and Data Science Research Group -UNI
Estimating the prevalence of anemia rates among children under five in Peruvian districts with a small sample size
Direct EstimateSpatial AutocorrelationFay-Herriot ModelMean Square ErrorBootstrap
In this paper we attempt to answer the following question: "Is it possible to obtain reliable estimates for the prevalence of anemia rates in children under five years in the districts of Peru?" Specifically, the interest of the present paper is to understand to which extent employing the basic and the spatial Fay-Herriot models can compensate for inadequate sample size in most of the sampled districts, and whether the way of choosing the spatial neighbors has an impact on the resulting inference. Furthermore, it is raised the question of how to choose an optimal way to define the neighbours. We present an illustrative analysis using the data from the Demographic and Family Health Survey of the year 2019, and the National Census carried out in 2017.
Introduction
The prevalence of anemia in young children is an important public health problem. According to the World Health Organization (WHO), anemia is a condition in which the number of red blood cells or the haemoglobin concentration within them is lower than normal, which can cause symptoms such as fatigue, weakness, dizziness and shortness of breath, among others ([Organización Mundial de la Salud. (2011).], [World Health Organization (2004).]). For this reason, reduction of the prevalence of anemia is one of the priorities of the health policies of the Peruvian state. According to "The National Plan for reduction and control of Maternal and Child Anemia and Chronic Child Malnutrition in Peru: 2017-2021", presented by the Ministry of Health, the target level was the reduction to 19% of anemia in children by the end of 2021. Nonetheless, the prevalence of anemia reported in 2018 was still 43.5%, which corresponds to a reduction of 3.3% compared to the rates observed in 2014 ([Ministerio de Salud (2014).], [Ministerio de Salud (2017).]). Evidently, at the current rate of reduction the targeted level of 19% will be attained only by the year 2050. In order to combat the problem of anemia in childhood, the Peruvian Government has implemented various social programs, such as "Vaso de leche", "Juntos" and "Qali Warma", the objective of which is to reduce the prevalence of anemia and malnutrition in childhood. One of the most important aims of these programs is to quantify their impact on the reduction of the prevalence of anemia and malnutrition so as to optimize their costs and benefits (see [Alcázar (2012).] for details). In order to evaluate this impact, good estimates of the percentage of anemic children are needed. However, in the case of Peru, obtaining these estimates typically presents major challenges, since there are many remote districts, especially in mountainous regions, which are generally not included in the sample of the surveys due to logistic problems and limited budget; others have a very small sample size (see Figure 1). We will see below that a possible remedy to this problem would be to use spatial models, which exploit spatial correlations between the neighboring areas. However, populated areas in Peru are mostly located in mountainous regions, and therefore their location can be represented by three coordinates (longitude, latitude and altitude), in contrast to the proposed methods in the literature that use only the first two coordinates. Another problem is that application of the spatial Fay-Herriot model requires a definition of the spatial neighbors, which is completely subjective. In this study we address the question: "Is it possible to obtain reliable estimates for the prevalence of anemia rates in children under five years in the districts of Peru?" in the presence of the above-mentioned problems.
In this article we utilize the two following sources of data: 1-the data provided by the Demographic and Health Survey-the ENDES, carried out by the National Institute of Statistics and Informatics in 2019 ([INEI, Perú (2019).]) and 2-the data obtained from the national census, carried out in 2017. The main objective of national surveys like the ENDES is to describe some selected population characteristics such as health, employment and unemployment, education, household income and expenses, poverty, etc. However, one of the common problems of these surveys is that their corresponding sampling design is usually more appropriate for representing characteristics of the entire population, or of large subgroups, such as urban or rural population, the population of major geographical regions, etc. Nonetheless, as noted by [Rao and Molina (2015).], more and more policy makers are demanding estimates for small domains to use them in the elaboration of policy decisions. In the case of the ENDES, inference at more disaggregated levels, such as provinces or districts, is generally not reliable, since at these levels the areas may have small or null sample size. Namely, some of the areas of interest are usually not included in the sample, while the others do not have a sufficient number of observations in order to provide reliable direct estimates, based only on the area-specific sample data. As noted previously, in the case of Peru, the problem is even more pronounced due to limited logistics support and resources. For instance, in the Puno region, data regarding the prevalence of anemia is available for only 34.5% of the districts. Furthermore, 65.8% of these districts have less than 10 observations. In order to solve the problem of small sample sizes, governmental entities like the statistical office of the European Union, the United States Census Bureau and many others utilize the basic Fay-Herriot model [Fay and Herriot (1979).], which is an area-level model (district-level in our case). Based on this approach, the area-level predictions are constructed as a linear combination of standard design-based estimates and indirect model-dependent estimates, where the corresponding regression model incorporates the auxiliary information, which is generally available from the census, administrative records or some other source of data, thus "borrowing strength" across other areas. Thereby, the basic Fay-Herriot model allows the areas to be linked through the vector of the regression coefficients, compensating for their small sample sizes. The variation, which is not explained by the auxiliary variables, is accounted for by the corresponding area-specific random effects. In the case of the basic Fay-Herriot model, these effects are assumed to be independent. A limitation of the basic model is that it is not designed to handle data that exhibit spatial dependence [Moran (1950).] between the areas, which is the typical problem arising in the data collected from socio-economic surveys like the ENDES. In such situations, many authors (see for example, [Cressie (1993).], [Marhuenda, Molina and Morales (2013).], [Petrucci and Salvati (2006).], .], [Singh, Shukla and Kundu (2005).]) advocate the use of the natural extension of the basic model: the spatial Fay-Herriot model, which incorporates the information about geographical proximity of the areas which, in turn, is utilized to determine the covariance structure of the random effects of the spatially linked areas.
More specifically, the random effects are modelled by a simultaneously autoregressive model (SAR), which is characterized by a spatial autoregressive coefficient and a proximity matrix (see [Anselin (1992).], [Banerjee, Carlin and Gelfand (2004).] and [Cressie (1993).] for more details). In this way, the expected value of a random effect of a specific area is defined as a linear combination of random effects of the neighboring areas. A drawback of this model is that it contains some degree of subjectivity, since it depends on the definition of the neighbours, which is apparently not unique. In addition, it should be noted that including spatial correlation into the model will not result in considerable gain in efficiency if this correlation is not substantially strong ([Pfeffermann (2002).]).
In order to predict the area-specific characteristic of interest, Fay and Herriot (1979) develop the Best Linear Unbiased Predictor (BLUP). As mentioned above, this predictor constitutes a composite estimator, which is derived as the weighted average of the direct area-specific estimator and a corresponding synthetic regression estimator. However, the BLUP can only be obtained if the variances of the random area-specific effects are known. In real applications, this is not always the case. If the variances are unknown, they are substituted by their corresponding estimates, obtained by maximum likelihood, restricted maximum likelihood or by a method of moments ([Fay and Herriot (1979).], [Kackar and Harville (1984).], [Prasad and Rao (1990).], [Rao and Molina (2015).]). The resulting predictor is the empirical BLUP (EBLUP) ([Fay and Herriot (1979).]). In the case of a spatial Fay-Herriot model, a Spatial Best Linear Unbiased Predictor (SBLUP) is used (see .] for details). Replacing the unknown variance and autoregressive parameters by their corresponding estimates in the SBLUP leads to the empirical SBLUP (SEBLUP).
In this article we apply the basic and the spatial Fay-Herriot model in order to predict the percentage of anemic children under 5 years in the districts of Peru. Our main interest is to compare and to evaluate the performance of the district-level predictors EBLUP and SEBLUP of the prevalence of anemia rates in the situation where the sampling design is inadequate in the sense that most districts are either not sampled or have a very small sample size, which is a typical problem in emerging and developing countries. As already mentioned, application of the spatial Fay-Herriot model is associated with some degree of subjectivity, introduced by the definition of the neighbors. In order to address this issue we conduct a sensitivity analysis of the results to various definitions of the neighbours (see Section 4.4). This analysis is helpful to define the optimal choice of the neighbors. Another complication that arises in our case is that each district has an additional dimension, namely the altitude. In Section 4 we consider how this additional coordinate can be incorporated into the definition of the neighboring districts. Next, we compute the mean square error for the aforementioned predictors. In the case of the basic Fay-Herriot model, we use the Prasad and Rao estimate [Prasad and Rao (1990).] for the mean square error, and in the case of the spatial Fay-Herriot model we implement the parametric and non-parametric bootstrap developed in [Molina, Salvati and Pratesi (2009).].
The rest of the paper is organized as follows. In Section 2 the basic and the spatial Fay-Herriot models are presented. In Section 3 we briefly describe the problem of estimation of the MSE and provide some references to the most important works in this area. Section 4 illustrates a real data application. In this section the problem of subjectivity of the choice of neighboring areas, as well as the three-dimensional-coordinates problem, are addressed. Finally, Section 5 provides some conclusions.
Small Area Estimation Models
Basic Fay-Herriot model
Let $Y_i$ denote the direct area-level estimate of the characteristic of interest in the $i$th area, where $i = 1, \ldots, D$ and $D$ is the total number of the areas with available data, and $\theta_i$ denotes the corresponding true value of this characteristic. We suppose that $Y_i$ is design unbiased for $\theta_i$. Denote by $X_i = (x_{i1}, \ldots, x_{ip})$ the vector of $p$ auxiliary area-level covariates, which can usually be obtained from census or administrative sources. Then, the Fay-Herriot model is defined as follows
$Y_i = \theta_i + e_i; \qquad \theta_i = X_i\beta + u_i, \qquad (2.1)$
Here $e_i \sim N(0, \sigma^2_i)$ are the errors of the direct estimates and $u_i \sim N(0, \sigma^2_u)$ are the area-level random effects that represent the variability of the $\theta_i$'s that is not explained by auxiliary variables, where $\mathrm{cov}(e_i, e_j) = \mathrm{cov}(u_i, u_j) = 0$ if $i \neq j$ and $\mathrm{cov}(e_i, u_j) = 0$ $\forall i, j$; $\beta$ is the vector of the coefficients that expresses the association between $\theta = (\theta_1, \ldots, \theta_D)^t$ and $X = (X_1, \ldots, X_D)^t$. It is assumed that the sampling error variances $\sigma^2_i$ are known. This assumption is customary, since the design variance of the sampling errors can usually be estimated from the observed data. Note that the coefficients $\beta$ do not depend on the area. Specifically, the association between $X_i$ and $\theta_i$ is the same for all the areas, and hence the model-based estimate for the characteristic of interest in the $i$th area will incorporate the information about the other areas through the vector of coefficients $\beta$.
The model (2.1) can be rewritten as follows:
$Y = X\beta + u + e, \qquad (2.2)$
where $Y = (Y_1, \ldots, Y_D)^t$, $u = (u_1, \ldots, u_D)^t \sim N(0, \Sigma_u)$, $e = (e_1, \ldots, e_D)^t \sim N(0, \Sigma_e)$, such that $\Sigma_u = \sigma^2_u I_D$ and $[\Sigma_e]_{ij} = \sigma^2_i I_{(i=j)}$, $i, j = 1, \ldots, D$.
If σ 2 u is known, θ i , i = 1, ..., D can be estimated using the Best Linear Unbiased Predictor (BLUP), developed in [Fay and Herriot (1979).], as follows.
$\hat\theta_i^{BLUP}(\sigma^2_u) = X_i\hat\beta(\sigma^2_u) + \hat u_i(\sigma^2_u), \qquad (2.3)$
Here,
$\hat\beta(\sigma^2_u) = \left(X^t V(\sigma^2_u)^{-1} X\right)^{-1} X^t V(\sigma^2_u)^{-1} Y, \qquad (2.4)$
$\hat u_i(\sigma^2_u) = E(u_i \mid Y_i) = \gamma_i(\sigma^2_u)\left(Y_i - X_i\hat\beta(\sigma^2_u)\right), \qquad (2.5)$
where $V(\sigma^2_u) = \mathrm{Var}(u + e) = \Sigma_u + \Sigma_e$ and $\gamma_i(\sigma^2_u) = \dfrac{\sigma^2_u}{\sigma^2_i + \sigma^2_u}$.
Alternatively, the predictor (2.3) can be presented as
$\hat\theta_i^{BLUP}(\sigma^2_u) = \gamma_i(\sigma^2_u)\,Y_i + \left(1 - \gamma_i(\sigma^2_u)\right)X_i\hat\beta(\sigma^2_u). \qquad (2.6)$
Note that the predictor (2.6) constitutes a convex combination of the direct estimate Y i and the model-based estimate X iβ . Clearly, if the ith area does not have available data, its corresponding value of γ i is equal to zero, and therefore the prediction of θ i for this area is equal to the model-based estimator.
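As a minimal illustration of how the shrinkage predictor is computed in practice, the NumPy sketch below (with variable names of our own choosing, not code from the paper) evaluates the GLS estimate (2.4), the weights $\gamma_i$ and the predictor (2.6) for given direct estimates, covariates, known sampling variances and a value of $\sigma^2_u$.

```python
import numpy as np

def eblup(Y, X, sigma2_i, sigma2_u):
    """Fay-Herriot predictor: gamma_i * Y_i + (1 - gamma_i) * X_i beta_hat, cf. (2.6)."""
    V_inv = np.diag(1.0 / (sigma2_i + sigma2_u))                     # V = diag(sigma2_i + sigma2_u)
    beta_hat = np.linalg.solve(X.T @ V_inv @ X, X.T @ V_inv @ Y)     # GLS estimate, cf. (2.4)
    gamma = sigma2_u / (sigma2_i + sigma2_u)                         # area-specific shrinkage weights
    return gamma * Y + (1.0 - gamma) * (X @ beta_hat), beta_hat, gamma
```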
In most real data applications, the value of the parameter σ 2 u is unknown. In this case, σ 2 u can be estimated by means of maximum likelihood (ML), restricted maximum likelihood (REML), the method of moments, developed by Prasad and Rao (1990) for the Fay-Herriot model (see [Prasad and Rao (1990).]), or the method, proposed by Fay and Herriot (see [Fay and Herriot (1979).] for details).
The log-likelihood function is obtained as
$l_{ML}(\beta, \sigma^2_u) = c - \frac{1}{2}\log|V| - \frac{1}{2}(Y - X\beta)^t V^{-1}(Y - X\beta), \qquad (2.7)$
where $c$ is some constant and $V = V(\sigma^2_u)$. This function is maximized with respect to $\sigma^2_u$, whereas the parameters $\beta$ are estimated as in (2.4).
$l_{REML}(\sigma^2_u) = c - \frac{1}{2}\log|V| - \frac{1}{2}\log|X^t V^{-1} X| - \frac{1}{2} Y^t P Y, \qquad (2.8)$
where $c$ is some constant, $V = V(\sigma^2_u)$ and $P = V^{-1} - V^{-1}X(X^t V^{-1}X)^{-1}X^t V^{-1}$.
Contrary to the ML, the REML takes into account the loss of degrees of freedom due to estimation of the parameters β, and consequently, it is advantageous in the case of small sample sizes ( [Molina, Salvati and Pratesi (2009).], [Rao (2003).], [Rao and Molina (2015).]).
The method of moments estimate for σ 2 u can be obtained as
$\tilde\sigma^2_u = \frac{1}{D - p}\sum_{i=1}^{D}\left[\left(Y_i - X_i\hat\beta_{OLS}\right)^2 - \sigma^2_i(1 - h_i)\right], \qquad (2.9)$
where $\hat\beta_{OLS} = (X^t X)^{-1}X^t Y$, $h_i = X_i(X^t X)^{-1}X_i^t$ and $p$ is the number of auxiliary area-level covariates in the model (2.1). However, since $\tilde\sigma^2_u$ can take a negative value, the estimate for $\sigma^2_u$ is given by
$\hat\sigma^2_u = \max\{0, \tilde\sigma^2_u\}. \qquad (2.10)$
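A sketch of the moment estimator (2.9)-(2.10) is given below; the routine is our own illustration, it assumes the design matrix has full column rank, and the names are not taken from the paper's code.

```python
import numpy as np

def sigma2_u_moment(Y, X, sigma2_i):
    """Prasad-Rao type moment estimator of the random-effect variance, cf. (2.9)-(2.10)."""
    D, p = X.shape
    H = X @ np.linalg.solve(X.T @ X, X.T)          # hat matrix; h_i are its diagonal entries
    resid = Y - H @ Y                              # OLS residuals Y_i - X_i beta_OLS
    tilde = np.sum(resid**2 - sigma2_i * (1.0 - np.diag(H))) / (D - p)
    return max(0.0, tilde)                         # truncate at zero as in (2.10)
```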
The estimate, proposed by Fay and Herriot (1979) (see [Fay and Herriot (1979).]) is derived as an iterative solution of the equation
$\sum_{i=1}^{D}\frac{(Y_i - X_i\beta^*)^2}{\sigma^{*2}_u + \sigma^2_i} = D - p \qquad (2.11)$
where β * is obtained from (2.4).
It is important to emphasize that all mentioned estimates for $\sigma^2_u$ are translation invariant, that is, have the following properties (see [Kackar and Harville (1984).] for more details):
1. $\hat\sigma^2_u(Y) = \hat\sigma^2_u(-Y)$;
2. $\hat\sigma^2_u(Y - Xa) = \hat\sigma^2_u(Y)$, $\forall a \in R^p$ and $\forall Y$.
Kackar and Harville (1984) [Kackar and Harville (1984).] show that the empirical BLUP $\hat\theta_i^{EBLUP}$, which is defined in [Fay and Herriot (1979).] as
$\hat\theta_i^{EBLUP}(\hat\sigma^2_u) = \gamma_i(\hat\sigma^2_u)\,Y_i + \left(1 - \gamma_i(\hat\sigma^2_u)\right)X_i\hat\beta(\hat\sigma^2_u), \qquad (2.12)$
is unbiased for $\theta_i$ if a consistent estimate $\hat\sigma^2_u$ is translation invariant. As discussed previously, if the data present strong spatial correlations, a spatial Fay-Herriot model is a natural way to proceed. This model is described in the following subsection.
Spatial Fay-Herriot Model
The spatial Fay-Herriot model is defined as follows (see .] for more details):
$Y = X\beta + u + e; \qquad u = \rho W u + \epsilon, \qquad (2.13)$
where $\epsilon = (\epsilon_1, \ldots, \epsilon_D)^t \sim N(0, \Sigma_\epsilon)$ such that $\Sigma_\epsilon = \sigma^2_\epsilon I$, $\rho$ is the spatial autoregressive coefficient (see [Banerjee, Carlin and Gelfand (2004).], [Cressie and Chan (1989).] and [Cressie (1993).]), and $W$ is a matrix of non-negative spatial weights, the elements $w_{ij}$ of which define the spatial measure of proximity between the areas $i$ and $j$, such that $w_{ii} = 0$ and $\sum_{j=1}^{D} w_{ij} = 1$ for all $i = 1, \ldots, D$. As noted above, the weights $w_{ij}$ can be defined in a variety of ways. Typically, $w_{ij}$ depend on the definition of the neighbouring areas. However, it must be noted that it is hard to formulate specific criteria to choose the "best" definition. Here we present a few common approaches to define neighboring areas of a specific area $i$ (the interested readers can refer to [Anselin (1992).] and [Cressie (1993).] for more details).
1. Those areas, whose distance between their corresponding centroids and the centroid of the area of interest is within L miles. For example, [Cressie and Chan (1989).] define two areas as neighbours if the distance between their centroids is within 30 miles.
2. The k nearest areas to the area of interest.
3. Areas that share a common boundary with the area of interest.
Clearly, it is important to use caution when defining the neighbors, since different definitions may produce different results. Now, the model (2.13) can be written as:
$Y = X\beta + (I - \rho W)^{-1}\epsilon + e = X\beta + \nu, \qquad \nu \sim N(0, G), \qquad (2.14)$
where
$G = \sigma^2_\epsilon\left[(I - \rho W)^t(I - \rho W)\right]^{-1} + \Sigma_e = \Omega + \Sigma_e.$
Note that the matrix G exists only if (I − ρW ) is non-singular.
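The sketch below illustrates one concrete way to build a row-standardised proximity matrix $W$ (here from the $k$ nearest areas, using plain Euclidean distance between centroid coordinates as a simplification) and the implied covariance matrix $G$ of (2.14). It is our own illustration of the construction described in this subsection, not code taken from the paper.

```python
import numpy as np

def knn_weights(coords, k):
    """Row-standardised W: w_ij = 1/k for the k nearest areas, 0 otherwise (and w_ii = 0)."""
    D = len(coords)
    W = np.zeros((D, D))
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    for i in range(D):
        nearest = np.argsort(dist[i])[1:k + 1]     # skip the area itself (distance 0)
        W[i, nearest] = 1.0 / k
    return W

def sar_covariance(W, rho, sigma2_eps, sigma2_i):
    """G = sigma2_eps * [(I - rho W)^t (I - rho W)]^{-1} + Sigma_e, cf. (2.14)."""
    D = W.shape[0]
    A = np.eye(D) - rho * W
    Omega = sigma2_eps * np.linalg.inv(A.T @ A)    # covariance of the spatially correlated effects
    return Omega + np.diag(sigma2_i), Omega
```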
Next, let $\phi = (\sigma^2_\epsilon, \rho)$ index the unknown model parameters, and $b_i = (0, \ldots, 0, 1, 0, \ldots, 0)^t$ be a $D$-dimensional vector with value 1 in the $i$th position and 0 in all other positions. Therefore, the spatial BLUP (SBLUP) for $\theta_i$ is obtained as:
$\hat\theta_i^{SBLUP}(\phi) = X_i\hat\beta(\phi) + \hat u_i(\phi), \qquad (2.15)$
where
$\hat\beta(\phi) = \left(X^t [G(\phi)]^{-1} X\right)^{-1} X^t [G(\phi)]^{-1} Y \qquad (2.16)$
and
$\hat u_i(\phi) = b_i^t\,\Omega^t(\phi)\,[G(\phi)]^{-1}\left(Y - X\hat\beta(\phi)\right). \qquad (2.17)$
The estimates of the unknown parameters φ can be obtained using ML or REML, where the covariance matrix V in (2.7) or (2.8) is replaced by the matrix G(φ). Molina, Salvati and Pratesi (2009) [Molina, Salvati and Pratesi (2009).] warn about possible numeric problems, associated with optimization of the functions (2.7) and (2.8) in this case.
Replacing the parameters $\phi$ with their corresponding estimates $\hat\phi$ in (2.16) and in (2.17), we obtain the empirical SBLUP (SEBLUP) for $\theta_i$, which is given by
$\hat\theta_i^{SEBLUP}(\hat\phi) = X_i\hat\beta(\hat\phi) + \hat u_i(\hat\phi). \qquad (2.18)$
The estimate (2.18) is unbiased for θ i ifσ 2 andρ are derived using ML or REML (see [Kackar and Harville (1984).] for more details).
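Given estimated values of $(\sigma^2_\epsilon, \rho)$, the SEBLUP of (2.15)-(2.18) can be evaluated directly. The following sketch (our own notation and names) does so for all areas at once.

```python
import numpy as np

def seblup(Y, X, sigma2_i, sigma2_eps, rho, W):
    """Spatial EBLUP, cf. (2.15)-(2.18): X beta_hat + Omega^t G^{-1} (Y - X beta_hat)."""
    D = len(Y)
    A = np.eye(D) - rho * W
    Omega = sigma2_eps * np.linalg.inv(A.T @ A)
    G = Omega + np.diag(sigma2_i)
    G_inv = np.linalg.inv(G)
    beta_hat = np.linalg.solve(X.T @ G_inv @ X, X.T @ G_inv @ Y)   # GLS estimate, cf. (2.16)
    u_hat = Omega.T @ G_inv @ (Y - X @ beta_hat)                    # predicted effects, cf. (2.17)
    return X @ beta_hat + u_hat
```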
Estimation of the Mean Square Error of EBLUP and SEBLUP
In real applications, a natural question of interest is how to estimate the mean square error (MSE) of the predictors (2.12) and (2.18). In this section we present a brief review of the main estimation methods that have been proposed in the literature to address this problem. We start with analyzing the MSE of the BLUP (2.3). It can be easily shown that
$MSE\left(\hat\theta_i^{BLUP}(\sigma^2_u)\right) = \gamma_i(\sigma^2_u)\,\sigma^2_i + \left(1 - \gamma_i(\sigma^2_u)\right)^2 X_i\,\mathrm{Var}\!\left(\hat\beta(\sigma^2_u)\right)X_i^t = g_{1i}(\sigma^2_u) + g_{2i}(\sigma^2_u), \qquad (3.1)$
where $X_i$ is the $i$th row of the matrix $X$ and $\hat\beta(\sigma^2_u)$ is the estimate for $\beta$ defined in (2.4). Note that the component $g_{1i}(\sigma^2_u)$ corresponds to the sampling error, whereas $g_{2i}(\sigma^2_u)$ expresses the error associated with estimation of the parameters $\beta$. It is important to emphasize that $g_{1i}(\sigma^2_u) = O(1)$ and $g_{2i}(\sigma^2_u) = O(1/D)$, and therefore if the total number of areas $D$ is large, $MSE(\hat\theta_i^{BLUP}(\sigma^2_u)) \approx g_{1i}(\sigma^2_u)$. Obviously, $g_{1i}(\sigma^2_u)$ is smaller than $\sigma^2_i$, which is the MSE of the direct estimate. In fact, $g_{1i}(\sigma^2_u)$ is substantially smaller than $\sigma^2_i$ if the value of $\sigma^2_u$ is small, which occurs when good covariate information is available. The estimate for the MSE defined in (3.1) is obtained by replacing $\sigma^2_u$ with the estimate $\hat\sigma^2_u$, as follows.
$mse\left(\hat\theta_i^{BLUP}(\hat\sigma^2_u)\right) = g_{1i}(\hat\sigma^2_u) + g_{2i}(\hat\sigma^2_u), \qquad (3.2)$
It should be noticed that (3.1) and (3.2) do not account for the error associated with the estimation of the parameter σ 2 u . It can be demonstrated that (see [Kackar and Harville (1984).] and [Harville and Jeske (1992).]) if the sampling errors and the area-level random effects have a normal distribution, and the estimate for σ 2 u is translation invariant, the MSE can be decomposed as:
$MSE\left(\hat\theta_i^{EBLUP}(\hat\sigma^2_u)\right) = MSE\left(\hat\theta_i^{BLUP}(\sigma^2_u)\right) + E\left[\hat\theta_i^{EBLUP}(\hat\sigma^2_u) - \hat\theta_i^{BLUP}(\sigma^2_u)\right]^2 \qquad (3.3)$
The second term in the expression (3.3) represents the additional error which is the result of the estimation of the parameter $\sigma^2_u$. Contrary to the first term, the second term cannot be expressed analytically, and therefore can only be obtained by approximation. If $\sigma^2_u$ is estimated by the method of moments, defined in (2.9) and (2.10), the MSE of $\hat\theta_i^{EBLUP}$ can be approximated utilizing the method proposed by Prasad and Rao [Prasad and Rao (1990).], as follows:
$MSE\left(\hat\theta_i^{EBLUP}(\hat\sigma^2_u)\right) \approx g_{1i}(\sigma^2_u) + g_{2i}(\sigma^2_u) + \mathrm{Var}(\hat\sigma^2_u)\,g_{3i}(\sigma^2_u), \qquad (3.4)$
where $g_{3i}(\sigma^2_u) = \dfrac{(\sigma^2_i)^2}{(\sigma^2_i + \sigma^2_u)^3}$ and $\mathrm{Var}(\hat\sigma^2_u) \approx \dfrac{1}{2D^2}\sum_{i=1}^{D}\left(\sigma^2_i + \sigma^2_u\right)^2$.
The authors demonstrate that in this case the estimate for the MSE can be obtained as
$mse\left(\hat\theta_i^{EBLUP}(\hat\sigma^2_u)\right) = g_{1i}(\hat\sigma^2_u) + g_{2i}(\hat\sigma^2_u) + 2\,\mathrm{Var}(\hat\sigma^2_u)\,g_{3i}(\hat\sigma^2_u), \qquad (3.5)$
and that the proposed estimate has a bias of order $o(1/D)$.
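The following sketch evaluates the estimate (3.5) for all areas, using the expressions for $g_{1i}$, $g_{2i}$, $g_{3i}$ and the approximation of $\mathrm{Var}(\hat\sigma^2_u)$ displayed after (3.4); the names are ours and the routine is an illustration rather than the authors' code.

```python
import numpy as np

def mse_prasad_rao(X, sigma2_i, sigma2_u_hat):
    """MSE estimate (3.5) for the EBLUP with the moment estimator of sigma2_u plugged in."""
    D = len(sigma2_i)
    v = sigma2_i + sigma2_u_hat
    gamma = sigma2_u_hat / v
    g1 = gamma * sigma2_i                                        # leading term g_1i
    XtVinvX_inv = np.linalg.inv(X.T @ np.diag(1.0 / v) @ X)      # Var(beta_hat) under GLS
    g2 = (1.0 - gamma)**2 * np.einsum('ij,jk,ik->i', X, XtVinvX_inv, X)
    g3 = sigma2_i**2 / v**3
    var_sigma2_u = np.sum(v**2) / (2.0 * D**2)                   # approximation quoted after (3.4)
    return g1 + g2 + 2.0 * var_sigma2_u * g3
```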
In [Datta, Rao and Smith (2005).] the authors develop the estimate for the MSE of θ EBLU P i in the case where σ 2 u is estimated by (2.11), as follows
$mse\left(\hat\theta_i^{EBLUP}(\hat\sigma^2_u)\right) = g_{1i}(\hat\sigma^2_u) + g_{2i}(\hat\sigma^2_u) + 2\,\mathrm{Var}(\hat\sigma^2_u)\,g_{3i}(\hat\sigma^2_u) - g_{4i}(\hat\sigma^2_u), \qquad (3.6)$
where
$g_{4i}(\hat\sigma^2_u) = 2\left(1 - \gamma_i(\hat\sigma^2_u)\right)^2\left[D\sum_{i=1}^{D}\frac{1}{(\sigma^2_i + \hat\sigma^2_u)^2} - \left(\sum_{i=1}^{D}\frac{1}{\sigma^2_i + \hat\sigma^2_u}\right)^2\right]\left(\sum_{i=1}^{D}\frac{1}{\sigma^2_i + \hat\sigma^2_u}\right)^{-3}.$
The order of the bias of the estimate (3.6) is $o(1/D)$.
If $\hat\sigma^2_u$ is obtained using the method of ML or REML, the MSE of $\hat\theta_i^{EBLUP}$ can be estimated utilizing the approximation developed in [Datta and Lahiri (2000).]. As in the previous cases the order of the bias of the proposed estimate is $o(1/D)$.
Alternatively, the MSE can be estimated with the same order of the bias utilizing resampling methods, such as the bootstrap and jackknife (see [Chen and Lahiri (2003).], [Hall and Maiti (2006).] and [Jiang, Lahiri and Wan (2002).] among many others).
If the spatial Fay-Herriot model is used, an additional parameter ρ is to be estimated. As noted previously, unknown parameters φ = (σ 2 u , ρ) can be estimated using the method of ML or REML. As in the previous case the MSE of θ SEBLU P i can be decomposed as ( [Molina, Salvati and Pratesi (2009).], and [Singh, Shukla and Kundu (2005).]):
$MSE\left(\hat\theta_i^{SEBLUP}(\hat\phi)\right) = MSE\left(\hat\theta_i^{SBLUP}(\phi)\right) + E\left[\hat\theta_i^{SEBLUP}(\hat\phi) - \hat\theta_i^{SBLUP}(\phi)\right]^2 = g_{1i}(\phi) + g_{2i}(\phi) + g_{3i}(\phi), \qquad (3.7)$
where the term g 1i (φ) represents the error produced by the estimation of the random effects and has the order O(1), and the term g 2i (φ) represents the error produced by the estimation of the parameters β and it is of the order O 1 D (see [Singh, Shukla and Kundu (2005).]). If the parameters φ are estimated by means of REML, the estimate for the MSE is approximately unbiased and is given by
mse(θ SEBLU P i (φ)) ≈ g 1i (φ) + g 2i (φ) + 2g 3i (φ) (3.8)
If ML is used for estimation of φ, the expression for the estimate of the MSE includes an extra term, which corrects for the additional bias of g 1i (φ) (see [Molina, Salvati and Pratesi (2009).], .], .], [Singh, Shukla and Kundu (2005).] for details).
The expressions of g 1i (φ) and g 2i (φ) can be obtained analytically (computational details can be found in [Singh, Shukla and Kundu (2005).]), whereas for the term g 3i (φ) which represents the error due to estimating the parameters φ, no analytic form can be derived. In .] the authors propose a heuristic aproximation for g 3i (φ). Alternatively, a bootstrap method can be adopted in order to estimate g 3i (φ). Here, we present the parametric bootstrap, proposed by [Molina, Salvati and Pratesi (2009) 4. For each bootstrap sample, Y b , b = 1, ..., B, computeθ SBLU P,b (φ) and θ SEBLU P,b (φ b ) as:
$\hat\theta^{SBLUP,b}(\hat\phi) = X\hat\beta^b(\hat\phi) + \Omega^t(\hat\phi)[G(\hat\phi)]^{-1}\left(Y^b - X\hat\beta^b(\hat\phi)\right)$ and $\hat\theta^{SEBLUP,b}(\hat\phi^b) = X\hat\beta^b(\hat\phi^b) + \Omega^t(\hat\phi^b)[G(\hat\phi^b)]^{-1}\left(Y^b - X\hat\beta^b(\hat\phi^b)\right)$
5. Now, the bootstrap estimate for $g_{3i}(\hat\phi)$ is given by
$g^{PB}_{3i}(\hat\phi) = \frac{1}{B}\sum_{b=1}^{B}\left(\hat\theta_i^{SEBLUP,b}(\hat\phi^b) - \hat\theta_i^{SBLUP,b}(\hat\phi)\right)^2$
Another estimate for the MSE of the SEBLUP (2.18) was developed in [Pfeffermann and Tiller (2005).] and it is computed as
$mse\left(\hat\theta_i^{SEBLUP}(\hat\phi)\right) = 2\left(g_{1i}(\hat\phi) + g_{2i}(\hat\phi)\right) - \frac{1}{B}\sum_{b=1}^{B}\left(g_{1i}(\hat\phi^b) + g_{2i}(\hat\phi^b)\right) + g^{PB}_{3i}(\hat\phi) \qquad (3.9)$
Analogously, one can use a non-parametric bootstrap, developed in [Molina, Salvati and Pratesi (2009).]. In this case, the bootstrap random effects and the sampling errors are drawn from the empirical distribution of the predicted random effects and from the model residuals, respectively. As noted by the authors, this method avoids the need of distributional assumptions and therefore, it is expected to be more robust to non-normality of any of the random components of the model.
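The parametric bootstrap can be sketched in code as follows. The bootstrap data-generating steps that precede step 4 are not reproduced in the text above, so the sketch fills them in in a standard way (drawing the random effects from $N(0, \Omega(\hat\phi))$ and the sampling errors from $N(0, \Sigma_e)$); this is our assumption rather than a quotation of the original procedure. The argument fit_phi stands for whatever REML (or ML) routine is used to re-estimate $\hat\phi^b$ from a bootstrap sample and is deliberately left to the user.

```python
import numpy as np

def g3_parametric_bootstrap(Y_obs, X, sigma2_i, W, phi_hat, fit_phi, B=200, rng=None):
    """Bootstrap estimate of g_3i, cf. steps 4-5 above; phi_hat = (sigma2_eps_hat, rho_hat).

    fit_phi(Y_b, X, sigma2_i, W) must return the re-estimated pair (sigma2_eps, rho)
    for a bootstrap sample; it is a user-supplied fitting routine (not shown here).
    """
    rng = np.random.default_rng(rng)
    D = len(sigma2_i)
    sigma2_eps, rho = phi_hat
    A = np.eye(D) - rho * W
    Omega = sigma2_eps * np.linalg.inv(A.T @ A)
    G_inv = np.linalg.inv(Omega + np.diag(sigma2_i))
    beta_hat = np.linalg.solve(X.T @ G_inv @ X, X.T @ G_inv @ Y_obs)
    diffs = np.zeros((B, D))
    for b in range(B):
        u_b = rng.multivariate_normal(np.zeros(D), Omega)          # spatially correlated effects (assumed step)
        e_b = rng.normal(0.0, np.sqrt(sigma2_i))                   # sampling errors (assumed step)
        Y_b = X @ beta_hat + u_b + e_b                             # bootstrap data (assumed step)
        beta_b_at_hat = np.linalg.solve(X.T @ G_inv @ X, X.T @ G_inv @ Y_b)
        sblup_b = X @ beta_b_at_hat + Omega.T @ G_inv @ (Y_b - X @ beta_b_at_hat)   # step 4, at phi_hat
        s2_b, rho_b = fit_phi(Y_b, X, sigma2_i, W)                 # re-estimate phi from the bootstrap sample
        A_b = np.eye(D) - rho_b * W
        Omega_b = s2_b * np.linalg.inv(A_b.T @ A_b)
        Gb_inv = np.linalg.inv(Omega_b + np.diag(sigma2_i))
        beta_b = np.linalg.solve(X.T @ Gb_inv @ X, X.T @ Gb_inv @ Y_b)
        seblup_b = X @ beta_b + Omega_b.T @ Gb_inv @ (Y_b - X @ beta_b)              # step 4, at phi_hat^b
        diffs[b] = seblup_b - sblup_b
    return (diffs**2).mean(axis=0)                                 # step 5: g_3i^PB
```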
A Case Study
Objectives of the study
In this section we illustrate and study the performance of the basic and spatial Fay-Herriot models using data collected as part of the Demographic and Health Survey-ENDES, carried out by the National Institute of Statistics and Informatics in 2019. The survey collects information on the topics such as anemia, nutrition, education, domestic violence among many others. The sampling units in this survey are households, which were sampled by a two-stage sampling design: at the first stage, a sample of localities was selected; at the second stage, a sample of dwellings was chosen within each of the selected localities. A household is defined as a group of people living in the same dwelling and sharing the same budget for food expenditure. In this study we focus on modeling the prevalence of anemia rates in children under five, per district. As it has been pointed out previously, these estimates are unreliable for most of the sampled districts. Our main aim is to study gain in precision of the estimates obtained by employing the aforementioned models. Specifically, we focus on the following two points. First, we address the question of choosing the neighbor criterion to be used. Second, we compare the MSE and the coefficient of variation of the predictors EBLUP and SEBLUP obtained by application of the basic and the spatial Fay-Herriot model, respectively. The auxiliary covariates used in the model are the Application of the basic (2.1) and the spatial (2.14) Fay-Herriot models to all the districts with available direct estimates resulted in a very poor fit. In order to remedy this problem, we divided all the districts into the following three groups: 1the districts, where less than 30% of the population live in poverty (a total of 585 districts, 281 sampled districts), 2-the districts where 30%-55% of the population live in poverty (a total of 671 districts, 297 sampled districts) and 3-the districts where more than 55% of the population live in poverty (a total of 618 districts, 234 sampled districts), and fit the aforementioned models in each of the specified groups separately. Next, we compare the estimators for the MSE of the EBLUP and SEBLUP, defined by (2.12) and (2.18) correspondingly. In the case of the EBLUP, we utilize the estimator proposed by [Prasad and Rao (1990).], defined in (3.4). In order to obtain the estimator for the MSE of the SEBLUP we use the parametric and non-parametric bootstrap, proposed in [Molina, Salvati and Pratesi (2009).].
Definition of the neighboring districts
In what follows the neighbors of a specific district are defined in two steps. In the first step, $K_1$ nearest neighbors are chosen, using the districts' latitude and longitude, where $K_1 = 3, 4, ..., 10$. It should be noted that the other two ways to define the neighbors, mentioned in Section 2.2, are inapplicable in our case due to the large number of nonsampled districts. In the second step we use the difference in altitude as the measure of proximity between each of the $K_1$ previously selected districts and the district of interest. In this step we choose the $K_2 \le K_1$ "closest" districts. The spatial weight of each of the $K_2$ districts selected in the second step is equal to $1/K_2$, while the spatial weights of all other districts are equal to 0. For this study we use $K_2 = 1, ..., K_1$. Then, for each pair $(K_1, K_2)$ we analyze the fit of the spatial Fay-Herriot model. Specifically, we study the behavior of the estimator for the variance of the model errors, $\hat{\sigma}^2_{\varepsilon}$, as a function of $(K_1, K_2)$. The optimal definition of the neighbors then corresponds to the values of $K_1$ and $K_2$ which result in the smallest value of $\hat{\sigma}^2_{\varepsilon}$. It should be noted that in the second step the proximity (or similarity) between the neighboring districts can be expressed using other variables, for example, the poverty level or the human development index of the district. This additional information can be potentially useful, especially in the case where many areas have a small or very small sample size. In this study, in addition to the variable "Altitude", we use the variables "Poverty" and "Extreme Poverty", which stand for the percentage of the population living in poverty and extreme poverty, respectively.
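To make the two-step definition concrete, the following Python sketch builds the spatial weights matrix from the districts' coordinates and altitudes. It is illustrative only (for instance, it uses plain Euclidean distance on latitude/longitude rather than great-circle distance), and the function name and arguments are ours.

```python
import numpy as np

def spatial_weights(lat, lon, altitude, K1=3, K2=2):
    """Row-standardised spatial weights from the two-step neighbour definition.

    Step 1: pick the K1 nearest districts by (latitude, longitude).
    Step 2: among those, keep the K2 districts closest in altitude.
    Each selected neighbour receives weight 1/K2; all other districts get 0.
    """
    coords = np.column_stack([lat, lon])
    D = coords.shape[0]
    W = np.zeros((D, D))
    for i in range(D):
        d_geo = np.linalg.norm(coords - coords[i], axis=1)
        d_geo[i] = np.inf                      # exclude the district itself
        step1 = np.argsort(d_geo)[:K1]         # K1 nearest neighbours
        d_alt = np.abs(altitude[step1] - altitude[i])
        step2 = step1[np.argsort(d_alt)[:K2]]  # K2 most similar in altitude
        W[i, step2] = 1.0 / K2
    return W
```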
Fitting the basic Fay-Herriot models
Initially, we present the results of fitting the basic Fay-Herriot models. Table 2 shows the estimated coefficients $\hat{\beta}$ of the model and their corresponding p-values, as obtained when fitting the model separately to each of the three defined groups of districts. Table 2 indicates that the prevalence of anemia in a district is apparently associated with the variables that reflect the poverty level of that district. It should be noted that many other auxiliary variables that also reflect the poverty level in a district, such as the percentage of dwellings with concrete walls, the percentage of dwellings with access to a centralized hygiene system, and the percentage of illiterate population, were initially included in the model; however, their corresponding coefficients were not significant.
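For readers who wish to reproduce this type of fit, the following sketch shows a minimal maximum-likelihood fit of the basic Fay-Herriot model and the resulting EBLUP. It is a simplified illustration (in practice one would rely on dedicated small area estimation software), and the function name is ours.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_fay_herriot(Y, X, sigma_e2):
    """ML fit of the basic Fay-Herriot model Y_i = x_i' beta + u_i + e_i (sketch).

    Returns the estimated variance of the area effects, the GLS estimate of beta
    and the EBLUP of the area-level characteristics of interest.
    """
    def profile_neg_loglik(sigma_u2):
        V = sigma_u2 + sigma_e2                      # diagonal of Var(Y)
        Vinv_X = X / V[:, None]
        beta = np.linalg.solve(X.T @ Vinv_X, Vinv_X.T @ Y)
        r = Y - X @ beta
        return 0.5 * (np.sum(np.log(V)) + np.sum(r**2 / V))

    res = minimize_scalar(profile_neg_loglik,
                          bounds=(1e-8, 10 * np.var(Y)), method="bounded")
    sigma_u2 = res.x
    V = sigma_u2 + sigma_e2
    Vinv_X = X / V[:, None]
    beta = np.linalg.solve(X.T @ Vinv_X, Vinv_X.T @ Y)
    gamma = sigma_u2 / V                              # shrinkage factors
    eblup = gamma * Y + (1.0 - gamma) * (X @ beta)    # EBLUP of theta_i
    return beta, sigma_u2, eblup
```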
Sensitivity Analysis
In this section we conduct a sensitivity analysis to investigate the impact of selecting the neighboring districts. To this end, the spatial Fay-Herriot model was fitted with $K_1 = 1, ..., 10$ and $K_2 = 1, ..., K_1$ neighbors, as explained in Section 4.2. The figures in Tables 3-5 suggest that the results are sensitive to the way in which the neighbors are defined. Furthermore, it should be noted that the estimators of the parameter $\rho$ vary quite widely with the choice of $K_1$ and $K_2$ (from 0.15 to 0.87). These results demonstrate that the way of choosing the neighbors can dramatically alter inferences. In this situation we recommend using the values of $K_1$ and $K_2$ that correspond to the minimal value of $\hat{\sigma}^2$. The results displayed in the tables illustrate that the optimal choice of the neighbors in the case of the districts of the first two groups is $K_1 = 3$ and $K_2 = 2$, whereas for the third group the optimal values are $K_1 = 7$ and $K_2 = 3$. At the same time, the tables show that if the variable "Altitude" is not utilized, which implies $K_2 = K_1$, the optimal value of $K_1$ in the case of the first two groups is $K_1 = 2$, while for the third group $K_1 = 7$. Comparing the corresponding magnitudes of $\hat{\sigma}^2$, it can be observed that incorporating the variable "Altitude" leads to a minor reduction of 5% (from 0.0041 to 0.0039 and from 0.0042 to 0.0040) in the first and the third group, and of 22% (from 0.0027 to 0.0022) in the second group.

Table 3: The values of $\hat{\sigma}^2$ as a function of $K_1$ and $K_2$: the districts with less than 30% of the population living in poverty.

Table 4: The values of $\hat{\sigma}^2$ as a function of $K_1$ and $K_2$: the districts where 30%-55% of the population live in poverty.
As we have already mentioned, for the purpose of selecting $K_2$ districts out of the $K_1$ previously selected districts, the variable "Altitude" is not the only variable that can be utilized in order to establish the degree of similarity between the districts. In Table 6 the results obtained in the case of utilizing the variables "Poverty" and "Extreme Poverty" are summarized. It can be concluded from Table 6 that the use of the variable "Extreme Poverty" had some beneficial effect in the case of the first and the third group, while in the second group we would recommend to use the variable "Altitude". Notably, the variable "Extreme Poverty" was not significant in the models presented in Table 2. In summary, it can be inferred that choosing $K_2$ districts in the second step using an additional variable to measure similarity between the $K_1$ previously selected districts can potentially produce more powerful predictors (see Section 4.6).

Table 5: The values of $\hat{\sigma}^2$ as a function of $K_1$ and $K_2$: the districts with more than 55% of the population living in poverty.
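The grid search over $(K_1, K_2)$ described above can be sketched as follows. It reuses the `spatial_weights` helper from Section 4.2 and the hypothetical `fit_sfh` routine introduced earlier, and simply keeps the pair that minimizes the estimated variance of the model errors (the $\hat{\sigma}^2$ reported in Tables 3-5).

```python
def select_neighbour_definition(Y, X, lat, lon, altitude, sigma_e2, fit_sfh, K_max=10):
    """Sensitivity analysis of Section 4.4: fit the spatial Fay-Herriot model for every
    admissible pair (K1, K2) and keep the pair with the smallest estimated variance."""
    best = None
    for K1 in range(1, K_max + 1):
        for K2 in range(1, K1 + 1):
            W = spatial_weights(lat, lon, altitude, K1=K1, K2=K2)
            _, sigma2, rho = fit_sfh(Y, X, W, sigma_e2)
            if best is None or sigma2 < best[0]:
                best = (sigma2, K1, K2, rho)
    return best  # (smallest sigma^2, K1, K2, rho at the optimum)
```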
Spatial Fay-Herriot Model
In what follows we fit the Spatial Fay-Herriot model for the following two scenarios.
1. The neighbors are chosen using only the first step (the K 1 nearest neighbors), where K 1 = 2 for the districts with poverty level of less than 30%, and for the districts with poverty level between 30% and 55%, and K 1 = 7 for the districts with poverty level of more than 55%.
2. The neighbors are chosen using both steps, where in the second step we use the variable "Extreme Poverty" for the districts with poverty level of less than 30% (K 1 = 3, K 2 = 2), and for the districts with poverty level of more than 55% (K 1 = 7, K 2 = 3); for the districts with poverty level between 30% and 55%, the variable "Altitude" was utilized with K 1 = 3 and K 2 = 2.
Tables 7 and 8 display the estimators for the coefficients β and ρ obtained by fitting the spatial Fay-Herriot model under the first and the second scenarios. The results illustrate that the spatial correlations are substantially high, especially for the poorer districts, being higher under the second scenario than under the first. This suggests that ignoring the spatial correlation structure between the districts may increase the potential for greater MSE. We can also conclude that the estimators for the coefficients β are very similar under both scenarios. Comparing the results of this analysis with those presented in Table 2, there is no drastic difference in the estimators. In the following section we compare the EBLUP and the SEBLUP as well as their corresponding MSEs.
EBLUP, SEBLUP and MSE
First, we compare the predictions EBLUP and SEBLUP for the prevalence of anemia rates among children under five years with the corresponding direct estimates. In what follows, SEBLUP1 and SEBLUP2 refer to the SEBLUP predictors obtained under the first and the second scenarios defined above. For the purpose of these comparisons the following three groups of districts are used: the districts that have only 5 observations (a total of 15 districts), the districts with 15 observations (a total of 16 districts) and the districts with 40-49 observations (a total of 24 districts).
As expected, the results presented in Figures 2-4 illustrate that the differences between SEBLUP1, SEBLUP2, EBLUP and the corresponding direct estimate decrease as the sample size increases. Interestingly, the discrepancies between SEBLUP1 and SEBLUP2 are generally minor: the mean absolute difference between SEBLUP1 and SEBLUP2 is 0.014 in the first case, 0.010 in the second case and 0.009 in the third case. The corresponding relative differences amount to 3.6%, 2.4% and 3.1%, respectively. Next, we present the MSEs of the discussed predictors. Figures 5-7 display the MSEs obtained by application of the parametric bootstrap. The MSEs derived from application of the non-parametric bootstrap are somewhat larger; however, the conclusions reached are very similar to those reported below. The results indicate very clearly that in our case application of the spatial Fay-Herriot model yields better MSEs than the basic Fay-Herriot model. The results also provide evidence that, except for several districts, the MSEs of SEBLUP2 perform better than those of SEBLUP1 and EBLUP, especially if the sample size is small. Specifically, the relative differences in MSE between SEBLUP1 and SEBLUP2 are 12.9%, 8.8% and 6.0% in the first, second and third case, respectively. Finally, we compute the coefficients of variation (CV) for all predictors discussed above. As one can observe from Table 9, the direct estimator has a very large CV in the districts where the sample size is smaller than 50. If the sample size is larger than 50, only for 61 districts (out of 104) is the CV of the direct estimator smaller than 20%. Comparing this result to the corresponding numbers for the EBLUP (87 districts), SEBLUP1 (92 districts) and SEBLUP2 (92 districts), we can conclude that for large samples employing the basic Fay-Herriot as well as the spatial Fay-Herriot model considerably improves the precision of the predictors, with SEBLUP1 and SEBLUP2 slightly outperforming the EBLUP. For smaller sample sizes we observe a similar pattern; the difference is that in these cases the performance of SEBLUP1 and SEBLUP2 is much better than that of the EBLUP, especially if the sample size is less than 7 or between 7 and 10. Moreover, the performance of SEBLUP2 is evidently better for all sample sizes.
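The coefficients of variation underlying Table 9 can be obtained directly from the point predictions and their estimated MSEs. A minimal sketch, with the sample-size groups and CV bins used above (names are ours), is given below.

```python
import numpy as np

def cv_table(estimates, mse, sample_sizes, bins=(0.10, 0.20, 0.30)):
    """CV_i = sqrt(MSE_i)/estimate_i and its distribution by sample-size group,
    mirroring the layout of Table 9 (illustrative sketch)."""
    cv = np.sqrt(mse) / np.abs(estimates)
    groups = {"Less than 7": sample_sizes < 7,
              "7-10": (sample_sizes >= 7) & (sample_sizes <= 10),
              "11-20": (sample_sizes >= 11) & (sample_sizes <= 20),
              "21-50": (sample_sizes >= 21) & (sample_sizes <= 50),
              "More than 50": sample_sizes > 50}
    table = {}
    for name, mask in groups.items():
        c = cv[mask]
        table[name] = [int(np.sum(c < bins[0])),
                       int(np.sum((c >= bins[0]) & (c < bins[1]))),
                       int(np.sum((c >= bins[1]) & (c < bins[2]))),
                       int(np.sum(c >= bins[2])),
                       int(mask.sum())]
    return table
```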
Conclusion
From the results obtained in Section 4 we conclude that utilizing the basic Fay-Herriot model has considerably reduced the MSEs (and therefore the CVs) of the predictors compared with the direct estimates. However, the CVs obtained in most of the districts are still substantially large. If the spatial Fay-Herriot model is applied, an additional reduction in MSEs is attained. This is due to incorporating information about the spatial structure of the data, which is ignored by the basic model. The reduction in MSE is more substantial if we select the neighbors using the two-step procedure, which allows us to employ additional information about the districts (see Section 4.2). Regarding the question about the reliability of the EBLUP and SEBLUP, the magnitudes of the corresponding CVs indicate that in the first case the percentage of unreliable estimates (the estimates with a CV larger than 20%) is considerably large, especially if the sample is small. Specifically, if the sample size is smaller than 7, the percentage of unreliable estimates is 54%. For larger sample sizes we observe a very modest reduction (48% if the sample size is between 21 and 50). If the sample size is larger than 50, the percentage of unreliable estimates reduces to 15%. In the case of the SEBLUP the corresponding percentages are as follows: 34% if the sample size is smaller than 7, 26% if the sample size is between 21 and 50 and 12% if the sample size is larger than 50. However, it should be noticed that the percentage of the estimates whose CV is larger than 30% is relatively small: in the case of the EBLUP it oscillates between 7% and 15% (for the SEBLUP the range is between 2% and 9%) if the sample size is smaller than 50. If the sample size is larger than 50, the CV of only 1 EBLUP predictor (out of 104) is larger than 30%; in the case of the SEBLUP, the CVs of all predictors are smaller than 30%. Apart from comparing the performance of the basic and the spatial Fay-Herriot models, we explore the sensitivity of the resulting inference to the choice of the neighbors. It follows from the results that the conclusions drawn can depend significantly on the definition of the neighbors. We recommend that, in practice, one choose the definition that achieves the smallest variance, $\hat{\sigma}^2$. There is no theoretical basis for this choice; however, it may be advantageous from the perspective of reducing the MSEs of the predictors. In this paper we do not discuss the problem of prediction in the nonsampled districts; this can be a topic for future research.
Figure 1: ENDES data: sample size in the districts of the departments of La Libertad (left panel) and Arequipa (right panel); the blank districts do not have available data.
Figure 2: Estimates for the prevalence of anemia rates in the districts with 5 observations.

Figure 3: Estimates for the prevalence of anemia rates in the districts with 15 observations.

Figure 4: Estimates for the prevalence of anemia rates in the districts with 40-49 observations.

Figure 5: Mean Square Errors of the estimates for the prevalence of anemia rates in the districts with 5 observations.

Figure 6: Mean Square Errors of the estimates for the prevalence of anemia rates in the districts with 15 observations.

Figure 7: Mean Square Errors of the estimates for the prevalence of anemia rates in the districts with 40-49 observations.
Table 1: Description of the auxiliary variables

Variable   | Description of the variable
-----------|---------------------------------------------------------------
Altitude   | The altitude of the district (height above sea level)
Water      | % of dwellings with access to centralized water supply
Water-days | % of dwellings with access to potable water only several days per week
Floor      | % of dwellings that have non-dirt flooring
Internet   | % of dwellings with access to internet
SIS        | % of the population that is affiliated with the Comprehensive Health Insurance (SIS)
Uninsur.   | % of the population that do not have health insurance
Refrig.    | % of households that have a refrigerator
Spanish    | % of native Spanish speakers
Rural      | % of rural dwellings
Table 2: Estimators of the coefficients β of the basic Fay-Herriot model

           | Less than 30%         | 30%-55%               | More than 55%
Variable   | Estimator   p-value   | Estimator   p-value   | Estimator   p-value
Water      | -           -         | -0.15557    0.0002    | -0.10699    0.0287
Water-days | 0.08827     0.0480    | -           -         | -           -
Floor      | -0.13376    0.0023    | -           -         | -           -
Refrig.    | -           -         | -0.33600    < 0.0001  | -0.18277    0.0015
Internet   | -0.39027    < 0.0001  | -           -         | -           -
Spanish    | -0.25204    < 0.0001  | -0.17888    < 0.0001  | -0.23563    < 0.0001
SIS        | -           -         | -0.18834    0.0101    | -           -
Uninsur.   | -           -         | -           -         | 0.45826     0.0001
Altitude   | 0.00002     0.0027    | -           -         | 0.00002     0.0291
Rural      | -           -         | -0.07049    0.0484    | -           -
Table 6: The optimal values of $\hat{\sigma}^2$ and the corresponding values of $K_1$ and $K_2$ for the variables "Poverty" and "Extreme Poverty".
Table 7: Estimators of the coefficients β and ρ of the spatial Fay-Herriot model under the first scenario

           | Less than 30%         | 30%-55%               | More than 55%
Variable   | Estimator   p-value   | Estimator   p-value   | Estimator   p-value
Water      | -           -         | -0.12640    0.0041    | -0.07681    0.0863
Water-days | 0.10153     0.0281    | -           -         | -           -
Floor      | -0.09349    0.041     | -           -         | -           -
Refrig.    | -           -         | -0.34632    < 0.0001  | -0.22646    0.0002
Internet   | -0.30843    < 0.0001  | -           -         | -           -
Spanish    | -0.24583    < 0.0001  | -0.17426    < 0.0001  | -0.20250    < 0.0001
SIS        | -           -         | -0.14858    0.0494    | -           -
Uninsur.   | -           -         | -           -         | 0.21169     0.0926
Altitude   | 0.00002     0.0122    | -           -         | 0.00001     0.6277
Rural      | -           -         | -0.07971    0.0208    | -           -
ρ          | 0.4495      < 0.0001  | 0.6548      < 0.0001  | 0.8062      < 0.0001
Table 8: Estimators of the coefficients β and ρ of the spatial Fay-Herriot model under the second scenario

           | Less than 30%         | 30%-55%               | More than 55%
Variable   | Estimator   p-value   | Estimator   p-value   | Estimator   p-value
Water      | -           -         | -0.10861    0.0144    | -0.08323    0.0863
Water-days | 0.08005     0.0735    | -           -         | -           -
Floor      | -0.09976    0.0286    | -           -         | -           -
Refrig.    | -           -         | -0.34411    < 0.0001  | -0.20772    0.0004
Internet   | -0.29959    < 0.0001  | -           -         | -           -
Spanish    | -0.24556    < 0.0001  | -0.17884    < 0.0001  | -0.19998    < 0.0001
SIS        | -           -         | -0.13017    0.0888    | -           -
Uninsur.   | -           -         | -           -         | 0.20459     0.0940
Altitude   | 0.00002     0.0253    | -           -         | 0.00001     0.4073
Rural      | -           -         | -0.07049    0.0484    | -           -
ρ          | 0.4984      < 0.0001  | 0.7275      < 0.0001  | 0.8144      < 0.0001
Table 9: Distribution of the coefficient of variation for the Direct estimate, EBLUP, SEBLUP1 and SEBLUP2 by sample size

Sample Size   | Predictor | < 10% | 10-20% | 20-30% | > 30% | Total
Less than 7   | Direct    | 0     | 1      | 4      | 120   | 125
Less than 7   | EBLUP     | 0     | 57     | 56     | 12    | 125
Less than 7   | SEBLUP1   | 0     | 78     | 40     | 7     | 125
Less than 7   | SEBLUP2   | 0     | 82     | 40     | 3     | 125
7-10          | Direct    | 0     | 11     | 20     | 164   | 195
7-10          | EBLUP     | 0     | 76     | 89     | 30    | 195
7-10          | SEBLUP1   | 1     | 110    | 61     | 23    | 195
7-10          | SEBLUP2   | 1     | 120    | 57     | 17    | 195
11-20         | Direct    | 0     | 16     | 50     | 161   | 227
11-20         | EBLUP     | 1     | 106    | 90     | 30    | 227
11-20         | SEBLUP1   | 2     | 128    | 79     | 18    | 227
11-20         | SEBLUP2   | 3     | 145    | 63     | 16    | 227
21-50         | Direct    | 0     | 27     | 65     | 69    | 161
21-50         | EBLUP     | 0     | 84     | 65     | 12    | 161
21-50         | SEBLUP1   | 2     | 104    | 46     | 9     | 161
21-50         | SEBLUP2   | 6     | 113    | 34     | 8     | 161
More than 50  | Direct    | 10    | 51     | 39     | 4     | 104
More than 50  | EBLUP     | 10    | 77     | 15     | 1     | 104
More than 50  | SEBLUP1   | 11    | 81     | 11     | 1     | 104
More than 50  | SEBLUP2   | 13    | 79     | 12     | 0     | 104
All Districts | Direct    | 10    | 106    | 178    | 518   | 812
All Districts | EBLUP     | 11    | 400    | 315    | 86    | 812
All Districts | SEBLUP1   | 16    | 501    | 237    | 58    | 812
All Districts | SEBLUP2   | 23    | 539    | 206    | 44    | 812
Acknowledgements. The views presented in this work are those of the authors and do not represent the official position of the institutions with which the authors are or were affiliated. This research is supported by a grant from the Unidad de Investigación, FIEECS-UNI.
References

[Alcázar (2012).] L. Alcázar (2012). Impacto económico de la anemia en el Perú. Grupo de Análisis para el Desarrollo (GRADE).

[Anselin (1992).] L. Anselin (1992). Spatial econometrics. Methods and models, Kluwer: Boston.

[Banerjee, Carlin and Gelfand (2004).] S. Banerjee, B. Carlin and A. Gelfand (2004). Hierarchical modeling and analysis for spatial data, Chapman and Hall: New York.

[Chen and Lahiri (2003).] S. Chen and P. Lahiri (2003). A Comparison of Different MSPE Estimators of EBLUP for the Fay-Herriot Model, Proceedings of the Section on Survey Research Methods, Washington, DC: American Statistical Association, 903-911.

[Cressie and Chan (1989).] N. Cressie and N. H. Chan (1989). Spatial modeling of regional variables, Journal of the American Statistical Association 84, 393-401.

[Cressie (1993).] N. Cressie (1993). Statistics for spatial data, Wiley: New York.

[Datta and Lahiri (2000).] G. S. Datta and P. S. Lahiri (2000). A unified measure of uncertainty of estimated best linear unbiased predictors in small area estimation problems, Statistica Sinica 10, 613-627.

[Datta, Rao and Smith (2005).] G. S. Datta, J. N. K. Rao and D. D. Smith (2005). On measuring the variability of small area estimators under a basic area level model, Biometrika 92, 183-196.

[Fay and Herriot (1979).] R. E. Fay and R. A. Herriot (1979). Estimates of income for small places: an application of James-Stein procedures to census data, Journal of the American Statistical Association 74, 269-277.

[Hall and Maiti (2006).] P. Hall and T. Maiti (2006). On parametric bootstrap methods for small area prediction, Journal of the Royal Statistical Society Series B 68, 221-238.

[Harville and Jeske (1992).] D. Harville and D. Jeske (1992). Mean squared error of estimation or prediction under a general linear model, Journal of the American Statistical Association 87, 724-731.

[Jiang, Lahiri and Wan (2002).] J. Jiang, P. S. Lahiri and S. M. Wan (2002). A unified jackknife theory for empirical best prediction with M-estimation, The Annals of Statistics 30, 1782-1810.

[INEI, Perú (2019).] INEI, Perú (2019). Encuesta Demográfica y de Salud Familiar-ENDES.

[Kackar and Harville (1984).] R. N. Kackar and D. A. Harville (1984). Approximations for standard errors of estimators for fixed and random effects in mixed models, Journal of the American Statistical Association 79, 853-862.

[Marhuenda, Molina and Morales (2013).] Y. Marhuenda, I. Molina and D. Morales (2013). Small area estimation with spatio temporal Fay-Herriot models, Computational Statistics and Data Analysis 58, 308-325.

[Ministerio de Salud (2014).] Ministerio de Salud (2014). Plan Nacional para la Reducción de la Desnutrición Crónica Infantil y la Prevención de la Anemia en el país: 2014-2016. RM N°258-2014/MINSA. Lima: Minsa.

[Ministerio de Salud (2017).] Ministerio de Salud (2017). Plan Nacional para la Reducción y Control de la Anemia Materno Infantil y la Desnutrición Crónica Infantil en el Perú: 2017-2021. RM N°249-2017/MINSA. Lima: Minsa.

[Molina, Salvati and Pratesi (2009).] I. Molina, N. Salvati and M. Pratesi (2009). Bootstrap for Estimating the MSE of the Spatial EBLUP, Computational Statistics 24, 441-458.

[Moran (1950).] P. A. P. Moran (1950). Notes on Continuous Stochastic Phenomena, Biometrika 37(1), 17-23.

[Organización Mundial de la Salud (2011).] Organización Mundial de la Salud (2011). Concentraciones de hemoglobina para diagnosticar la anemia y evaluar su gravedad. Ginebra: OMS. (WHO/NMH/NHD/MNM/11.1).

[Petrucci and Salvati (2006).] A. Petrucci and N. Salvati (2006). Small area estimation for spatial correlation in watershed erosion assessment, Journal of Agricultural, Biological and Environmental Statistics 11(2), 169-182.

[Pfeffermann (2002).] D. Pfeffermann (2002). Small Area Estimation - New Developments and Directions, International Statistical Review 70, 125-143.

[Pfeffermann and Tiller (2005).] D. Pfeffermann and R. B. Tiller (2005). Bootstrap Approximation to Prediction MSE for State-Space Models with Estimated Parameters, Journal of Time Series Analysis 26, 893-916.

[Prasad and Rao (1990).] N. G. N. Prasad and J. N. K. Rao (1990). New Important Developments in Small Area Estimation, Journal of the American Statistical Association 85(409), 163-171.

[Pratesi and Salvati (2009).] M. Pratesi and N. Salvati (2009). Small area estimation: the EBLUP estimator based on spatially correlated random area effects, Statistical Methods and Applications 17(1), 113-141.

[Pratesi and Salvati (2009).] M. Pratesi and N. Salvati (2009). Small Area Estimation in the Presence of Correlated Random Area Effects, Journal of Official Statistics 25(1), 37-53.

[Rao (2003).] J. N. K. Rao (2003). Small area estimation, Wiley: London.

[Rao and Molina (2015).] J. N. K. Rao and I. Molina (2015). Small area estimation, Wiley series in survey methodology, 2nd ed. Hoboken, New Jersey: Wiley.

[Singh, Shukla and Kundu (2005).] B. B. Singh, K. Shukla and D. Kundu (2005). Spatial-temporal models in small area estimation, Survey Methodology 31(2), 183-195.

[World Health Organization (2004).] World Health Organization (2004). Centers for Disease Control and Prevention. Assessing the Iron Status of Populations. Ginebra: WHO.
Gravitational redshift/blueshift of light emitted by geodesic test particles, frame-dragging and pericentre-shift effects, in the Kerr-Newman-de Sitter and Kerr-Newman black hole geometries

G. V. Kraniotis
Physics Department, Section of Theoretical Physics, University of Ioannina, GR-451 10, Greece

January 7, 2020

Abstract. We investigate the redshift and blueshift of light emitted by timelike geodesic particles in orbits around a Kerr-Newman-(anti) de Sitter (KN(a)dS) black hole. Specifically we compute the redshift and blueshift of photons that are emitted by geodesic massive particles and travel along null geodesics towards a distant observer located at a finite distance from the KN(a)dS black hole. For this purpose we use the Killing-vector formalism and the associated first integrals (constants of motion). We consider in detail stable timelike equatorial circular orbits of stars and express their corresponding redshift/blueshift in terms of the physical black hole parameters (angular momentum per unit mass, mass, electric charge and the cosmological constant) and the orbital radii of both the emitter star and the distant observer. These radii are linked through the constants of motion along the null geodesics followed by the photons from their emission until their detection, and as a result we get closed form analytic expressions for the orbital radius of the observer in terms of the emitter radius and the black hole parameters. In addition, we compute exact analytic expressions for the frame dragging of timelike spherical orbits in the KN(a)dS spacetime in terms of multivariable generalised hypergeometric functions of Lauricella and Appell. Last but not least, we derive a very elegant and novel exact formula for the periapsis advance for a test particle in a non-spherical polar orbit in the KNdS black hole spacetime in terms of Jacobi's elliptic function sn and Lauricella's hypergeometric function $F_D$.
Introduction
General relativity (GR) has passed all experimental tests so far, which cover a wide range of field strengths and physical scales, including: tests in large scale cosmology [17], [18], the prediction of solar system effects like the perihelion precession of Mercury with very high precision [1], [19], the recent discovery of gravitational waves in nature [34], [35], as well as the observation of the shadow of the M87 black hole [24], see also [5].
The orbits of short period stars in the central arcsecond (S-stars) of the Milky Way Galaxy provide the best current evidence for the existence of supermassive black holes [2], [3].
In a series of papers we solved exactly timelike and null geodesics in Kerr and Kerr-(anti) de Sitter black hole spacetimes [6], [7], [20], and null geodesics and the gravitational lens equations in electrically charged rotating black holes in [8].
We also computed in [6] elegant closed form analytic solutions for the general relativistic effects of periapsis advance, Lense-Thirring precession, orbital and Lense-Thirring periods and applied our solutions for calculating these GR-effects for the observed orbits of S-stars. The shadow of the Kerr and charged Kerr black holes were computed in [9], [7] and [8] respectively.
One of the targets of observational astronomers of the galactic centre is to measure the gravitational redshift predicted by the theory of general relativity [10]. In the Schwarzschild spacetime geometry the ratio of the frequencies measured by two stationary clocks at the radial positions r 1 and r 2 is given by [15]:
$$\frac{\nu_1}{\nu_2} = \sqrt{\frac{1 - 2GM/r_2}{1 - 2GM/r_1}}, \quad (1)$$
where G is the gravitational constant and M is the mass of the black hole.
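For illustration, equation (1) can be evaluated with a few lines of code; the sketch below (our own naming) returns the frequency ratio for two static clocks in the Schwarzschild geometry.

```python
import numpy as np

def schwarzschild_frequency_ratio(r1, r2, M, G=1.0, c=1.0):
    """Ratio nu_1/nu_2 of equation (1) for two static clocks at radii r1 and r2
    outside a Schwarzschild black hole of mass M (geometrised units by default)."""
    rs = 2.0 * G * M / c**2
    return np.sqrt((1.0 - rs / r2) / (1.0 - rs / r1))

# Example: clock deep in the potential well versus a distant clock.
print(schwarzschild_frequency_ratio(r1=6.0, r2=1.0e6, M=1.0))
```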
Recently, the redshift/blueshift of photons emitted by test particles in timelike circular equatorial orbits in Kerr spacetime were investigated in [30]. It is the purpose of this paper to extend our previous results on relativistic observables and compute for the first time the redshift and blueshift of light emitted by timelike geodesic particles in orbits around the Kerr-Newman-(anti) de Sitter (KN(a)dS) black hole. In addition, we derive new exact analytic expressions for the pericentre-shift and frame-dragging for non-spherical nonequatorial (polar) timelike KNdS and KN black hole orbits. Moreover, we derive novel exact expressions for the frame dragging effect for particles in spherical, non-equatorial orbits in KNdS and KN black hole geometries. These results will be of interest to the observational astronomers of the Galactic centre [2], [3] whose aim is to measure experimentally, the relativistic effects predicted by the theory of General Relativity [29] for the observed orbits of short-period stars-the so called S-stars in our Galactic centre. During 2018, the close proximity of the star S2 (S02) to the supermassive Galactic centre black hole allowed the first measurements of the relativistic redshift observable by the GRAVITY collaboration [11] and the UCLA Galactic centre group whose astrometric measurements were obtained at the W.M. Keck Observatory [12] 1 .
One of the most fundamental exact non-vacuum solutions of the gravitational field equations of general relativity is the Kerr-Newman black hole [14]. The Kerr-Newman (KN) exact solution describes the curved spacetime geometry surrounding a charged, rotating black hole and it solves the coupled system of differential equations for the gravitational and electromagnetic fields [14] (see also [15]).
The KN exact solution generalised the Kerr solution [16], which describes the curved spacetime geometry around a rotating black hole, to include a net electric charge carried by the black hole.
A more realistic description should include the cosmological constant [17], [18], [19] [8], [7] [21], [22], [23], [27], [28].
Taking into account the contribution from the cosmological constant Λ, the generalisation of the Kerr-Newman solution is described by the Kerr-Newman de Sitter (KNdS) metric element which in Boyer-Lindquist (BL) coordinates is given by [36], [37], [39], [38] (in units where G = 1 and c = 1):
$$\mathrm{d}s^2 = \frac{\Delta_r^{KN}}{\Xi^2\rho^2}\left(\mathrm{d}t - a\sin^2\theta\,\mathrm{d}\phi\right)^2 - \frac{\rho^2}{\Delta_r^{KN}}\mathrm{d}r^2 - \frac{\rho^2}{\Delta_\theta}\mathrm{d}\theta^2 - \frac{\Delta_\theta\sin^2\theta}{\Xi^2\rho^2}\left(a\,\mathrm{d}t - (r^2+a^2)\,\mathrm{d}\phi\right)^2 \quad (2)$$
$$\Delta_\theta := 1 + \frac{a^2\Lambda}{3}\cos^2\theta, \qquad \Xi := 1 + \frac{a^2\Lambda}{3},$$
$$\Delta_r^{KN} := \left(1 - \frac{\Lambda}{3}r^2\right)\left(r^2 + a^2\right) - 2Mr + e^2, \quad (3) \qquad \rho^2 = r^2 + a^2\cos^2\theta, \quad (4)$$
where a, M, e, denote the Kerr parameter, mass and electric charge of the black hole, respectively. The KN(a)dS metric is the most general exact stationary black hole solution of the Einstein-Maxwell system of differential equations. This is accompanied by a non-zero electromagnetic field F = dA, where the vector potential is [40], [39]:
$$A = -\frac{er}{\Xi\left(r^2 + a^2\cos^2\theta\right)}\left(\mathrm{d}t - a\sin^2\theta\,\mathrm{d}\phi\right). \quad (6)$$
The KN(a)dS dynamical system of geodesics is a completely integrable system 2 as was shown in [37], [36], [40], [20] and the geodesic differential equations take the form:
$$\frac{\mathrm{d}r}{\sqrt{R'}} = \frac{\mathrm{d}\theta}{\sqrt{\Theta'}}, \quad (7)$$
$$\rho^2\frac{\mathrm{d}\phi}{\mathrm{d}\lambda} = -\frac{\Xi^2}{\Delta_\theta\sin^2\theta}\left(aE\sin^2\theta - L\right) + \frac{a\Xi^2}{\Delta_r^{KN}}\left[(r^2+a^2)E - aL\right], \quad (8)$$
$$c\rho^2\frac{\mathrm{d}t}{\mathrm{d}\lambda} = \frac{\Xi^2(r^2+a^2)\left[(r^2+a^2)E - aL\right]}{\Delta_r^{KN}} - \frac{a\Xi^2\left(aE\sin^2\theta - L\right)}{\Delta_\theta}, \quad (9)$$
$$\rho^2\frac{\mathrm{d}r}{\mathrm{d}\lambda} = \pm\sqrt{R'}, \quad (10)$$
$$\rho^2\frac{\mathrm{d}\theta}{\mathrm{d}\lambda} = \pm\sqrt{\Theta'}, \quad (11)$$
where
$$R' := \Xi^2\left[(r^2+a^2)E - aL\right]^2 - \Delta_r^{KN}\left(\mu^2 r^2 + Q + \Xi^2(L - aE)^2\right), \quad (12)$$
$$\Theta' := \left[Q + (L - aE)^2\Xi^2 - \mu^2 a^2\cos^2\theta\right]\Delta_\theta - \Xi^2\frac{\left(aE\sin^2\theta - L\right)^2}{\sin^2\theta}. \quad (13)$$
Null geodesics are derived by setting µ = 0. The proper time τ and the affine parameter λ are connected by the relation τ = µλ. In the following we use geometrised units, G = c = 1, unless it is stipulated otherwise. The first integrals of motion E and L are related to the isometries of the KNdS metric while Q (Carter's constant) is the hidden integral of motion that results from the separation of variables of the Hamilton-Jacobi equation. The material of the paper is organised as follows: In Sec. 2 we consider the Killing vector formalism and the corresponding conserved quantities in Kerr-Newman-(anti) de Sitter spacetime. In Sec.3 we consider equatorial circular geodesics in KN(a)dS spacetime and derive novel expressions for the specific energy and specific angular momentum for test particles moving in such orbits, see equations (26) and (27). Typical behaviour of these functions is displayed in Fig.1-Fig.17. In Sec. 4 we provide general expressions for the redshift/blueshift that emitted photons by massive particles experience while travelling along null geodesics towards an observer located far away from their source by making use of the Killing vector formalism. In Sec.4.1 we derive novel exact analytic expressions for the redshift/blueshift of photons for circular and equatorial emitter/detector orbits around the Kerr-Newman-(anti) de Sitter black hole-see equations (56) and (57) respectively. In the procedure we take into account the bending of light due to the field of the Kerr-Newman-(anti)de Sitter black hole at the moment of detection by the observer. In Sec.5 we study non-equatorial orbits in rotating charged black hole spacetimes. Specifically, we compute in closed analytic form the frame-dragging for test particles in timelike spherical orbits in the Kerr-Newman and Kerr-Newman-de Sitter black hole spacetimes-equations (72),theorem 4 and (80) respectively. The former equation (KN case) involves the ordinary Gauß hypergeometric function and Appell's F 1 two-variable hypergeometric function, while the latter (KNdS case) is expressed in terms of Lauricella's F D and Appell's F 1 generalised multivariate hypergeometric functions [32]. In Sec.5.5 & 5.6 we derive new closed form analytic expressions for the periapsis advance for test particles in non-spherical polar orbits in KN and KNdS spacetimes respectively. In the latter case we derive a novel, very elegant, exact formula in terms of Jacobi's elliptic function sn and Lauricella's hypergeometric function F D of three variables-see equation (108).
Particle orbits and Killing vectors formalism in Kerr-Newman-(anti)de Sitter spacetime
From the condition for the invariance of the metric tensor:
$$0 = \xi^\alpha\frac{\partial g_{\mu\nu}}{\partial x^\alpha} + \frac{\partial\xi^\alpha}{\partial x^\mu}g_{\alpha\nu} + \frac{\partial\xi^\beta}{\partial x^\nu}g_{\mu\beta}, \quad (14)$$
under the infinitesimal transformation:
$$x'^\alpha = x^\alpha + \epsilon\,\xi^\alpha(x), \quad \text{with } \epsilon \to 0, \quad (15)$$
it follows that whenever the metric is independent of some coordinate, a constant vector in the direction of that coordinate is a Killing vector. Thus the generic metric:
$$\mathrm{d}s^2 = g_{tt}\,\mathrm{d}t^2 + 2g_{t\phi}\,\mathrm{d}t\,\mathrm{d}\phi + g_{\phi\phi}\,\mathrm{d}\phi^2 + g_{rr}\,\mathrm{d}r^2 + g_{\theta\theta}\,\mathrm{d}\theta^2, \quad (16)$$
possesses two commuting Killing vector fields:
$$\xi^\mu = (1, 0, 0, 0) \quad \text{timelike Killing vector}, \quad (17)$$
$$\psi^\mu = (0, 0, 0, 1) \quad \text{rotational Killing vector}. \quad (18)$$
According to Noether's theorem to every continuous symmetry of a physical system corresponds a conservation law. In a general curved spacetime, we can formulate the conservation laws for the motion of a particle on the basis of Killing vectors. We can prove that if ξ ν is a Killing vector, then for a particle moving along a geodesic, the scalar product of this Killing vector and the momentum P ν = µ dx ν dτ of the particle is a constant [15]:
$$\xi_\nu P^\nu = \text{constant} \quad (19)$$
Due to the existence of these Killing vector fields (17), (18) there are two conserved quantities, the total energy and the angular momentum per unit rest mass of the test particle:
$$E = \frac{\tilde{E}}{\mu} = g_{\mu\nu}\xi^\mu U^\nu = g_{tt}U^t + g_{t\phi}U^\phi, \quad (20)$$
$$L = \frac{\tilde{L}}{\mu} = -g_{\mu\nu}\psi^\mu U^\nu = -g_{\phi t}U^t - g_{\phi\phi}U^\phi. \quad (21)$$
Thus, the photon's emitter is a probe massive test particle which geodesically moves around a rotating electrically charged cosmological black hole in the spacetime with a four-velocity:
$$U^\mu_e = \left(U^t, U^r, U^\theta, U^\phi\right)_e. \quad (22)$$
The conservation law (19) also applies to photon moving in the curved spacetime. Thus, if the spacetime geometry is time independent, the photon energy P 0 is constant. In section 4.1 we will extract the redshift/blueshift of photons from this conservation law.
Equatorial circular orbits in Kerr-Newman spacetime with a cosmological constant
It is convenient to introduce a dimensionless cosmological parameter:
$$\Lambda' = \frac{1}{3}\Lambda M^2, \quad (23)$$
and set M = 1. For equatorial orbits Carter's constant Q vanishes. For the following discussion, it is useful to introduce new constants of motion, the specific energy and specific angular momentum:
$$\hat{E} \equiv \frac{\Xi E}{\mu}, \quad (24) \qquad \hat{L} \equiv \frac{\Xi L}{\mu}. \quad (25)$$
This is equivalent to setting µ = 1. Thus for reasons of notational simplicity we omit the caret for the specific energy and specific angular momentum in what follows. Equatorial circular orbits correspond to local extrema of the effective potential. Equivalently, these orbits are given by the conditions R ′ (r) = 0, dR ′ /dr = 0, which have to be solved simultaneously. Following this procedure, we obtain the following novel equations for the specific energy and the specific angular momentum of test particles moving along equatorial circular orbits in KN(a)dS spacetime:
$$E^\pm(r;\Lambda',a,e) = \frac{e^2 + r(r-2) - r^2(r^2+a^2)\Lambda' \pm a\sqrt{r^4\left(\frac{1}{r^3} - \Lambda'\right) - e^2}}{r\sqrt{2e^2 + r(r-3) - a^2 r^2\Lambda' \pm 2a\sqrt{r^4\left(\frac{1}{r^3} - \Lambda'\right) - e^2}}}, \quad (26)$$
$$L^\pm(r;\Lambda',a,e) = \frac{\pm(r^2+a^2)\sqrt{r^4\left(\frac{1}{r^3} - \Lambda'\right) - e^2} - 2ar - ar^2\Lambda'(r^2+a^2) + ae^2}{r\sqrt{2e^2 + r(r-3) - a^2 r^2\Lambda' \pm 2a\sqrt{r^4\left(\frac{1}{r^3} - \Lambda'\right) - e^2}}} \quad (27)$$
The reality conditions connected with equations (26) and (27) are given by the inequalities:
$$2e^2 + r(r-3) - a^2 r^2\Lambda' \pm 2a\sqrt{r^4\left(\tfrac{1}{r^3} - \Lambda'\right) - e^2} \;\ge\; 0 \quad (28)$$
$$\Leftrightarrow \quad \frac{2e^2}{r^2} + \frac{r-3}{r} - a^2\Lambda' \pm 2a\sqrt{\frac{1}{r^3} - \Lambda' - \frac{e^2}{r^4}} \;\ge\; 0, \quad (29)$$
and 1 − Λ ′ r 3 ≥ e 2 /r.
For zero electric charge e = 0 equations (26), (27) reduce correctly to those in Kerr-anti de Sitter (KadS) spacetimes [25]. For zero electric charge and zero cosmological constant (Λ = e = 0) equations (26), (27) reduce correctly to the corresponding ones in Kerr spacetime [26].
In the figures 1-17, for concrete values of the electric charge and the cosmological parameter we present the radial dependence of the specific energy and specific angular momentum for various values of the black hole's spin. For the cosmological parameter Λ ′ we choose the values Λ ′ = 10 −5 , 10 −4 , 10 −3 as well as their negative counterparts. For stellar mass black holes, and positive cosmological constant this corresponds to Λ ∼ 10 −15 cm −2 − 10 −13 cm −2 . For supermassive black holes such as at the centre of Galaxy M87 with mass M M87 BH = 6.7 × 10 9 solar masses [24] the value of Λ ′ = 10 −5 corresponds to the value for the cosmological constant: Λ = 3.06 × 10 −35 cm −2 .
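As a numerical illustration of equations (26)-(27), the following sketch (our own naming, with $M = 1$ and the upper/lower sign selecting co-/counter-rotating orbits) evaluates the specific energy and angular momentum and checks the zero-charge, zero-cosmological-constant Kerr limit quoted in the text.

```python
import numpy as np

def specific_energy_momentum(r, a, e, Lam, branch=+1):
    """Specific energy E and angular momentum L of equations (26)-(27) for equatorial
    circular orbits in KN(a)dS (M = 1, Lam = Lambda' = Lambda M^2 / 3)."""
    s = branch
    root = np.sqrt(r**4 * (1.0 / r**3 - Lam) - e**2)
    denom = r * np.sqrt(2*e**2 + r*(r - 3) - a**2 * r**2 * Lam + 2*s*a*root)
    E = (e**2 + r*(r - 2) - r**2 * (r**2 + a**2) * Lam + s*a*root) / denom
    L = (s*(r**2 + a**2)*root - 2*a*r - a*r**2*Lam*(r**2 + a**2) + a*e**2) / denom
    return E, L

# Consistency check against the Kerr limit (e = 0, Lambda' = 0):
r, a = 10.0, 0.5
E, L = specific_energy_momentum(r, a, e=0.0, Lam=0.0)
E_kerr = (r**1.5 - 2*np.sqrt(r) + a) / (r**0.75 * np.sqrt(r**1.5 - 3*np.sqrt(r) + 2*a))
L_kerr = (r**2 - 2*a*np.sqrt(r) + a**2) / (r**0.75 * np.sqrt(r**1.5 - 3*np.sqrt(r) + 2*a))
assert np.isclose(E, E_kerr) and np.isclose(L, L_kerr)
```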
Gravitational redshift-blueshift of emitted photons
In this section we will provide general expressions for the redshift/blueshift that photons emitted by massive particles experience while travelling along null geodesics towards an observer located far away from their source. In general, the frequency of a photon measured by an observer with proper velocity $U^\mu_A$ at the spacetime point $P_A$ reads [15], [30]:
$$\omega_A = k_\mu U^\mu_A\big|_{P_A}, \quad (31)$$
where the index A refers to the emission (e) and/or detection (d) at the corresponding point P A . The emission frequency is defined as follows:
ω e = k µ U µ = k t U t + k r U r + k θ U θ + k φ U φ = (k t E − k φ L + g rr k r U r + g θθ k θ U θ )| e .(32)
Likewise the detected frequency is given by the expression:
ω d = +k µ U µ = (Ek t − Lk φ + g rr k r U r + g θθ k θ U θ )| d .(33)
In producing (32),(33) we used the expressions for U t and U φ in terms of the metric components and the conserved quantities E, L:
$$U^t = \frac{-E g_{\phi\phi} - L g_{t\phi}}{g_{t\phi}^2 - g_{tt}g_{\phi\phi}}, \quad (34) \qquad U^\phi = \frac{g_{tt}L + g_{\phi t}E}{g_{t\phi}^2 - g_{tt}g_{\phi\phi}}. \quad (35)$$
Thus,
1 + z = ω e ω d = (k t E − k φ L + g rr k r U r + g θθ k θ U θ )| e (Ek t − Lk φ + g rr k r U r + g θθ k θ U θ | d ) = (E γ U t − L γ U φ + g rr K r U r + g θθ K θ U θ )| e (E γ U t − L γ U φ + g rr K r U r + g θθ K θ U θ )| d(36)
This is the most general expression for the redshift/blueshift that light signals emitted by massive test particles experience in their path along null geodesics towards a distant observer (ideally located near the cosmological horizon in particular or at spatial infinity assuming a zero cosmological constant).
The redshift/blueshift of photons for circular and equatorial emitter/detector orbits around the Kerr-Newman-(anti) de Sitter black hole
For equatorial circular orbits U r = U θ = 0 thus
$$1 + z = \frac{\left(E_\gamma U^t - L_\gamma U^\phi\right)\big|_e}{\left(E_\gamma U^t - L_\gamma U^\phi\right)\big|_d} = \frac{\left(U^t - \Phi U^\phi\right)\big|_e}{\left(U^t - \Phi U^\phi\right)\big|_d} = \frac{U^t_e - \Phi_e U^\phi_e}{U^t_d - \Phi_d U^\phi_d}, \quad (37)$$
where $\Phi = L_\gamma/E_\gamma$. For $\Phi = 0$, $1 + z_c = U^t_e/U^t_d$.
Following the procedure for the Kerr black hole in [30], we consider the kinematic redshift of photons either side of the line of sight that links the Kerr-Newman-de Sitter black hole and the observer, and subtract from Eq.(37) the central value z c . Then we obtain:
$$z_{kin} \equiv z - z_c = \frac{U^t_e - \Phi_e U^\phi_e}{U^t_d - \Phi_d U^\phi_d} - \frac{U^t_e}{U^t_d} = \frac{\Phi_d U^\phi_d U^t_e - \Phi_e U^\phi_e U^t_d}{U^t_d\left(U^t_d - \Phi_d U^\phi_d\right)}. \quad (38)$$
Let us now consider photons with 4-momentum vector k µ = (k t , k r , k θ , k φ ) which move along null geodesics k µ k µ = 0 outside the event horizon of the Kerr-Newman-de Sitter black hole, which explicitly can be expressed as
0 =g tt (k t ) 2 + 2g tφ (k t k φ ) + g φφ (k φ ) 2 + g rr (k r ) 2 + g θθ (k θ ) 2 .(39)k t = Ξ 2 ∆ θ (r 2 + a 2 )[(r 2 + a 2 )E γ − aL γ ] − aΞ 2 ∆ KN r (aE γ sin 2 θ − L γ ) ∆ KN r ∆ θ ρ 2 = E γ [Ξ 2 ∆ θ (r 2 + a 2 ) 2 − a 2 sin 2 θΞ 2 ∆ KN r ] + L γ [−aΞ 2 ∆ θ (r 2 + a 2 ) + aΞ 2 ∆ KN r ] ∆ KN r ∆ θ ρ 2 ,(40)k φ = −Ξ 2 ∆ KN r (aE γ sin 2 θ − L γ ) + aΞ 2 ∆ θ sin 2 θ[(r 2 + a 2 )E γ − aL γ ] ∆ KN r ∆ θ ρ 2 sin 2 θ = [−Ξ 2 ∆ KN r a sin 2 θ + aΞ 2 ∆ θ sin 2 θ(r 2 + a 2 )]E γ + L γ [Ξ 2 ∆ KN r − a 2 Ξ 2 ∆ θ sin 2 θ] ∆ KN r ∆ θ ρ 2 sin 2 θ ,(41)(k θ ) 2 = Q γ ∆ θ + (L γ − aE γ )Ξ 2 ∆ θ − Ξ 2 (aEγ sin 2 θ−Lγ ) 2 sin 2 θ ρ 4(42)
We must take into account the bending of light from the rotating and charged Kerr-Newman-(anti) de Sitter black hole. From (38) it follows that the apparent impact parameter must be maximised. The apparent impact factor Φ γ ≡ L γ /E γ can be obtained from the expression k µ k µ = 0 4 as follows:
k µ k µ = 0 ⇔ k t k t + k φ k φ = 0 ⇔ E γ g φφ + g tφ L γ g 2 tφ − g φφ g tt (−E γ ) + −L γ g tt − E γ g φt g 2 tφ − g φφ g tt L γ = 0 ⇔ g φφ + 2g tφ Φ γ + Φ 2 γ g tt = 0(43)
Solving the quadratic equation we obtain:
$$\Phi_\gamma^\pm = \frac{-g_{\phi t} \pm \sqrt{g_{t\phi}^2 - g_{\phi\phi}g_{tt}}}{g_{tt}} = \frac{a\left(\Delta_r^{KN} - (r^2 + a^2)\right) \pm r^2\sqrt{\Delta_r^{KN}}}{\Delta_r^{KN} - a^2}. \quad (44)$$
where we got two values, Φ + γ and Φ − γ (either evaluated at the emitter or detector position, since this quantity is preserved along the null geodesic photon orbits, i.e., Φ e = Φ d ) that give rise to two different shifts respectively, z 1 and z 2 of the emitted photons corresponding to a receding and to an approaching object with respect to a far away positioned observer:
z 1 = Φ − d U φ d U t e − Φ − e U φ e U t d U t d (U t d − Φ − d U φ d ) ,(45)z 2 = Φ + d U φ d U t e − Φ + e U φ e U t d U t d (U t d − Φ + d U φ d )(46)
4 Taking into account that k r = 0 and k θ = 0.
In general the two values z 1 and z 2 differ from each other due to light bending experienced by the emitted photons and the differential rotation experienced by the detector as encoded in U φ d and U t d components of the four-velocity 5 . In order to get a closed analytic expression for the gravitational redshift/blueshift experienced by the emitted photons we shall express the required quantities in terms of the Kerr-Newman-(anti) de Sitter metric. Thus, the U φ and U t components of the four-velocity for circular equatorial orbits read:
U t (r, π/2) = −(∆ KN r a 2 − (r 2 + a 2 ) 2 )E − L(−a(∆ KN r − (r 2 + a 2 ))) Ξ 2 r 2 ∆ KN r Ξ 4 ,(47)U φ (r, θ = π/2) = Ξ 2 (∆ KN r − a 2 )L + EΞ 2 (−a(∆ KN r − (r 2 + a 2 ))) r 2 ∆ KN r .(48)
Substituting the expressions (26)-(27) for E ± and L ± into U t (r, π/2), U φ (r, π/2) we finally obtain remarkable novel expressions for these four-velocity components in Kerr-Newman-(anti) de Sitter spacetime:
U t (r, π/2) = (r 2 ± a −e 2 + r 4 1 r 3 − Λ ′ ) Ξ 2 r 2e 2 + r(r − 3) − r 2 a 2 Λ ′ ± 2a −e 2 + r 4 1 r 3 − Λ ′ ,(49)U φ (r, π/2) = ± −e 2 + r 4 1 r 3 − Λ ′ Ξ 2 r 2e 2 + r(r − 3) − r 2 a 2 Λ ′ ± 2a −e 2 + r 4 1 r 3 − Λ ′ .(50)
We now compute the angular velocity Ω:
$$\Omega \equiv \frac{\mathrm{d}\phi}{\mathrm{d}t} = \frac{1}{a \pm \dfrac{r^{3/2}}{\sqrt{1 - \Lambda' r^3 - \dfrac{e^2}{r}}}}. \quad (51)$$
In terms of the angular velocities the quantities z 1 , z 2 read as follows:
z 1 = Φ − d Ω d U t e − Φ − e U φ e U t d − Φ − d U φ d = U t e [Φ − d Ω d − Φ − e Ω e ] U t d (1 − Φ − d Ω d ) ,(52)z 2 = Φ + d Ω d U t e − Φ + e U φ e U t d − φ + d U φ d = Φ + d Ω d U t e − Φ t e U φ e U t d (1 − Φ + d Ω d ) = U t e [Φ + d Ω d − Φ + e Ω e ] U t d (1 − Φ + d Ω d )
.
(53)
Thus for the Kerr-Newman-(anti) de Sitter black hole we can write for the redshift and blueshift, respectively:
z red = Ω d Φ − d − Φ − e Ω e 1 − Φ − d Ω d [r 3/2 e ± a −e 2 /r e + r 3 e 1 r 3 e − Λ ′ ] r 3/4 e 2e 2 r 1/2 e + r 3/2 e − 3 √ r e − r 3/2 e a 2 Λ ′ ± 2a −e 2 /r e + r 3 e 1 r 3 e − Λ ′ × r 3/4 d 2e 2 r 1/2 d + r 3/2 d − 3 √ r d − r 3/2 d a 2 Λ ′ ± 2a −e 2 /r d + r 3 d 1 r 3 d − Λ ′ r 3/2 d ± a −e 2 /r d + r 3 d 1 r 3 d − Λ ′ ,(54)z blue = Ω d Φ + d − Φ + e Ω e 1 − Φ + d Ω d [r 3/2 e ± a −e 2 /r e + r 3 e 1 r 3 e − Λ ′ ] r 3/4 e 2e 2 r 1/2 e + r 3/2 e − 3 √ r e − r 3/2 e a 2 Λ ′ ± 2a −e 2 /r e + r 3 e 1 r 3 e − Λ ′ × r 3/4 d 2e 2 r 1/2 d + r 3/2 d − 3 √ r d − r 3/2 d a 2 Λ ′ ± 2a −e 2 /r d + r 3 d 1 r 3 d − Λ ′ r 3/2 d ± a −e 2 /r d + r 3 d 1 r 3 d − Λ ′ ,(55)
where now r e and r d stand for the radius of the emitter's and detector's orbits, respectively. These elegant and novel expressions can be written in terms of the physical parameters of the Kerr-Newman-(anti) de Sitter black hole and the detector radius, r d , as follows:
z red = r 3/4 d 2e 2 r 1/2 d + r 3/2 d − 3 √ r d − r 3/2 d a 2 Λ ′ ± 2a −e 2 /r d + r 3 d 1 r 3 d − Λ ′ r 3/4 e 2e 2 r 1/2 e + r 3/2 e − 3 √ r e − r 3/2 e a 2 Λ ′ ± 2a −e 2 /r e + r 3 e 1 r 3 e − Λ ′ × a(−Λ ′ r 2 e (r 2 e + a 2 ) − 2r e + e 2 ) − r 2 e ∆ KN r (r e ) (±[r 3/2 e 1 − Λ ′ r 3 d − e 2 r d − r 3/2 d 1 − Λ ′ r 3 e − e 2 re ]) (r 3/2 d ± a 1 − Λ ′ r 3 d − e 2 r d )[(∆ KN r (r e ) − a 2 )r 3/2 d + (ar 2 e + r 2 e ∆ KN r (r e ))(± 1 − Λ ′ r 3 d − e 2 r d )](56)z blue = r 3/4 d 2e 2 r 1/2 d + r 3/2 d − 3 √ r d − r 3/2 d a 2 Λ ′ ± 2a −e 2 /r d + r 3 d 1 r 3 d − Λ ′ r 3/4 e 2e 2 r 1/2 e + r 3/2 e − 3 √ r e − r 3/2 e a 2 Λ ′ ± 2a −e 2 /r e + r 3 e 1 r 3 e − Λ ′ × a(−Λ ′ r 2 e (r 2 e + a 2 ) − 2r e + e 2 ) + r 2 e ∆ KN r (r e ) (±[r 3/2 e 1 − Λ ′ r 3 d − e 2 r d − r 3/2 d 1 − Λ ′ r 3 e − e 2 re ]) (r 3/2 d ± a 1 − Λ ′ r 3 d − e 2 r d )[(∆ KN r (r e ) − a 2 )r 3/2 d + (ar 2 e − r 2 e ∆ KN r (r e ))(± 1 − Λ ′ r 3 d − e 2 r d )] ,(57)
where we define:
$$\Delta_r^{KN}(r_e) := \left(1 - \Lambda' r_e^2\right)\left(r_e^2 + a^2\right) - 2r_e + e^2 \quad (58)$$
and we have made use of the relation Φ e = Φ d . The remarkable closed form analytic expressions for the frequency shifts we obtained in eqns.(56)-(57), constitute a new result in the theory of General Relativity, in which all the physical parameters of the exact theory enter on an equal footing.
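A numerical evaluation of the frequency shift can be assembled from the building blocks (44), (49) and (51)-(53). The sketch below is our own illustrative implementation (function names are ours), with the apparent impact parameter evaluated at the emitter radius, using $\Phi_e = \Phi_d$ along the photon's null geodesic.

```python
import numpy as np

def Delta_r(r, a, e, Lam):
    """Horizon function (58): (1 - Lam r^2)(r^2 + a^2) - 2r + e^2, with M = 1."""
    return (1.0 - Lam * r**2) * (r**2 + a**2) - 2.0 * r + e**2

def Omega(r, a, e, Lam, branch=+1):
    """Angular velocity of a circular equatorial orbit, equation (51)."""
    return 1.0 / (a + branch * r**1.5 / np.sqrt(1.0 - Lam * r**3 - e**2 / r))

def U_t(r, a, e, Lam, branch=+1):
    """U^t component (49) for an equatorial circular orbit (Xi = 1 + a^2 Lam)."""
    Xi = 1.0 + a**2 * Lam
    root = np.sqrt(r**4 * (1.0 / r**3 - Lam) - e**2)
    return (r**2 + branch * a * root) / (
        Xi**2 * r * np.sqrt(2*e**2 + r*(r - 3) - r**2 * a**2 * Lam + 2*branch*a*root))

def impact_parameter(r, a, e, Lam, sign=+1):
    """Apparent impact parameter Phi^(+/-) of equation (44), evaluated at radius r."""
    D = Delta_r(r, a, e, Lam)
    return (a * (D - (r**2 + a**2)) + sign * r**2 * np.sqrt(D)) / (D - a**2)

def frequency_shift(r_e, r_d, a, e, Lam, sign=+1, branch=+1):
    """Shift z of equations (52)-(53), with Phi_e = Phi_d evaluated at the emitter radius."""
    Phi = impact_parameter(r_e, a, e, Lam, sign)
    Om_e, Om_d = Omega(r_e, a, e, Lam, branch), Omega(r_d, a, e, Lam, branch)
    Ut_e, Ut_d = U_t(r_e, a, e, Lam, branch), U_t(r_d, a, e, Lam, branch)
    return Ut_e * Phi * (Om_d - Om_e) / (Ut_d * (1.0 - Phi * Om_d))
```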
Redshift/blueshift for circular equatorial orbits in Kerr-de Sitter spacetime
For zero electric charge, e = 0, eqns.(56)-(57) reduce to:
z red = r 3/4 d r 3/2 d − 3 √ r d − r 3/2 d a 2 Λ ′ ± 2a r 3 d ( 1 r 3 d − Λ ′ ) r 3/4 e r 3/2 e − 3 √ r e − r 3/2 e a 2 Λ ′ ± 2a r 3 e ( 1 r 3 e − Λ ′ ) × [a(−Λ ′ r 2 e (r 2 e + a 2 ) − 2r e ) − r 2 e ∆ r (r e )](±[r 3/2 e 1 − Λ ′ r 3 d − r 3/2 d 1 − Λ ′ r 3 e ]) (r 3/2 d ± a 1 − Λ ′ r 3 d )[(∆ r (r e ) − a 2 )r 3/2 d + (ar 2 e + r 2 e ∆ r (r e ))(± 1 − Λ ′ r 3 d )] ,(59)z blue = r 3/4 d r 3/2 d − 3 √ r d − r 3/2 d a 2 Λ ′ ± 2a r 3 d ( 1 r 3 d − Λ ′ ) r 3/4 e r 3/2 e − 3 √ r e − r 3/2 e a 2 Λ ′ ± 2a r 3 e ( 1 r 3 e − Λ ′ ) × [a(−Λ ′ r 2 e (r 2 e + a 2 ) − 2r e ) + r 2 e ∆ r (r e )](±[r 3/2 e 1 − Λ ′ r 3 d − r 3/2 d 1 − Λ ′ r 3 e ]) (r 3/2 d ± a 1 − Λ ′ r 3 d )[(∆ r (r e ) − a 2 )r 3/2 d + (ar 2 e − r 2 e ∆ r (r e ))(± 1 − Λ ′ r 3 d )]
.
In the particular case when the detector is located far away from the source and the condition is fulfilled: r d ≫ M ≥ a, the redshift and blueshift respectively take the form:
z red = √ 1 − a 2 Λ ′ [a(Λ ′ r 2 e (r 2 e + a 2 ) + 2r e ) + r 2 e ∆ r (r e )](± 1 − Λ ′ r 3 e ) r 3/4 e r 3/2 e − 3 √ r e − r 3/2 e a 2 Λ ′ ± 2a r 3 e ( 1 r 3 e − Λ ′ )[∆ r (r e ) − a 2 ] ,(61)z blue = √ 1 − a 2 Λ ′ [a(Λ ′ r 2 e (r 2 e + a 2 ) + 2r e ) − r 2 e ∆ r (r e )](± 1 − Λ ′ r 3 e ) r 3/4 e r 3/2 e − 3 √ r e − r 3/2 e a 2 Λ ′ ± 2a r 3 e ( 1 r 3 e − Λ ′ )[∆ r (r e ) − a 2 ]
.
The frequency shifts in this far-detector limit are given by (61) and (62). Assuming Λ = 0, we derive from (8) and (11) the following equation:
$$\frac{\mathrm{d}\phi}{\mathrm{d}\theta} = \frac{\dfrac{aP}{\Delta^{KN}} - aE + \dfrac{L}{\sin^2\theta}}{\sqrt{Q - L^2\dfrac{\cos^2\theta}{\sin^2\theta} + a^2\cos^2\theta\,(E^2 - 1)}}. \quad (63)$$
In (63), $\Delta^{KN} := r^2 + a^2 + e^2 - 2Mr$ (when we set $M = 1$, $\Delta^{KN} = r^2 + a^2 + e^2 - 2r$). Using the variable $z = \cos^2\theta$, with $-\frac{1}{2}\frac{\mathrm{d}z}{\sqrt{z}}\frac{1}{\sqrt{1-z}} = \mathrm{sgn}(\pi/2 - \theta)\,\mathrm{d}\theta$, we will determine for the first time in closed analytic form the amount of frame-dragging for timelike spherical orbits in the Kerr-Newman spacetime.
Thus for instance, expressed in terms of the new variable :
L sin 2 θ dθ √ Θ = L 1 − z − 1 2 dz √ z 1 αz 2 − (α + β)z + Q = L 1 − z − 1 2 dz √ z 1 |a| √ 1 − E 2 1 (z − z + )(z − z − ) ,(64)
where α = a 2 (1 − E 2 ), β = L 2 + Q. The range of z for which the motion takes place includes the equatorial value, z = 0:
0 ≤ z ≤ z − .(65)
We will prove first the following exact result:
Proposition 1 z− 0 L 1 − z − 1 2 dz √ z 1 |a| √ 1 − E 2 1 (z − z + )(z − z − ) = − L |a| √ 1 − E 2 π 2 (1 − z − ) −1 (z + − z − ) −1/2 F 1 1 2 , 1, 1 2 , 1, z − z − − 1 , z − z − − z + = − L |a| √ 1 − E 2 π 2 z −1/2 + F 1 1 2 , 1 2 , 1, 1, z − z + , z − .(66)
Proof. We compute first the integral:
z− zj L 1 − z − 1 2 dz √ z 1 |a| √ 1 − E 2 1 (z − z + )(z − z − ) .(67)
Applying the transformation z = z − + ξ 2 (z j − z − ) in (67) we obtain
z− zj L 1 − z − 1 2 dz √ z 1 |a| √ 1 − E 2 1 (z − z + )(z − z − ) = −L 2|a| √ 1 − E 2 z − − z j (1 − z − ) 1 √ z − √ z − − z + √ z j − z − 1 0 dx [1 − x zj −z− 1−z− ] 1 1 − x z−−zj z−−z+ 1 √ x 1 1 − x z−−zj z− = −L 2|a| √ 1 − E 2 z − − z j (1 − z − ) 1 √ z − √ z − − z + √ z j − z − × Γ 1 2 Γ(1) Γ 3 2 F D 1 2 , 1, 1 2 , 1 2 , 3 2 , z j − z − 1 − z − , z − − z j z − , z − − z j z − − z + ,(68)
where x ≡ ξ 2 . Setting z j = 0 yields:
z− 0 L 1 − z − 1 2 dz √ z 1 |a| √ 1 − E 2 1 (z − z + )(z − z − ) = −L 2|a| √ 1 − E 2 (1 − z − ) −1 (z + − z − ) −1/2 Γ 1 2 Γ(1) Γ 3 2 F D 1 2 , 1, 1 2 , 1 2 , 3 2 , −z − 1 − z − , 1, z − z − − z + = −L 2|a| √ 1 − E 2 (1 − z − ) −1 (z + − z − ) −1/2 Γ(1/2) 2 Γ 3 2 Γ 3 2 F 1 1 2 , 1, 1 2 , 1, −z − 1 − z − , z − z − − z + = −L 2|a| √ 1 − E 2 z −1/2 + πF 1 1 2 , 1 2 , 1, 1, z − z + , z − .(69)
For producing the result in the last line of equation (69), we used the following transformation property of Appell's hypergeometric function F 1 :
Lemma 2 y 1+β−γ (1 − y) γ−α−1 (x − y) −β F 1 1 − β ′ , β, 1 + α − γ, 2 + β − γ, y y − x , y y − 1 = y 1+β−γ x −β F 1 1 + β + β ′ − γ, β, 1 + α − γ, 2 + β − γ, y x , y .(70)
On the other hand we compute analytically the second integral that contributes to frame-dragging and we obtain:
Proposition 3 z− 0 aP ∆ KN − aE √ Θ ′ dθ = aP ∆ KN − aE 1 |a| √ 1 − E 2 −1 2 Γ 1 2 Γ 1 2 √ z + − z − F 1 2 , 1 2 , 1, − z − z + − z − = aP ∆ KN − aE 1 |a| √ 1 − E 2 − π 2 1 √ z + F 1 2 , 1 2 , 1, z − z + .(71)
We thus obtain the following result in closed analytic form for the amount of frame-dragging that a timelike spherical orbit in Kerr-Newman spacetime undergoes.
Theorem 4 As θ goes through a quarter of a complete oscillation we obtain the change in azimuth φ, ∆φ GTR :
\Delta\phi^{GTR} = -\frac{L}{|a|\sqrt{1-E^{2}}}\,\frac{\pi}{2}\,z_{+}^{-1/2}\,F_{1}\!\left(\tfrac{1}{2},\tfrac{1}{2},1,1,\frac{z_{-}}{z_{+}},z_{-}\right) + \left(\frac{aP}{\Delta_{KN}} - aE\right)\frac{1}{|a|\sqrt{1-E^{2}}}\left(-\frac{\pi}{2}\right)\frac{1}{\sqrt{z_{+}}}\,F\!\left(\tfrac{1}{2},\tfrac{1}{2},1,\frac{z_{-}}{z_{+}}\right). \qquad (72)
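For numerical work, both special functions in (72) are available in the mpmath library. The following is a minimal sketch, not code from the paper: the combination aP/Δ_KN − aE is passed in as one precomputed number (P and Δ_KN are defined earlier in the text), and all function and variable names are illustrative.

```python
# Hedged numerical sketch of Eq. (72); "aP_term" stands for a*P/Delta_KN - a*E.
from mpmath import mp, mpf, sqrt, pi, hyp2f1, appellf1

mp.dps = 30  # working precision

def delta_phi_gtr(L, E, a, z_minus, z_plus, aP_term):
    """Frame-dragging accumulated over a quarter latitudinal oscillation, Eq. (72)."""
    pref = 1 / (abs(a) * sqrt(1 - E**2))   # common prefactor 1/(|a| sqrt(1-E^2))
    x = z_minus / z_plus                   # both hypergeometric arguments must lie in (0, 1)
    term1 = -L * pref * (pi / 2) / sqrt(z_plus) * appellf1(mpf(0.5), mpf(0.5), 1, 1, x, z_minus)
    term2 = aP_term * pref * (-pi / 2) / sqrt(z_plus) * hyp2f1(mpf(0.5), mpf(0.5), 1, x)
    return term1 + term2
```

Bound orbits have E < 1, so the prefactor is real; the series representations of F_1 and 2F_1 converge for the argument range quoted in (65).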
Periods
Squaring the geodesic differential equation for the polar variable (11) (for Λ = 0), multipying by the term cos 2 θ sin 2 θ, and making the change to the variable z, yields the following differential equation for the proper polar period:
d\tau_{\theta} = \frac{(r^{2} + a^{2}z)\,dz}{2\sqrt{z}\,\sqrt{a^{2}(1-E^{2})z^{2} + \left(-a^{2}(1-E^{2}) - L^{2} - Q\right)z + Q}}. \qquad (73)
Finally our closed form analytic computation for the proper polar period yields:
Proposition 5 τ θ = 4 r 2 |a| √ 1 − E 2 π 2 1 √ z + F 1 2 , 1 2 , 1, z − z + − a 2 2|a| 1 √ 1 − E 2 √ z + Γ 1 2 Γ 1 2 F 1 2 , − 1 2 , 1, z − z + − z + a 2 2|a| 1 √ 1 − E 2 π 2 1 √ z + F 1 2 , 1 2 , 1, z − z + .(74)
Proof. We compute first:
z− 0 r 2 dz 2|a| √ 1 − E 2 1 √ z (z − z − )(z − z + ) = r 2 2|a| √ 1 − E 2 F 1 2 , 1 2 , 1, −z − z + − z − Γ 1 2 Γ 1 2 1 √ z + − z − = r 2 2|a| √ 1 − E 2 π 2 1 √ z + F 1 2 , 1 2 , 1, z − z + .(75)
We write:
z− zj a 2 zdz 2|a| √ 1 − E 2 √ z (z − z + )(z − z − ) = z− zj a 2 (z − z + + z + )dz 2|a| √ 1 − E 2 √ z (z − z + )(z − z − )(76)
Applying the change of variables:z = z − + ξ 2 (z j − z − ) we compute the term:
z− zj a 2 (z − z + )dz 2|a| √ 1 − E 2 √ z (z − z + )(z − z − ) = a 2 (z j − z − ) 2|a| √ 1 − E 2 0 1 2ξdξ[z − − z + + ξ 2 (z j − z − )] √ z − √ z + − z − √ z − − z j 1 − ξ 2 (z−−zj ) z−−z+ ξ 2 1 − ξ 2 (z−−zj) z− = a 2 2|a| √ 1 − E 2 (z + − z − )(z j − z − ) z − (z − − z + )(z j − z − ) 1 0 2ξdξ 1 − ξ 2 (z−−zj) z−−z+ 1/2 1 − ξ 2 (z−−zj ) z− ξ 2 = a 2 2|a| √ 1 − E 2 (z + − z − )(z j − z − ) z − (z − − z + )(z j − z − ) 1 0 dx 1 − x(z−−zj) z−−z+ 1/2 1 − x(z−−zj ) z− √ x = a 2 2|a| √ 1 − E 2 (z + − z − )(z j − z − ) z − (z − − z + )(z j − z − ) F 1 1 2 , − 1 2 , 1 2 , 3 2 , z − − z j z − − z + , z − − z j z − Γ 1 2 Γ(1) Γ 3 2(77)
Setting z j = 0 in the expression which involves Appell's hypergeometric function F 1 yields:
z− 0 a 2 (z − z + )dz 2|a| √ 1 − E 2 √ z (z − z + )(z − z − ) = −a 2 2|a| 1 √ 1 − E 2 (z + − z − ) √ z + − z − Γ 1 2 Γ 3 2 Γ 3 2 Γ 3 2 − 1 2 − 1 2 Γ 3 2 − 1 2 Γ 3 2 − 1 2 F 1 2 , − 1 2 , 1, −z − z + − z − (78)
Spherical orbits in Kerr-Newman-(anti) de Sitter spacetime
From (8) and (11) we derive the equation :
dφ dθ = aΞ 2 ∆ KN r [(r 2 + a 2 )E − aL] √ Θ ′ − Ξ 2 (1 + a 2 Λ 3 cos 2 θ)(sin 2 θ) aE sin 2 θ − L √ Θ ′ = aΞ 2 ∆ KN r [(r 2 + a 2 )E − aL] √ Θ ′ − Ξ 2 (1 + a 2 Λ 3 z)(1 − z) aE(1 − z) − L √ Θ ′ .(79)
Using the variable z we obtain the following novel exact result in closed analytic form for the amount of frame-dragging that a timelike spherical orbit in Kerr-Newman-(anti)de Sitter spacetime undergoes. As θ goes through a quarter of a complete oscillation we obtain the change in azimuth φ, ∆φ GTR in terms of Lauricella's F D and Appell's F 1 multivariable generalised hypergeometric functions:
∆φ GTR Λ = aΞ 2 ∆ KN r [(r 2 + a 2 )E − aL] √ z + − z − √ z − − z Λ Γ 2 1 2 −2 a 4 Λ 4 F 1 1 2 , 1 2 , 1 2 , 1, z − z − − z + , z − z − − z Λ + HΞ 2 aE 2 a 4 Λ 3 Γ 2 1 2 F D 1 2 , 1 2 , 1 2 , 1, 3 2 , z − z − − z + , z − z − − z Λ , −η + −HLΞ 2 2 a 4 Λ 3 Γ 2 1 2 1 − z − F D 1 2 , 1 2 , 1 2 , 1, 1, 3 2 , z − z − − z + , z − z − − z Λ , −η, −z − 1 − z − ,(80)
where we define:
H ≡ 1 √ z + − z − √ z − − z Λ 1 (1 + a 2 Λz− 3 ) η ≡ a 2 Λz − 1 + a 2 Λ 3 z − .(81)
In our calculations we used the following property for the values of Lauricella's multivariate function F D :
Frame-dragging effect for polar non-spherical bound orbits in Kerr-Newman spacetime
Polar spherical orbits are characterised by the vanishing of the angular momentum of the particle, i.e. L = 0. We further assume in this section that Λ = 0. The relevant differential equation for the calculation of frame-dragging is:
\frac{d\phi}{dr} = \frac{(2ar - ae^{2})E}{\Delta_{KN}\sqrt{R}}. \qquad (83)
The quartic radial polynomial R is obtained from R ′ in (12) for Λ = L = 0.
Using the partial-fractions technique we integrate from the periastron distance r_P to the apoastron distance r_A. Applying the transformation:

z = \frac{1}{\omega}\,\frac{r - \alpha_{\mu+1}}{r - \alpha_{\mu+2}} = \frac{\alpha-\gamma}{\alpha-\beta}\,\frac{r-\beta}{r-\gamma}, \qquad (84)
and organizing the roots of the radial polynomial and the radii of the event and Cauchy horizon in the ascending order of magnitude:
α ρ > α σ > α ν > α i ,(85)
with the correspondence α ρ = α µ = α, α σ = α µ+1 = β, α ν = α µ+2 = γ, α i = α µ−i , i = 1, 2, 3, α µ−1 = a µ−2 = r ± , α µ−3 = δ we compute the exact analytic result in terms of Appell's hypergeometric function F 1 :
∆φ GT R tpKN = 2 − ω 3/2 H + A + tpKN F 1 3 2 , 1, 1 2 , 2, κ t2 + , κ ′2 Γ 3 2 Γ 1 2 Γ(2) + √ ω H + A + tpKN F 1 1 2 , 1, 1 2 , 1, κ t2 + , κ ′2 Γ 2 1 2 Γ(1) − ω 3/2 H − A − tpKN F 1 3 2 , 1, 1 2 , 2, κ t2 − , κ ′2 Γ 3 2 Γ 1 2 Γ(2) + √ ω H − A − tpKN F 1 1 2 , 1, 1 2 , 1, κ t2 − , κ ′2 Γ 2 1 2 Γ(1)(86)
where the partial fraction expansion parameters are given by:
A + tpKN = −r + 2aE + ae 2 E r − − r + , A − tpKN = +r − 2aE − ae 2 E r − − r + .(87)
The variables of the hypergeometric functions are given in terms of the roots of the quartic and the radii of the horizons by the expressions:
\kappa_{t\pm}^{2} := \frac{\alpha-\beta}{\alpha-\gamma}\,\frac{r_{\pm}-\gamma}{r_{\pm}-\beta}, \qquad \kappa'^{2} := \frac{\alpha-\beta}{\alpha-\gamma}\,\frac{\delta-\gamma}{\delta-\beta}, \qquad (88)
while
H ± ≡ (1 − E 2 )(α µ+1 − α µ−1 ) α µ − α µ+1 α µ+1 − α µ−3 = (1 − E 2 )(β − r ± ) α − β β − δ.(89)
Exact calculation of the orbital period in non-spherical polar Kerr-Newman geodesics
In this section we will compute a novel exact formula for the orbital period for a test particle in a non-spherical polar Kerr-Newman geodesic. The relevant differential equation is:
cdt dr = r 2 + a 2 ∆ KN √ R E(r 2 + a 2 ) − a 2 E sin 2 θ √ R ,(90)
and we integrate from periapsis to apoapsis and back to periapsis. Indeed, our analytic computation yields:
ct ≡ cP KN = Eβ 2 2 GM c 2 2 √ 1 − E 2 2 √ α − γ 2 √ β − δ Γ 2 (1/2) Γ(1) F 1 1 2 , 2, 1 2 , 1, ω, κ 2 − 2ωγ β Γ(3/2)Γ(1/2) Γ(2) F 1 3 2 , 2, 1 2 , 2, ω, κ 2 + γ 2 ω 2 β 2 Γ(5/2)Γ(1/2) Γ(3) F 1 5 2 , 2, 1 2 , 3, ω, κ 2 +2E(a 2 − e 2 ) GM c 2 ω 1 − E 2 1 2 (α − β)(β − δ) F (1/2, 1/2, 1, κ 2 ) Γ 2 (1/2) Γ(1) + 4EGM c 2 ω 1 − E 2 β 2 (α − β)(β − δ) Γ 2 (1/2) Γ(1) F 1 1 2 , 1, 1 2 , 1, ω, κ 2 − ωγ β F 1 3 2 , 1, 1 2 , 2, ω, κ 2 Γ(3/2)Γ(1/2) Γ(2) + 4EG2M c 2 ω 1 − E 2 1 2 (α − β)(β − δ) F (1/2, 1/2, 1, κ 2 ) Γ 2 (1/2) Γ(1) − 4EG2M c 2 − ω 3/2 A KN + H + F 1 3 2 , 1, 1 2 , 2, κ 2 + , µ 2 Γ(3/2)Γ(1/2) Γ(2) + ω 1/2 A KN + H + F 1 1 2 , 1, 1 2 , 1, κ 2 + , µ 2 Γ 2 (1/2) Γ(1) − ω 3/2 A KN − H − F 1 3 2 , 1, 1 2 , 2, κ 2 − , µ 2 Γ(3/2)Γ(1/2) Γ(2) + ω 1/2 A KN − H − F 1 1 2 , 1, 1 2 , 1, κ 2 − , µ 2 Γ 2 (1/2) Γ(1) +2E − ω 3/2 (4e 2 r + − e 4 ) (−2 √ 1 − a 2 − e 2 )H + F 1 3 2 , 1, 1 2 , 2, κ 2 + , µ 2 Γ(3/2)Γ(1/2) Γ(2) + ω 1/2 (4e 2 r + − e 4 ) (−2 √ 1 − a 2 − e 2 )H + F 1 1 2 , 1, 1 2 , 1, κ 2 + , µ 2 Γ 2 (1/2) Γ(1) − ω 3/2 (e 4 − 4e 2 r − ) (−2 √ 1 − a 2 − e 2 )H − F 1 3 2 , 1, 1 2 , 2, κ 2 − , µ 2 Γ(3/2)Γ(1/2) Γ(2) + ω 1/2 (e 4 − 4e 2 r − ) (−2 √ 1 − a 2 − e 2 )H − F 1 1 2 , 1, 1 2 , 1, κ 2 − , µ 2 Γ 2 (1/2) Γ(1) + −a 2 E GM c 2 2 √ Q sin(ϕ)F 1 1 2 , 1 2 , 1 2 , 3 2 , sin 2 ϕ, κ 2′ sin 2 ϕ + − sin(ϕ)F 1 1 2 , 1 2 , 1 2 , 3 2 , sin 2 ϕ, κ 2′ sin 2 ϕ + sin(ϕ)F 1 1 2 , 1 2 , − 1 2 , 3 2 , sin 2 ϕ, κ 2′ sin 2 ϕ × 1 κ 2′ (91) where ϕ = am 2 Q 4 2 √ 1 − E 2 1 2 √ α − γ 1 2 √ β − δ π 2 F 1 2 , 1 2 , 1, κ 2 , a 2 (1 − E 2 ) Q(92)
Table 1: Lense-Thirring precession for the star S2 in the central arcsecond of the galactic centre, using the exact formula (86). We assume a central galactic Kerr-Newman black hole with mass M_BH = 4.06 × 10^6 M_⊙ and that the orbit of S2 star is a timelike non-spherical polar Kerr-Newman geodesic. The computation of the orbital period of the star S2 was performed using the exact result in eqn. (91). Parameters for the star S2: P_KN = 15.15 yr, LTP = 6.25 × 10^6 yr.
A KN + := − a 2 + e 2 − 2r ′ + r ′ − − r ′ + , A KN − := − −a 2 − e 2 + 2r ′ − r ′ − − r ′ + ,(93)
and the moduli (variables) of the hypergeometric function of Appell are given by:
µ 2 = κ 2 = α − β α − γ δ − γ δ − β κ 2 ± = α − β α − γ r ′ ± − γ r ′ ± − β .(94)
For zero electric charge, e = 0, Eqn.(91) reduces correctly to Eqn. (33) in [6] for the case of a Kerr black hole. The Lense-Thirring period for a non-spherical polar timelike geodesic in Kerr-Newman BH geometry, is defined in terms of the Lense-Thirring precession Eq. (86) and its orbital period Eq. (91) as follows:
\mathrm{LTP} := \frac{2\pi P_{KN}}{\Delta\phi^{GTR}_{tpKN}}. \qquad (95)
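As a quick numerical illustration of (95): if the orbital period is given in years and the precession in arcseconds per revolution, the Lense-Thirring period follows from a single unit conversion. This is a minimal sketch with illustrative names.

```python
import math

ARCSEC_PER_REVOLUTION = 360.0 * 3600.0   # 2*pi radians expressed in arcseconds

def lense_thirring_period(P_orbital, dphi_arcsec_per_rev):
    """Eq. (95): LTP = 2*pi*P / dphi, with dphi converted to radians per revolution."""
    dphi_rad = dphi_arcsec_per_rev * 2.0 * math.pi / ARCSEC_PER_REVOLUTION
    return 2.0 * math.pi * P_orbital / dphi_rad   # same time units as P_orbital
```

For precessions of a few arcseconds per revolution and orbital periods of a few tens of years, this gives Lense-Thirring periods of order 10^6-10^7 yr, the order of magnitude quoted in Tables 1-2.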
We now proceed to calculate, using our exact analytic solutions and assuming a central galactic Kerr-Newman black hole, the Lense-Thirring effect and the corresponding Lense-Thirring period for the observed stars S2 and S14, for various values of the Kerr parameter and the electric charge of the central black hole; see Tables 1-2. We observe that the contribution of the electric charge to the frame-dragging precession is small.
Periapsis advance for non-spherical polar timelike Kerr-Newman orbits
The purpose of this section is twofold. First, we apply closed form analytic expressions for the periapsis advance for non-constant radius orbits in Kerr-Newman spacetime, for the computation of this relativistic effect for the observed S-star orbits in the central arcsecond of SgrA* supermassive black hole.
Table 2: Lense-Thirring precession for the star S14 in the central arcsecond of the galactic centre, using the exact formula (86). We assume a central galactic Kerr-Newman black hole with mass M_BH = 4.06 × 10^6 M_⊙ and that the orbit of S14 star is a timelike non-spherical polar Kerr-Newman geodesic. The computation of the orbital period of the star S14 was performed using the exact result in eqn. (91). Parameters for the star S14: P_KN = 37.88 yr, LTP = 7.38 × 10^6 yr.
Secondly, this computation will provide us with realistic values for the first integrals of motion associated with non-circular orbits in KN and KN-(a)dSspacetime. In principle, the latter values can be used as input in our analytic expressions for the redshift/blueshift experienced by photons emitted by test particles such as S-stars.
In [6] we computed a closed-form analytic expression for the periapsis advance that a non-spherical polar timelike orbit undergoes in Kerr spacetime in terms of Abel-Jacobi's amplitude function:
∆Ψ GTR = ∆Ψ − 2π = am Q 4 √ 1 − E 2 1 √ α − γ 1 √ β − δ π 2 F 1 2 , 1 2 , 1, κ 2 , a 2 (1 − E 2 ) Q − 2π.(96)
The functional form of the solution will remain the same by incorporating the electric charge of the black hole, however the roots α, β, γ, δ will now be solutions of the quartic polynomial:
R = \left[(r^{2}+a^{2})E\right]^{2} - \left(r^{2}+a^{2}+e^{2}-2r\right)\left(r^{2} + Q + a^{2}E^{2}\right) = 0, \qquad (97)
and thus they will differ from those of [6]. We compute with the aid of (96) the periapsis advance for the stars S2 and S14, assuming that they orbit in a timelike non-spherical polar Kerr-Newman geodesic. Our results are displayed in Tables 3 and 4. We also computed, with the aid of the exact formula Eqn. (103) in [8], the pericentre-shift for the stars S2 and S14 for various values of the spin and charge of the central black hole. By doing this exercise, we gain an appreciation of the effect of the electric charge of the rotating galactic black hole (we assume that the KN solution describes the curved spacetime geometry around SgrA*) on this observable. We also assume that the angular momentum axis of the orbit is co-aligned with the spin axis of the black hole and that the S-stars can be treated as neutral test particles, i.e. their orbits are timelike non-circular equatorial Kerr-Newman geodesics. Our results are displayed in Tables 5-8. It is evident in this case that the value of the electric charge plays a significant role in the value of the pericentre-shift.
A few comments are in order. The values of the hypothetical electric charge of the central Kerr-Newman black hole have been chosen so that the surrounding spacetime represents a black hole, i.e. the singularity surrounded by the horizon, the electric charge and angular momentum J must be restricted by the relation:
\frac{GM}{c^{2}} \geq \left[\left(\frac{J}{Mc}\right)^{2} + \frac{Ge^{2}}{c^{4}}\right]^{1/2} \quad\Leftrightarrow\qquad (98)

\frac{GM}{c^{2}} \geq \left[a^{2} + \frac{Ge^{2}}{c^{4}}\right]^{1/2} \quad\Rightarrow\qquad (99)

e^{2} \leq GM^{2}\left(1 - a'^{2}\right) \qquad (100)
where in the last inequality a′ = a/(GM/c²) denotes a dimensionless Kerr parameter. Concerning the tentative values for the electric charge e we used in applying our exact solutions to the case of the SgrA* black hole, we note that their likelihood is debatable: there is an expectation that the electric charge trapped in the galactic nucleus will not likely reach values as high as the near-extremal values permitted by (100) that allow the avoidance of a naked singularity. However, more precise statements on the magnitude of the galactic black hole's electric charge, or on its upper bound, will only be reached once the relativistic effects predicted in this work are measured and the theory developed here is compared with experimental data.

Table 6: Periastron precession for the star S14 in the central arcsecond of the galactic centre, using the exact formula Eqn. (103) in [8]. We assume a central galactic Kerr-Newman black hole with mass M_BH = 4.06 × 10^6 M_⊙ and that the orbit of the star S14 is a timelike non-circular equatorial Kerr-Newman geodesic.
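Returning to the bound (100), the black-hole condition is simplest in geometrized units (G = c = 1). The following minimal sketch encodes it; the function names are illustrative and not from the paper.

```python
import math

def max_charge(M, a_prime):
    """Largest electric charge compatible with Eq. (100) in geometrized units (G = c = 1):
    e^2 <= M^2 (1 - a'^2), where a' = a/M is the dimensionless Kerr parameter."""
    if not 0.0 <= a_prime <= 1.0:
        raise ValueError("a' must lie in [0, 1] for a black hole")
    return M * math.sqrt(1.0 - a_prime**2)

def is_black_hole(M, a, e):
    """Check M >= sqrt(a^2 + e^2), i.e. Eqs. (98)-(99) with G = c = 1."""
    return M >= math.sqrt(a**2 + e**2)
```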
Periapsis advance for non-spherical polar timelike Kerr-Newman -de Sitter orbits
In this section we are going to derive a new closed-form expression for the pericentre-shift of a test particle in a timelike non-spherical polar Kerr-Newman-de Sitter geodesic. After one complete revolution the angular integration has to satisfy the equation:
dθ √ Θ ′ = 2 rP rA dr √ R ′ = 2 √ ω
Inverting the elliptic integral for z we obtain:
z = −β 1 ω 1 sn 2 2 √ ω √ Λ 3 H −F D 1 2 , β, 1, x Γ(1/2) 2 Γ(1) + ωF D 3 2 , β, 2, x Γ( 3 2 )Γ( 1 2 ) Γ(2) √ ω1 √ δ1−β1 √ α1−β12a 2 2ω1 Λ 3 , κ 2 − 1 .(107)
Equivalently the change in latitude after a complete radial oscillation leads to the following exact novel expression for the periastron advance for a test particle in an non-spherical polar Kerr-Newman-de Sitter orbit:
θ = arccos ± √ z = cos −1 ± −β 1 ω 1 sn 2 2 √ ω √ Λ 3 H −F D 1 2 , β, 1, x π + ωF D 3 2 , β, 2, x Γ( 3 2 )Γ( 1 2 ) Γ(2) √ δ1−β1 √ α1−β1a 2 √ ω1 Λ 3 , κ 2 − 1 ,(108)
where:
\beta \equiv \left(\tfrac{1}{2},\tfrac{1}{2},\tfrac{1}{2}\right), \qquad \mathbf{x} \equiv \left(\kappa^{2},\lambda^{2},\mu^{2}\right). \qquad (109)
Also the Jacobi modulus κ of the Jacobi's sinus amplitudinous elliptic function in formula (108), for the periapsis advance that a non-spherical polar orbit undergoes in the Kerr-Newman-de Sitter spacetime, is given in terms of the roots of the angular elliptic integral by:
κ 2 = ω 1 δ 1 δ 1 − β 1 = α 1 − β 1 α 1 δ 1 δ 1 − β 1 .(110)
The roots z Λ , z + , z − appearing in (106) are roots of the polynomial equation:
z 3 ( a 4 Λ 3 ) − z 2 a 2 Λ 3 [Q + (L − aE) 2 Ξ 2 ] + a 2 Ξz 2 (1 − E 2 Ξ) − z{[Q + (L − aE) 2 Ξ 2 ]Ξ + a 2 + 2aEΞ 2 (L − aE)} + Q = 0,(111)
after setting L = 0. (We have the correspondence α_1 = z_+, β_1 = z_-, δ_1 = z_Λ.)
Computation of first integrals for spherical timelike geodesics in Kerr-Newman spacetime
For zero electric charge e = 0, Equations (118) and (119) reduce correctly to the corresponding equations for the first integrals of motion in Kerr spacetime [42]:
ξ = M (r 2 − a 2 ) ± r∆ (1 − 1 E 2 (1 − M r )) a(r − M ) ,(120)η Q a 2 (r − M ) = r 3 M [4a 2 M − r(r − 3M ) 2 ] − 2r 3 M r − M ∆[1 ± 1 − 1 E 2 (1 − M r )] + r 2 E 2 [r(r − 2M ) 2 − a 2 M ].(121)
The apparent impact factor for more general orbits
The apparent impact parameter Φ for the Kerr-Newman-(anti) de Sitter black hole can also be computed in the case in which the considered orbits depart from the equatorial plane and therefore θ = π/2. Again, we compute this quantity from the k µ k µ = 0 relation just taking into account its maximum character, i.e., that k r = 0. Our calculation yields:
\Phi_{\gamma} = \frac{-\left[a\Xi^{2}(r^{2}+a^{2}) - a\Xi^{2}\Delta^{KN}_{r}\right] \pm \sqrt{\Xi^{2}\Delta^{KN}_{r}\left[\Xi^{2}r^{4} + Q_{\gamma}\left(a^{2} - \Delta^{KN}_{r}\right)\right]}}{-a^{2}\Xi^{2} + \Xi^{2}\Delta^{KN}_{r}}. \qquad (122)

Our exact expression (122) for the apparent impact parameter in the KN(a)dS spacetime, for zero cosmological constant (Λ = 0) and zero electric charge (e = 0), reduces to eqn. (59) (the apparent impact parameter for the Kerr black hole) in [30]. Also for zero value of Carter's constant Q_γ, equation (122) reduces to eqn. (44).
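A hedged numerical sketch of (122) follows. The expressions used below for Ξ and Δ^KN_r are the standard KN(a)dS metric functions and are assumptions here; they should be checked against the definitions given earlier in the paper.

```python
from math import sqrt

def apparent_impact_parameter(r, a, e, M, Lam, Q_gamma, sign=+1.0):
    """Eq. (122). Assumed standard definitions: Xi = 1 + Lam*a^2/3,
    Delta_r^KN = (r^2 + a^2)(1 - Lam*r^2/3) - 2*M*r + e^2."""
    Xi = 1.0 + Lam * a**2 / 3.0
    Delta_r = (r**2 + a**2) * (1.0 - Lam * r**2 / 3.0) - 2.0 * M * r + e**2
    num = -(a * Xi**2 * (r**2 + a**2) - a * Xi**2 * Delta_r) \
          + sign * sqrt(Xi**2 * Delta_r * (Xi**2 * r**4 + Q_gamma * (a**2 - Delta_r)))
    den = -(a**2) * Xi**2 + Xi**2 * Delta_r
    return num / den
```

Setting Lam = 0, e = 0 and Q_gamma = 0 recovers the limits discussed above.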
Conclusions
In this work, using the Killing-vector formalism and the associated first integrals, we computed the redshift and blueshift of photons that are emitted by geodesic massive particles and travel along null geodesics towards a distant observer located at a finite distance from the KN(a)dS black hole. As a concrete example, we calculated analytically the redshift and blueshift experienced by photons emitted by massive objects orbiting the Kerr-Newman-(anti) de Sitter black hole in equatorial and circular orbits, and following null geodesics towards a distant observer.
In addition, extending previous results in the literature, we calculated in closed analytic form, firstly, the frame-dragging experienced by test particles in non-equatorial spherical timelike orbits in KN and KNdS spacetimes, in terms of the generalised hypergeometric functions of Appell and Lauricella. Secondly, we computed in closed analytic form the periapsis advance for timelike non-spherical polar orbits in Kerr-Newman and Kerr-Newman-de Sitter spacetimes. In the Kerr-Newman case, the pericentre-shift is expressed in terms of Jacobi's amplitude function and the Gauß hypergeometric function, while in the Kerr-Newman-de Sitter case the periapsis-shift is expressed in an elegant way in terms of Jacobi's sinus amplitudinus elliptic function sn and Lauricella's hypergeometric function F_D of three variables.
We also computed the first integrals of motion for non-equatorial Kerr-Newman geodesics of constant radius. These expressions, together with the analytic equation for the apparent impact factor derived in this work (eqn. (122)), can be used to derive closed-form expressions for the redshift/blueshift of the emitted photons for such non-equatorial orbits in Kerr-Newman and Kerr-Newman-(anti) de Sitter spacetimes. This will be a task for the future. Such a future endeavour will also involve the computation of the redshift/blueshift of the emitted photons for realistic values of the first integrals of motion associated with the observed orbits of S-stars (the emitters), such as those of Section 5.5, especially when the first measurements of the pericentre-shift of S2 will take place. The ultimate aim of course is to determine in a consistent way the parameters of the supermassive black hole that resides at the Galactic centre region SgrA*.
It will also be interesting to investigate the effect of a massive scalar field on the orbit of S2 star and in particular on its redshift and periapsis advance by combining the results of this work and the exact solutions of the Klein-Gordon-Fock (KGF) equation on the KN(a)dS and KN black hole backgrounds in terms of Heun functions produced in [43] (see also [44]) . This research will be the theme of a future publication 11 .
The fruitful synergy of theory and experiment in this fascinating research field will lead to the identification of the resident of the Milky Way's Galactic centre region and will provide an important test of General Relativity at the strong field regime.
Figure 1: Specific energy E+ for e = 0.11, Λ′ = 0.001 for different values for the Kerr parameter.
Figure 2: Specific energy E+ for e = 0.11, Λ′ = 0.0001 for different values for the Kerr parameter.
Figure 3: Specific energy E+ for e = 0.11, Λ′ = 0.00001 for different values for the Kerr parameter.
Figure 4: Specific energy E− for e = 0.11, Λ′ = 0.001 for different values for the Kerr parameter.
Figure 5: Specific energy E− for e = 0.11, Λ′ = 0.0001 for different values for the Kerr parameter.
Figure 6: Specific energy E− for e = 0.11, Λ′ = 0.00001 for different values for the Kerr parameter.
Figure 7: Specific angular momentum L+ for e = 0.11, Λ′ = 0.001 for different values for the Kerr parameter.
Figure 8: Specific angular momentum L+ for e = 0.11, Λ′ = 0.0001 for different values for the Kerr parameter.
Figure 9: Specific angular momentum L+ for e = 0.11, Λ′ = 10^−5 for different values for the Kerr parameter.
Figure 10: Specific angular momentum L− for e = 0.11, Λ′ = 10^−3 for different values for the Kerr parameter.
Figure 11: Specific angular momentum L− for e = 0.11, Λ′ = 10^−4 for different values for the Kerr parameter.
Figure 12: Specific angular momentum L− for e = 0.11, Λ′ = 10^−5 for different values for the Kerr parameter.
Figure 13: Specific energy E+ for e = 0.11, Λ′ = −0.001 for different values for the Kerr parameter.
Figure 14: Specific momentum L+ for e = 0.11, Λ′ = −0.001 for different values for the Kerr parameter.
Figure 15: Specific angular momentum L− for e = 0.6, Λ′ = −0.01 for different values for the Kerr parameter.
Figure 16: Specific energy E− for e = 0.11, Λ′ = −0.001 for different values for the Kerr parameter.
Figure 17: Specific angular momentum L− for e = 0.11, Λ′ = −0.001 for different values for the Kerr parameter.
Figure 18: The functions z_red, z_blue as functions of radius r_e. The spin of the Kerr-Newman-de Sitter black hole was chosen as a = 0.52 and the dimensionless cosmological parameter as Λ′ = 10^−33.
Figure 19: The functions z_red, z_blue as functions of radius r_e. The spin of the Kerr-Newman-de Sitter black hole was chosen as a = 0.9939 and the dimensionless cosmological parameter as Λ′ = 10^−33.
F_D^{(n)}\left(\alpha, \beta_{1}, \ldots, \beta_{n}, \gamma, 1, x_{2}, \ldots, x_{n}\right) = \frac{\Gamma(\gamma)\Gamma(\gamma-\alpha-\beta_{1})}{\Gamma(\gamma-\alpha)\Gamma(\gamma-\beta_{1})}\, F_D^{(n-1)}\left(\alpha, \beta_{2}, \ldots, \beta_{n}, \gamma-\beta_{1}, x_{2}, \ldots, x_{n}\right), \quad \max\{|x_{2}|,\ldots,|x_{n}|\} < 1,\ \Re(\gamma-\alpha-\beta_{1}) > 0. \qquad (82)
Lense-Thirring precession data (columns Δφ^GTR_tpKN, P_KN (yr), LTP (yr); Tables 1-2):
a = 0.9939, e = 0.11, Q = 5321.06355, E = 0.999988863.
a = 0.9939, e = 0, Q = 5321.06355, E = 0.999988863: Δφ^GTR_tpKN = 6.64981 arcsec/revol.
Parameters for the star S2; periapsis advance ΔΨ^GTR (Table 3 data):
a = 0.9939, e = 0.11, Q = 5693.30424, E = 0.999979485: ΔΨ^GTR_KN = 682.512 arcsec/revol.
a = 0.52, e = 0.11, Q = 5693.30424, E = 0.999979485: ΔΨ^GTR_KN = 682.533 arcsec/revol.
Parameters for the star S2; periapsis advance δ^teKN_P:
a = 0.52, e = 0.025, L = 75.4539876, E = 0.999979485: δ^teKN_P = 677.571 arcsec/revol.
a = 0.52, e = 0.1, L = 75.4539876, E = 0.999979485.
Table 3: Periastron precession for the star S2 in the central arcsecond of the galactic centre, using the exact formula (96). We assume a central galactic Kerr-Newman black hole with mass M_BH = 4.06 × 10^6 M_⊙ and that the orbit of S2 star is a timelike non-spherical polar Kerr-Newman geodesic.
Parameters for the star S14; periapsis advance ΔΨ^GTR:
a = 0.9939, e = 0.11, Q = 5321.06355, E = 0.999988863: ΔΨ^GTR_KN = 730.351 arcsec/revol.
a = 0.52, e = 0.11, Q = 5321.06355, E = 0.999988863: ΔΨ^GTR_KN = 730.376 arcsec/revol.
Table 4: Periastron precession for the star S14 in the central arcsecond of the galactic centre, using the exact formula (96). We assume a central galactic Kerr-Newman black hole with mass M_BH = 4.06 × 10^6 M_⊙ and that the orbit of S14 star is a timelike non-spherical polar Kerr-Newman geodesic.
Table 5: Periastron precession for the star S2 in the central arcsecond of the galactic centre, using the exact formula Eqn. (103) in [8]. We assume a central galactic Kerr-Newman black hole with mass M_BH = 4.06 × 10^6 M_⊙ and that the orbit of the star S2 is a timelike non-circular equatorial Kerr-Newman geodesic.
Parameters for the star S14; periapsis advance δ^teKN_p (Table 6 data):
a = 0.9939, e = 0.11, L = 72.9456205, E = 0.999988863: δ^teKN_p = 717.128 arcsec/revol.
a = 0.9939, e = 0.025, L = 72.9456205, E = 0.999988863: δ^teKN_p = 718.531 arcsec/revol.
Table 7: Periastron precession for the star S2 in the central arcsecond of the galactic centre, using the exact formula Eqn. (103) in [8], for three different values of the electric charge of the galactic black hole. The Kerr parameter is a_Gal = 0.52 GM_BH/c². We assume a central KN black hole mass M_BH = 4.06 × 10^6 M_⊙ and that the orbit of the star S2 is a timelike non-circular equatorial Kerr-Newman geodesic.
Table 8: Periastron precession for the star S14 in the central arcsecond of the galactic centre, using the exact formula Eqn. (103) in [8], for three different values of the electric charge of the galactic black hole. The Kerr parameter is a_Gal = 0.52 GM_BH/c². We assume a central black hole mass M_BH = 4.06 × 10^6 M_⊙ and that the orbit of the star S14 is a timelike non-circular equatorial Kerr-Newman geodesic.
Observational work is ongoing towards the detection of the periastron shift of the star S2 and the discovery of putative closer stars-in the central milliarcsecond of SgrA* supermassive black hole[13], which could allow an astrometric measurement of the black hole spin as envisaged e.g. in[6].2 This is proven by solving the relativistic Hamilton-Jacobi equation by the method of separation of variables.
As we mentioned already in the introduction, the charged Kerr solution possesses another hidden constant, Carter's constant Q. Alternatively, the complete integrability of the geodesic equations in KN(a)dS spacetime can be understood as follows: The Kerr-Newman family of spacetimes possesses in addition to the two Killing vectors a Killing tensor field K αβ . This tensor can be expressed in terms of null tetrads (e.g. see eqn(7) in[44]) which implies the existence of a constant of motion K = Kµν U µ U ν . This is related to Carter's constant by: K ≡ Q + (L − aE) 2 . See also[41],[42].
The second term in the denominator in equations (45)-(46) encodes the contribution of the movement of the detector's frame[30]. If this quantity is negligible in comparison to the contribution steming from the U t d component then the detector can be considered static at spatial infinity.
We should mention at this point the extreme black hole solutions for spherical timelike non-polar geodesics in Kerr-Newman spacetime obtained in[33] in terms of formal integrals.
The parameters are consistent with data for the periastron, apoastron distances and orbital period for the stars S2, S14[4] (see also[6]).
The sextic polynomial R ′ is obtained by setting µ = 1 and L = 0 in(12).
An initial study of such hypothetical scalar effects has been performed by Gravity Collaboration for the Kerr background and in solving approximately the KGF equation for the case in which the Compton wavelength of the scalar field is much larger than the gravitational radius of the black hole[45].
For computing the radial hyperelliptic integral in (101) 9 in closed analytic form in terms of Lauricella's multivariable hypergeometric function F D , we apply the transformation[6]:The roots of the sextic radial polynomial are organised as follows:where α ν = α µ−1 , α ρ = α µ+1 = r P , α µ = r A , α i = α µ+i+1 , i = 1, 3. Also we define:The integral on the left of Eqn.(101) is an elliptic integral of the form:
Erklärung der Perihelbewegung des Merkur aus der allgemeinen Relativitätstheorie. A Einstein, Sitzungsberichte der Preussischen Akademie der Wissenschaften. 831A. Einstein, Erklärung der Perihelbewegung des Merkur aus der allge- meinen Relativitätstheorie, Sitzungsberichte der Preussischen Akademie der Wissenschaften,(1915) 831.
Measuring distance and properties of the Milky Way's central supermassive black hole with stellar orbits. A M Ghez, arXiv:0808.2870Astrophys. J. 68984ScienceA. M. Ghez et al, Measuring distance and properties of the Milky Way's central supermassive black hole with stellar orbits, Astrophys. J. 689,(2008)1044, (arXiv:0808.2870), L. Meyer et al, The Shortest-Known- Period Star Orbiting Our Galaxy's Supermassive Black Hole, Science 338 (2012)84
The nuclear cluster of Milky Way: our primary testbed for the interaction of a dense star cluster with a massive black hole. R , Class. Quantum Grav. 82244007Rev.Mod. Phys.R. Genzel et al 2010, Rev.Mod. Phys. 82 3121-95, R. Schödel et al, The nuclear cluster of Milky Way: our primary testbed for the interaction of a dense star cluster with a massive black hole Class. Quantum Grav. 31 (2014) 244007
Sinfoni in the galactic centre: young stars and infrared flares in the central light-month. F Eisenhauer, Astrophys.J. 628F. Eisenhauer et al, Sinfoni in the galactic centre: young stars and infrared flares in the central light-month (2005) Astrophys.J. 628 246-59
C M Will, Theory and Experiment in Gravitational Physics. Cambridge University PressSecond EditionC.M. Will,Theory and Experiment in Gravitational Physics, Cambridge University Press, Second Edition (2018)
Periapsis and gravitomagnetic precessions of stellar orbits in Kerr and Kerr-de Sitter black hole spacetimes. G V Kraniotis, Class. Quantum Grav. 24G. V. Kraniotis, Periapsis and gravitomagnetic precessions of stellar orbits in Kerr and Kerr-de Sitter black hole spacetimes,Class. Quantum Grav. 24 (2007) 1775-1808;
Kraniotis Precise analytic treatment of Kerr and Kerr-(anti) de Sitter black holes as gravitational lenses. G V , Class. Quant.Grav. 2885021G. V. Kraniotis Precise analytic treatment of Kerr and Kerr-(anti) de Sitter black holes as gravitational lenses, Class. Quant.Grav. 28 (2011) 085021
Gravitational lensing and frame dragging of light in the Kerr-Newman and the Kerr-Newman-(anti) de Sitter black hole spacetimes. G V Kraniotis, arXiv:1401.7118Gen. Rel. Grav. 461818G. V. Kraniotis, Gravitational lensing and frame dragging of light in the Kerr-Newman and the Kerr-Newman-(anti) de Sitter black hole spacetimes, Gen. Rel. Grav. 46 (2014) 1818 [arXiv:1401.7118]
Frame-dragging and bending of light in Kerr and Kerr-(anti) de Sitter spacetimes. G V Kraniotis, Class.Quant.Grav. 22G. V. Kraniotis, Frame-dragging and bending of light in Kerr and Kerr- (anti) de Sitter spacetimes, Class.Quant.Grav. 22 (2005) 4391-4424
Probing post-Newtonian physics near the galactic black hole with stellar redshift measurements. S Zucker, The Astrophysical Journal. 639S. Zucker et al,Probing post-Newtonian physics near the galactic black hole with stellar redshift measurements, The Astrophysical Journal, 639: L21- L24 (2006)
Detection of the gravitational redshift in the orbit of the star S2 near the Galactic centre massive black hole. A&A. 61515Gravity Collaboration, et al, Detection of the gravitational redshift in the orbit of the star S2 near the Galactic centre massive black hole A&A 615,L15(2018)
T Do, arXiv:1907.10731Relativistic redshift of the star S0-2 orbiting the Galactic centre supermassive black hole. astro-ph.GAT. Do et al, Relativistic redshift of the star S0-2 orbiting the Galactic centre supermassive black hole, arXiv:1907.10731[astro-ph.GA]
What stellar orbit is needed to measure the spin of the Galactic centre black hole from astrometric data?. I Waisberg, Mon.Not.Roy.Astron.Soc. 4763I. Waisberg et al, What stellar orbit is needed to measure the spin of the Galactic centre black hole from astrometric data?, Mon.Not.Roy.Astron.Soc. 476 (2018) no.3, 3600-3610
Metric of a Rotating, Charged Mass. E T Newman, E Couch, K Chinnapared, A Exton, A Prakash, R Torrence, Journal of Mathematical Physics. 6918E. T. Newman, E. Couch, K. Chinnapared, A. Exton, A. Prakash and R. Torrence, Metric of a Rotating, Charged Mass, Journal of Mathematical Physics 6, 918 (1965)
H Ohanian, R Ruffini, Gravitation and Spacetime. New YorkNorton and CompanyH. Ohanian and R. Ruffini 1994, Gravitation and Spacetime (New York: Norton and Company)
Gravitational field of a spinning mass as an example of algebraically special metrics. R P Kerr, Phys. Re. Lett. 11237R P Kerr, Gravitational field of a spinning mass as an example of alge- braically special metrics, Phys. Re. Lett. 11 (1963) 237
. S Perlmutter, Astrophys.Journal. 517565S. Perlmutter et al, Astrophys.Journal 517(1999) 565;
. A V Filippenko, Astron.J. 1161009A. V. Filippenko et al Astron.J.116 1009
General relativity, the cosmological constant and modular forms. G V Kraniotis, S B Whitehouse, Class. Quantum Grav. 19G. V. Kraniotis and S. B. Whitehouse, General relativity, the cosmological constant and modular forms Class. Quantum Grav. 19 (2002), 5073-5100
Compact calculation of the perihelion precession of Mercury in general relativity the cosmological constant and Jacobi's inversion problem. G V Kraniotis, S B Whitehouse, Class. Quantum Grav. 20G. V. Kraniotis and S. B. Whitehouse, Compact calculation of the perihelion precession of Mercury in general relativity the cosmological constant and Jacobi's inversion problem Class. Quantum Grav. 20 (2003) 4817-4835
Precise relativistic orbits in Kerr and Kerr-(anti) de Sitter spacetimes. G V Kraniotis, Class.Quantum Grav. 21G. V. Kraniotis, Precise relativistic orbits in Kerr and Kerr-(anti) de Sitter spacetimes , Class.Quantum Grav. 21 (2004) 4743-4769
Geodesic equation in Schwarzschild-(anti-)de Sitter space-times: Analytical solutions and applications. E Hackmann, C Lämmerzahl, Phys.Rev. 7824035E. Hackmann, C. Lämmerzahl, Geodesic equation in Schwarzschild-(anti- )de Sitter space-times: Analytical solutions and applications Phys.Rev.D78 (2008) 024035
Comparison of general relativistic and pseudo-Newtonian description of Magellanic clouds motion in the field of Milky Way. Z Stuchlík, J Schee, Int. J. of Mod.Phys.D. 2118Influence of the cosmological constant on the motion of Magellanic Clouds in the gravitational field of Milky Way JCAP 9Z.Stuchlík and J. Schee, Comparison of general relativistic and pseudo- Newtonian description of Magellanic clouds motion in the field of Milky Way, Int. J. of Mod.Phys.D 21 (2012)1250031; Influence of the cosmological constant on the motion of Magellanic Clouds in the gravitational field of Milky Way JCAP 9 (2011)018
Contribution of the cosmological constant to the bending of light in Kerr-de Sitter spacetime. J Sultana, Phys.Rev. D. 8842003J. Sultana, Contribution of the cosmological constant to the bending of light in Kerr-de Sitter spacetime Phys.Rev. D 88, 042003 (2013)
The Event Horizon Telescope Collaboration, First M87 Event Horizon Telescope Results. I. The Shadow of the Supermassive Black Hole. The Astrophysical Journal Letters. 8751The Event Horizon Telescope Collaboration, First M87 Event Horizon Tele- scope Results. I. The Shadow of the Supermassive Black Hole, The Astro- physical Journal Letters, 875: L1 (2019) April 10
Equatorial circular orbits in Kerranti-de Sitter spacetimes. E Slaný, M Pokorná, Z Stuchlík, Gen. Rel.Grav. 45E. Slaný, M. Pokorná and Z.Stuchlík, Equatorial circular orbits in Kerr- anti-de Sitter spacetimes, Gen. Rel.Grav. 45 (2013) 2611-2633
. J M Bardeen, W H Press, S P Teukolsky, The Astrophys.Journal. 178J.M. Bardeen, W.H. Press and S. P. Teukolsky, The Astrophys.Journal, 178, (1972), 347-369
. S Soroushfar, Phys.Rev. 94224052S. Soroushfar et al, Phys.Rev. D94 (2016) no.2, 024052
Kerr-Newman-AdS Black Hole In Quintessential Dark Energy. Z Xu, Phys.Rev. 95664015Z. Xu et al, Kerr-Newman-AdS Black Hole In Quintessential Dark Energy, Phys.Rev. D95 (2017) no.6, 064015
Kraniotis, Gravitational lensing and frame dragging of light in the Kerr-Newman and the Kerr-Newman-(anti) de Sitter black hole spacetimes. G V M Kraniotis ; C, D L25, T Merritt, S Alexander, C M Mikkola, ; L Will, F Iorio ; G, A Rubilar, P C Eckart ; 95, G J Fragile, ; P Mathews, ; M Saha, Grould, arXiv:1008.1720v4arXiv:1401.7118Periapsis and gravitomagnetic precessions of stellar orbits in Kerr and Kerr-de Sitter black hole spacetimes. N. N. Weinberg, M. Milosavljević and A. M. Ghez241818Gen. Rel. Grav.. General relativistic effects on the orbit of the S2 star with GRAVITY A&A. Iorio and FG. V. Kraniotis, Periapsis and gravitomagnetic precessions of stellar orbits in Kerr and Kerr-de Sitter black hole spacetimes,Class. Quantum Grav. 24 (2007) 1775-1808; C. M. Will, ApJ, 674 (2008) L25, D. Merritt, T. Alexander, S. Mikkola and C. M. Will, Phys. Rev.D 81 (2010) 062002, L. Iorio,arXiv:1008.1720v4[gr-qc], also: Jaroszyński M. Acta Astronom- ica (1998) 48, 653, G. F. Rubilar and A. Eckart (2001) A&A 374, 95, P.C. Fragile and G. J. Mathews 2000, ApJ 542, 328, N. N. Weinberg, M. Milosavljević and A. M. Ghez, (2005) ApJ 622, 878,Preto M. and P. Saha (2009) ApJ 703, 1743, G. V. Kraniotis, Gravitational lensing and frame dragging of light in the Kerr-Newman and the Kerr-Newman- (anti) de Sitter black hole spacetimes, Gen. Rel. Grav. 46 (2014) 1818 [arXiv:1401.7118],M. Grould et al, General relativistic effects on the or- bit of the S2 star with GRAVITY A&A 608, A60 (2017), L. Iorio and F.
3, A. Hees et al, Testing General Relativity with stellar orbits around the supermassive black hole in our Galactic center. Zhang ; Rong-Gen, Tong-Bo Cai, Shao-Jiang Liu, Wang, On the post-Keplerian corrections to the orbital periods of a twobody system and their application to the Galactic Center. 839Commun.Theor.Phys.Zhang, On the post-Keplerian corrections to the orbital periods of a two- body system and their application to the Galactic Center, Astrophys.J. 839 (2017) no.1, 3, A. Hees et al, Testing General Relativity with stellar orbits around the supermassive black hole in our Galactic center, Phys.Rev.Lett. 118 (2017) no.21, 211101, Rong-Gen Cai, Tong-Bo Liu, , Shao-Jiang Wang,Commun.Theor.Phys. 70 (2018) no.6, 735-748, Gravity Collabora- tion, Scalar field effects on the orbit of S2 star, Mon.Not.Roy.Astron.Soc. 489 (2019) no.4, 4606-4621
Kerr black hole parameters in terms of the redshift/blueshift of photons emitted by geodesic particles. A Herrera-Aguilar, U Nucamendi, Phys.Rev. 9245024A. Herrera-Aguilar, U. Nucamendi, Kerr black hole parameters in terms of the redshift/blueshift of photons emitted by geodesic particles Phys.Rev.D92 (2015) 045024
. M Preto, P Saha, ApJ. 7031743Preto M. and P. Saha (2009) ApJ 703, 1743
Sulle funzioni ipergeometriche a più variabili. G Lauricella, Rend.Circ.Mat. Palermo. 7G. Lauricella Sulle funzioni ipergeometriche a più variabili, Rend.Circ.Mat. Palermo 7 (1893) pp 111-158;
Sur les fonctions hypergéometriques de deux variables. P Appell, J. Math.Pure Appl. 8P. Appell Sur les fonctions hypergéometriques de deux variables, J. Math.Pure Appl.8 (1882) 173-216
Generalized Wilkins effect and selected orbits in a Kerr-Newman geometry. M Johnston, R Ruffini, Phys.Rev.D. 10M. Johnston and R. Ruffini, Generalized Wilkins effect and selected orbits in a Kerr-Newman geometry, Phys.Rev.D.10 (1974) 2324-2329
Observation of Gravitational Waves from a Binary Black Hole Merger. B P Abbott, Phys.Rev.Lett. 11661102B. P. Abbott et al,Observation of Gravitational Waves from a Binary Black Hole Merger Phys.Rev.Lett.116, 061102 (2016)
GW151226: Observation of Gravitational Waves from a 22-Solar-Mass Binary. B P Abbott, Phys.Rev.Lett. 116241103B. P. Abbott et al,GW151226: Observation of Gravitational Waves from a 22-Solar-Mass Binary Phys.Rev.Lett.116, 241103 (2016);
GW170104:Observation of a 50-Solar-Mass Binary Black Hole Coalescence at Redshift 0.2. B P Abbott, Phys. Rev. Lett. 118221101B. P. Abbott et al,GW170104:Observation of a 50-Solar-Mass Binary Black Hole Coales- cence at Redshift 0.2 Phys. Rev. Lett.118,221101 (2017);
GW170814: A Three-Detector Observation of Gravitational Waves from a Binary Black Hole Coalescence. B P Abbott, Phys.Rev.Lett. 119141101B. P. Abbott et al,GW170814: A Three-Detector Observation of Gravitational Waves from a Binary Black Hole Coalescence Phys.Rev.Lett.119, 141101 (2017);
GW170817: Observation of Gravitational Waves from a Binary Neutron Star Inspiral. B P Abbott, Phys.Rev.Lett. 119161101B. P. Abbott et al,GW170817: Observation of Gravitational Waves from a Binary Neutron Star Inspiral Phys.Rev.Lett.119, 161101 (2017)
Kerr-Newman-de Sitter black holes with a restricted repulsive barrier of equatorial photon motion. Z Stuchlík, G Bao, E Østgaard, S Hledík, Phys. Rev. D. 5884003Z. Stuchlík, G. Bao, E. Østgaard and S. Hledík, Kerr-Newman-de Sitter black holes with a restricted repulsive barrier of equatorial photon motion, Phys. Rev. D. 58 (1998) 084003
Global structure of the Kerr family of gravitational fields. B Carter, Phys.Rev. 174B. Carter, Global structure of the Kerr family of gravitational fields Phys.Rev.174 (1968)1559-71
Equatorial photon motion in the Kerr-Newman spacetimes with a non-zero cosmological constant. Z Stuchlík, S Hledík, Class. Quantum Grav. 17Z. Stuchlík and S.Hledík, Equatorial photon motion in the Kerr-Newman spacetimes with a non-zero cosmological constant, Class. Quantum Grav. 17 (2000) 4541-4576
Exact spacetimes in Einstein's General Relativity. J B Griffiths, Jiří Podolský, Cambridge Monographs on Mathematical Physics. Cambirdge University PressJ. B. Griffiths and Jiří Podolský, Exact spacetimes in Einstein's General Relativity, Cambridge Monographs on Mathematical Physics, Cambirdge University Press (2009)
The motion of test particles in black-hole backgrounds with non-zero cosmological constant. Z Stuchlík, Bull. of the Astronomical Institute of Chechoslovakia. 34Z. Stuchlík, The motion of test particles in black-hole backgrounds with non-zero cosmological constant, Bull. of the Astronomical Institute of Che- choslovakia 34 (1983) 129-149
On Quadratic First Integrals of the Geodesic Equations for Type 22 Spacetimes. M Walker, R Penrose, Commun. math. Phys. 8M. Walker and R. Penrose, On Quadratic First Integrals of the Geodesic Equations for Type 22 Spacetimes , Commun. math. Phys. I8, 265-274 (1970
The Mathematical Theory of Black Holes Oxford Classic Texts in Physical Sciences. S Chandrasekhar, S. Chandrasekhar, The Mathematical Theory of Black Holes Oxford Classic Texts in Physical Sciences, 1992
The Klein-Gordon-Fock equation in the curved spacetime of the Kerr-Newman (anti) de Sitter black hole. G V Kraniotis, arXiv:1602.04830Class. Quantum Grav. 33225011G. V. Kraniotis, The Klein-Gordon-Fock equation in the curved spacetime of the Kerr-Newman (anti) de Sitter black hole Class. Quantum Grav. 33 (2016) 225011,[arXiv:1602.04830];
The problem of perturbative charged massive scalar field in the Kerr-Newman-(anti) de Sitter black hole background. G V Kraniotis, Insight, 21/11/2016 and references thereinG. V. Kraniotis, CQG+ Insight: The problem of perturbative charged massive scalar field in the Kerr-Newman- (anti) de Sitter black hole background, 21/11/2016 and references therein
The massive Dirac equation in the Kerr-Newman-de Sitter and Kerr-Newman black hole spacetimes. G V Kraniotis, arXiv:1801.03157J.Phys.Comm. 335026G. V. Kraniotis, The massive Dirac equation in the Kerr-Newman-de Sitter and Kerr-Newman black hole spacetimes J.Phys.Comm. 3 (2019) 035026,[arXiv:1801.03157 ]
Scalar field effects on the orbit of S2 star. 489Gravity Collaboration, Scalar field effects on the orbit of S2 star, Mon.Not.Roy.Astron.Soc. 489 (2019) no.4, 4606-4621
| []
|
[
"CONSTRAINED CONVOLUTIONAL-RECURRENT NETWORKS TO IMPROVE SPEECH QUALITY WITH LOW IMPACT ON RECOGNITION ACCURACY",
"CONSTRAINED CONVOLUTIONAL-RECURRENT NETWORKS TO IMPROVE SPEECH QUALITY WITH LOW IMPACT ON RECOGNITION ACCURACY"
]
| [
"Rasool Fakoor [email protected] \nDept. of Computer Science and Engineering Univ. of Texas at Arlington\nMicrosoft Research Redmond\n76019, 98052TX, WA\n",
"Xiaodong He \nDept. of Computer Science and Engineering Univ. of Texas at Arlington\nMicrosoft Research Redmond\n76019, 98052TX, WA\n",
"Ivan Tashev [email protected] \nDept. of Computer Science and Engineering Univ. of Texas at Arlington\nMicrosoft Research Redmond\n76019, 98052TX, WA\n",
"Shuayb Zarar [email protected] \nDept. of Computer Science and Engineering Univ. of Texas at Arlington\nMicrosoft Research Redmond\n76019, 98052TX, WA\n"
]
| [
"Dept. of Computer Science and Engineering Univ. of Texas at Arlington\nMicrosoft Research Redmond\n76019, 98052TX, WA",
"Dept. of Computer Science and Engineering Univ. of Texas at Arlington\nMicrosoft Research Redmond\n76019, 98052TX, WA",
"Dept. of Computer Science and Engineering Univ. of Texas at Arlington\nMicrosoft Research Redmond\n76019, 98052TX, WA",
"Dept. of Computer Science and Engineering Univ. of Texas at Arlington\nMicrosoft Research Redmond\n76019, 98052TX, WA"
]
| []
| For a speech-enhancement algorithm, it is highly desirable to simultaneously improve perceptual quality and recognition rate. Thanks to computational costs and model complexities, it is challenging to train a model that effectively optimizes both metrics at the same time. In this paper, we propose a method for speech enhancement that combines local and global contextual structures information through convolutional-recurrent neural networks that improves perceptual quality. At the same time, we introduce a new constraint on the objective function using a language model/decoder that limits the impact on recognition rate. Based on experiments conducted with real user data, we demonstrate that our new context-augmented machinelearning approach for speech enhancement improves PESQ and WER by an additional 24.5% and 51.3%, respectively, when compared to the best-performing methods in the literature. | 10.1109/icassp.2018.8462042 | [
"https://arxiv.org/pdf/1802.05874v1.pdf"
]
| 3,362,911 | 1802.05874 | 534a0623e7f03b29c7c8ba1bdac6793a730ea2bc |
CONSTRAINED CONVOLUTIONAL-RECURRENT NETWORKS TO IMPROVE SPEECH QUALITY WITH LOW IMPACT ON RECOGNITION ACCURACY
Rasool Fakoor [email protected]
Dept. of Computer Science and Engineering Univ. of Texas at Arlington
Microsoft Research Redmond
76019, 98052TX, WA
Xiaodong He
Dept. of Computer Science and Engineering Univ. of Texas at Arlington
Microsoft Research Redmond
76019, 98052TX, WA
Ivan Tashev [email protected]
Dept. of Computer Science and Engineering Univ. of Texas at Arlington
Microsoft Research Redmond
76019, 98052TX, WA
Shuayb Zarar [email protected]
Dept. of Computer Science and Engineering Univ. of Texas at Arlington
Microsoft Research Redmond
76019, 98052TX, WA
CONSTRAINED CONVOLUTIONAL-RECURRENT NETWORKS TO IMPROVE SPEECH QUALITY WITH LOW IMPACT ON RECOGNITION ACCURACY
Index Terms-Speech EnhancementDeep LearningMulti-task LearningCurriculum LearningLanguage Model
For a speech-enhancement algorithm, it is highly desirable to simultaneously improve perceptual quality and recognition rate. Thanks to computational costs and model complexities, it is challenging to train a model that effectively optimizes both metrics at the same time. In this paper, we propose a method for speech enhancement that combines local and global contextual structures information through convolutional-recurrent neural networks that improves perceptual quality. At the same time, we introduce a new constraint on the objective function using a language model/decoder that limits the impact on recognition rate. Based on experiments conducted with real user data, we demonstrate that our new context-augmented machinelearning approach for speech enhancement improves PESQ and WER by an additional 24.5% and 51.3%, respectively, when compared to the best-performing methods in the literature.
INTRODUCTION
Recently, deep learning architectures have led to remarkable progress in problems like speech recognition [1,2], image classification [3], machine translation [4,5], image and video caption generation [6,7], speech separation and enhancement [8,9,10,11] and many others. Speech enhancement is the process of eliminating noise from an audio signal prior to primarily two higher-level tasks, namely recognition and playback through speaker phones [12]. Because traditional analytical processing methods have a limited capacity to capture complex signal and noise statistics, data-driven approaches are becoming increasingly popular to enhance speech [13,14,15,16]. These learning-based approaches typically aim to optimize a particular criterion during training (e.g. the signal mean-squared error (MSE)), while the performance of speech enhancement is usually evaluated from different aspects by multiple metrics (e.g. WER, PESQ) during test and inference time. Therefore, there is a metric discrepancy between training and evaluation, which leads to suboptimal performance.
* Work was done as an intern at Microsoft Research Redmond. Correspondence to: Rasool Fakoor ([email protected]).
Jointly training the speech enhancement and recognition systems (i.e. ASR) to simultaneously improve MSE and WER could potentially alleviate this problem. Unfortunately, not only can optimizing such a model be extremely challenging, mainly due to the complexity of ASR models, but it can also be computationally very expensive, raising the need for careful modeling and training. Motivated by these observations, in this paper we propose a model that not only effectively combines global and local contextual knowledge to enhance speech but also learns how to regularize the speech enhancement and denoising model such that the metric discrepancy is mitigated. Specifically, to achieve good enhancement performance, our proposed model consists of convolutional layers coupled with recurrent cells. Further, we constrain this model by including a language model/decoder in the optimization objective function. Thus, our network tries to limit the impact on recognition rate, while improving speech quality. To effectively train our model, we also adapt a curriculum-learning-based [17] training paradigm.
In contrast to our approach, existing methods for speech enhancement utilize a single, unconstrained signal-quality criterion such as the MSE for optimization [13,14,15,16]. Thus, although these algorithms improve speech quality, they degrade the recognition rate (measured by the WER metric). In this paper, we aim to overcome this limitation. The following are the specific contributions that we make:
• We propose a contextually-aware neural-network architecture for speech enhancement that is constrained with a language-decoder model to limit the impact on WER.
• We demonstrate a methodology to train such a network based on curriculum learning for multi-task regression.
• Through extensive experimentation and analysis, we show simultaneous improvements of 24.5% and 51.3% in PESQ and WER over existing methods in the literature that only optimize signal quality.
PROPOSED APPROACH
One of the main challenges in speech enhancement when using deep neural networks is to effectively combine the local and global structure of input frames. For example, the model not only needs to learn how to denoise an audio frame based on the signal in that frame but also needs to take into account the temporal structure of the entire sequence of frames over a short span of time. The recurrent neural network (RNN) is a good fit for the speech enhancement problem given its capability to model the temporal structure of speech data. As we will show in section 3, although the RNN provides a useful structure for this problem, it is insufficient to achieve good performance. This is because input speech segments are very long, usually containing thousands of frames, making it difficult for the RNN to catch both local and global contextual information for speech enhancement. Moreover, state-of-the-art speech enhancement and denoising models are usually trained on a particular criterion, while the performance is evaluated from different aspects by multiple metrics. For example, most of the models for speech enhancement are formulated as a regression problem [14,15] and use MSE as the loss function during training. However, during the evaluation, PESQ, WER, or sentence error rate (SER) are used to assess the performance of the trained model. There is a significant metric discrepancy between training and testing, e.g., a model that is trained to achieve the lowest MSE during training does not necessarily give an improvement in WER or SER at test time.
To address these problems, we first propose a convolutionrecurrent neural network (CRNN) that can efficiently model local and global structure of the speech data. Moreover, we also propose a multi-task learning approach that addresses the metric discrepancy problem and leads to a more robust performance on the speech enhancement and denoising task.
Combining Local and Global Contexts
One way to capture the temporal structure of the data is to use RNNs to model this relationship. However, simple RNNs do not have adequate capacity to model both the long-term dependencies and the local contextual information among different frames [18,19]. For good performance, the model needs to capture the local context among neighboring frames as well as the global context. This is important because the denoising network not only needs to use the surrounding frames to denoise the current frame but also higher-level relationships to build a more effective model.
Motivated by these observations, we propose the CRNN, which models long-term dependencies between frames via the recurrent structure in the network and the local context by applying a convolution network over a local context window of neighboring frames. In this model, at every time step t, our model first utilizes eight neighboring frames as the input to a three-layer convolution network that models the local structure of the input frame (f_t). The output of this network will be an input to an LSTM [19] unit at time t. The recurrent unit uses the current noisy frame as well as previous hidden states to reconstruct an enhanced frame. To be specific, our network at time step t utilizes eight neighboring frames and h_{t−1} to reconstruct a single denoised frame ĝ_t. Our proposed model is shown in Figure 1.
The objective function for this model minimizes the error between the enhanced frame (or denoised frame for short) and clean frame which can formally be defined as follow:
\mathcal{L}_{re}(F;\omega) = \min_{\omega}\sum_{t}\left\lVert \hat{g}_{t} - g_{t}\right\rVert_{2}^{2} \qquad (1)
where ω is a network parameter,ĝ t is a denoised frame, and g t is the clean frame that we use to train the parameters of this model. As we show in the experimental results section, this model outperforms other networks that either only model the local structure or the global structure of the data.
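A minimal PyTorch sketch of such a convolutional-recurrent denoiser and of the loss in Eq. (1) is given below. The layer widths, kernel sizes, and number of frequency bins are illustrative assumptions, not the exact configuration used in this paper.

```python
import torch
import torch.nn as nn

class CRNNDenoiser(nn.Module):
    def __init__(self, n_bins=257, context=8, hidden=512):
        super().__init__()
        # Local context: a small conv stack over a window of neighboring frames.
        self.conv = nn.Sequential(
            nn.Conv1d(n_bins, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(256, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(256, 256, kernel_size=3), nn.ReLU(),   # shrinks the context by 2
        )
        # Global context: an LSTM over the per-frame conv features.
        self.rnn = nn.LSTM(input_size=256 * (context - 2), hidden_size=hidden,
                           batch_first=True)
        self.out = nn.Linear(hidden, n_bins)

    def forward(self, windows):
        # windows: (batch, time, n_bins, context) spectral frames with their neighbors
        b, t, f, c = windows.shape
        x = self.conv(windows.reshape(b * t, f, c))   # local structure per frame
        x = x.reshape(b, t, -1)
        h, _ = self.rnn(x)                            # temporal (global) modeling
        return self.out(h)                            # denoised frames, (batch, time, n_bins)

def reconstruction_loss(denoised, clean):             # Eq. (1)
    return ((denoised - clean) ** 2).sum(dim=-1).mean()
```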
Multi-task Learning
The metric discrepancy is also a main challenge that most speech enhancement and denoising models face [14,15,16]. For example, while most models use MSE as the training metric, performance is evaluated on other criteria such as PESQ, SER, and WER. One possible solution is to train the models to directly optimize PESQ or WER. Although this is plausible, there are a couple of problems. First, these metrics are very expensive to calculate and often it is not practical to use them directly during training. Moreover, these metrics (e.g. PESQ) are usually discrete and non-differentiable, making the optimization (Eq. 1) very difficult. The REINFORCE algorithm [20] can be used in certain situations, but gradient estimation using REINFORCE would be non-trivial in this setup, as it deals with continuous values for sampling and gradient calculation (i.e. at each time step the network outputs a denoised frame, which is a continuous vector in the speech space, making policy-gradient estimation a hard problem [21]).
Motivated by these problems, we propose a multi-task learning framework during training that uses a language model (i.e. language decoder) to regularize the training and improve the performance of the denoising network with respect to PESQ, WER, or SER. That is, we first run the model to perform denoising, and once the denoising is done, i.e., reaches the last frame of the input speech segment, our approach uses the last hidden unit representation of the CRNN as the input to a RNN based language model, in which the language model is trained to generate the text transcript of the input speech segment. This is like imposing another task of sequence-to-sequence multimodal translation, which encodes the sequence of the denoised speech signal into one vector representation and then translates it into the sequence of words in the transcription. In order to build this language model, we add the following loss function:
\mathcal{L}_{lm}(S, H_{T}; \theta) = -\sum_{j}^{T}\sum_{i}^{|V|} s_{i}^{j}\log(\hat{s}_{i}^{j}) + \lambda\left\lVert\Theta\right\rVert_{2}^{2} \qquad (2)
where H_T is the last hidden unit of the denoising model, S is the transcript for a given file, and θ denotes the parameters of this network. Eq. 2 is a cross-entropy loss function that tries to minimize the word prediction error. Combining this loss function with Eq. 1 helps constrain and regularize the denoising network such that it will have better performance in terms of WER, SER, and PESQ at test time:
L(S, F, H_T; Θ) = L_re + λ_1 L_lm + λ_2 ‖Θ‖_2^2    (3)
This architecture is shown in Figure 2, in which the language model is shown in dotted box (b) and the denoising model is shown in dotted box (a). It is worth noting that the intuition behind this model is that the original denoising model performs unconstrained optimization (in the sense that no explicit constraints, e.g., bounds on the model outputs, are imposed on Eq. 1); as a result, the denoising model only minimizes the MSE as much as it can, without considering PESQ, WER, etc. This causes the model to sometimes overfit on the MSE metric and show worse performance on other metrics. By adding the language model, however, the model is not only focused on minimizing the MSE but also tries to denoise in a way such that the denoised speech signal leads to better word predictions decoded by the language model. Therefore, adding the language modeling task effectively regularizes the training of the denoising model and leads to more robust performance, as reflected in the improvements in WER and PESQ. As the results show, this approach is very effective and outperforms other methods significantly on a range of evaluation metrics. A sketch of the combined objective is given below.
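The following sketch shows how the reconstruction loss of Eq. (1) and the transcript cross-entropy of Eq. (2) could be combined as in Eq. (3). The `decoder` interface and the weighting constants are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch of the joint objective in Eq. (3): frame reconstruction plus the
# cross-entropy of a word decoder conditioned on the denoiser's last hidden state.
import torch.nn as nn

def multitask_loss(denoised, clean, last_hidden, transcript_ids, decoder,
                   params=None, lam_lm=0.1, lam_wd=1e-5):
    # L_re (Eq. 1): mean squared error on spectral frames.
    l_re = ((denoised - clean) ** 2).sum(dim=-1).mean()
    # L_lm (Eq. 2): the decoder predicts the transcript from the last hidden state.
    logits = decoder(last_hidden, transcript_ids)            # (batch, words, vocab)
    l_lm = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), transcript_ids.reshape(-1))
    # Optional weight-decay term on the parameters, as in Eq. (3).
    l_wd = sum((p ** 2).sum() for p in params) if params is not None else 0.0
    return l_re + lam_lm * l_lm + lam_wd * l_wd
```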
Curriculum Learning
The language model and the denoising model operate very differently: while the language model captures dependencies at the word level, the denoising model works at the lower speech-frame level. If we train them together from scratch, the model has a hard time converging. Specifically, the denoising model needs hundreds of epochs to converge to a stable model, given the difficulty and complexity of this denoising problem. On the other hand, since there is one transcript per file and it is usually short (around 60 words per file), the language model needs only a few epochs to converge. To deal with this problem, we design a curriculum learning paradigm [17] to train this model: we first train the denoising model for a few hundred epochs until it stops improving, i.e., we train it with Eq. 1 only. At this stage, we introduce Eq. 3 as the new objective function. Note that our curriculum differs from the traditional one [17], which first starts with a simpler problem and then moves to a harder one; in our approach, we first start with the core task and then combine it with another task to further regularize the training. A sketch of this schedule is given below. As shown in the next section, the proposed method is very effective for the challenging denoising task.
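A compact sketch of the schedule, where `train_epoch` is a hypothetical helper that runs one epoch with the indicated loss and the epoch counts are placeholders:

```python
# Curriculum schedule: train on the reconstruction loss alone until it plateaus,
# then switch to the joint objective of Eq. (3).
def curriculum_training(model, data, train_epoch, warmup_epochs=300, joint_epochs=50):
    for _ in range(warmup_epochs):
        train_epoch(model, data, loss="reconstruction")        # Eq. (1) only
    for _ in range(joint_epochs):
        train_epoch(model, data, loss="reconstruction+lm")     # Eq. (3)
```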
EXPERIMENTS

Dataset
We evaluate the performance of our methodology with single-channel recordings based on real user queries to the Microsoft Windows Cortana Voice Assistant. We split studio-level clean recordings into training, validation and test sets comprising 7500, 1500 and 1500 queries, respectively. Further, we mixed these clean recordings with noise data (collected from 25 different real-world environments), while accounting for distortions due to room characteristics and distances from the microphone. Thus, we convolved the resulting noisy recordings with specific room-impulse responses and scaled them to achieve a wide input SNR range of 0-30 dB. Each (clean and noisy) query had more than 4500 audio frames of spectral amplitudes, each lasting 16 ms. We applied a smoothing function based on a Hann window to the frames, allowing accurate reconstruction with a 50% overlap. These audio frames in the spectral domain formed the features for our algorithm.
Since we utilized a 512-point short-time Fourier transform (STFT), each feature vector was a positive real-valued vector of dimensionality 256. A sketch of this feature pipeline is given below.
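The following SciPy sketch reproduces this feature extraction. The 16 kHz sampling rate is our assumption (it makes a 16 ms frame correspond to 256 samples); note that the magnitude spectrum of a 512-point STFT has nfft/2 + 1 bins.

```python
# Sketch of the feature extraction described above: 16 ms Hann-windowed frames,
# 50% overlap, 512-point STFT, spectral magnitudes as features.
import numpy as np
from scipy.signal import stft

def spectral_frames(waveform, fs=16000, frame_ms=16, nfft=512):
    nperseg = int(fs * frame_ms / 1000)          # samples per 16 ms frame
    _, _, Z = stft(waveform, fs=fs, window="hann",
                   nperseg=nperseg, noverlap=nperseg // 2, nfft=nfft)
    return np.abs(Z).T                           # (frames, nfft // 2 + 1)
```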
Hyperparameter Optimization
We use random search [22] to select the hyperparameters used in our best models (CRNN + LM and CRNN). Both models have 1072 hidden units, and the weight decays are 2.8951e-5 and 3.6998e-5, respectively. In addition, the LM uses 857 words in its vocabulary, and all transcripts are capped at a maximum of 60 words. To optimize the network parameters, we use Adam [23] with a learning rate of 6.4710e-5 and set β1, β2 to 0.8 and 0.999, respectively. The convolution layers in our models (yellow boxes in Fig. 1 and 2) have the following specifications: 1) Conv 1 has 16 filters with kernel size (7, 5) and stride (3, 1); 2) Conv 2 has 32 filters with kernel size (5, 3) and stride (3, 1); and 3) Conv 3 has 64 filters with kernel size (5, 1) and stride (3, 1). In addition, all convolution layers use (2, 1) dilation [24]. A direct translation of this stack is sketched below.
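The layer specification above translates into PyTorch as follows; the ReLU activations and the single input channel are our assumptions, since they are not stated explicitly.

```python
# Convolution stack with the filter counts, kernel sizes, strides, and (2, 1)
# dilation listed above.
import torch.nn as nn

conv_stack = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=(7, 5), stride=(3, 1), dilation=(2, 1)), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=(5, 3), stride=(3, 1), dilation=(2, 1)), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=(5, 1), stride=(3, 1), dilation=(2, 1)), nn.ReLU(),
)
```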
Performance Comparison
We carry out an extensive evaluation of the proposed models. We compare the proposed model with state-of-the-art baselines for the speech enhancement and denoising task and summarize the results of these experiments in Table 1. We compare our models to recent deep-neural-network-based approaches, which are strong baselines, including [15], [14], and [16]. We first build a model (NOC) that does not consider the global context of the data (i.e., no RNN) and only considers the local context. We then extend this model to our denoising model, CRNN. As the results show, our proposed model CRNN outperforms the baselines on the key metrics of PESQ, WER, SER, and others.
In addition, Table 1 shows that our proposed multi-task model (CRNN + LM) outperforms the other models and further improves PESQ, WER, and SER. We also show in Figure 3 the improvement in PESQ scores achieved by our model.
Curriculum learning
We also studied the impact of the proposed curriculum learning procedure. Given the large gap between the denoising model, which operates at the lower speech-frame level, and the language model, which operates at the higher word level, it is important to use the proposed curriculum-learning-based training method (Section 2.3) to train our model. As the results in Table 2 show, when we train the denoising network and the language model together from the beginning, the performance is quite bad compared to the performance obtained with the proposed curriculum learning.

Table 2. Results on the effects of curriculum learning (CL).

Method                        WER     SER     PESQ
Our model (CRNN)              22.73   38.93   2.70
Our model (CRNN + LM)         23.79   40.13   2.69
Our model (CRNN + LM + CL)    21.90   37.27   2.74
CONCLUSION
In this paper, we propose a model that combines both local and global contextual information for speech enhancement. We show that our approach leads to better enhancement performance compared to existing baselines. Furthermore, we propose multi-task learning with curriculum learning, which regularizes the training process of the speech-enhancement model through a language model/decoder. Thus, we limit the impact of speech enhancement on recognition accuracy.
Fig. 1. Proposed architecture to combine local and global context of input frames.

Fig. 2. Our multi-task learning architecture.

Fig. 3. The positive percentage increase in PESQ score on the test files.
[1] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury, "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups," IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 82-97, Nov. 2012.
[2] G. E. Dahl, D. Yu, L. Deng, and A. Acero, "Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition," IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 1, pp. 30-42, Jan. 2012.
[3] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in NIPS, 2012, pp. 1097-1105.
[4] D. Bahdanau, K. Cho, and Y. Bengio, "Neural machine translation by jointly learning to align and translate," in ICLR, 2015.
[5] I. Sutskever, O. Vinyals, and Q. V. Le, "Sequence to sequence learning with neural networks," in NIPS, 2014, pp. 3104-3112.
[6] R. Fakoor, A. Mohamed, M. Mitchell, S. B. Kang, and P. Kohli, "Memory-augmented attention modelling for videos," CoRR, vol. abs/1611.02261, 2016.
[7] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio, "Show, attend and tell: Neural image caption generation with visual attention," in ICML, 2015, pp. 2048-2057.
[8] S. Samui, I. Chakrabarti, and S. K. Ghosh, "Deep recurrent neural network based monaural speech separation using recurrent temporal restricted Boltzmann machines," in INTERSPEECH, 2017.
[9] Y. Wang, J. Du, L. R. Dai, and C. H. Lee, "A gender mixture detection approach to unsupervised single-channel speech separation based on deep neural networks," IEEE Transactions on Audio, Speech, and Language Processing, vol. 25, no. 7, pp. 1535-1546, July 2017.
[10] M. Kim, "Collaborative deep learning for speech enhancement: A run-time model selection method using autoencoders," in ICASSP, March 2017, pp. 76-80.
[11] R. Fakoor, X. He, I. Tashev, and S. Zarar, "Reinforcement learning to adapt speech enhancement to instantaneous input signal quality," in NIPS, Machine Learning for Audio Signal Processing Workshop, 2017.
[12] I. Tashev, A. Lovitt, and A. Acero, "Unified framework for single channel speech enhancement," in 2009 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing, Aug. 2009, pp. 883-888.
[13] Y. Xu, J. Du, L. R. Dai, and C. H. Lee, "An experimental study on speech enhancement based on deep neural networks," IEEE Signal Processing Letters, vol. 21, no. 1, pp. 65-68, Jan. 2014.
[14] S. Mirsamadi and I. Tashev, "Causal speech enhancement combining data-driven learning and suppression rule estimation," in INTERSPEECH, 2016.
[15] Y. Xu, J. Du, L.-R. Dai, and C.-H. Lee, "Dynamic noise aware training for speech enhancement based on deep neural networks," in INTERSPEECH, 2014, pp. 2670-2674.
[16] A. Maas, Q. V. Le, T. M. O'Neil, O. Vinyals, P. Nguyen, and A. Y. Ng, "Recurrent neural networks for noise reduction in robust ASR," in INTERSPEECH, 2012.
[17] Y. Bengio, J. Louradour, R. Collobert, and J. Weston, "Curriculum learning," in ICML, 2009, pp. 41-48.
[18] R. Pascanu, T. Mikolov, and Y. Bengio, "Understanding the exploding gradient problem," CoRR, vol. abs/1211.5063, 2012.
[19] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735-1780, Nov. 1997.
[20] R. J. Williams, "Simple statistical gradient-following algorithms for connectionist reinforcement learning," Machine Learning, vol. 8, no. 3-4, pp. 229-256, May 1992.
[21] D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller, "Deterministic policy gradient algorithms," in ICML, 2014, pp. 387-395.
[22] J. Bergstra and Y. Bengio, "Random search for hyper-parameter optimization," Journal of Machine Learning Research, vol. 13, pp. 281-305, Feb. 2012.
[23] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," in ICLR, 2015.
[24] F. Yu and V. Koltun, "Multi-scale context aggregation by dilated convolutions," in ICLR, 2016.
| []
|
[
"BSDEs generated by fractional space-time noise and related SPDEs",
"BSDEs generated by fractional space-time noise and related SPDEs"
]
| [
"Yaozhong Hu s:[email protected] \nSchool of Mathematics and Statistics\nShandong University\n264209WeihaiWeihaiP. R. China\n",
"Juan Li [email protected] ",
"Chao Mi [email protected]. ",
"\nDepartment of Math and Stat Sciences\nUniversity of Alberta\nT6G 2G1EdmontonABCanada\n"
]
| [
"School of Mathematics and Statistics\nShandong University\n264209WeihaiWeihaiP. R. China",
"Department of Math and Stat Sciences\nUniversity of Alberta\nT6G 2G1EdmontonABCanada"
]
| []
| This paper is concerned with the backward stochastic differential equations whose generator is a weighted fractional Brownian field:where W is a (d + 1)-parameter weighted fractional Brownian field of Hurst parameter H = (H 0 , H 1 , · · · , H d ), which provide probabilistic interpretations (Feynman-Kac formulas) for certain linear stochastic partial differential equations with colored space-time noise. Conditions on the Hurst parameter H and on the decay rate of the weight are given to ensure the existence and uniqueness of the solution pair. Moreover, the explicit expression for both components Y and Z of the solution pair are given. | 10.1016/j.amc.2023.127979 | [
"https://export.arxiv.org/pdf/2208.00289v1.pdf"
]
| 251,223,413 | 2208.00289 | d26ef1b4d2a54a8d57ea87abd07667b76883f15f |
BSDEs generated by fractional space-time noise and related SPDEs
August 2, 2022
Yaozhong Hu s:[email protected]
School of Mathematics and Statistics
Shandong University
264209WeihaiWeihaiP. R. China
Juan Li [email protected]
Chao Mi [email protected].
Department of Math and Stat Sciences
University of Alberta
T6G 2G1EdmontonABCanada
BSDEs generated by fractional space-time noise and related SPDEs
August 2, 2022. arXiv:2208.00289v1 [math.PR] 30 Jul 2022. Keywords: backward stochastic differential equations, stochastic partial differential equations, Feynman-Kac formulas, fractional space-time noise, explicit solution, Malliavin calculus.
This paper is concerned with the backward stochastic differential equations whose generator is a weighted fractional Brownian field:where W is a (d + 1)-parameter weighted fractional Brownian field of Hurst parameter H = (H 0 , H 1 , · · · , H d ), which provide probabilistic interpretations (Feynman-Kac formulas) for certain linear stochastic partial differential equations with colored space-time noise. Conditions on the Hurst parameter H and on the decay rate of the weight are given to ensure the existence and uniqueness of the solution pair. Moreover, the explicit expression for both components Y and Z of the solution pair are given.
Introduction and main result
Let R d be the d-dimensional Euclidean space. Let W = (W (t, x) , t ≥ 0 , x ∈ R d ) be a weighted fractional Brownian field. Namely, W is a mean-zero Gaussian random field with the following covariance structure:
E[W(t, x) W(s, y)] = R_{H_0}(s, t) ρ(x) ρ(y) \prod_{i=1}^{d} R_{H_i}(x_i, y_i),    (1.1)

where, and throughout the paper, we assume H_i ∈ (1/2, 1) for all i = 0, 1, · · · , d, R_H(ξ, η) = ( |ξ|^{2H} + |η|^{2H} − |ξ − η|^{2H} )/2 for all ξ, η ∈ R, and ρ(x) is a continuous function from R^d to R satisfying some properties which will be specified later. We consider the following (one-dimensional) linear backward stochastic differential equation (BSDE for short) with fractional noise generator:
Y_t = ξ + \int_t^T Y_s W(ds, B_s) − \int_t^T Z_s dB_s,   t ∈ [0, T],    (1.2)
where B is a d-dimensional standard Brownian motion. Our interest in this equation is motivated by the following three aspects.

(a) The first aspect is the nonlinear Feynman-Kac formula (in our special case), which relates the following two stochastic differential equations. The first one is the backward doubly stochastic differential equation (BDSDE for short)

Y^{t,x}_s = φ(X^{t,x}_T) + \int_s^T f(r, X^{t,x}_r, Y^{t,x}_r, Z^{t,x}_r) dr + \int_s^T g(r, X^{t,x}_r, Y^{t,x}_r, Z^{t,x}_r) W(dr, X^{t,x}_r) − \int_s^T Z^{t,x}_r dB_r,
where X^{t,x}_s is the solution to the following stochastic differential equation:

dX^{t,x}_s = b(X^{t,x}_s) ds + σ(X^{t,x}_s) dB_s,   s ∈ [t, T],   X^{t,x}_t = x ∈ R^d.
The second one is the stochastic partial differential equation (SPDE for short)
−du(t, x) = [ L u(t, x) + f(t, x, u(t, x), ∇u(t, x)σ(x)) ] dt + g(t, x, u(t, x), ∇u(t, x)σ(x)) W(dt, x),   (t, x) ∈ [0, T] × R^d,
u(T, x) = φ(x),    (1.3)
where L is the generator associated with the Markov process X^{t,x}_s. There have been many articles in this direction since the work of [10]. Most of them study BDSDEs under various conditions, whose solutions serve as nonlinear Feynman-Kac formulas representing the solutions of the corresponding semi-linear SPDEs driven by noise that is white in time. We refer to [7, Theorem 5.1] and the references therein for the exact relation between the solutions of these two equations. It is worth noting that BDSDEs and the probabilistic interpretation (nonlinear Feynman-Kac formula) of SPDEs driven only by temporal white noise have been studied extensively in several directions, see e.g. [4], [5], [6] and [8]. Although Feynman-Kac formulas for (linear or non-linear) SPDEs with spatial-temporal noise are obtained, for example, in [1], [12] and [16], there are few works characterizing the solution of SPDEs through the solution of BSDEs. To the best of our knowledge, only [7] and [9] dealt with such problems. This equation has enjoyed great attention in the recent decade (when the terminal condition is replaced by an initial condition and the noise W(dt, x) is replaced by the more singular ∂^d/(∂x_1 · · · ∂x_d) W(dt, x)), often under the name of the parabolic Anderson model. We refer to the survey [11] and references therein for further study. Let us only point out that many works do not require the noise to be white in time. For the SPDE in the above case (b), the associated BDSDE becomes
Y^{t,x}_s = φ(B^{t,x}_T) + \int_s^T Y^{t,x}_r W(dr, B^{t,x}_{r−t}) − \int_s^T Z^{t,x}_r dB_r,    (1.5)
where B t,x s = x + (B s − B t ) is a d-dimensional Brownian motion starting at time t from the point x. This equation is of the form (1.2). Its probabilistic interpretation, the explicit form and some sharp properties of solution will be the main focus of this paper.
To illustrate our main results of finding the explicit representation of the solution pair using partial Malliavin derivatives, we shall follow the idea of [1]. Define (we shall justify this in the next section)

α^t_s = \exp( \int_s^t W(dr, B_r) )    (1.6)

and denote by F_t = σ( B_s, 0 ≤ s ≤ t; W(t, x), t ≥ 0, x ∈ R^d ) the σ-algebra generated by the Brownian motion up to time t ≥ 0 and by W(t, x) for all t and x ∈ R^d. Then we formally have the following candidate for the solution pair:

Y_t = (α^t_0)^{−1} E[ ξ α^T_0 | F_t ] = E[ ξ \exp( \int_t^T W(dr, B_r) ) | F_t ]   by [13, Equation (2.11)],
Z_t = D^B_t Y_t = D^B_t E[ ξ \exp( \int_t^T W(dr, B_r) ) | F_t ]   by [13, Equation (2.23)],    (1.7)
where D B t is the Malliavin gradient with respect to the Brownian motion B (see next section for the definition and properties), and E B is the expectation with respect to B (explained in detail in the proof of Proposition 2.1). Here is the main result of this paper.
Theorem 1.1. Suppose \sum_{i=1}^{d} (2H_i − β_i) < 2 and ξ ∈ D^{1,q}_B is measurable w.r.t. the σ-field F^B_T for some q > 2/(2H − 1), where H = min{H_0, . . . , H_d}. Then we have the following results:

(1) The processes {(Y_t, Z_t), 0 ≤ t ≤ T} formally defined by (1.7) are well-defined and square integrable, and they are the solution pair to the BSDE (1.2). Moreover, Z has the following alternative expression:

Z_t = E[ e^{\int_t^T W(dτ, B_τ)} D^B_t ξ + \int_t^T e^{\int_t^s W(dτ, B_τ)} Y_s (∇_x W)(ds, B_s) | F_t ].    (1.8)

(2) If for all q > 2, E|D^B_t ξ − D^B_s ξ|^q ≤ C|t − s|^{κq/2} for some κ ∈ (0, 2), then for any a > 1 and any ε > 0 we have the following Hölder continuity for Y and Z:

E|Y_t − Y_s|^a ≤ C_a |t − s|^{a/2},   E|Z_t − Z_s|^2 ≤ C_ε |t − s|^{(2H_0 + H − 1 − ε) ∧ κ},   ∀ s, t ∈ [0, T].    (1.9)

(3) If a pair (Y, Z) satisfies the conditions in (2) for some a, κ > 0, then (Y, Z) is represented by (1.7), and hence the BSDE (1.2) has a unique solution.

(4) If (Y, Z) ∈ S^2_F(0, T; R) × M^2_F(0, T; R^d) is the solution pair of the BSDE (1.2) such that Y and D^B Y belong to D^{1,2}, then the solution also has the explicit expression (1.7), and hence the BSDE (1.2) has a unique solution.
Remark 1.2. Since we assume H_0 > 1/2, we have 2H_0 + H − 1 > 0. We can only obtain the Hölder continuity of Z in the mean-square sense, as we encounter difficulties in dealing with higher moments of Z. Let us now point out the novelty compared with two relevant works. In [9], the generator W is a fractional Brownian motion (the generator W does not depend on x). In [7], W may depend on the space variable x, but it is assumed to be a backward martingale with respect to the time variable t, so that backward martingale technology can be used. In the above theorem, neither the assumption that W is independent of x nor the assumption that W is a backward martingale is made. In particular, we obtain the explicit solution (for the linear equation) and use this expression to derive a rather sharp Hölder continuity for the solution pair, which, to the best of our knowledge, is new.
Here is the organization of this work. In the next section, we show that the quantity \int_t^T W(dr, B_r) in (1.7) is well-defined and exponentially integrable, so that Y_t is well-defined. In Section 3, we obtain some properties of the process Y_t, show that it is Malliavin differentiable and that Z_t is well-defined, and show that the pair (Y_t, Z_t) solves the linear BSDE (1.2). A major difficulty is to show that the process Y is in S^p_F(0, T; R) and Z is in M^p_F(0, T; R), due to the singularity of the noise W in the generator. We overcome this difficulty by Talagrand's theorem (Theorem 3.2), Borell's theorem (Lemma 3.3), and a new Lemma 3.7. In Section 4, we use the explicit expression to obtain the Hölder continuity of the solution pair. The Hölder continuity of the process Z_t is always a difficult problem (see e.g. [13,18,19]) but plays a critical role in numerical methods. In Section 6 we discuss the relation between the linear BSDE (1.5) and the stochastic PDE (1.4).
Exponential integrability of \int_t^T W(ds, B_s)
Let T > 0 be a fixed time horizon and let (Ω, F, P ) be a complete probability space, on which the expectation is denoted by E.
Let {B t , 0 ≤ t ≤ T } be a d-dimensional standard Brownian motion defined on (Ω, F, P ). Suppose W = {W (t, x), t ≥ 0, x ∈ R d }
is a weighted fractional Brownian space-time field whose covariance is given by (1.1). The stochastic integral with respect to W is well-defined in many references, and we refer to [11] and references therein for more details. We shall use this concept freely. For example, we denote W (φ) = R + ×R d φ(t, x)W (dt, x)dx for any φ ∈ D = D(R + × R d , R), where D is the set of all smooth functions with compact support from R + × R d to R. We denote the spatial covariance as
q(x, y) = ρ(x) ρ(y) \prod_{i=1}^{d} R_{H_i}(x_i, y_i),   ∀ x = (x_1, · · · , x_d)^T, y = (y_1, · · · , y_d)^T ∈ R^d,    (2.1)

where ρ : R^d → R is a continuous function of power decay; the conditions that ρ satisfies will be specified later. It is known that

E[W(h) W(g)] = \int_{R_+^2 × R^{2d}} h(t, x) g(s, y) |s − t|^{2H_0 − 2} ρ(x) ρ(y) \prod_{i=1}^{d} R_{H_i}(x_i, y_i) \, ds \, dt \, dx \, dy =: ⟨h, g⟩_H    (2.2)

for all h, g ∈ D. It is clear that ⟨h, g⟩_H is a scalar product on D. We denote by H the Hilbert space obtained by completing D with respect to this scalar product. Let F be a cylindrical random variable of the form
F = f( W(φ_1), . . . , W(φ_n) ),

where φ_i ∈ D, i = 1, · · · , n, and f ∈ C^∞_p(R^n), i.e., f and all its partial derivatives have polynomial growth. The set of all such cylindrical random variables is denoted by P. If F ∈ P has the above form, then D^W F is the H-valued random variable defined by

D^W F = \sum_{j=1}^{n} \frac{∂f}{∂x_j}( W(φ_1), . . . , W(φ_n) ) φ_j.

The operator D^W is closable from L^2(Ω) into L^2(Ω, H); namely, D^W is the Malliavin derivative operator with respect to the fractional Brownian field W. We define the Sobolev space D^{1,p}_W as the closure of P under the norm

‖F‖_{1,p} = ( E|F|^p + E‖D^W F‖_H^p )^{1/p}.

Let us denote by δ the adjoint of the derivative operator, given by the duality formula

E( δ(u) F ) = E⟨ D^W F, u ⟩_H   for any F ∈ D^{1,2}_W,

where δ(u) is also called the Skorohod integral of u. We refer to [3] and [2] for a detailed account of Malliavin calculus. For any random variable F ∈ D^{1,2}_W and φ ∈ H, we will often use the following formula:

F W(φ) = δ(F φ) + ⟨ D^W F, φ ⟩_H.
Accordingly, we can define D^B, the Malliavin derivative operator with respect to the standard Brownian motion B, and the Sobolev space D^{1,p}_B in the same way. We say a random variable F ∈ D^{1,p} if F is an element of both D^{1,p}_B and D^{1,p}_W. The stochastic integral studied earlier is useful in this paper but is not sufficient for our purpose. We also need to introduce a new kind of nonlinear stochastic integral similar to that of Kunita ([20]). To this end, we introduce the following approximation of Ẇ:
Ẇ_{ε,η}(s, B_s) = \int_0^s \int_{R^d} ϕ_η(s − r) p_ε(B_s − y) W(dr, y) dy,    (2.3)

where ϕ_η and p_ε are approximations of the Dirac delta function:

ϕ_η(t) = (1/η) 1_{[0,η]}(t),   p_ε(x) = (2πε)^{−d/2} e^{−|x|^2/(2ε)},   for all η, ε > 0.
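A quick numerical check (in dimension d = 1) that both mollifiers integrate to approximately one; this is purely illustrative and the grid sizes are arbitrary.

```python
# Both approximations of the Dirac delta integrate to (approximately) one.
import numpy as np

eta, eps = 0.1, 0.05
t = np.linspace(0.0, 1.0, 100001)
x = np.linspace(-5.0, 5.0, 100001)
phi = (1.0 / eta) * ((t >= 0) & (t <= eta))
p = (2 * np.pi * eps) ** (-0.5) * np.exp(-x ** 2 / (2 * eps))
print(phi.sum() * (t[1] - t[0]), p.sum() * (x[1] - x[0]))   # both close to 1
```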
Proposition 2.1. Let ρ : R^d → R be a continuous function of power decay, i.e., ρ satisfies

0 ≤ ρ(x) ≤ C \prod_{i=1}^{d} (1 + |x_i|)^{−β_i},   where β_i ∈ (0, 2) and 2H_i > β_i for all i = 1, 2, . . . , d, and suppose   α := \sum_{i=1}^{d} (2H_i − β_i) < 2.    (2.4)

Then the stochastic integral V^{ε,η}_t := \int_t^T Ẇ_{ε,η}(s, B_s) ds converges in L^2(Ω) to a limit denoted by

V_t = \int_t^T W(ds, B_s).    (2.5)
Moreover, conditionally on F^B, V_t is a mean-zero Gaussian random variable with variance

Var_W(V_t) = α_{H_0} \int_t^T \int_t^T |s − s'|^{2H_0 − 2} ρ(B_s) ρ(B_{s'}) \prod_{i=1}^{d} R_{H_i}(B^i_s, B^i_{s'}) ds ds'.    (2.6)
Proof. Suppose ε, ε ′ , η, η ′ ∈ (0, 1).
E W T tẆ ε,η (s, B s )ds T tẆ ε ′ ,η ′ (s, B s )ds = α H 0 T t T t s t s ′ t R 2d ϕ η (s − r)ϕ η ′ (s ′ − r ′ )p ε (B s − y)p ε ′ (B s ′ − y ′ ) |r − r ′ | 2H 0 −2 ρ(y)ρ(y ′ ) d i=1 R H i (y i , y ′ i )dydy ′ drdr ′ dsds ′ = α H 0 T t T t s t s ′ t ϕ η (s − r)ϕ η ′ (s ′ − r ′ )|r − r ′ | 2H 0 −2 (2.7) E X,X ′ ρ( √ εX + B s )ρ( √ ε ′ X ′ + B s ′ ) d i=1 R H i ( √ εX i + B i s , √ εX ′ i + B i s ′ ) drdr ′ dsds ′ =:I(ε, ε ′ , η, η ′ ) ,
where X = (X_1, · · · , X_d) and X' = (X'_1, · · · , X'_d) are independent standard Gaussian random vectors, which are also independent of F^B.
To study the limit of the above I(ε, ε', η, η') as ε, ε', η, η' → 0, we first observe that [1, Lemma A.3] directly yields

\int_t^s \int_t^{s'} ϕ_η(s − r) ϕ_{η'}(s' − r') |r − r'|^{2H_0 − 2} dr dr' ≤ |s − s'|^{2H_0 − 2}.    (2.8)
Moreover,

q(y, y') = (1/2) ρ(y) ρ(y') \prod_{i=1}^{d} ( |y_i|^{2H_i} + |y'_i|^{2H_i} − |y_i − y'_i|^{2H_i} )
  ≤ C ρ(y) ρ(y') \prod_{i=1}^{d} ( |y_i|^{2H_i} + |y'_i|^{2H_i} )
  ≤ C \prod_{i=1}^{d} (1 + |y_i|^{2H_i}) (1 + |y'_i|^{2H_i}) (1 + |y_i|)^{−β_i} (1 + |y'_i|)^{−β_i}    (2.9)
  ≤ C \prod_{i=1}^{d} (1 + |y_i|)^{2H_i − β_i} (1 + |y'_i|)^{2H_i − β_i},
where and throughout this paper C is a generic constant depending only on H i , i = 1, . . . , d. This can be used to show that
I 1 (ε, ε ′ , s, s ′ ) := E X,X ′ ρ( √ εX + B s )ρ( √ ε ′ X ′ + B s ′ ) d i=1 R H i ( √ εX i + B i s , √ εX ′ i + B i s ′ ) (2.10)
is a pathwise bounded continuous function of ε, ε ′ , s, s ′ in the concerned domain (almost surely with respect to B). Thus, we have
E[I(ε, ε ′ , η, η ′ )] = α H 0 E T t T t s t s ′ t ϕ η (s − r)ϕ η ′ (s ′ − r ′ )|r − r ′ | 2H 0 −2 I 1 (ε, ε ′ , s, s ′ )drdr ′ dsds ′ ≤ α H 0 T t T t |s − s ′ | 2H 0 −2 d i=1 E (1 + |B i s |) 2H i −β i (1 + |B i s ′ |) 2H i −β i dsds ′ ≤ C|T − t| 2H 0 < ∞ . (2.11)
Moreover, for s = s ′ , as ε, ε ′ , η, η ′ tend to zero we have lim ε,ε ′ ,η,η ′ →0
I(ε, ε ′ , η, η ′ ) = α H 0 lim ε,ε ′ ,η,η ′ →0 T t T t s t s ′ t ϕ η (s − r)ϕ η ′ (s ′ − r ′ )|r − r ′ | 2H 0 −2 I 1 (ε, ε ′ , s, s ′ )drdr ′ dsds ′ = α H 0 T t T t |s − s ′ | 2H 0 −2 ρ(B s )ρ(B s ′ ) d i=1 R H i (B i s , B i s ′ )dsds ′ .
Therefore, if we put ε = ε ′ , η = η ′ and use the estimates (2.9) and (2.10), and with the help of Lebesgue's convergence theorem we have
E V ε,η t − V ε ′ ,η ′ t 2 = E (V ε,η t ) 2 − 2E V ε,η t V ε ′ ,η ′ t + E V ε ′ ,η ′ t 2 → 0, as ε, ε ′ , η, η ′ → 0.
As a consequence, V^{ε_n,η_n}_t is a Cauchy sequence in L^2(Ω). It then has a limit, denoted by V_t, proving the proposition.

Proposition 2.2. Suppose the assumptions of Proposition 2.1 hold. Then for any λ ∈ R and any t ∈ [0, T], E[ \exp( λ \int_t^T W(dr, B_r) ) ] < ∞.

Proof. From (2.6) and the first inequality in (2.9), it follows that
I : = E E W exp λ T t W (dr, B r ) = E exp (α H 0 λ 2 )/2 T t T t s − r 2H 0 −2 ρ(B s )ρ(B r ) d i=1 R H i (B i s , B i r )dsdr (2.12) ≤ E exp C(α H 0 λ 2 )/2 T t T t s − r 2H 0 −2 ρ(B s )ρ(B r ) d i=1 B i s 2H i + B i r 2H i dsdr .
Note that
ρ(B s )ρ(B r ) d i=1 B i s 2H i + B i r 2H i ≤ 2 d ρ(B s ) d i=1 sup s∈[t,T ] B i s 2H i ≤ 2 d d i=1 1 + sup s∈[t,T ] |B i s | 2H i −β i ≤ 2 d 1 + sup s∈[t,T ] d i=1 |B i s | d i=1 2H i −β i (2.13) =: C d 1 + sup s∈[t,T ] d i=1 |B i s | α .
We have
I ≤ E exp C T t T t s − r 2H 0 −2 dsdr · 1 + sup s∈[t,T ] d i=1 |B i s | α ,
which is finite thanks to Fernique's theorem (e.g. [3, Theorem 4.14]) since α < 2, completing the proof of the proposition.
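The proposition can also be illustrated numerically. The sketch below (for d = 1) simulates Brownian paths, discretizes the double integral (2.6) for the conditional variance, and averages exp(λ²Var/2) over the paths. The concrete weight ρ(x) = (1 + |x|)^{-β} and the normalization α_{H_0} = H_0(2H_0 − 1) are assumptions made only for the sake of the example.

```python
# Monte Carlo sketch of Proposition 2.2 for d = 1: conditionally on B, V_0 is
# centered Gaussian with variance (2.6), so E exp(lam*V_0) = E_B exp(lam^2/2 * Var).
import numpy as np

H0, H1, beta, lam, T, n = 0.75, 0.75, 0.5, 0.5, 1.0, 40   # note 2*H1 - beta = 1.0 < 2
alpha_H0 = H0 * (2 * H0 - 1)
times = np.linspace(0.0, T, n)
dt = T / n

def r_h(x, y, H):
    return 0.5 * (abs(x) ** (2 * H) + abs(y) ** (2 * H) - abs(x - y) ** (2 * H))

def cond_var(path):
    # Riemann sum for the double integral in (2.6); the integrable diagonal is skipped.
    v = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:
                rho = (1 + abs(path[i])) ** (-beta) * (1 + abs(path[j])) ** (-beta)
                v += alpha_H0 * abs(times[i] - times[j]) ** (2 * H0 - 2) \
                     * rho * r_h(path[i], path[j], H1) * dt * dt
    return v

rng = np.random.default_rng(0)
samples = []
for _ in range(200):
    B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n - 1))])
    samples.append(np.exp(0.5 * lam ** 2 * cond_var(B)))
print("E exp(lam * V_0) ~", np.mean(samples))   # finite, as the proposition asserts
```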
Linear backward stochastic differential equation
Now we consider the backward stochastic differential equation (1.2). In order to study the regularity of (Y, Z), we approximate it by (2.3) and obtain the following approximation of (1.2):
Y^{ε,η}_t = ξ + \int_t^T Y^{ε,η}_s Ẇ_{ε,η}(s, B_s) ds − \int_t^T Z^{ε,η}_s dB_s,   t ∈ [0, T].    (3.1)
Due to the regularity of the approximated noise Ẇ_{ε,η} and Proposition 2.2, we can explicitly express its solution as follows (see e.g. [1] and references therein):

Y^{ε,η}_t = E[ ξ \exp( \int_t^T Ẇ_{ε,η}(r, B_r) dr ) | F_t ]   by [13, Equation (2.11)],
Z^{ε,η}_t = D^B_t E[ ξ \exp( \int_t^T Ẇ_{ε,η}(r, B_r) dr ) | F_t ]   by [13, Equation (2.23)],    (3.2)

where D^B = (D^{B^1}, · · · , D^{B^d})^T is the Malliavin gradient operator with respect to the Brownian motion B, so that Z^{ε,η}_t is a d-dimensional vector. We have proved in Proposition 2.2 that \int_t^T W(ds, B_s) is exponentially integrable. Then we can define
Y_t := E[ ξ \exp( \int_t^T W(dr, B_r) ) | F_t ].    (3.3)

Lemma 3.1. Assume ξ ∈ L^q(Ω) for some q > 2. Then for any t ∈ [0, T], Y^{ε,η}_t converges to Y_t in L^p(Ω) for all p ∈ [1, q).

Proof. Denote V^{ε,η}_t = \int_t^T Ẇ_{ε,η}(s, B_s) ds. Let q'p = q and 1/p' + 1/q' = 1. From (3.2), (3.3), Jensen's inequality and Hölder's inequality it follows that

E|Y^{ε,η}_t − Y_t|^p = E| E[ ξ ( \exp V^{ε,η}_t − \exp V_t ) | F_t ] |^p ≤ E[ |ξ|^p | \exp V^{ε,η}_t − \exp V_t |^p ]    (3.4)
  ≤ ( E|ξ|^q )^{1/q'} ( E| \exp V^{ε,η}_t − \exp V_t |^{pp'} )^{1/p'}.

Similarly to Proposition 2.2, we can prove

sup_{ε,η∈(0,1]} E \exp( λ V^{ε,η}_t ) < ∞,   ∀ λ ∈ R.
Proposition 2.1 implies that V ε,η t → V t in probability. Thus, we prove the proposition by Lebesgue's convergence theorem.
Let us denote
S^p_F(0, T; R) := { ψ = (ψ_s)_{s∈[0,T]} : ψ is a real-valued F-adapted continuous process with E sup_{0≤s≤T} |ψ_s|^p < ∞ }.

To prove that Y = {Y_t, t ∈ [0, T]} ∈ S^p_F(0, T; R) for all p ∈ [1, q)
, we shall first recall Talagrand's majorizing measure theorem.
Theorem 3.2 (Talagrand's majorizing measure theorem). Let {X_t, t ∈ T} be a centered Gaussian process indexed by a set T, and let d(t, s) := ( E|X_t − X_s|^2 )^{1/2} be the associated natural metric on T. Then

E sup_{t∈T} X_t ≍ γ_2(T, d) := inf_A sup_{t∈T} \sum_{n≥0} 2^{n/2} diam( A_n(t) ),
where "≍" indicates the asymptotic notation. Note that the infinimum is taken over all increasing sequence A := {A n , n = 1, 2, · · · } of partitions of T such that #A n ≤ 2 2 n (#A denotes the number of elements in the set A ), A n (t) denotes the unique element of A n that contains t, and diam (A n (t)) is the diameter (with respect to the natural distance d(·, ·) ) of A n (t).
We shall apply the above majorizing measure theorem to V t = T t W (ds, B s ) as a random variable of W (which is Gaussian under the conditional law knowing B). The associated natural metric (which is a random variable of B) is (assuming t > s)
d(t, s) := (E W |V t − V s | 2 ) = (E W | t s W (dr, B r )| 2 ) = t s t s α H 0 |u − v| 2H 0 −2 ρ(B u )ρ(B r ) d i=1 R H i (B i u , B i r )dudv ≤ C H ν(B)|t − s| H 0 , (3.5)
where C H is a constant depending only H i , i = 1, . . . , d and
ν(B) := C d (1+ sup u∈[0,T ] d i=1 |B i u |) α . (3.6)
Next, we choose the admissible sequences (A n ) as uniform partition of [0, T ] such that #(A n ) ≤ 2 2 n :
[0, T ] = 2 2 n−1 −1 j=0 j · 2 −2 n−1 T, (j + 1) · 2 −2 n−1 T .
Thus, we can deduce that, by Lemma 3.2,
E W sup t∈[0,T ] V t ≤ C sup t∈[0,T ] n≥0 2 n/2 diam (A n (t)) (3.7)
where A n (t) is the element of uniform partition A n that contains (t), i.e.,
A n (t) = j · 2 −2 n−1 T, (j + 1) · 2 −2 n−1 T such that j · 2 −2 n−1 T ≤ t < (j + 1) · 2 −2 n−1 T . Since (A n )
is a uniform partition, and by using the bound (3.5) we see the diameter of A n (t) with respect to d(t, s) can be estimated by
diam A n (t) ≤ C H ν(B)2 −H 0 2 n−1 T H 0 .
Inserting this result into (3.7), we have
E W sup t∈[0,T ] V t ≤ C sup t∈[0,T ] n≥0 2 n/2 diam (A n (t)) ≤ C H ν(B)T H 0 n≥0 2 n/2 2 −H 0 2 n−1 ≤ C H ν(B)T H 0 . (3.8)
We also need the following two results to show Y ∈ S^p_F(0, T; R).

Lemma 3.3 (Borell's inequality). Let {X_t, t ∈ T} be a centered Gaussian process such that sup_{t∈T} X_t < ∞ almost surely. Then E( sup_{t∈T} X_t ) < ∞, and for all λ > 0,

P_W( sup_{t∈T} X_t − E sup_{t∈T} X_t > λ ) ≤ 2 \exp( − λ^2 / (2σ^2_T) ),   where σ^2_T := sup_{t∈T} E X_t^2.

Lemma 3.4. If the process {X_t, t ∈ T} is symmetric, then we have

E sup_{t∈T} |X_t| ≤ 2 E sup_{t∈T} X_t + inf_{t_0∈T} E[ |X_{t_0}| ].    (3.9)
Now we can state and prove one of the main results of this work.
Theorem 3.5. Suppose ξ ∈ L q (Ω) for some q > 2 and suppose that (2.4) holds. Then we have Y ε,η converges to Y = {Y t , t ∈ [0, T ]} ∈ S p F (0, T ; R) for all p ∈ [1, q).
Proof. We just need to verify Y t ∈ S p F (0, T ; R). Let q ′ p = q and 1/p ′ + 1/q ′ = 1. By (3.2) and Jessen's inequality and Doob's martingale inequality we see
E sup t∈[0,T ] Y t p = E sup t∈[0,T ] E B ξ exp T t W (ds, B s ) F B t p ≤ E sup t∈[0,T ] E B |ξ| p exp{p sup t∈[0,T ] |V t |} F B t ≤ p p − 1 p ξ 1/q ′ q E exp pp ′ sup t∈[0,T ] |V t | 1/p ′ .
Denote by V T := sup t∈[0,T ] V t . From Lemma 3.3 and Lemma 3.4 it follows for all λ > 0,
P W V T − E W V T ≥ λ ≤ 2 exp − λ 2 2σ 2 T , (3.10)
The above term σ 2 T is defined and bounded by
σ 2 T = sup t∈[0,T ] E W [|V t | 2 ] ≤ C T,H 0 sup u∈[0,T ] ρ(B u ) d i=1 |B i u | 2H i ≤ C T,H 0 ,d 1 + sup s∈[t,T ] d i=1 |B i s | α ,(3.E W exp m V T = E W exp m( V T − E W [ V T ]) · exp mE W [ V T ] ≤ m exp mE W [ V T ] ∞ 0 e mλ P V T − E W [ V T ] ≥ λ dλ ≤ 2m exp mE W [ V T ] ∞ 0 e mλ · e − λ 2 2σ 2 T dλ (3.12) ≤ 2 √ 2πm · σ T exp mC H T H 0 ν(B) + m 2 2 σ 2 T .
Since for all x > 0, we have x · e x 2 2 ≤ 2e x 2 . Therefore, taking account (3.6) and (3.11) it yields that, there is a constant C T,H,m,d which only depends on T, m, d, H i , i = 0, 1, . . . , d such that:
E W exp m V T ≤ 4 √ 2π exp C T,H,m,d (1 + sup u∈[0,T ] d i=1 |B i u |) α .
By the Fernique's theorem we obtain
E E W exp m V T < ∞, which implies E sup t∈[0,T ] Y t p < ∞. That is to say Y = {Y t , t ∈ [0, T ]} ∈ S p F (0, T ; R) for all p ∈ [1, q). The convergence of Y ε,η to Y = {Y t , t ∈ [0, T ]} ∈ S p F (0, T ; R) for all p ∈ [1, q)
is routine and a little bit more complicated. But the essential estimates are the same as above.
Now we want to study the second component of the solution pair of (3.1), i.e.
Z^{ε,η} = {Z^{ε,η}_s, s ∈ [0, T]} defined by (3.2). Introduce the space

M^2_F(0, T; R^d) := { φ = (φ_s)_{s∈[0,T]} : φ is R^d-valued, F-progressively measurable, and E \int_0^T |φ_s|^2 ds < ∞ }.

Theorem 3.6. Denote \bar{H} = max{H_0, H_1, · · · , H_d} and H = min{H_0, H_1, · · · , H_d}. Suppose \sum_{i=1}^{d} (2H_i − β_i) < 2 and the terminal condition ξ ∈ D^{1,q}_B is measurable w.r.t. the σ-field F^B_T for some q > 2/(2H − 1). Then Z^{ε,η} ∈ M^2_F(0, T; R^d), and Z^{ε,η} has a limit Z = {Z_s, s ∈ [0, T]} in M^2_F(0, T; R^d). This limit can be written as

Z_t = D^B_t Y_t    (3.13)
    = E[ e^{\int_t^T W(dτ, B_τ)} D^B_t ξ + \int_t^T e^{\int_t^s W(dτ, B_τ)} Y_s (∇_x W)(ds, B_s) | F_t ].    (3.14)
Proof. Presumably we may apply D B r to Y ε,η t given by (3.2). But it is inconvenient to deal with the Malliavin derivative of the conditional expectation. We find that it is more convenient to find D B r Y ε,η t by working on (3.1) directly. In fact applying D B r to (3.1) yields
D B r Y ε,η t = D B r ξ + T tẆ ε,η (s, B s )D B r Y ε,η s ds + T t Y ε,η s ∇ xẆε,η (s, B s )I [0,s] (r)ds − T t D B r Z ε,η s dB s . DenoteỸ t = D B r Y ε,η t ,Z t = D B r Z ε,η t
(we fix r) and we can rewrite the above equation as
dỸ t = −Ẇ ε,η (t, B t )Ỹ t dt − Y ε,η t ∇ xẆε,η (t, B t )I [0,t] (r)dt +Z t dB t , r ≤ t ≤ T Y T = D B r ξ .
This is another linear backward stochastic differential equation, whose solution has the following explicit form.
D B r Y ε,η t = E e T tẆ ε,η(τ,Bτ )dτ D B r ξ + T r e s tẆ ε,η (τ,Bτ )dτ Y ε,η s ∇ xẆε,η (s, B s )ds F t , t ≥ r.Z ε,η t = D B t Y ε,η t = E e T tẆ ε,η(τ,Bτ )dτ D B t ξ + T t e s tẆ ε,η(τ,Bτ )dτ Y ε,η s ∇ xẆε,η (s, B s )ds F t := Z 0,ε,η t + Z 1,ε,η t .
Assuming D B r ξ is nice, we Z 0,ε,η t can be treated in exactly the same way as Y ε,η s . We shall focus our effort on showing
Z 1,ε,η t ∈ M 2 F (0, T ; R d ). Substituting Y ε,η t
given by (3.2) into the above expression, we have
Z 1,ε,η t = E T t e s tẆ ε,η(τ,Bτ )dτ E ξ exp T sẆ ε,η (u, B u )du F s ∇ xẆε,η (s, B s )ds F t = T t E ξ ∇ xẆε,η (s, B s ) exp T tẆ ε,η (u, B u )du F t ds .
Since it involves the term ∇ xẆε,η (s, B s ), this term is much more difficult to deal with. We shall fully explore the normality of the Gaussian field W . Moreover, there is a conditional expectation in the expression of Z 1,ε,η t which seems to stop us carrying out any meaningful computations. We shall get around this difficulty by introducing two independent standard Brownian motions B 1 , B 2 which are identical copies of the Brownian motion B.
Denote F^{B^1,B^2}_t = σ{ B^1_s, B^2_r, 0 ≤ s, r ≤ t; W(t, x), t ≥ 0, x ∈ R^d }
E W Z 1,ε,η t 2 = E W T t T t E ξ(B 1 )ξ(B 2 ) ∇ xẆε,η (s 1 , B 1 s 1 ) T ∇ xẆε,η (s 2 , B 2 s 2 ) exp T t Ẇ H ε,η (u, B 1 u ) +Ẇ H ε,η (u, B 2 u ) du F B 1 ,B 2 t B 1 =B 2 =B ds 1 ds 2 = T t T t E ξ(B 1 )ξ(B 2 )I ε,η (s 1 , s 2 ) F B 1 ,B 2 t B 1 =B 2 =B ds 1 ds 2 ,(3.15)
where I ε,η (s 1 , s 2 ) is defined by
I ε,η (s 1 , s 2 ) := d i=1 E W ∇ x iẆ ε,η (s 1 , B 1 s 1 )∇ x iẆ ε,η (s 2 , B 2 s 2 ) exp T t Ẇ ε,η (u, B 1 u ) +Ẇ ε,η (u, B 2 u ) du . (3.16) Denote Z ε,η 1,i = ∇ x iẆ ε,η (s 1 , B 1 s 1 ) ; Z ε,η 2,i = ∇ x iẆ ε,η (s 2 , B 2 s 2 ) ; Y ε,η = T t Ẇ ε,η (u, B 1 u ) +Ẇ ε,η (u, B 2 u ) du .
Then
I ε,η (s 1 , s 2 ) = d i=1 E W Z ε,η 1,i Z ε,η 2,i exp (Y ε,η ) .
As random variables of W (namely for fixed B 1 , B 2 ), Z ε,η 1,i , Z ε,η 2,i , Y ε,η are jointly Gaussians, we shall use the following lemma to compute the above expectations.
Lemma 3.7. Assume that X_1, X_2, Y are jointly mean-zero Gaussian. Then

E[ X_1 X_2 \exp(Y) ] = ( E(X_1 Y) E(X_2 Y) + E(X_1 X_2) ) \exp( \tfrac{1}{2} E(Y^2) ).    (3.17)
Proof. For any constants s, t ∈ R we have
E \exp( Y + sX_1 + tX_2 ) = \exp( \tfrac{1}{2} E(Y + sX_1 + tX_2)^2 ) = \exp( \tfrac{1}{2}[ E(Y^2) + s^2 E(X_1^2) + t^2 E(X_2^2) + 2s E(X_1 Y) + 2t E(X_2 Y) + 2st E(X_1 X_2) ] ).

Thus, differentiating,

E[ X_1 X_2 \exp(Y) ] = \frac{∂^2}{∂s ∂t}\Big|_{s=t=0} \exp( \tfrac{1}{2} E(Y + sX_1 + tX_2)^2 ) = ( E(X_1 Y) E(X_2 Y) + E(X_1 X_2) ) \exp( \tfrac{1}{2} E(Y^2) ).
This is (3.17).
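A quick Monte Carlo sanity check of (3.17) for one jointly Gaussian, mean-zero triple built from independent standard normals; the mixing coefficients are arbitrary illustrative choices.

```python
# Numerical check of (3.17): the two printed values agree up to Monte Carlo error.
import numpy as np

rng = np.random.default_rng(0)
g = rng.standard_normal((3, 1_000_000))
X1 = 0.6 * g[0] + 0.2 * g[1]
X2 = 0.3 * g[1] + 0.5 * g[2]
Y = 0.4 * g[0] - 0.3 * g[2]

def cov(a, b):
    return float(np.mean(a * b))      # variables are centered by construction

lhs = float(np.mean(X1 * X2 * np.exp(Y)))
rhs = (cov(X1, Y) * cov(X2, Y) + cov(X1, X2)) * np.exp(0.5 * cov(Y, Y))
print(lhs, rhs)
```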
Applying the above Lemma 3.7 to evaluate I ε,η (s 1 , s 2 ) yields
E W Z 1,ε,η t 2 = T t T t E ξ(B 1 )ξ(B 2 ) d i=1 A ε,η 1,i + A ε,η 2,i + A ε,η 3,i exp A ε,η 4 2 F B 1 ,B 2 t B 1 =B 2 =B ds 1 ds 2 = 3 j=1 d i=1 I ε,η j,i,t ,
where
A ε,η 1,i :=E W (Z ε,η 1,i Z ε,η 2,i ), A ε,η 2,i := E W (Z ε,η 1,i Y ε,η ), A ε,η 3,i :=E W (Z ε,η 2,i Y ε,η ), A ε,η 4 := E W ((Y ε,η ) 2 ) (3.18)
and
I ε,η j,i,t := T t T t E ξ(B 1 )ξ(B 2 )A ε,η j,i exp A ε,η 4 2 F B 1 ,B 2 t B 1 =B 2 =B ds 1 ds 2 . (3.19)
Let us consider I ε,η 1,i,t in details. The other terms can be treated in similar way. First, let us compute
A ε,η 1,i =E W ∇ x iẆ ε,η (s 1 , B 1 s 1 )∇ x iẆ ε,η (s 2 , B 2 s 2 ) =α H 0 s 1 0 s 2 0 ϕ η (s 1 − r 1 )ϕ η (s 2 − r 2 )|r 2 − r 1 | 2H 0 −2 dr 1 dr 2 R 2d ∇ x i p ε (B 1 s 1 − w)∇ x i p ε (B 2 s 2 − z)ρ(w)ρ(z) d i=1 R H i (w i , z i )dwdz =J η 1 (s 1 , s 2 )J ε 2 (s 1 , s 2 ) ,(3.20)
where J η 1 (s 1 , s 2 ) and J ε 2 (s 1 , s 2 ) are defined as follows.
J η 1 (s 1 , s 2 ) := s 1 0 s 2 0 ϕ η (s 1 − r 1 )ϕ η (s 2 − r 2 )|r 2 − r 1 | 2H 0 −2 dr 1 dr 2 , J ε 2 (s 1 , s 2 ) := R 2d ∇ w i p ε (B 1 s 1 − w)∇ z i p ε (B 2 s 2 − z)ρ(w)ρ(z) d i=1 R H i (w i , z i )dwdz = R 2d ∇ w i p ε (B 1 s 1 − w)∇ z i p ε (B 2 s 2 − z)q(w, z)dwdz ,
where we recall that q(x, y) is the spatial covariance of noise given by (2.1). Notice that J ε 2 (s 1 , s 2 ) is independent of ε. It is elementary to see that
J η 1 (s 1 , s 2 ) := s 1 0 s 2 0 ϕ η (s 1 − r 1 )ϕ η (s 2 − r 2 )|r 2 − r 1 | 2H 0 −2 dr 1 dr 2 → |s 2 − s 1 | 2H 0 −2 as ε, η → 0 .
(3.21) Moreover, for any p < 1/(2 − 2H 0 ) and 1/p + 1/q = 1, by Hölder's inequality we have
|J η 1 (s 1 , s 2 )| p ≤ s 1 0 s 2 0 |r 2 − r 1 | (2H 0 −2)p dr 1 dr 2 × s 1 0 s 2 0 ϕ η (s 1 − r 1 )ϕ η (s 2 − r 2 )dr 1 dr 2 p/q .
14 The above second factor is less than or equal to 1. Making substitutions s 1 − r 1 → r ′ 1 η and
s 2 − r 2 → r ′ 2 η we have sup η∈(0,1] |J η 1 (s 1 , s 2 )| p ≤ sup η∈(0,1] 1 0 1 0 |s 2 − s 1 + η(r ′ 1 − r ′ 2 )| (2H 0 −2)p dr ′ 1 dr ′ 2 < ∞ . (3.22)
Now we consider J ε 2 . Integration by parts yields
J ε 2 (s 1 , s 2 ) = R 2d p ε (B 1 s 1 − w)p ε (B 2 s 2 − z)∇ w i ∇ z i q(w, z)dwdz = R 2d p ε (B 1 s 1 − w)p ε (B 2 s 2 − z) d j =i R H j (w j , z j ) ∇ w i ρ(w)∇ z i ρ(z)R H i (w i , z i ) + ∇ w i ρ(w)ρ(z) H i |z i | 2H i −1 sign(z i ) − H i |w i − z i | 2H i −1 sign(w i − z i ) + ρ(w)∇ z i ρ(z) H i |w i | 2H i −1 sign(z i ) − H i |w i − z i | 2H i −1 sign(w i − z i ) + ρ(z)ρ(w)α H i |w i − z i | 2H i −2 dw i dz i = J ε 21 (s 1 , s 2 ) + J ε 22 (s 1 , s 2 ) , where J ε 21 (s 1 , s 2 ) : = E X,X ′ d j =i R H j (B 1,j s 1 + εX j , B 2,j s 2 + εX ′ j ) × ∇ x i ρ(B 1 s 1 + εX)∇ x i ρ(B 2 s 2 + εX ′ )R H i (B 1,i s 1 + εX i , B 2,i s 2 + εX ′ i ) + ∇ x i ρ(B 1 s 1 + εX)ρ(B 2 s 2 + εX ′ ) H i |B 2,i s 2 + εX ′ i | 2H i −1 sign(B 2,i s 2 + εX ′ i ) (3.23) − H i |B 1,i s 1 + εX i − B 2,i s 2 − εX ′ i | 2H i −1 sign(B 1,i s 1 + εX i − B 2,i s 2 − εX ′ i ) + ρ(B 1 s 1 + εX)∇ x i ρ ′ (B 2 s 2 + εX ′ ) H i |B 1,i s 1 + εX i | 2H i −1 sign(B 2,i s 2 + εX ′ i ) − H i |B 1,i s 1 + εX i − B 2,i s 2 − εX ′ i | 2H i −1 sign(B 1,i s 1 + εX i − B 2,i s 2 − εX ′ i ) and J ε 22 (s 1 , s 2 ) := α H i E X,X ′ d j =i R H j (B 1,j s 1 + εX j , B 2,j s 2 + εX ′ j ) × ρ(B 1 s 1 + εX)ρ(B 2 s 2 + εX ′ )|B 1,i s 1 + εX i − B 2,i s 2 − εX ′ i | 2H i −2
with X = (X 1 , · · · , X d ), X ′ = (X ′ 1 , · · · , X ′ d ) being independent standard Gaussian random variables, which are also independent of B 1 , B 2 . From the definition, we can consider J ε 21 (s 1 , s 2 ) as a random variable of B 1 and B 2 . From the above expression it is easy to see that
sup ε∈(0,1] |J ε 21 (s 1 , s 2 )| ≤ C 1 + |B 1 s 1 | m + |B 2 s 2 | m ,(3.24)
for some positive constants C and m.
As concerns for J ε 22 (s 1 , s 2 ), we can find two constants p, q satisfying p < 1/(2 − 2H 0 ) and 1/p + 1/q = 1 such that by Hölder's inequality,
J ε 22 (s 1 , s 2 ) ≤ α H i E X,X ′ d j =i R H j (B 1,j s 1 + εX j , B 2,j s 2 + εX ′ j ) q ρ q (B 1 s 1 + εX)ρ q (B 2 s 2 + εX ′ ) 1/q × E X,X ′ |B 1,i s 1 + εX i − B 2,i s 2 − εX ′ i | (2H i −2)p 1/p .|J ε 22 (s 1 , s 2 )| ≤ C 1 + |B 1 s 1 | m + |B 2 s 2 | m · B 1,i s 1 − B 2,i s 2 (2H i −2)p .(3.A ε,η 1,i = lim η,ε→0 E W ∇ xẆε,η (s 1 , B 1 s 1 )∇ xẆε,η (s 2 , B 2 s 2 ) =α H 0 |s 2 − s 1 | 2H 0 −2 d j =i R H j (B 1,j s 1 , B 2,j s 2 ) ∇ x i ρ(B 1 s 1 )∇ x i ρ(B 2 s 2 )R H i (B 1,i s 1 , B 2,i s 2 ) + ∇ x i ρ(B 1 s 1 )ρ(B 2 s 2 ) 2H i |B 2,i s 2 | 2H i −1 sign(B 2,i s 2 ) − 2H i |B 1,i s 1 − B 2,i s 2 | 2H i −1 sign(B 1,i s 1 − B 2,i s 2 ) + ρ(B 1 s 1 )∇ x i ρ(B 2 s 2 ) 2H i |B 1,i s 1 | 2H i −1 sign(B 2,i s 2 ) + 2H i |B 1,i s 1 − B 2,i s 2 | 2H i −1 sign(B 1,i s 1 − B 2,i s 2 ) + α H i ρ(B 1 s 1 )ρ(B 2 s 2 )|B 1,i s 1 − B 2,i s 2 | 2H i −2 .
Using the spatial covariance q(x, y), we can write lim η,ε→0
A ε,η 1,i = α H 0 |s 2 − s 1 | 2H 0 −2 ∂ 2 ∂x i ∂y i q(x, y)A ε,η 2,i = α H 0 T t |s 1 − u| 2H 0 −2 ∂ ∂x i q(x, y) x=B 1 s 1 ,y=B 2 u + ∂ ∂x i q(x, y) x=B 1 s 1 ,y=B 1 u du . lim η,ε→0 A ε,η 3,i = α H 0 T t |s 2 − u| 2H 0 −2 ∂ ∂x i q(x, y) x=B 2 s 1 ,y=B 2 u + ∂ ∂x i q(x, y) x=B 2 s 1 ,y=B 1 u du .
As for A ε,η 4 , we have by definition of Y ε,η
A ε,η 4 = E W T t T t Ẇ ε,η (u, B 1 u ) +Ẇ ε,η (u, B 2 u ) Ẇ ε,η (v, B 1 v ) +Ẇ ε,η (v, B 2 v ) dudv = 3 i=1 A ε,η 4,i , where A ε,η 41 :=E W T t T tẆ ε,η (u, B 1 u )Ẇ ε,η (v, B 1 v )dudv , A ε,η 42 :=2E W T t T tẆ ε,η (u, B 2 u )Ẇ ε,η (v, B 1 v )dudv , A ε,η 43 :=E W T t T tẆ ε,η (u, B 2 u )Ẇ ε,η (v, B 2 v )dudv .
Similar to the proof of Proposition 2.1 we can show that A ε,η 4,i , i = 1, 2, 3 can be bounded by a bound analogous to (2.10). Thus, we have
lim η,ε→0 A ε,η 4 = T t T t α H 0 |u − v| 2H 0 −2 q(B 1 u , B 1 v ) + 2q(B 1 u , B 2 v ) + q(B 2 u , B 2 v ) dudv .I ε,η 1,i,t =α H 0 T t T t E ξ(B 1 )ξ(B 2 )|s 2 − s 1 | 2H 0 −2 (∂ i,i q)(B 1 s 1 , B 2 s 2 ) Υ(t, T, B 1 , B 2 ) F B 1 ,B 2 t B 1 =B 2 =B ds 1 ds 2 ,(3.28)
where ∂ i,i q(x, y) = ∂ 2 ∂x i ∂y i q(x, y) and
Υ := exp T t T t α H 0 |u − v| 2H 0 −2 q(B 1 u , B 1 v ) + 2q(B 1 u , B 2 v ) + q(B 2 u , B 2 v ) dudv . (3.29)
In a similar way we can show the existence of the limits of I ε,η 2,i,t , I ε,η 3,i,t , and we can further identity these limits.
Thus, we can easily deduce that E T 0 Z ε,η t 2 dt exists. In order to take the limit, it would be sufficient to show that, along a subsequence, Z ε,η converges to some Z ∈ M 2 F (0, T ; R d ). But this is guaranteed by the fact that E T 0 Z ε,η t 2 dt is bounded w.r.t. ε, η > 0. Indeed, as before we can also show that Z ε,η is a Cauchy sequence in M 2 F (0, T ; R d ), whose limit is denoted by Z = {Z t , t ∈ [0, T ]}. We can also write Z as (3.13) and (3.14) (whose justification is given through our above approximation). After we have found the limit Y (Theorem 3.5) and the limit Z (Theorem 3.6), we want to show that they are the solution to (1.2). To this end we shall take limit in equation (3.1). Since we have shown the convergence of Y ε,η t and Z ε,η t as in Theorems 3.5 and Theorems 3.6, we only need to discuss the limit of T t Y ε,η sẆε,η (s, B s )ds. Before discussing this limit we give the definition of a (Stratonovich) stochastic integral with respect to t 0 F s W (ds, B s ).
d i=1 (2H i − β i ) < 2 and ξ ∈ L q (Ω) for q > 2 2H − 1 , where H = min{H 0 , . . . , H d }. Then for any t ∈ [0, T ], we have T t Y ε,η sẆ ε,η (s, B s )ds → T t Y s W (ds, B s )
in L 2 sense, as ε, η ↓ 0.
Proof. By (3.1), Lemma 3.1 and Theorem 3.6, we know
T t Y ε,η sẆε,η (s, B s )ds = Y ε,η t − ξ + T t Z ε,η s dB s converges in L 2 sense to the random field A t := Y t − ξ + T t Z s dB s as ε, η tend to zero. Hence, if B ε,η t := T t Y ε,η s − Y s Ẇ ε,η (s, B s )ds → 0 (3.30) in L 2 (Ω), then we have T t Y sẆε,η (s, B s )ds = T t Y ε,η sẆε,η (s, B s )ds − B ε,η t will converge to A in L 2 (Ω)ds = T t R d T t ϕ η (s − r)p ε (B s − y) W (dr, y)dyds . (3.31) Recall F · W (φ) = δ(F φ) + D W F, φ H . Then, we obtaion Y ε,η s − Y s Ẇ ε,η (s, B s ) = Y ε,η s − Y s R d s t ϕ η (s − r)p ε (B s − y)W (dr, y)dy = s t R d Y ε,η s − Y s ϕ η (s − r)p ε (B s − y) W (δr, y)dy + D W (Y ε,η s − Y s ), ϕ η (s − ·)p ε (B s − ·) H .
Hence, by stochastic Fubini's Theorem, B ε,η t can be written as
B ε,η t = R d T t T t Y ε,η s − Y s ϕ η (s − r)p ε (B s − y)ds W (δr, y)dy + T t D W (Y ε,η s − Y s ), ϕ η (s − ·)p ε (B s − ·) H ds := B ε,η,1 t + B ε,η,2 t .
(3.32)
For the term B ε,η,1 t , we define
φ ε,η r,y = T t Y ε,η s − Y s ϕ η (s − r)p ε (B s − y)ds,
and with the help of L 2 estimate for Skorokhod type stochastic integral, it yields:
E B ε,η,1 t 2 ≤ E φ ε,η 2 H + E D W φ ε,η 2 H⊗H . (3.33)
The above first term can be estimated as follows:
E φ ε,η 2 H = E [t,T ] 2 Y ε,η s − Y s Y ε,η r − Y r × ϕ η (s − ·)p ε (B s − ·), ϕ η (r − ·)p ε (B r − ·) H dsdr . (3.34)
Recalling the definition in (2.2), and combining with the proof in Proposition 2.1 (refer to (2.8) and (2.10)) we deduce that
ϕ η (s − ·)p ε (B s − ·), ϕ η (r − ·)p ε (B r − ·) H = α H 0 [t,T ] 2 R 2d ϕ η (s − u)ϕ η (r − v)p ε (B s − y)p ε (B r − z) × |u − v| 2H 0 −2 ρ(y)ρ(z) d i=1 R H i (y i , z i )dudvdydz ≤ C|r − s| 2H 0 −2 ρ(B s )ρ(B r ) d i=1 R H i (B i s , B i r ). (3.35)
Substituting this into (3.34) and with the help of (2.9) we have
E φ ε,η 2 H ≤C E [t,T ] 2 Y ε,η s − Y s Y ε,η r − Y r |r − s| 2H 0 −2 ρ(B s )ρ(B r ) d i=1 R H i (B i s , B i r ) dsdr ≤C E sup s∈[0,T ] Y ε,η s − Y s 4 1/2 × E [t,T ] 2 |r − s| 2H 0 −2 d i=1 (1 + |B i s |) 2H i −β i (1 + |B i r |) 2H i −β i dsdr 2 1/2 . (3.36)
Thanks to Theorem 3.5, Proposition 2.2 and the dominated convergence theorem, we see that E φ ε,η 2 H converges to zero as ε, η tend to zero.
Secondly, we have to deal with E D W φ ε,η 2 H⊗H , the second term in (3.33). By Malliavin calculus and (3.31) we have
D W Y ε,η t = E ξ D W exp T tẆ ε,η (s, B s )ds F t = E ξ exp T tẆ ε,η (s, B s )ds T t ϕ η (s − ·)p ε (B s − ·)ds F t . (3.37) We denote F B 1 ,B 2 t,s = σ(B 1 u , B 2 v , 0 ≤ u ≤ t, 0 ≤ v ≤ s; W (t, x), t ≥ 0, x ∈ R d )E W D W Y ε,η t , D W Y ε ′ ,η ′ t H = E W E ξ(B 1 )ξ(B 2 ) exp T tẆ ε,η (s, B 1 s )ds + T tẆ ε ′ ,η ′ (s, B 2 s )ds × [t,T ] 2 ϕ η (s − ·)p ε (B 1 s − ·), ϕ η ′ (r − ·)p ε ′ (B 2 r − ·) H dsdr F B 1 ,B 2 t B 1 =B 2 =B ≤ α H 0 E ξ(B 1 )ξ(B 2 ) E W exp T tẆ ε,η (s, B 1 s )ds + T tẆ ε ′ ,η ′ (s, B 2 s )ds (3.38) × [t,T ] 2 |s − r| 2H 0 −2 ρ(B 1 s )ρ(B 2 r ) d i=1 R H i (B 1,i s , B 2,i r )dsdr F B 1 ,B 2 t B 1 =B 2 =B = α H 0 E ξ(B 1 )ξ(B 2 ) exp 2 j,k=1 T t T t |s − r| 2H 0 −2 ρ(B j s )ρ(B k r ) d i=1 R H i (B j,i s , B k,i r )dsdr × [t,T ] 2 |s − r| 2H 0 −2 ρ(B 1 s )ρ(B 2 r ) d i=1 R H i (B 1,i s , B 2,i r )dsdr F B 1 ,B 2 t B 1 =B 2 =B .
We have to prove the integrability of (3.38). Put a, b be two positive constants such that 1/a+1/b = 1 and 2a < q. With the help of Proposition 2.2 and Hölder's inequality,
E ξ(B 1 )ξ(B 2 ) exp 2 j,k=1 T t T t |s − r| 2H 0 −2 d i=1 R H i (B j,i s , B k,i r )ρ(B j,i s )ρ(B k,i r )dsdr × [t,T ] 2 |s − r| 2H 0 −2 ρ(B 1 s )ρ(B 2 r ) d i=1 R H i (B 1,i s , B 2,i r )dsdr ≤ ξ 2 q E exp 2b 2 j,k=1 T t T t |s − r| 2H 0 −2 d i=1 R H i (B j,i s , B k,i r )ρ(B j,i s )ρ(B k,i r )dsdr 1/2b (3.39) × E [t,T ] 2 |s − r| 2H 0 −2 ρ(B 1 s )ρ(B 2 r ) d i=1 R H i (B 1,i s , B 2,i r )dsdr 2b 1/2b <∞. That is, we get E D W Y ε,η t , D W Y ε ′ ,η ′ t H is integrable.
Hence, in a similar idea as that shown in (3.36), we obtain Y ε,η t also converges to Y t in D 1,2 W as ε, η ↓ 0. Then putting ε = ε ′ , η = η ′ ,
sup ε,η∈(0,1] sup t∈[0,T ] E D W Y ε,η t 2 H < ∞.
Hence, combining (3.35), (3.37) and (3.38) we have
E D W φ ε,η 2 H⊗H 20 = E [t,T ] 2 D W (Y ε,η s − Y s ), D W (Y ε,η r − Y r ) H × ϕ η (s − ·)p ε (B s − ·), ϕ η (r − ·)p ε (B r − ·) H dsdr = α H 0 E [t,T ] 2 D W (Y ε,η s − Y s ), D W (Y ε,η r − Y r ) H (3.40) × |s − r| 2H 0 −2 ρ(B s )ρ(B r ) d i=1 R H i (B i s , B i r )dsdr ≤ C E [t,T ] 2 D W (Y ε,η s − Y s ), D W (Y ε,η r − Y r ) H × |s − r| 2H 0 −2 d i=1 (1 + |B i s |) 2H i −β i (1 + |B i r |) 2H i −β i dsdr .
In a similar method as in the proof of Theorem 3.6: there are two positive constants p ′ , q ′ , 1/p ′ + 1/q ′ = 1 such that 1 < p ′ < 1 2 − 2H 0 and 2q ′ < q for which we can deduce
E D W φ ε,η 2 H⊗H ≤ [t,T ] 2 E | D W (Y ε,η s − Y s ), D W (Y ε,η r − Y r ) H | q ′ dsdr 1/q ′ × [t,T ] 2 E |s − r| (2H 0 −2)p ′ d i=1 (1 + |B i s |) 2H i −β i (1 + |B i r |) 2H i −β i p ′ dsdr 1/p ′ , (3.41) where [t,T ] 2 E |s − r| (2H 0 −2)p ′ d i=1 (1 + |B i s |) 2H i −β i (1 + |B i r |) 2H i −β i p ′ dsdr ≤ C [t,T ] 2 |s − r| (2H 0 −2)p ′ dsdr < ∞.
(3.42)
Now we only need to study the integrability of the first term on the right side of (3.41). Pick two constants a, b > 1, 1/a + 1/b = 1 such that a is sufficiently small to satisfy 2aq ′ < q. With the help of proof in Proposition 2.2 and (3.39), we have that
[t,T ] 2 E D W Y ε,η s , D W Y ε,η r H q ′ dsdr 1/q ′ ≤ ξ 2 q [t,T ] 2 E E exp bq ′ 2 j,k=1 T s T r |u − v| 2H 0 −2 ρ(B j u )ρ(B k v ) d i=1 R H i (B j,i u , B k,i v )dudv × T s T r |u − v| 2H 0 −2 ρ(B 1 u )ρ(B 2 v ) d i=1 R H i (B 1,i u , B 2,i v )dudv bq ′ F B 1 ,B 2 s,r B 1 =B 2 =B 1/b dsdr 1/q ′ < ∞. (3.43) It yields that E D W φ ε,η 2 H⊗H is integrable. Since we have deduced that Y ε,η → Y in D 1,2 W , ε, η → 0, therefore E D W φ ε,η 2
H⊗H converges to zero as ε, η tend to zero. Thus, we get B ε,η,1 t defined in (3.32) converges to zero in L 2 as ε, η tend to zero. Now we are going to bound B ε,η,2 t . We have
D W Y s = D W E ξ exp( T s W (dr, B r )) F s = D W E ξ exp( R d T s δ(B r − y)W (dr, y)dy) F s = E ξ exp T s W (dr, B r ) δ(B · − ·)|F s .B ε,η,2 t = T t D W (Y ε,η s − Y s ), ϕ η (s − ·)p ε (B s − ·) H ds = T t E ξ exp T sẆ ε,η (r, B r )dr × T s ϕ η (r − ·)p ε (B r − ·), ϕ η (s − ·)p ε (B s − ·) H dr|F s ds − T t E ξ exp T s W (dr, B r ) δ(B · − ·), ϕ η (s − ·)p ε (B s − ·) H |F s ds :=B ε,η,3 t − B ε,η,4 t . (3.45)
Note that,
δ(B · − ·), ϕ η (s − ·)p ε (B s − ·) H = [s,T ] 2 R 2d |u − v| 2H 0 −2 δ(B u − y)ϕ η (s − v)p ε (B s − z)ρ(y)ρ(z) d i=1 R H i (y i , z i )drdv = [s,T ] 2 R d |u − v| 2H 0 −2 ϕ η (s − v)p ε (B s − z)ρ(B u )ρ(z) d i=1 R H i (B i u , y i )dudvdz.
Thus, by Fubini's Theorem and previous estimates, we have
|B ε,η,3 t | ≤ T t E ξ exp T sẆ ε,η (r, B r )dr T s |s − r| 2H 0 −2 ρ(B s )ρ(B r ) d i=1 R H i (B i s , B i r )dr|F s ds (3.46) and |B ε,η,4 t | = T t E ξ exp T s W (dr, B r ) [s,T ] 2 R d |u − v| 2H 0 −2 ϕ η (s − v) × p ε (B s − y)ρ(B u )ρ(y) d i=1 R H i (B i u , y i )dudvdy|F s ds (3.47) ≤ T t E ξ exp T s W (dr, B r ) T s |s − v| 2H 0 −2 ρ(B v )ρ(B s ) d i=1 R H i (B i v , B i s )dv|F s ds.
Proposition 2.2 and dominated convergence theorem guarantee the integrability of these two expressions. Now, with the help of dominated convergence theorem we get B ε,η,3 t and B ε,η,4
t converge in L 2 to T t E ξ exp T s W (dr, B r ) T s |s − r| 2H 0 −2 ρ(B s )ρ(B r ) d i=1 R H i (B i s , B i r )dr|F s ds
as ε, η tend to zero which also mean that B ε,η,2 t converges in L 2 to zero as ε, η tend to zero.
Hölder continuity of Y and Z
Let the Assumption (2) in Theorem 1.1 be satisfied. Now we can prove the Hölder continuity of Y and Z.
Proof. First we prove the Hölder continuity of Y. Recall that q > 2/(2H − 1), where H = min{H_0, . . . , H_d}.
Thus for all a ∈ (1, q), we have
E Y t − Y s a = E E ξ exp(V t ) F t − E ξ exp(V s ) F s a ≤ 2 E E ξ exp(V t ) F t − E B ξ exp(V s ) F t a + E E ξ exp(V s ) F t − E ξ exp(V s ) F s a =: 2 I 1 + I 2 .
(4.1)
For I 1 , one can use Jensen's inequality and the exponential integrability of Proposition 2.2 to get, for two positive constants p ′ , q ′ satisfying 1/p ′ + 1/q ′ = 1 and aq ′ < q,
I 1 = E E ξ exp(V t ) F t − E ξ exp(V s ) F t a ≤ E E ξ |V t − V s | exp (max{V t , V s }) F t a (4.2) ≤ E E ξ q ′ exp(q ′ max{V t , V s }) F t a/q ′ E V t − V s p ′ F t a p ′ ≤ C E V t − V s ap ′ 1 p ′ .
By (2.5), (2.6) and the equivalence between the L 2 -norm and the L p -norm for a Gaussian random variable, it yields that
I 1 ≤ E V t − V s ap ′ 1 p ′ = E t s W (dr, B r ) ap ′ 1 p ′ ≤ C E E W t s W (dr, B r ) 2 ap ′ /2 1 p ′ ≤ C E t s t s α H 0 |u − v| 2H 0 −2 ρ(B u )ρ(B v ) d i=1 R H i (B i u , B i v )dudv ap ′ /2 1 p ′ (4.3) ≤ C t s t s |u − v| (2H 0 −2)m dudv ap ′ 2m E t s t s ρ(B u )ρ(B v ) d i=1 R H i (B i u , B i v ) n dudv ap ′ 2n 1 p ′ ≤ C |t − s| aH 0 −ε ,
where ε is an arbitrary positive constant, n, m > 1 such that 1 n + 1 m = 1.
For I 2 , denote by ψ t = exp t 0 W (ds, B s ) . Proposition 2.2 tells us that, ξψ T is L q (Ω)integrable for q > 2 2H − 1 . Moreover, Clark-Ocone formula implies that,
ξψ T = E B [ξψ T ] + T 0 f r dB r , where f r = E[D B r (ξψ T )|F B r ] = E[ψ T D B r (ξ)|F r ] + E[ξD B r (ψ T )|F r ]. (4.4)
Thus, from the Burkholder-Davis-Gundy inequality and the fact that a > 2 we deduce that
I 2 = E ψ −1 s E ξψ T F t − E ξψ T F s a = E ψ −1 s t s f r dB r a ≤ E ψ −2a s 1/2 E t s f r 2 dr a 1/2 (4.5) ≤ C E t s f r 2 dr a 1/2 .
Taking (4.4) into above formula yields that
E t s f r 2 dr a ≤ C E t s E ψ T D B r (ξ) F r 2 dr a + C E t s E ξD B r (ψ T ) F r 2 dr a/2 ≤ C t s E D B r ξ 2q ′ dr a/q ′ t s E (ψ T ) 2p ′ dr a/p ′ + C t s E[ξ 2q ′ ]dr a/q ′ t s E D B r ψ T 2p ′ dr a/p ′ ≤ C|t − s| a/q ′ t s E E W (ψ T ) 2 p ′ dr a/p ′ + t s E E W D B r ψ T 2 p ′ dr a/p ′ ,
where C is a constant only depends on p ′ , q ′ , D B r ξ 2 Lq and ξ 2 Lq . We recall ψ s and D B r ψ s are centralized Gaussian processes given B. Moreover,
E W D B r ψ T 2 = E W D B r exp t 0 W (ds, B s ) 2
can be treated in a similar way as we did in the proof of Theorem 3.6. By Proposition 2.1 and Theorem 3.6, we can directly obtain the boundedness of E E W (ψ T ) 2 p ′ and E E W D B r ψ T 2 p ′ . Thus, we deduce
I 2 ≤ E t s f r 2 dr a 1/2 ≤ C |t − s| a/2 .
Because we assume that H > 1 2 , the Hölder continuous coefficient can only be 1 2 .
24
Next we have to consider the Hölder continuity of Z. Recall (3.14) for the expression of Z:
Z t = D B t Y t = E e T t W (dτ,Bτ )dτ D B t ξ + ξ exp T t W (du, B u )du T t ∇ x W (ds, B s ) F t = E e Vt D B t ξ + ξ exp V t ∇ x V t F t =: Z 1 t + Z 2 t ,
where we recall the definition of (2.5) for V t and where we denote ∇ x V t = T t ∇ x W (ds, B s ). Z 1 is easy to deal with. In fact, similar to the way to treating (4.1), (4.3), (4.5), and by the assumption that D B ξ ∈ L q (Ω) and E|D t ξ − D s ξ| q ≤ C|t − s| κq/2 for some κ > 0, we see
E Z 1 t − Z 1 s 2 ≤ C|t − s| κ∧1 .
We shall focus on Z 2 .
E Z 2 t − Z 2 s 2 = E E ξ exp(V t )∇ x V t F t − E ξ exp(V s )∇ x V s F s 2 ≤ 2E E ξ exp(V t )∇ x V t F t − E ξ exp(V s )∇ x V s F t 2 + 2E E ξ exp(V s )∇ x V s F t − E ξ exp(V s )∇ x V s F s 2 := 2(I 1 + I 2 ).
(4.6)
For I 1 , with the help of Jensen's inequality we have
I 1 ≤ E |ξ exp(V t )∇ x V t − ξ exp(V s )∇ x V s | 2 ≤ 2E |ξ (exp(V t ) − exp(V s )) ∇ x V t | 2 + |ξ exp(V s )(∇ x V t − ∇ x V s )| 2 ≤ 2E |ξ exp(max{V t , V s })∇ x V t (V t − V s )| 2 + 2E |ξ exp(V s )(∇ x V t − ∇ x V s )| 2
:= 2(I 1,1 + I 1,2 ).
We can find two constant a, b such that 1/a + 1/b = 1, 1 < a < 1 2−2H and 2b < q. Then we have
I 1,1 ≤ E (∇ x V t ) 2a 1/a E ξ 2b (V t − V s ) 2b 1/b ≤ C E ξ 2b (V t − V s ) 2b 1/b (4.7) d i=1 E B T t T t |u − v| 2H 0 −2 |B i u − B i v | 2H i −2 d j =i R H j (B j u − B j v )ρ(B j u )ρ(B j v )dudv a 1/a ≤ C|t − s| 2H 0 −ε .
As for I 1,2 , we deduce similarly that
I 1,2 ≤ E |ξ| 2b exp(2bV t ) 1/b E [∇ x V t − ∇ x V s ] 2a 1/a ≤ C a E |ξ| 2b exp(2bV t ) 1/b E E W t s t s (∇ x W (du, B u )) T (∇ x W (dv, B v )) a 1/a ≤ C E d i=1 t s t s |u − v| 2H 0 −2 |B i u − B i v | 2H i −2 d j =i R H j (B j u , B j v )ρ(B j u )ρ(B j u )dudv a 1/a (4.8) ≤ C d i=1 t s t s |u − v| (2H 0 −2)a+(H i −1)a dudv 1/a ≤ C|t − s| 2H 0 +H−1−ε .
As I 2 , the Clark-Ocone formula yields
ξ exp(V s )∇ x V s = E B [ξ exp(V s )∇ x V s ] + T s E D B r (ξ exp(V s )∇ x V s ) F r dB r . (4.9)
Thus we have
I 2 = E t s E D B r (ξ exp(V s )∇ x V s ) F r dB r 2 = E t s E D B r (ξ exp(V s )∇ x V s ) F r 2 dr = E t s E ξ∇ x V s D B r (exp(V s )) F r 2 dr + E t s E ξ exp(V s )D B r (∇ x V s ) F r 2 dr + E t s E exp(V s )∇ x V s D B r ξ F r 2 dr =: I 2,1 + t s I 2,2 dr + I 2,3 .
(4.10)
The integrability inside the integral of I 2,3 is obvious due to (4.7). For I 2,1 we have
E t s E D B r (exp(V s )) ξ ∇ x V s F r 2 dr = E t s E ξ exp(V s ) (∇ x V s ) 2 F r 2 dr ≤ t s E[ξ b exp(bV s )] 2/b E(∇ x V s ) 2a 2/a ds ≤ C|t − s| .
Finally, we deal with I 2,2 . We shall use the technique as in (3.15). Notice that,
D B r (∇ x V s ) = D B r T s ∇ x W (du, B u )1 [0,u] (r) = T s∨r ∇ 2 x W (du, B u ) .
We have analogously to (3.15)
I 2,2 = E E ξ(B 1 ) ξ(B 2 ) exp 2 j,k=1 α H 0 2 T s T s |u − v| 2H 0 −2 R H i (B j u , B k v )ρ(B j u )ρ(B k v )dudv (4.11) × T s∨r T s∨r Tr ∇ 2 x W (du, B 1 u ) T ∇ 2 x W (dv, B 2 v ) dudv F B 1 ,B 2 r B 1 =B 2 =B .
Using the Hölder inequality, we have
I 2,2 = E E ξ(B 1 ) ξ(B 2 ) exp 2 j,k=1 α H 0 2 T s∨r T s∨r |u − v| 2H 0 −2 R H i (B j u , B k v )ρ(B j u )ρ(B k v )dudv b F B 1 ,B 2 r B 1 =B 2 =B 1/b (4.12) 26 × E E T s∨r T s∨r Tr ∇ 2 x W (du, B 1 u ) T ∇ 2 x W (dv, B 2 v ) dudv a F B 1 ,B 2 r B 1 =B 2 =B 1/a ≤ CI 1/a 2,2,1 , where I 2,2,1 = E E T s∨r T s∨r Tr ∇ 2 x W (du, B 1 u ) T ∇ 2 x W (dv, B 2 v ) dudv a F B 1 ,B 2 r B 1 =B 2 =B .
We shall consider the term that contains J =
T s∨r T s∨r ∂ 2 ∂x 2 i W (du, B 1 u ) ∂ 2 ∂x 2 i W (dv, B 2 v )
(denote the corresponding term by J i ) since the other terms can be treated in similar way. When r ≥ s, we have for any a > 1,
J i =E E T r T r ∂ 2 ∂x 2 i W (du, B 1 u ) ∂ 2 ∂x 2 i W (dv, B 2 v ) a F B 1 ,B 2 r B 1 =B 2 =B ≤C a E E E W T r T r ∂ 2 ∂x 2 i W (du, B 1 u ) ∂ 2 ∂x 2 i W (dv, B 2 v ) 2 a/2 F B 1 ,B 2 r B 1 =B 2 =B ≤C a E E T r T r |u − v| 2H 0 −2 |B 1,i u − B 2,i v | 2H i −4 ρ(B 1 u )ρ(B 2 v ) j =i |R H j (B 1,j u , B 2,j v )|dudv a/2 F B 1 ,B 2 r B 1 =B 2 =B + C a ,
where in the above first inequality, we used the hypercontractivity for E W and in the above last inequality, there are terms such as the derivatives with respect to ∂ 2 x i ρ and ∂ x i ρ∂ x i R H i which are easy to be bounded. By using Hölder's inequality again, the above expectation is bounded by a multiple of 1/a ′ power of (for any a ′ > 1)
E E T r T r |u − v| 2H 0 −2 |B 1,i u − B 2,i v | 2H i −4 dudv aa ′ /2 F B 1 ,B 2 r B 1 =B 2 =B =E E T r T r |u − v| 2H 0 −2 |(B 1,i u − B 1,i r ) − (B 2,i v − B 2,i r ) + B 1,i r − B 2,i r | 2H i −4 dudv aa ′ /2 F B 1 ,B 2 r B 1 =B 2 =B =E E X,Y T r T r |u − v| 2H 0 −2 | √ u − rX − √ v − rY + B 1,i r − B 2,i r | 2H 0 −4 dudv aa ′ /2 B 1 =B 2 =B ,
where X and Y are two independent standard Gaussians. The above expectation in X and Y are bounded by (denoting Z = B 1,i r − B 2,i r and choosing aa ′ /2 < 1)
E X,Y T r T r |u − v| 2H 0 −2 | √ u − rX − √ v − rY + Z| 2H 0 −4 dudv aa ′ /2 ≤ T r T r |u − v| 2H 0 −2 R 2 1 √ 2π e −x 2 /2 1 √ 2π e −y 2 /2 | √ u − rx − √ v − ry + Z| 2H i −4 dxdydudv aa ′ /2 ≤ T r T r |u − v| 2H 0 −2 1 (u − r)(v − r) R 2 |xy| 4 1 √ 2π e −x 2 /2 1 √ 2π e −y 2 /2 | √ u − rx − √ v − ry + Z| 2H i −2 dxdy dudv aa ′ /2 ≤ T r T r |u − v| 2H 0 −2 1 (u − r)(v − r) R 2 1 √ 2π e −x 2 /4 1 √ 2π e −y 2 /4 | √ u − rx − √ v − ry + Z| 2H i −2 dxdy dudv aa ′ /2 = C T r T r |u − v| 2H 0 −2 1 (u − r)(v − r) E X,Y | √ u − rX − √ v − rY + Z| 2H i −2 dudv aa ′ /2 ≤ C T r T r |u − v| 2H 0 −2 |u − r| −1/2 |v − r| −1/2 |u − r + v − r| 2H i −2 dudv aa ′ /2 ≤ C T r T r |u − v| 2H 0 −2 |u − r| −1/2 |v − r| −1/2 |u − r| H i −1 |v − r| H i −1 dudv aa ′ /2 < ∞ ,
where the third last inequality follows from Lemma A.1 of [1] and the last inequality holds true since H i > 1/2. This proves that I 2,2 is bounded and hence I 2 ≤ C|t − s|. Hence, Combing (4.7), (4.8) and (4.10), we have
E Z 2 t − Z 2 s 2 ≤ C|t − s| 2H 0 −1+H−ε ,
and finally we deduce
E Z t − Z s 2 ≤ C|t − s| (2H 0 −1+H−ε)∧κ , for all ε > 0.
Uniqueness of solution
We have proved parts (1) and (2) of Theorem 1.1. In this section, we are going to prove part (3), the uniqueness of BSDEs (1.2). We need the following proposition. Before we prove Proposition 5.1, we first need the following lemma. Then α t s satisfies the following equation.
α t 0 = α s 0 + t 0 α r 0 W (dr, B r ). (5.2) Proof. Define K t = t 0 W (dr, B r )
. Consider a sequence of partitions π n = {0 = t 0 < t 1 < . . . < t n = t} such that |π n | = max 0≤i≤n−1 (t i+1 − t i ) → 0 when n → ∞ (the t i 's depend on n and we omit this explicit dependence to simplify notation). Since H ∈ (1/2, 1) and since W satisfies (1.1), by Markov's inequality, Proposition 2.1 and the estimate (2.11), it is easy to obtain
lim n→∞ P n i=1 t i+1 t i W (dr, B r ) 2 > ε ≤ lim n→∞ n i=1 E | t i+1 t i W (dr, B r )| 2 ε ≤ lim n→∞ C n i=1 |t i+1 − t i | 2H 0 ε = 0, (5.3)
for any ε > 0. On the other hand, we have
α t 0 − 1 = n i=0 e Kt i+1 − e Kt i = n i=0 α t i 0 K t i+1 − K t i + R n t , (5.4) where R n t = n i=0 K t i+1 − K t i 1 0 e Kt i +(Kt i+1 −Kt i )u − e Kt i du.
Combining (5.3) with (3.12) yields
|R n t | ≤ C sup 0≤r≤t e Kr · n i=0 |K t i+1 − K t i | 2 P −→ 0, n → ∞.
This proves that α t 0 satisfies (5.2).
Lemma 5.3. Let (Y, Z) satisfy (1.2) and let α t be given as above. Suppose the conditions of Proposition 5.1 are satisfied. Then
α T 0 ξ − α t 0 Y t = T t α s 0 Z s dB s . (5.5)
Proof. Let (Y, Z) satisfy (1.2) and we use partition π n = {t = t 0 < t 1 < . . . < t n = T }. Taking (5.2) into account we have
α T 0 ξ − α t 0 Y t = n i=1 α t i+1 0 Y t i+1 − α t i 0 Y t i = n i=1 α t i+1 0 (Y t i+1 − Y t i ) + Y t i (α t i+1 0 − α t i 0 ) = n i=1 α t i+1 0 − t i+1 t i Y r W (dr, B r ) + t i+1 t i Z r dB r + n i=1 Y t i t i+1 t i α r 0 W (dr, B r ) (5.6) 29 = n i=1 −α t i 0 Y t i t i+1 t i W (dr, B r ) + α t i 0 Z t i t i+1 t i dB r + n i=1 Y t i α t i 0 t i+1 t i W (dr, B r ) +R n t = n i=1 α t i 0 Z t i t i+1 t i dB r +R n t , whereR n t = n i=1 α t i+1 0 t i+1 t i [Y r − Y t i ] W (dr, B r ) + n i=1 α t i+1 0 − α t i 0 Y t i t i+1 t i W (dr, B r ) + n i=1 α t i 0 t i+1 t i [Z r − Z t i ] dB r + n i=1 α t i+1 0 − α t i 0 t i+1 t i Z r dB r + n i=1 Y t i t i+1 t i α r 0 − α t i 0 W (dr, B r ) = n i=1 R 1,i + R 2,i + R 3,i + R 4,i + R 5,i . (5.7)
For R 1,i , using (7.1) we get
|R 1,i | 2 E t i+1 t i [Y r − Y t i ] W (dr, B r ) 2 = E [t i ,t i+1 ] 2 (Y r − Y t i )(Y s − Y t i )|s − r| 2H 0 −2 q(B r , B s )drds + E [t i ,t i+1 ] 2 [r,t i+1 ] [t i ,s] R 2d D W r,y (Y u − Y t i ) D W v,w (Y s − Y t i ) × |u − v| 2H 0 −2 |s − r| 2H 0 −2 q(B u , w)q(B s , y)dwdydvdudrds . (5.8)
Recalling (2.9) that covariance q(x, y) satisfies
|q(x, y)| ≤ C d i=1 (1 + |x i |) 2H i −β i (1 + |y i |) 2H i −β i ,(5.9)
where β i > 2H i + 1, i = 1, . . . , d, it yields
|R 1,i | 2 t i+1 t i t i+1 t i E |Y r − Y t i | 2 1/2 E |Y s − Y t i | 2 1/2 |s − r| 2H 0 −2 sup ω,s,r q(B s , B r ) dsdr + t i+1 t i t i+1 t i t i+1 r s t i E |D W r,y (Y r − Y t i )| 2 1/2 E D W v,w (|Y s − Y t i )| 2 1/2 (5.10) × |u − v| 2H 0 −2 |s − r| 2H 0 −2 sup
ω,u,s,w,y q(B u , w)q(B s , y) dvdudsdr.
If Y satisfies condition (3) in Theorem 1.1, that is, the continuity coefficient of Y is 1 2 , then we can directly obtain
|R 1,i | 2 C |t i+1 − t i | 1+2H 0 + |t i+1 − t i | 4H 0 (5.11)
If Y satisfies condition (4) in Theorem 1.1, then from (1.2), we have
Y r − Y t i = t i r Y s W (ds, B s ) − t i r Z s dB s , r ∈ [t i , t t+1 ] . (5.12)
With the help of (7.1) again and the fact that (Y, Z) ∈ S 2 F (0, T ; R) × M 2 F (0, T ; R d ) as well as that Y also belongs to D 1,2 , it holds
E |Y r − Y t i | 2 ≤ 2E [t i ,r] 2 Y s Y u |s − u| 2H 0 −2 q(B s , B u )dsdu + 2E [t i ,r] 2 [u,r] [t i ,s] R 2d D W s ′ ,y Y u D W v,w Y s × |u − v| 2H 0 −2 |s − s ′ | 2H 0 −2 q(B u , w)q(B s , y)dwdydudvds ′ ds + E t i r Z s dB s 2 ≤ 2 [t i ,r] 2 E |Y s | 2 1/2 E |Y u | 2 1/2 |s − u| 2H 0 −2 sup ω,s,u q(B s , B u ) dsdu (5.13) + 2 [t i ,r] 2 [u,r] [t i ,s] R 2d E |D W s ′ ,y Y u | 2 1/2 E |D W v,w Y s | 2 1/2 |u − v| 2H 0 −2 × |s − s ′ | 2H 0 −2 sup
ω,s,u,w,y q(B u , w)q(B s , y) dwdydvduds ′ ds .
+ t i r E |Z s | 2 ds ≤ C(|r − t i | 2H 0 + |r − t i | 4H 0 + |r − t i |).
Taking this result back to (5.10) we get
|R 1,i | 2 C ≤ C (t i+1 − t i ) 4H 0 + (t i+1 − t i ) 6H 0 . Thus n−1 i=0 |R 5,i | n−1 i=0 (t i+1 − t i ) 2H 0 ≤ max 0≤i≤n−1 (t i+1 − t i ) 2H 0 −1 n−1 i=0 (t i+1 − t i ) → 0, n → ∞.
For R 3,i , from the orthogonality of the increments of standard Brownian motion and the fact that
α t i 0 = exp t i 0 W (dr, B r ) is F t i -adapted, we have E n i=1 α t i 0 t i+1 t i [Z r − Z t i ] dB r 2 ≤ n i=1 E α t i 0 2 E t i+1 t i [Z r − Z t i ] dB r 2 ≤C n i=1 t i+1 t i E |Z r − Z t i | 2 dr. (5.18)
If Z satisfies condition (3) in Theorem 1.1, we have easily
n−1 i=0 R 3,i 2 C n−1 i=0 |t i+1 − t i | κ+1 . (5.19) DenoteỸ t = D B r Y t ,Z t = D B r Z t (we fix r), and from (1.2) we obtain D B r Y t =Ỹ t =D B r ξ + T tỸ s W (ds, B s ) + T r Y s ∇ x W (ds, B s ) − T tZ s dB s , 0 ≤ t ≤ r ≤ T.
(5.20)
Therefore, we first need to verify the square integrability of T t Y s ∇ x W (ds, B s ), and then we can treat (5.20) in a similar way to that for (1.2). We can write
E T t Y s ∇ x W (ds, B s ) 2 = E T t Y s ∇ x δ(B s − x)W (ds, x)dx 2 (5.21)
From (7.1) and by integration by parts, for all 0 ≤ t ≤ T ,
E T t Y s ∇ x W (ds, B s ) 2 = E [t,T ] 2 Y r Y s |s − r| 2H 0 −2 ∇ x,≤C |T − t| 2H 0 + |T − t| 4H 0 ) ≤ CT 4H 0 .
Thus we have (Ỹ t ,Z t ) of BSDE (5.20) is well-defined, i.e., E T 0 |Ỹ t | 2 + |Z t | 2 dt < ∞. Using the classical conclusion that Z t = D B t Y t , ∀t ∈ [0, T ] (see e.g. [13]), we can treat Z r − Z t i as
Z r − Z t i = (D B r ξ − D B t i ξ) + t i r D B r Y s W (ds, B s ) + t i r Y s ∇ x W (ds, B s ) − t i r D B r Z s dB s , 0 ≤ t i ≤ r ≤ T =Z 1 +Z 2 +Z 3 +Z 4 .
(5.23)
For the above first termZ 1 we can use the assumption E|D B r ξ − D B t i ξ| 2 ≤ C|r − t i | κ for some κ > 0. We can deal with the second termZ 2 in (5.23) in the similar way as in (5.8). In fact, with the help of (7.1) again, it has
E Z 2 2 ≤ t i+1 r t i+1 r E |D B r Y s | 2 1/2 E |D B r Y s ′ | 2 1/2 |s − s ′ | 2H 0 −2 sup ω,s,s ′ q(B s , B ′ s ) dsds ′ + t i+1 r t i+1 r t i+1 s ′ s r E |D W s ′ ,y (D B r Y s ′ )| 2 1/2 E D W v,w (|D B r Y s )| 2 1/2 × |u − v| 2H 0 −2 |s − s ′ | 2H 0 −2 sup
ω,u,s,w,y q(B u , w)q(B s , y) dudvdsds ′ .
(5.24)
Since Y, D B Y ∈ D 1,2 , we have the estimate
E Z 2 2 ≤ C t i+1 r t i+1 r |s − s ′ | 2H 0 −2 dsds ′ + C t i+1 r t i+1 r t i+1 s ′ s r |u − v| 2H 0 −2 |s − s ′ | 2H 0 −2 dudvdsds ′ ≤ C |t i+1 − r| 2H 0 + |t i+1 − r| 4H 0 .(5.n i=1 t i+1 t i E |Z r − Z t i | 2 dr 1/2 ≤ C n i=1 |t i+1 − t i | 2H 0 +1 + |t i+1 − t i | 4H 0 +1 + |t i+1 − t i | κ+1 + |t i+1 − t i | 2 1/2 , (5.28) which implies n i=1 R 3,i 2 C n i=1 |t i+1 − t i | 2H 0 +1 + |t i+1 − t i | 4H 0 +1 + |t i+1 − t i | κ+1 + |t i+1 − t i | 2 ≤ max 0≤i≤n−1 (t i+1 − t i ) κ∧1 n i=1
|t i+1 − t i | → 0, n → ∞.
(5.29)
For R 2,i and R 4,i , it is easy to deduce that
|R 2,i | 2 C E Y t i 2 E α t i+1 0 − α t i 0 4 1/2 E t i+1 t i W (dr, B r ) 4 1/2 ≤ C |t i+1 − t i | 4H 0 ,(5.30)
and
|R 4,i | 2 CE α t i+1 0 − α t i 0 4 1/2 E t i+1 t i Z r dB r 2 ≤ C |t i+1 − t i | 2H 0 +1 .
(5.31)
Thus we have
n−1 i=0 |R 2,i | n−1 i=0 (t i+1 − t i ) 2H 0 ≤ max 0≤i≤n−1 (t i+1 − t i ) 2H 0 −1 n−1 i=0 (t i+1 − t i ) → 0, n → ∞. and n−1 i=0 |R 4,i | n−1 i=0 (t i+1 − t i ) H 0 +1/2 ≤ max 0≤i≤n−1 (t i+1 − t i ) H 0 −1/2 n−1 i=0 (t i+1 − t i ) → 0, n → ∞.
Hence, letting the mesh size |π n | goes to zero yieldsR n t → 0, P -a.s., and the right side of (5.6) converges to T t α r 0 Z r dB r . This concludes the proof of the lemma.
Proof of Proposition 5.1. In equation (5.5) we take the conditional expectation with respect to F B t we see to obtain α t 0 Y t = E α T 0 ξ|F t . Thus,
Y t = α t 0 −1 E B α T 0 ξ F B t = E B α T t ξ F B t = E B ξ= E F · b a R d Y s δ(B s − x)W (ds, x)dx = E D W F, Y · δ(B · − ·) H = E [a,b] 2 R 2d D W r,y F · Y s δ(B s − z)|s − r| 2H 0 −2 q(y, z)dydzdrds = E [a,b] 2 R d D W
r,y F · Y s |s − r| 2H 0 −2 q(y, B s )dydrds .
(7.3)
Note that, For I 2 , it is easy to deduce
D W r,y F = D W r,y b a R d Y s δ(B s − x)W (ds, x)dx = b r R d D W r,y Y s δ(B s − x)W (ds, x)dx + Y r δ(B r − y).I 2 = E [a,b] 2 R d Y r δ(B r − y) · Y s |s − r| 2H 0 −2 q(y, B s )dydrds = E [a,b] 2
Y r Y s |s − r| 2H 0 −2 q(B r , B s )dydrds .
(7.6) I 1 has the following expression where
I 1 = E [a,b] 2 R d b r R d D W r,I 3 = b r R d D W r,y Y s δ(B s − x)W (ds, x)dx · Y s .
Using F · W (φ) = δ(F φ) + D W F, φ H again we have
I 3 = E W b r R d D W r,y Y s δ(B s − x)W (ds, x)dx · Y s = E W D W r,y Y · δ(B · − ·), D W Y s H = E W [r,b] [a,s] R 2d D W r,y Y u δ(B u − x)D W v,w Y s |u − v| 2H 0 −2 q(x, w)dxdwdvdu = E W [r,b] [a,s] R d D W r,y Y u D W v,w Y s |u − v| 2H 0 −2 q(B u , w)dwdvdu .
(7.8)
Substituting this back to (7.7) we obtain
I 2 = E [a,b] 2 [r,b] [a,s] R 2d D W r,y Y u D W v,w Y s |u − v| 2H 0 −2
|s − r| 2H 0 −2 q(B u , w)q(y, B s )dwdydudvdrds .
(7.9)
Inserting the expressions for I 1 and I 2 into (7.5) yields the proposition.
half of the Laplacian. If further g(r, x, u, p) = u, then the above SPDE (1.3) becomes −du(t, x) = 1 2 ∆udt + u(t, x)W (dt, x), u(T, x) = φ(x) . (1.4)
(dτ,Bτ ) Y s (∇ x W )(ds, B s ) F t . (1.8)
Proposition 2 . 2 .
22Let ρ : R d → R be a continuous function satisfying (2.4). Then, for all λ ∈ R, E exp λ T t W (dr, B r ) < ∞.
Lemma 3.2. (Majorizing Measure Theorem, see e.g. [[15, Theorem 2.4.2]]. Let T be a given set and let {X t , t ∈ T } be a centred Gaussian process indexed by T . Denote by d(t, s)
Lemma 3. 3 .
3(Borell-TIS inequality, see e.g. [14, Theorem 2.1]). Let {X t , t ∈ T } be a centered separable Gaussian process on some topological index set T with almost surely bounded sample paths.
Definition 3 . 8 .
38Let be given a random field F = {F t , t ≥ 0} such that T 0 |F s |ds < ∞ almost surely, for all T > 0. Then the Stratonovich integral
Tt
F s W (ds, B s ) is defined as the following limit in probability if it exists (compared this with Proposition 2.1 when F s ≡ 1): T t F sẆε,η (s, B s )ds.
Proposition 5. 1 .
1Suppose that the conditions in Theorem 1.1 are satisfied. Let (Y, Z) ∈ S 2 F (0, T ; R)× M 2 F (0, T ; R d ) be the solution of BSDEs (1.2) so that Y, D B Y are D 1,2 .Then the solution has the explicit expression (1.7) and hence the BSDEs (1.2) has a unique solution.
W
(dr, B r ) .(5.1)
uniformly w.r.t. ε, η. So we can apply the dominated convergence theorem below.x=B 1
s 1 ,y=B 2
s 2
.
(3.26)
Analogously to (3.20), (3.22), (3.24) and (3.25), we can show the boundedness of other A ε,η
ji 's
In particular,
we have
lim
η,ε→0
). Previously, we have proved A is well-defined, and then Y s will be Stratonovich integrable. Thus, by Definition 3.8, we directly haveT
t
Y s W (ds, B s ) = lim
ε,η↓0
T
t
Y sẆε,η (s, B s )ds = A,
i.e., the equation (1.2) is satisfied.
In the remaining part of the proof, we shall show (3.30). First we note that, recalling the
definition ofẆ ε,η in (2.3) we have
T
tẆ
ε,η (s, B s
the σ-algebra generated by B 1 , B 2 and W . Recalling the definition (2.3) we know that, for random variable W (namely for fixed B), T tẆ ε,η (s, B s )ds is Gaussians. Then Proposition 2.1 and (3.35) tell us
y q(B r , B s )dydrds + E [a,b] 2 R d [r,b] [s,b] R d D W r,y Y u D W v,w Y s × |u − v| 2H 0 −2 |s − r| 2H 0 −2 ∇ x q(B u , w)∇ x q(B s ,y)dwdudvdydrds (5.22) |s − r| 2H 0 −2 sup ω,r,s ∇ x,y q(B r , B s ) dydrds [t,T ] 2 R d [r,T ] [s,T ] R d × |u − v| 2H 0 −2 |s − r| 2H 0 −2 sup ω,u,s,w,y ∇ x q(B u , w)∇ x q(B s , y) dwdudvdydrds≤
[t,T ] 2
E Y r
2 1/2
E Y s
2 1/2
33
+ E
E D W
r,y Y u
2 1/2
E D W
v,w Y s
2 1/2
≤ C |t i+1 − r| 2H 0 + |t i+1 − r| 4H 0 . (5.26)Finally it is easy to obtainE Z 4 | 2 ≤ sup s∈[r,t i ] E|D B r Z s | 2 |t i − r| ≤ C |t i − r|.(5.27) Taking those estimates back to (5.18) we have25)
From (5.22) we have
E Z 3
2
exp T t
TW (dr, B r ) F B Proof. Recalling W (φ) = R + ×R d φ(t, x)W (dt, x)dx. We have Y s W (ds, B s ) Y s δ(B s − x)W (ds, x)dx Denote by F := b a Y s W (ds, B s ) and we shall use F · W (φ) = δ(F φ) + D W F, φ H . From the definition of spatial covariance (2.1), it follows Y s W (ds, B s )t
.
(5.32)
E
b
a
2
= E
b
a
R d
2
.
(7.2)
E
b
a
2
y Y s δ(B s − x)W (ds, x)dx · Y s |s − r| 2H 0 −2 q(y, B s )drdsdy = E [a,b] 2 R d I 3 |s − r| 2H 0 −2 q(y, B s )drdsdy(7.7)
Z t = D B t Y t = D B t E ξ exp T t W (dr, B r ) F t (3.13)
Hence we have31 Using (7.1) again, we getFor R 5,2,i , recalling (5.1) we haveTaking this result back to(5.15), and with the help of (4.2), (4.3), we obtainFrom the general relationship between Z and Y (e.g.[13]) we haveThis concludes the proof of the proposition.BSDEs and semilinear SPDEsIn this section we obtain the regularity of the solution to the BSDE, and then establish the relationship between the SPDEis not differentiable in t and x, one could not apply Itô's formula to u(s, X t,x s ). Let us considerWe see that u(t, x) is differentiable with respect to x. Now we can use Itô's formula to u ε,η (s, X t,x s ) to deduceand by the uniqueness of BSDE we know Y t,x,ε,ηSimilar to the proof of Lemma 3.1, we can deduce lim ε,η→0Since u ε,η satisfies (6.3) for any C ∞ function ψ with compact support, we haveIn fact, (6.11) can be deduced in a similar way to that of Theorem 3.9. This proves the conclusion.Theorem 6.2. Suppose the same conditions as in Theorem 1.1 and let (Y t,x s , Z t,x s ) be the solution pair of BSDE (6.2). Then u(t,and is the solution of SPDE (6.1).t+h . We still use the approximated BSDE (6.5). Define u ε,η (t, x) := Y ε,η,t,x t , t ∈ [0, T ], x ∈ R d . We want to show that u ε,η (t, x) satisfies (6.1). An application of Itô's formula yields that for h > 0 Combining this with the backward SDE satisfied by u ε,η (t,(6.13) Thus, let π n be a partition t = t 0 < t 1 < · · · < t n = T. By (6.13), we haveand in particular, Z ε,η,t,x t = ∇Y ε,η,t,x t . Thus, if we let mesh sizes of the partitions π n go to zero, then it yields
Feynman-Kac formula for Heat Equation Driven by Fractional White Noise. Y Hu, D Nualart, J Song, Annals of Probability. 391Y. Hu, D. Nualart, J. Song. Feynman-Kac formula for Heat Equation Driven by Fractional White Noise. Annals of Probability. 39(1), 291-326, 2011.
The Malliavin Calculus and Related Topics. D Nualart, SpringerD. Nualart. The Malliavin Calculus and Related Topics. Springer, 2006.
. Y Hu, Analysis on Gaussian Spaces. World Scientific. Y. Hu. Analysis on Gaussian Spaces. World Scientific, 2016.
Weak solutions for SPDEs and backward doubly stochastic differential equations. V Bally, A Matoussi, Journal of Theoretical Probability. 141V. Bally, A. Matoussi. Weak solutions for SPDEs and backward doubly stochastic differential equations. Journal of Theoretical Probability. 14(1), 125-164, 2001.
Stochastic viscosity solutions for nonlinear stochastic partial differential equations. Part I. Stochastic processes and their applications. R Buckdahn, J Ma, 93R. Buckdahn, J. Ma. Stochastic viscosity solutions for nonlinear stochastic partial differential equations. Part I. Stochastic processes and their applications. 93(2), 181-204, 2001.
Stochastic viscosity solutions for nonlinear stochastic partial differential equations. Part II. Stochastic processes and their applications. R Buckdahn, J Ma, 93R. Buckdahn, J. Ma. Stochastic viscosity solutions for nonlinear stochastic partial differential equations. Part II. Stochastic processes and their applications. 93(2), 205-228, 2001.
Nonlinear Feynman-Kac formulas for Stochastic Partial Differential Equations with Space-Time Noise. J Song, X Song, Q Zhang, SIAM Journal on Mathematical Analysis. 512J. Song, X. Song, Q. Zhang. Nonlinear Feynman-Kac formulas for Stochastic Partial Differen- tial Equations with Space-Time Noise. SIAM Journal on Mathematical Analysis. 2019, 51(2): 955-990.
Semilinear Backward Doubly Stochastic Differential Equations and SPDEs Driven by Fractional Brownian Motion with Hurst Parameter in (0, 1/2). S Jing, J A León, Bulletin des Sciences Mathematiques. 1358S. Jing, León J A. Semilinear Backward Doubly Stochastic Differential Equations and SPDEs Driven by Fractional Brownian Motion with Hurst Parameter in (0, 1/2). Bulletin des Sciences Mathematiques. 135(8), 896-935, 2011.
Nonlinear Fractional Stochastic PDEs and BDSDEs with Hurst Parameter in (1/2, 1). S Jing, Systems& Control Letters. 615S. Jing. Nonlinear Fractional Stochastic PDEs and BDSDEs with Hurst Parameter in (1/2, 1). Systems& Control Letters. 61(5), 655-665, 2012.
É Pardoux, S Peng, Backward Doubly Stochastic Differential Equations and Systems of Quasilinear SPDEs. Probability Theory and Related Fields. 98É. Pardoux, S. Peng. Backward Doubly Stochastic Differential Equations and Systems of Quasilinear SPDEs. Probability Theory and Related Fields.98(2), 209-227, 1994.
Some Recent Progress on Stochastic Heat Equations. Y Hu, Acta Mathematica Scientia. 393Y. Hu. Some Recent Progress on Stochastic Heat Equations. Acta Mathematica Scientia. 39(3), 874-914, 2019.
Stochastic heat equation with rough dependence in space. Y Hu, J Huang, K Lê, D Nualart, S Tindel, The Annals of Probability. 456BY. Hu, J. Huang, K. Lê, D. Nualart, S. Tindel. Stochastic heat equation with rough dependence in space. The Annals of Probability, 45(6B), 4561-4616, 2017.
Malliavin calculus for backward stochastic differential equations and application to numerical solutions. The Annals of Applied Probability. Y Hu, D Nualart, X Song, 21Y.Hu, D.Nualart, X. Song. Malliavin calculus for backward stochastic differential equations and application to numerical solutions. The Annals of Applied Probability. 2011, 21(6): 2379- 2423.
An introduction to continuity, extrema, and related topics for general Gaussian processes. R J Adler, IMS. R.J. Adler. An introduction to continuity, extrema, and related topics for general Gaussian processes. IMS. 1990.
Upper and lower bounds for stochastic processes: modern methods and classical problems. M Talagrand, Springer Science & Business Media60M. Talagrand. Upper and lower bounds for stochastic processes: modern methods and classical problems. 60. Springer Science & Business Media. 2014.
A nonlinear stochastic heat equation: Hölder continuity and smoothness of the density of the solution. Stochastic Processes and their Applications. Y Hu, D Nualart, J Song, 123Y. Hu, D. Nualart, J. Song. A nonlinear stochastic heat equation: Hölder continuity and smoothness of the density of the solution. Stochastic Processes and their Applications. 123(3), 1083-1103, 2013.
Stochastic PDEs driven by nonlinear noise and backward doubly SDEs. A Matoussi, M Scheutzow, Journal of Theoretical Probability. 151A. Matoussi, M. Scheutzow. Stochastic PDEs driven by nonlinear noise and backward doubly SDEs. Journal of Theoretical Probability. 15(1), 1-39, 2002.
A numerical scheme for BSDEs. J Zhang, Ann. Appl. Probab. 14Zhang, J. A numerical scheme for BSDEs. Ann. Appl. Probab. 14 (2004). 459-488.
Path regularity for solutions of backward stochastic differential equations. J Ma, J Zhang, Probab. Theory Related Fields. 122Ma, J. and Zhang, J. Path regularity for solutions of backward stochastic differential equations. Probab. Theory Related Fields 122 (2002). 163-190.
Stochastic flows and stochastic differential equations. Reprint of the 1990 original. H Kunita, Cambridge Studies in Advanced Mathematics. 24Cambridge University PressKunita, H. Stochastic flows and stochastic differential equations. Reprint of the 1990 original. Cambridge Studies in Advanced Mathematics, 24. Cambridge University Press, Cambridge, 1997.
| []
|
[
"ON THE ROLE OF REDUCED HABITAT IN THE PHASE TRANSITION OF A STOCHASTIC MODEL FOR SEED DISPERSAL",
"ON THE ROLE OF REDUCED HABITAT IN THE PHASE TRANSITION OF A STOCHASTIC MODEL FOR SEED DISPERSAL"
]
| [
"Cristian F Coletti ",
"Nevena Marić ",
"Pablo M Rodriguez "
]
| []
| []
| Habitat loss is one of the biggest threats facing plant species nowadays. We formulate a simple mathematical model of seed dispersal on reduced habitats to discuss survival of the species in relation to the habitat size and seeds production rate. Seeds get dispersed around the mother plant via several agents in a random way. In our model seeds landing sites are distributed according to a homogeneous Poisson point process with a constant rate on R. We will assume that each seed will successfully germinate and grow into a new plant with the same characteristics as the mother plant. The time is discrete, scaled according to generations of plants or can represent years, since annual plants go through an entire growing cycle during one year. Then we will assume there are two symmetric barriers with respect to the origin and consider that the growth can not evolve past the barriers. Imposing barriers correspond to the physical limitation of the habitat. We appeal to tools of Probability Theory to formalize and study such a model, which can be seen as a discrete-time one-dimensional branching random walk with barriers. By means of coupling techniques and the comparison with suitably constructed multi-type branching processes we localize the critical parameter of the process around which there is survival with positive probability or extinction almost surely. In addition, we consider a discrete-space version of the model for which exact results are also obtained.2020 Mathematics Subject Classification. Primary 60J80, Secondary 60J85, 92D25. | 10.1002/mma.9138 | [
"https://export.arxiv.org/pdf/2208.00270v1.pdf"
]
| 251,224,139 | 2208.00270 | ea7f5fb948898d69f1bae66e7a7fd054ee8b646e |
ON THE ROLE OF REDUCED HABITAT IN THE PHASE TRANSITION OF A STOCHASTIC MODEL FOR SEED DISPERSAL
Cristian F Coletti
Nevena Marić
Pablo M Rodriguez
ON THE ROLE OF REDUCED HABITAT IN THE PHASE TRANSITION OF A STOCHASTIC MODEL FOR SEED DISPERSAL
Habitat loss is one of the biggest threats facing plant species nowadays. We formulate a simple mathematical model of seed dispersal on reduced habitats to discuss survival of the species in relation to the habitat size and seeds production rate. Seeds get dispersed around the mother plant via several agents in a random way. In our model seeds landing sites are distributed according to a homogeneous Poisson point process with a constant rate on R. We will assume that each seed will successfully germinate and grow into a new plant with the same characteristics as the mother plant. The time is discrete, scaled according to generations of plants or can represent years, since annual plants go through an entire growing cycle during one year. Then we will assume there are two symmetric barriers with respect to the origin and consider that the growth can not evolve past the barriers. Imposing barriers correspond to the physical limitation of the habitat. We appeal to tools of Probability Theory to formalize and study such a model, which can be seen as a discrete-time one-dimensional branching random walk with barriers. By means of coupling techniques and the comparison with suitably constructed multi-type branching processes we localize the critical parameter of the process around which there is survival with positive probability or extinction almost surely. In addition, we consider a discrete-space version of the model for which exact results are also obtained.2020 Mathematics Subject Classification. Primary 60J80, Secondary 60J85, 92D25.
Introduction
Accelerated climate change makes the humanity face many urgent issues. Persistence of species is one of them, especially of those inhabiting areas severely damaged by effects of global worming. The topic has been studied extensively in ecological literature, for example [17,22,23]. Among many factors affecting a species survival is its reproduction rate. The starting point of this work is the formulation of a simple mathematical model of seed dispersal as a phenomenon assisting reproduction in annual plants. Seeds get dispersed around the mother plant via several agents (wind, birds, water, etc) in a random manner. There are several studies relating dispersal to spatial random processes e.g. [1,18]. The subject of interest there has been mostly competition of species whereas in this work we focus on a survival of a species in relation to the habitat size and seeds production rate.
In our model seeds landing sites are distributed according to a homogeneous Poisson point process with a constant rate. This assumption has been widely used in ecological models like [9,15,19,20], among others. We will assume that each seed will successfully germinate and grow into a new plant with same characteristics as the mother plant. The time is discrete, scaled according to generations of plants or can represent years, since annual plants go through an entire growing cycle during one year.
Under these assumptions, we will focus on questions of survival and extinction of a species on an island. By islands are not considered only the islands in the usual sense but rather islands in the landscape, like mountaintops, oasis in the desert, grassland surrounded by houses, etc. This topic is related to the area of Island bio-geography that studies distribution of biodiversity over space and time of islands. Research in this field started with MacArthur and Wilson in 1960's [16] and has been expanding since then. For a thorough review of ecological responses to recent climate change see [17] and references therein.
We appeal to tools of Probability Theory to formulate a mathematical model of the seed dispersal. This stochastic model can be seen as a one-dimensional branching random walk (BRW). Roughly speaking, a branching random walk describes the evolution of particles living in a spatially structured environment, which give birth to new particles whose number and positions depend on a given reproduction law. This can be formulated as a discretetime stochastic process whose space state is described by the collection of possible positions of particles at any time. For an overview of the formulation and recent results of these type of stochastic processes we refer the reader to [6]. In our model, the space is continuous, and since it is a one-dimensional model, it is exactly R. Moreover, as a reproduction law we use a Poisson point process associated to each particle so its realization represents the progeny of the particle.
Extinction and survival of BRW, with respect to the dispersion rate, are well studied through branching processes [3]. Survival is the event of having at least one particle at any time of the process; extinction, of course, its complementary event. It is well-known that this process exhibits a phase transition phenomenon. There is a critical dispersal rate, λ c , below which the process dies out almost surely (sub-critical case). Similarly, for the dispersal rates above the critical one (super-critical case), the survival is possible and in that case one can look into long-term spatial distribution of individuals. The critical density of a BRW, as described here, is known to be equal to 1. In the super-critical case, a central limit behaviour is shown in [5].
The most relevant questions regarding the climate change context, ice melting, and islands shrinking look into the change of survival conditions i.e. how the critical dispersal rate change with introduction of spatial barriers.
In our model we will assume there are two symmetric barriers with respect to the origin and consider a BRW on R that can not evolve past the barriers. We emphasize that imposing barriers to the BRW correspond to the physical limitation of the habitat. Initially there is an individual located at the origin. It produces a Poisson number of children that are uniformly distributed in its neighborhood of the unit size. The second generation is distributed as a Poisson Point Process on [−1, 1] with rate λ/2, so that the expected total number of children equals λ. Every new individual produces the offspring in its own neighborhood following the same law and independently of other siblings.
Note that, in general, we are assuming that the process can not evolve past points −L and L (L ∈ R). The main focus of our work is to study how the barriers affect phase transition in the BRW and change in critical density λ c . Similar processes with only one barrier were studied in different settings by many authors like [4,7,8,11,14].
The rest of the paper is organized as follows. Our study is subdivided into two parts. The first one is the formulation and study of the BRW with barriers at −1 and 1. In Section 2 we give the formal definition of the main process (Y n ) n∈N and subsequently state our main result, Theorem 2.1, which allow us to localize the critical value. To prove this theorem, we appeal to the construction of two auxiliary multi-type branching processes (X n ) n∈N and (Z n ) n∈N which sandwich the original process. For the auxiliary processes we are able to obtain critical values numerically. The monotonic relation between these processes and a coupling argument is then used to find the critical density of (Y n ) n∈N , see Corollary 2.2 and Theorem 2.3. Such constructions and the related results are included in Section 3. The second part of our work is organized in Section 4, where we propose a discrete-space version of the model. The advantage of this approach is that for this process we are able to obtain the critical value exactly, not only for L = 1, and to see how it changes with L. Finally, Section 5 is devoted to a discuss of our results and prospects for future research.
The model and the existence of phase transition
Initially there is one particle located in the origin. The descendants of this particle are scattered in the interval [−L, L] according to a Poisson point process ξ with rate λ/2 (written also as ξ ∼ PPP(λ/2)), where L ≥ 1 and λ > 0. The first generation is constituted by the descendants of the particle located at the origin. Each of the particles in the first generation produces its own descendants according to the following mechanism. Assume that a particle in the first generation is located at x ∈ [−L, L]. Then, this particle tries to give rise to particles scattered at [x − L, x + L] according to a Poisson point process ξ with rate λ/2. Only attempts inside [−L, L] are considered successful. The second generation is given by the successful births and their parents are considered dead. This procedure is repeated indefinitely, but can possibly end if there is no particle alive in a generation. This process is called branching random walk (BRW) with two barriers at −L and L respectively. We will focus our discussion on the case L = 1. See Figure 1.
Denote by Y n the set of particles alive at generation n, for any n ≥ 0, and let Y := (Y n ) n∈N . We call Y the BRW with two barriers and offspring given by a Poisson point process with intensity λ/2. On the other hand, let S λ be the event of survival of the process; that is,
S λ := n≥0 {Y n = ∅}. [ ] n = 0 −1 0 1 n = 1 [ ] n = 2 [ ] .
. .
(a) At n = 0 there is one particle located at 0. At n = 1, its descendants are scattered in the interval [−1, 1] according to a Poisson point process with rate λ/2, λ > 0; and they compose the first generation. Whenever a particle in the first generation is located at Appealing to a coupling argument it is not difficult to see that the survival probability is non-decreasing on λ, i.e. P (S λ 1 ) ≤ P (S λ 2 ) .
(2.1) for any 0 < λ 1 < λ 2 . Here P stands for the law of the process. Therefore we may define the critical parameter for the BRW with two barriers Y as Using multi-type branching processes in conjunction with coupling techniques we are able to localize the critical parameter λ c (Y) in the case L = 1. We shall see that such a critical value is related to the Perron-Frobenious eigenvalue of a Toeplitz matrix. In what follows we use the notation T k,d for the k × k banded symmetric Toeplitz matrix with 0 − 1 values and bandwidth d. For instance,
T 6,3 = 1 1 1 0 0 0 1 1 1 1 0 0 1 1 1 1 1 0 0 1 1 1 1 1 0 0 1 1 1 1 0 0 0 1 1 1 .
Now we can state the main results of our work.
Theorem 2.1. Let Y be the BRW with two barriers at −1 and 1 respectively. Then
lim m→∞ 2 m+1 ρ (T 2 m+1 ,2 m ) ≤ λ c (Y) ≤ lim m→∞ 2 m+1 ρ (T 2 m+1 ,2 m +1 ) , where, for m ∈ N, ρ (T k,d ) denotes the Perron-Frobenius eigenvalue of the 0 − 1 values banded Toeplitz matrix T k,d .
We refer the reader to [12] for more details on banded Toeplitz matrices. Although there is no closed formula for the eigenvalues of such matrices of arbitrary dimension, Theorem 2.3 gains in interest if we realize that it allows us to obtain an approximation of the critical value from the numerical computation of the eigenvalues for several values of m.
Corollary 2.2. Let Y be the BRW with two barriers at −1 and 1 respectively. Then
1.286907 ≤ λ c (Y) ≤ 1.287096.
As it is well-known, the critical parameter at which phase transition holds for the BRW without barriers is equal to 1. In words, our result shows that imposing barriers to the original process produces a shift in its critical parameter to approximately 1.28. Coming back to our motivation, although we are dealing with a simplified model, this is enough to catch how the survival of a plant species is negatively affected in the presence of a reduced habitat.
In order to prove Theorem 2.1 we study two sequences of stochastic processes that sandwich Y. Indeed, we say that the random set A is dominated by the (random) set B if A ⊂ B a.s. In this case we also say that B dominates A. For any m ≥ 1, the first sequence (X m n ) n∈N will be dominated by Y and the second one, denoted by (Z m n ) n∈N will dominate Y. The sandwiching processes are related to multi-type branching processes with 2 m+1 types. We then study critical densities of these multi-type branching processes as the number of types tends to infinity, which allows us to provide bounds for the critical parameter λ c of the original process. Furthermore, we are able to prove that such bounds are equal. This is the content of the following corollary.
λ c (Y) = lim m→∞ 2 m+1 ρ (T 2 m+1 ,2 m ) = lim m→∞ 2 m+1 ρ (T 2 m+1 ,2 m +1 ) .
Auxiliary results and proof of the results
We begin this section by describing the construction of two processes X m := (X m n ) n∈N and Z m := (Z m n ) n∈N indexed by m ∈ N and such that X m n ⊆ Y n ⊆ Z m n almost surely for any m ≥ 1 and for any n ≥ 0. Then, we relate such processes to multi-type branching processes which will be used to obtain sequences of upper and lower bounds for λ c (Y). Using properties of Hermitian matrices we prove that the critical parameter of our model coincides with the limit of (any) of such sequences.
3.1. Multi-type branching processes and a label for the process Y. In order to study the behavior of our model we appeal to the theory of discrete-time multi-type branching processes. In such processes and, just to fix ideas, particles can be classified into k different types where k ≥ 1 is fixed. After each time step, a particle of type i will give birth to particles of different types according to a given probability law. Thus the multitype branching process is a k-dimensional discrete-time Markov chain (B n ) n∈N , where B n is the k-dimensional vector whose ith−coordinate, i ∈ {1, . . . , k}, represents the number of particles of type i which were given birth at time n. For a deeper discussion of these processes we refer the reader to [3,Chapter V]. An object of interest when dealing with multi-type branching processes is the mean matrix M = (m ij ) i,j∈{1,...,k} where m ij denotes the expected number of type j offspring of a single type i particle in one generation. Indeed, this matrix carries information about the survival or extinction of the process. Here survival means that at all times at least one particle is alive, no matter its type. By [3, Theorem 2, Chapter V] we know that there is survival with positive probability if, and only if, the maximum eigenvalue of M is greater than 1 provided the process (B n ) n∈N is positive regular and non-singular.
In order to construct a multi-type branching processes related to the process Y consider the partition P m of the interval [−1, 1] using 2 m+1 + 1 equally distant points with fixed m ≥ 1. That is, for j ∈ {0, . . . , 2 m+1 } set x m j = −1 + j/2 m and denote the associated partition by
P m := {x m 0 , . . . , x m 2 m+1 }. (3.1) For 1 ≤ j ≤ 2 m+1 − 1 let I m j := [x m j−1 , x m j ), and set I m 2 m+1 := [1 − (1/2 m ), 1]
. Now label the particles of the original process Y according to their position relative to the partition P m . More precisely, for a given m, we say that a particle located at x ∈ [−1, 1] is of type j if it belongs to I m j . The resulting process, with labeled particles, will be denoted by Y m = (Y m n ) n≥0 . The next subsections are devoted to the construction of the two processes X m and Z m .
3.2.
The process X m . For any alive particle y denote by D(y) its descendants. Observe that
D(y) = [−1, 1] ∩ N y where N y ∼ PPP(λ/2) on [y − 1, y + 1]. Define X m = (X m n ) n≥0 as follows. See Figure 2. i) Initially let X m 0 := Y m 0 = {0} and X m 1 := Y m 1 . ii) For n ≥ 1, if x ∈ X m n is of type j, then (a) If x < 0 then all its offspring in [−1, x m j−1 + 1] belong to X m n+1 . (b) If x ≥ 0 then all its offspring in [x m j − 1, 1] belong to X m n+1 . )[ )[ )[ [ ] n = 0 −1 −1/2 0 1/2 1 )[ )[ )[ n = 1 [ ] x y )[ )[ )[ n = 2 [ ] .
. . . At time n = 1, the offspring of the particle located at the origin at n = 0 is formed by the particles x, of type 1, and y, of type 4. Both particles have one attempt each of a birth not allowed by the barriers (red circles outside [−1, 1]). Moreover, at n = 2, while x gave birth to two particles of type 2, particle y gave birth to one particle of type 4 and, also, it has an attempt of birth which is not allowed by the construction of X 1 even if such attempt is inside [−1, 1] ∩ [y − 1, y + 1] (blue circle outside [0, 1]).
Observe that by construction the process X m is a subset of Y a.s. Thus the proof of the following lemma is an immediate consequence of this construction. then the stochastic process X m = (X m n ) n∈N is a multi-type branching process with 2 m+1 types, whose matrix of expected numbers of progeny of all types of parent particles of all types is given by
M (X m )(i, j) = λ 2 · 1 2 m 1(|i − j| ≤ 2 m − 1), (3.3)
where 1(A) denotes the 0 − 1 random variable indicating the occurrence of the event A. That is M (X m ) is a matrix of order 2 m+1 matrix whose (i, j)-th entry represents the mean number of children of type j of an individual of type i. We denote by λ c (X m ) the critical parameter of the process X m . The following result follows directly from Lemma 3.1, and the fact that
|X m n | = 2 m+1 j=1 X m,j n . Proposition 3.2. Fix m ≥ 1. Then λ c (X m ) ≥ λ c (Y).
Note that for each m the partition P m+1 is a refinement of the partition P m , i.e. P m ⊂ P m+1 . Indeed, the partition using powers of 1/2 is used exactly for this reason. Indeed, the natural attempt to slice [−1, 1] in intervals of length 1/m do not have this useful property. Now, suppose that there is a particle at ξ < 0 whose type is j in X m n . Then its offspring lie in the interval [−1, x m j−1 + 1]. For m + 1 the same particle at ξ reproduces either in the same interval [−1, x m j−1 + 1] or in the larger interval [−1, x m j−1 + 1 + 1/2 m ] depending whether ξ belongs to the first or the second half of the interval with endpoints x j−1 and x j . In any case, |X m n | ≤ |X m+1 n | for every m, n ≥ 1. Therefore
λ c (X m ) ≥ λ c X m+1 for m ≥ 1.
The sequence of critical values is therefore non-negative, non-increasing, and bounded from above by λ c (X 1 ) = 1.527864 (see Table 1). Thus the sequence (λ c (X m )) m has a limit. Combining this fact with Proposition 3.2 we obtain . At time n = 1, the offspring of the particle located at the origin at n = 0 is formed by the particles x, of type 1, and y, of type 4. Both particles have one attempt each of a birth not allowed by the barriers (red circles outside [−1, 1]). Moreover, at n = 2, following the rules of the process Y, x gave birth to two particles of type 2, and particle y gave birth to one particle of type 2 and other of type 4. In addition, an additional birth is allowed for x by the construction of Z 1 even if such attempt is outside [−1, 1]∩[x−1, x+1] (red particle inside [−1, 1/2)).
λ c (Y) ≤ lim m→+∞ λ c (X m ).)[ )[ )[ [ ] n = 0 −1 −1/2 0 1/2 1 )[ )[ )[ n = 1 [ ] x y )[ )[ )[ n = 2 [ ] . . .
We analyze the process Z m in a similar way as we analyzed X m . We may think of Z m as being a multi-type branching process. In this case, one can denote by Z m,j n the number of particles of type j in Z m at time n and set Thus, Z m = (Z m n ) n∈N is a multi-type branching process with 2 m+1 types. Similarly as in (3.3) its matrix of expected values has the following entries
M (Z m )(i, j) = λ 2 · 1 2 m 1(|i − j| ≤ 2 m ). (3.5)
The proof of the following lemma is an immediate consequence of the construction of Z m n . Remark 3.1. The reason for considering the Poisson process M y on [y − 2, y + 2] is to guarantee that Y m n ⊂ Z m n a.s. There are many other choices rather than 2 but this choice avoid introducing more cumbersome notation. Indeed, for fixed m we may consider [y − (1 + 1/2 m ), y + (1 + 1/2 m )] instead of [y − 2, y + 2] but this notation would introduce an unnecessary difficulty to the reader.
We denote by λ c (Z m ) the critical parameter of Z m . The following result follows directly from Lemma 3.3 and the fact that
|Z m n | = 2 m+1 j=1 Z m,j n .
Proposition 3.4. For any m ≥ 1,
λ c (Z m ) ≤ λ c (Y) .
Since P m+1 is a refinement of P m , then |Z m n | ≥ |Z m+1 n |. Indeed, consider a particle located at ξ < 0 in the Z m process of type j. This particle reproduces in the interval [−1, x m j + 1]. In the process Z m+1 , the particle at ξ is either of type j or type j + 1. In the latter case it reproduces in the same interval as the m−th case, while in the former case that interval is shortened by 1/2 m . Thus, the population of Z m does not increase as m grows and therefore
λ c (Z m ) ≤ λ c Z m+1 , for m ≥ 1.
This sequence of critical values is non-decreasing and bounded from above by λ c (X 1 ). Also, we have
lim m→+∞ λ c (Z m ) ≤ λ c (Y).
(3.7)
In the next section we use (3.4) and (3.7) in order to obtain bounds for λ c (Y).
λ c (Z m ) ≤ λ c (Y) ≤ lim m→+∞ λ c (X m ), (3.8)
where λ c (X m ) and λ c (Z m ) are the critical parameters associated to the multi-type branching process related to the processes X m and Z m , respectively. We point out that λ c (X m ) and λ c (Z m ) are the maximum eigenvalue of the mean values matrices for the respective multi-type branching processes, see [3, Theorem 2, Chapter V]. Their entries are given by (3.3) and (3.5), respectively. In particular, for m = 1, we get
M (X 1 ) = λ 2 × 1 4 × 1 1 0 0 1 1 1 0 0 1 1 1 0 0 1 1 , M (Z 1 ) = λ 2 × 1 4 × 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 ,
and for m > 1 the mean values matrices are square matrices of order 2 × 2 m . Indeed, we have 1 1 · · · 1 1 1 1 · · · 1 0 1 1 · · · 1 1 1 1 · · · 1 1 0 1 · · · 1 1 1 1 · · · 1 1 0 0 · · · 1 1 1 1 · · · 1 1 . 1 1 · · · 1 1 1 1 · · · 1 1 1 1 · · · 1 1 1 1 · · · 1 1 1 1 · · · 1 1 1 1 · · · 1 1 0 1 · · · 1 1 1 1
M (X m ) = λ 2 × 1 2 m × 1 1 · · · 1 1 0 0 · · · 0 0 1 1 · · · 1 1 1 0 · · · 0 0 . .0 0 · · · 0 1 1 1 · · · 1 1 0 0 · · · 0 0 1 1 · · · 1 1 , and M (Z m ) = λ 2 × 1 2 m × 1 1 · · · 1 1 1 0 · · · 0 0 1 1 · · · 1 1 1 1 · · · 0 0 . .0 0 · · · 1 1 1 1 · · · 1 1 0 0 · · · 0 1 1 1 · · · 1 1 ,
where the black lines divide rows and columns into halves of size 2 m . Observe that the 0 − 1 matrices above are actually banded symmetric Toeplitz matrices. In what follows we use the notation T k,d for the k × k banded symmetric Toeplitz matrix with 0 − 1 values and bandwidth d. Thus M (X 1 ) = (λ/2) × (1/4) × T 4,2 , M (Z 1 ) = (λ/2) × (1/4) × T 4,3 , and in general:
M (X m ) = λ 2 × 1 2 m × T 2 m+1 ,2 m and M (Z m ) = λ 2 × 1 2 m × T 2 m+1 ,2 m +1
. The proof of Theorem 2.1 is completed upon observing that the critical values associated to any of the dominant process is computed using the formula:
λ c (·) = 2 m+1 ρ(M (·)) ,
where ρ(M (·)) denotes the Perron-Frobenius eigenvalue of the Toeplitz matrix associated to M (·). From here, we use the Software R to obtain the eigenvalues numerically for several values of m. As the size of such matrices grow exponentially fast, we only compute these critical parameters up to m = 12. In Table 1 we list the critical values for X m and Z m , for m ∈ {1, . . . , 12}. See also Figure 4 for a graphical representation of these values as a function of m. Table 1. Critical values for X m and Z m , for m ∈ {1, . . . , 12}. Therefore we have the following numerical estimation for the critical value:
1.286907 ≤ λ c ≤ 1.287096,
which is the result stated in Corollary 2.2. One can obtain better estimates just by computing the corresponding eigenvalues for m > 12.
The obtained numerical values are clearly in support of the convergence of λ c (X m ) and λ c (Z m ) to a common limit. Moreover, Figure 5 suggests that such convergence is exponentially fast. 3.5. Proof of Corollary 2.3. By (3.8) it is enough to prove that lim m→+∞ λ c (Z m ) ≥ lim m→+∞ λ c (X m ). Here λ c (·) = 2 m+1 /ρ(·), where ρ(.) denotes the Perron-Frobenius eigenvalue of the Toeplitz matrix associated to M (·). Note that
M (X m ) = M (Z m ) − A m with A m =
0 0 · · · 0 0 1 0 · · · 0 0 0 0 · · · 0 0 0 1 · · · 0 0 . . . 0 0 · · · 0 0 0 0 · · · 1 0 0 0 · · · 0 0 0 0 · · · 0 1 1 0 · · · 0 0 0 0 · · · 0 0 0 1 · · · 0 0 0 0 · · · 0 0 . .
0 0 · · · 1 0 0 0 · · · 0 0 0 0 · · · 0 1 0 0 · · · 0 0 ,
where the black lines divide rows and columns into halves of size 2 m . These are Hermitian matrices so their eigenvalues are real and can be ordered as ρ 2 m+1 ≤ ρ n−1 ≤ · · · ≤ ρ 1 . Although there is no general formula for the eigenvalues of a sum of Hermitian matrices, the Courant-Fischer theorem, see [13], yields the lower bound:
ρ 1 (M (Z m ) − A m ) ≥ ρ 1 (M (Z m )) + ρ 2 m+1 (−A m ).
Moreover, since ρ 2 m+1 (−A m ) = −1, ρ 1 (M (Z m )) = ρ(M (Z m )), and ρ 1 (M (X m )) = ρ(M (X m )), we get
λ c (X m ) = 2 m+1 ρ(M (X m )) ≤ 2 m+1 ρ(M (Z m )) − 1 ,
where the last inequality holds because ρ(M (Z m )) > 1 for any m ∈ N (indeed ρ(M (Z m )) ∞). Hence
lim m→+∞ λ c (X m ) ≤ lim m→+∞ λ c (Z m ). (3.9)
This completes the proof.
A discrete-space version of the model
It is worth pointing out that the main strategy to deal with our BRW with barriers model is the comparison with stochastic processes obtained from a kind of discretization of the original process. Such discretization comes from the classification of particles in a finite number of types. In this section, motivated by the construction of such auxiliary processes, we propose and study a related model on Z which allows us to obtain exact results for any L ∈ N. Suppose that at time 0 there is only one particle at the origin. The initial particle has Poisson(λ/3) children at each position in {−1, 0, 1} so the mean number of particles at generation 1 is λ. In general, if a particle is located at site k, then it has Poisson(λ/3) children at each position in {k − 1, k, k + 1}. Note that, thus defined, at the n-th generation there are λ n particles in total, in average. A question we are interested in is the distribution of the particles on {−n, ..., n}. To formalize the model let W n (k) be the number of particles at site k in the n-th generation, with n ≥ 0 and k ∈ Z. Let {P l n,k } n,l,k be a sequence of independent random variables with Poisson(λ/3) distribution, for n, l ∈ N, and k ∈ Z. Then, we consider the stochastic process W = (W n ) n≥0 , with states-space {N ∪ {0}} Z , as follows:
(i) W 0 (0) = 1 and W 0 (k) = 0, for k = 0. (ii) W 1 (−1) = P 1 1,−1 , W 1 (0) = P 1 1,0 , and W 1 (1) = P 1 1,1 . (iii) For any n > 1, we let for k ∈ Z,
W n+1 (k) = Wn(k−1) i=1 P i n,k−1 + Wn(k) j=1 P j n,k + Wn(k+1) l=1 P l n,k+1 .
We point out that this is a discrete-space BRW. As a first result we characterize the expected number of particles at site k in the n-th generation. In order to do it, we appeal to the trinomial triangle, which is a variation of Pascal's triangle such that an entry is the sum of the three entries above it: One can recognize here the Trinomial triangle, whose elements in the n-th row are coefficients in the expansion of (1 + x + x 2 ) n . Indeed, this can be proved by induction. The k-th entry in the n-th row is denoted by n k 2 [25]. Proposition 4.1 gains in interest if we realize that we have obtained exactly the shape of the mean vector m n . Some explicit formulas for n k 2 are given in [25]. This allows us to analyze the behavior of EW n (k), as a function of n, for some values of k and different values of λ. See Figure 6 for a comparison of the mean number of particles at 0 for λ ∈ {1.1, 1.3, 1.5}. Figure 6. Behavior of the mean number of particles at 0 as a function of n for the discrete-space process with λ ∈ {1.1, 1.3, 1.5}. Now we shall see how that the process behaves when two barriers are imposed as in the BRW with barriers of the previous sections. Suppose that the discrete-space BRW defined above can not evolve outside the barriers L and −L, for some L ≥ 1, and denote it as W L = (W L n ) n≥0 . Note that now the states-space is given by {N ∪ {0}} {−L,...,L} . As in the continuous-space model we are interested in studying the survival of the process. Thus, let
S_λ(L) := ⋂_{n≥0} ⋃_{−L≤k≤L} {W_n(k) ≥ 1},
be the event of survival of the process, and note that the survival probability is nondecreasing in λ. Therefore we may define the critical parameter for the process as

λ_c(L) := sup{λ > 0 : P(S_λ(L)) = 0}.    (4.3)
The advantage of dealing with the discrete-space model is that we can obtain the exact localization of the critical parameter as a function of L. This is the result of our next theorem. As a consequence we can characterize the phase-diagram for this process, see Figure 7. Moreover, as L → ∞, λ c (L) → 1, which is the critical value for the process without barriers.
Proof. As in the BRW with barriers we appeal to the theory of discrete-time multi-type branching processes. We consider the process for which particles can be classified into 2L + 1 different types where L ≥ 1 is fixed. More precisely, for a given k ∈ {−L, . . . , L}, we say that a particle located at k is of type k. Then, it is not difficult to see that the process W L is the discrete-time multi-type branching process with expected values given by:
M(i, j) = E(W_{n+1}(j) | W_n(i) = 1) = (λ/3) · 1(|i − j| ≤ 1).
Hence, the mean matrix, of dimension (2L + 1) × (2L + 1), is given by

M = (λ/3) ×
    [ 1 1 0 0 · · · 0 0 0
      1 1 1 0 · · · 0 0 0
      0 1 1 1 · · · 0 0 0
      ⋮             ⋱   ⋮
      0 0 0 0 · · · 1 1 1
      0 0 0 0 · · · 0 1 1 ]
= (λ/3) × T_{2L+1,2}.
Here T_{2L+1,2} is a (2L + 1) × (2L + 1) tridiagonal matrix where all the non-zero values are equal to 1. Note that although we are dealing again with a banded symmetric Toeplitz matrix, the spectrum is known [24]; namely, ρ^L_k = 1 + 2 cos(kπ/(2L + 2)), for k ∈ {1, 2, . . . , 2L + 1}. The Perron-Frobenius eigenvalue is ρ^L_1 = 1 + 2 cos(2π/(2L + 2)) and the corresponding eigenvector is
v_L = ( sin(π/(2L + 2)), sin(2π/(2L + 2)), . . . , sin((2L + 1)π/(2L + 2)) ).
From here we have directly the critical value and also the limiting density. That is,

λ^L_c = 3 / (1 + 2 cos(π/(L + 1))).    (4.4)

Note that λ^L_c → 1 as L → ∞, which is the critical value for the process without barriers.
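As a quick numerical illustration (ours, not part of the paper), the critical value can also be located directly from the mean matrix constructed in the proof: the multi-type branching process survives with positive probability if and only if the Perron-Frobenius eigenvalue of M = (λ/3) T_{2L+1,2} exceeds 1, so λ_c(L) = 3/ρ(T_{2L+1,2}). A short numpy sketch:

```python
import numpy as np

def lambda_c(L):
    """Critical value of the discrete-space BRW with barriers at -L and L, computed
    numerically as 3 / (Perron root of the all-ones tridiagonal matrix T_{2L+1,2})."""
    n = 2 * L + 1
    T = np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    return 3.0 / np.linalg.eigvalsh(T).max()   # T is symmetric, so eigvalsh applies

for L in (1, 2, 5, 10, 50, 200):
    print(f"L = {L:3d}   lambda_c(L) = {lambda_c(L):.4f}")
# The printed values decrease toward 1 as L grows, in line with the limiting behavior noted above.
```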
Discussion and future work
In this work we propose a special stochastic process as a one-dimensional model of seed dispersal through space in a limited habitat or islands. We focused on how the shrinking of islands affects the survival of a species by studying the localization of the critical parameter for the phase transition in the model. From an application point of view, our arguments can be adapted to expand our findings to a two-dimensional model. In future work one could also consider a non-homogeneous Poisson point process to accommodate a greater accumulation of seeds near the mother plant. The use of different density kernels in the literature was reviewed in [10]. Both types of model extensions can be addressed by a suitable adaptation of our constructions to compare the original process with multi-type branching processes.
It should be noted that our model, which can be seen as a branching random walk with two barriers, is of mathematical interest per se. As with the entire class of branching random walks, possible applications are multiple. In the context of computational models of evolution, the BRW with barriers was considered in [21]. These models often include individual-based simulations in which organisms exist in a so-called morphospace. A point in that space represents the traits of an organism. The movement of the points in the space over time represents the evolutionary process in which groups of organisms may express varying traits as they evolve. For each generation, a population of offspring organisms was generated from the current (parent) population according to a reproduction scheme. After reproduction, the parent population was eliminated. Particles representing genetic traits differ from the mother particle through mutation, and spatial barriers represent limits of viability. The findings of the present work can also be applied to that setting.
The stochastic process may be seen as a discrete-time branching random walk (BRW) restricted to [−1, 1]. The random set Y_n is the set of particles (their positions) alive in generation n, and the BRW is the stochastic process Y := (Y_n)_{n∈N}.

Figure 1. Illustration of a possible realization of the BRW with two barriers at −1 and 1, respectively, and offspring given by a Poisson point process with intensity λ/2, λ > 0. Particles are represented by black points.

λ_c(Y) := sup{λ > 0 : P(S_λ) = 0}.    (2.2)

Corollary 2.3. Let Y be the BRW with two barriers at −1 and 1, respectively. Then

Figure 2. First steps in the definition of the process X_m. Here m
Lemma 3.1. Fix m ≥ 1. Then, for any n ≥ 1, the spatial position of the particles X_m may be seen as a regular discrete-time multi-type branching process with 2^{m+1} types. Indeed, if we denote by X^{m,j}_n the number of particles of type j in X_m at time n and if we let

The process Z_m. For coupling purposes we consider a version of the process Y constructed as follows. Assume that there is a particle alive at y. Denote by M_y a homogeneous Poisson point process with intensity λ/2 on [y − 2, y + 2]. The offspring of y are the Poisson points of M_y which are at a distance no greater than one from y and which are inside [−1, 1]. Define Z_m = (Z^m_n)_{n≥0} as follows (see Figure 3).
(i) Initially let Z^m_0 := Y^m_0 = {0} and set Z^m_1 := Y^m_1.
(ii) For n ≥ 1, if z ∈ Z^m_n is of type j, then
  (a) if z < 0, take as its offspring the points of M_z in [−1, x^m_j + 1]; these points belong to Z^m_{n+1};
  (b) if z ≥ 0, take as its offspring the points of M_z in [x^m_{j−1} − 1, 1]; these points belong to Z^m_{n+1}.

Figure 3. First steps in the definition of the process Z_m. Here m

Lemma 3.3. Fix m ≥ 1. Then, for any n ≥ 1,
Figure 4. Comparison of λ_c(X_m) (black dots) and λ_c(Z_m) (red dots) as functions of m, for m ∈ {1, . . . , 12}.

Figure 5. Linear decay of log(λ_c(X_m) − λ_c(Z_m)) with m, for m ∈ {1, . . . , 12}.

Proposition 4.1. Let n ≥ 0 and k ∈ N. Then EW_n(k) = EW_n(−k) and EW_n(k) = (λ/3)^n \binom{n}{k}_2, where \binom{n}{k}_2 denotes the k-th entry in the n-th row of the trinomial triangle.

Proof. Let m_n(k) := EW_n(k). Then m_1(−1) = m_1(0) = m_1(1) = λ/3 and we let m_1 = (λ/3)(1, 1, 1). Due to the independence of the involved random variables and by symmetry we have, for n = 2, m_2(1) = m_2(−1) and m_2(2) = m_2(−2). Thus, the second generation on average looks like m_2 = (λ/3)^2 (1, 2, 3, 2, 1). Analogously, we obtain the third generation as m_3 = (λ/3)^3 (1, 3, 6, 7, 6, 3, 1). For the sake of simplicity let us introduce the notation m_n = b_n × (λ/3)^n, and note that the first values of b_n are given by:

Figure 7. Phase diagram for the discrete-space process. The critical parameter separates the behavior of the process between survival with positive probability and almost-sure extinction. (a) For illustration purposes we drew the critical parameter λ_c(L) obtained in Theorem 4.2 as a continuous function of L for L ∈ [1, ∞). (b) The exact step function λ_c(L) for L ∈ [1, ∞).
Theorem 4.2. Let W^L be the discrete-space model with two barriers at −L and L, respectively. Then λ_c(L) = 3 / (1 + 2 cos(π/(L + 1))).
Acknowledgments
Part of this work was carried out during a visit of C.C. to ICMC-USP, and a visit of P.M.R. to UFABC. The authors are grateful to these institutions for their hospitality and support. Part of this work has been supported by Fundação de Amparo à Pesquisa do Estado de São Paulo - FAPESP (Grant 2017/10555-0).
References

[1] Abdullahi, A., Shohaimi, S., Kilicman, A. and Ibrahim, M. H. (2019). Stochastic models in seed dispersals: random walks and birth-death processes. Journal of Biological Dynamics, 13(1), 345-361.
[2] Athreya, K. B. and Kaplan, N. (1978). Additive property and its applications in branching processes. Advances in Probability, 1, 27-60.
[3] Athreya, K. B. and Ney, P. E. (1972). Branching Processes. Springer-Verlag, Berlin Heidelberg.
[4] Bérard, J. and Gouéré, J. B. (2011). Survival probability of the branching random walk killed below a linear boundary. Electronic Journal of Probability, 16, 396-418.
[5] Biggins, J. D. (1990). The central limit theorem for the supercritical branching random walk, and related results. Stochastic Processes and their Applications, 34, 255-274.
[6] Bertacchi, D. and Zucca, F. (2012). Recent results on branching random walks. In Statistical Mechanics and Random Walks: Principles, Processes and Applications, 289-340. New York: Nova Science Publishers.
[7] Bertacchi, D., Rodriguez, P. M. and Zucca, F. (2020). Galton-Watson processes in varying environment and accessibility percolation. Brazilian Journal of Probability and Statistics, 34(3), 613-628.
[8] Biggins, J. D., Lubachevsky, B. D., Shwartz, A. and Weiss, A. (1991). A branching random walk with a barrier. Annals of Applied Probability, 1(4), 573-581.
[9] Clark, J. S., Silman, M., Kern, R., Macklin, E. and HilleRisLambers, J. (1999). Seed dispersal near and far: patterns across temperate and tropical forests. Ecology, 80(5), 1475-1494.
[10] Clark, J. S., Lewis, M. and Horvath, L. (2001). Invasion by extremes: population spread with variation in dispersal and reproduction. The American Naturalist, 157(5), 537-554.
[11] Derrida, B. and Simon, D. (2007). The survival probability of a branching random walk in presence of an absorbing wall. Europhysics Letters, 78(6), 60006.
[12] Ekstrom, S. E., Garoni, C. and Serra-Capizzano, S. (2018). Are the eigenvalues of banded symmetric Toeplitz matrices known in almost closed form? Experimental Mathematics, 27(4), 478-487.
[13] Golub, G. H. and Van Loan, C. F. (1996). Matrix Computations (3rd ed.). Johns Hopkins University Press, Baltimore, MD, USA.
[14] Jaffuel, B. (2012). The critical barrier for the survival of branching random walk with absorption. Annales de l'Institut Henri Poincaré, Probabilités et Statistiques, 48(4), 989-1009.
[15] Levine, J. M. and Rees, M. (2002). Coexistence and relative abundance in annual plant assemblages: the roles of competition and colonization. The American Naturalist, 160(4), 452-467.
[16] MacArthur, R. H. and Wilson, E. O. (2016). Theory of Island Biogeography. Princeton University Press.
[17] Parmesan, C. (2006). Ecological and evolutionary responses to recent climate change. Annual Review of Ecology, Evolution, and Systematics, 37, 637-669.
[18] Rebolledo, R., Navarrete, S. A., Kéfi, S., Rojas, S. and Marquet, P. A. (2019). An open-system approach to complex biological networks. SIAM Journal on Applied Mathematics, 79(2), 619-640.
[19] Ribbens, E., Silander Jr, J. A. and Pacala, S. W. (1994). Seedling recruitment in forests: calibrating models to predict patterns of tree seedling dispersion. Ecology, 75(6), 1794-1806.
[20] Sagnard, F., Pichot, C., Dreyfus, P., Jordano, P. and Fady, B. (2007). Modelling seed dispersal to predict seedling recruitment: recolonization dynamics in a plantation forest. Ecological Modelling, 203(3-4), 464-474.
[21] Scott, A. D., King, D. M., Marić, N. and Bahar, S. (2013). Clustering and phase transitions on a neutral landscape. Europhysics Letters, 102(6), 68003.
[22] Tejo, M., Niklitschek-Soto, S., Vásquez, C. and Marquet, P. A. (2017). Single species dynamics under climate change. Theoretical Ecology, 10, 181-193.
[23] Tejo, M., Quiñinao, C., Rebolledo, R. and Marquet, P. A. (2021). Coexistence, dispersal and spatial structure in metacommunities: a stochastic model approach. Theoretical Ecology, 14(2), 279-302.
[24] Wen-Chyuan, Y. (2005). Eigenvalues of several tridiagonal matrices. Applied Mathematics E-Notes, 5, 66-74.
[25] Weisstein, E. W. (2004). Trinomial Coefficient. https://mathworld.wolfram.com/.
Cristian F. Coletti, Centro de Matemática, Computação e Cognição, Universidade Federal do ABC, Avenida dos Estados 5001, Bangu, Santo André, São Paulo, Brazil. e-mail: [email protected]
Nevena Marić, School of Computing, Union University, Kneza Mihaila 6, Belgrade, Serbia. e-mail: [email protected]
Pablo M. Rodriguez, Centro de Ciências Exatas e da Natureza, Universidade Federal de Pernambuco, Av. Prof. Moraes Rego, 1235, Cidade Universitária, Recife, PE, Brazil. e-mail: [email protected]
| []
|
[
"DEEP VULMAN: A DEEP REINFORCEMENT LEARNING-ENABLED CYBER VULNERABILITY MANAGEMENT FRAMEWORK",
"DEEP VULMAN: A DEEP REINFORCEMENT LEARNING-ENABLED CYBER VULNERABILITY MANAGEMENT FRAMEWORK"
]
| [
"Soumyadeep Hore [email protected] \nIndustrial and Management Systems Engineering\nIndustrial and Management Systems Engineering\nArmy Cyber Institute United States Military Academy, West Point\nUniversity of South Florida Tampa\nUniversity of South Florida Tampa\n33620, 33620, 10996FL, FL, NY\n",
"Ankit Shah [email protected] \nIndustrial and Management Systems Engineering\nIndustrial and Management Systems Engineering\nArmy Cyber Institute United States Military Academy, West Point\nUniversity of South Florida Tampa\nUniversity of South Florida Tampa\n33620, 33620, 10996FL, FL, NY\n",
"Nathaniel D Bastian [email protected] \nIndustrial and Management Systems Engineering\nIndustrial and Management Systems Engineering\nArmy Cyber Institute United States Military Academy, West Point\nUniversity of South Florida Tampa\nUniversity of South Florida Tampa\n33620, 33620, 10996FL, FL, NY\n"
]
| [
"Industrial and Management Systems Engineering\nIndustrial and Management Systems Engineering\nArmy Cyber Institute United States Military Academy, West Point\nUniversity of South Florida Tampa\nUniversity of South Florida Tampa\n33620, 33620, 10996FL, FL, NY",
"Industrial and Management Systems Engineering\nIndustrial and Management Systems Engineering\nArmy Cyber Institute United States Military Academy, West Point\nUniversity of South Florida Tampa\nUniversity of South Florida Tampa\n33620, 33620, 10996FL, FL, NY",
"Industrial and Management Systems Engineering\nIndustrial and Management Systems Engineering\nArmy Cyber Institute United States Military Academy, West Point\nUniversity of South Florida Tampa\nUniversity of South Florida Tampa\n33620, 33620, 10996FL, FL, NY"
]
| []
| Cyber vulnerability management is a critical function of a cybersecurity operations center (CSOC) that helps protect organizations against cyber-attacks on their computer and network systems. Adversaries hold an asymmetric advantage over the CSOC, as the number of deficiencies in these systems is increasing at a significantly higher rate compared to the expansion rate of the security teams to mitigate them in a resource-constrained environment. The current approaches are deterministic and one-time decision-making methods, which do not consider future uncertainties when prioritizing and selecting vulnerabilities for mitigation. These approaches are also constrained by the sub-optimal distribution of resources, providing no flexibility to adjust their response to fluctuations in vulnerability arrivals. We propose a novel framework, Deep VULMAN, consisting of a deep reinforcement learning agent and an integer programming method to fill this gap in the cyber vulnerability management process. Our sequential decision-making framework, first, determines the near-optimal amount of resources to be allocated for mitigation under uncertainty for a given system state and then determines the optimal set of prioritized vulnerability instances for mitigation. Our proposed framework outperforms the current methods in prioritizing the selection of important organization-specific vulnerabilities, on both simulated and real-world vulnerability data, observed over a one-year period.Deep VULMANNational Vulnerability Database NVD [2022], as well as the lack of security personnel (resources) available to mitigate them. This has resulted in vulnerabilities persisting in the computer and network systems of the organizations for a long time, thereby creating a significant advantage for the adversaries. There exists a critical gap in research needed to develop resource-constrained approaches for effectively identifying and mitigating important organization-specific security vulnerabilities to protect against adversarial exploitation and minimize damage from cyber-attacks.A typical cyber vulnerability management process starts with the scanning of the software and hardware components of an organization's network with a vulnerability scanner (such as Tenable, Qualys, or IBM) to find vulnerabilities reported in the NVD. The generated vulnerability report contains all vulnerability instances found in the network along with their attributes, which include the common vulnerability exposure (CVE) code, host name, description, and the common vulnerability scoring system (CVSS) severity rating, among others. The security teams at the cybersecurity operations centers (CSOCs) then assign resources to mitigate the vulnerability instances based on certain schemes. Examples of actions taken by security personnel are applying patches (vendor-supplied or CSOC-designed), upgrading software, disabling services, and adding IP filters, among others. The current approaches for vulnerability management, which include methods employed at the CSOCs and proposed in recently published literature, use rule-based mechanisms or static (one-time) optimization models Farris et al.[ ], Shah et al. [2019, Hore et al. [2022] to prioritize the selection of vulnerabilities for mitigation, given the number of resources available at a particular time-step (for instance, a week or a month). | 10.1016/j.eswa.2023.119734 | [
"https://export.arxiv.org/pdf/2208.02369v2.pdf"
]
| 251,320,477 | 2208.02369 | 7ee0367d95c94a96580187cd23ddf8d5de1bf07a |
DEEP VULMAN: A DEEP REINFORCEMENT LEARNING-ENABLED CYBER VULNERABILITY MANAGEMENT FRAMEWORK
Soumyadeep Hore [email protected]
Industrial and Management Systems Engineering
Industrial and Management Systems Engineering
Army Cyber Institute United States Military Academy, West Point
University of South Florida Tampa
University of South Florida Tampa
33620, 33620, 10996FL, FL, NY
Ankit Shah [email protected]
Industrial and Management Systems Engineering
Industrial and Management Systems Engineering
Army Cyber Institute United States Military Academy, West Point
University of South Florida Tampa
University of South Florida Tampa
33620, 33620, 10996FL, FL, NY
Nathaniel D Bastian [email protected]
Industrial and Management Systems Engineering
Industrial and Management Systems Engineering
Army Cyber Institute United States Military Academy, West Point
University of South Florida Tampa
University of South Florida Tampa
33620, 33620, 10996FL, FL, NY
DEEP VULMAN: A DEEP REINFORCEMENT LEARNING-ENABLED CYBER VULNERABILITY MANAGEMENT FRAMEWORK
Cyber Vulnerability Management · Vulnerability Prioritization · Security Resources Optimization · Deep Reinforcement Learning · Integer Programming · DRL Cyber Framework 1
Cyber vulnerability management is a critical function of a cybersecurity operations center (CSOC) that helps protect organizations against cyber-attacks on their computer and network systems. Adversaries hold an asymmetric advantage over the CSOC, as the number of deficiencies in these systems is increasing at a significantly higher rate compared to the expansion rate of the security teams to mitigate them in a resource-constrained environment. The current approaches are deterministic and one-time decision-making methods, which do not consider future uncertainties when prioritizing and selecting vulnerabilities for mitigation. These approaches are also constrained by the sub-optimal distribution of resources, providing no flexibility to adjust their response to fluctuations in vulnerability arrivals. We propose a novel framework, Deep VULMAN, consisting of a deep reinforcement learning agent and an integer programming method to fill this gap in the cyber vulnerability management process. Our sequential decision-making framework, first, determines the near-optimal amount of resources to be allocated for mitigation under uncertainty for a given system state and then determines the optimal set of prioritized vulnerability instances for mitigation. Our proposed framework outperforms the current methods in prioritizing the selection of important organization-specific vulnerabilities, on both simulated and real-world vulnerability data, observed over a one-year period.Deep VULMANNational Vulnerability Database NVD [2022], as well as the lack of security personnel (resources) available to mitigate them. This has resulted in vulnerabilities persisting in the computer and network systems of the organizations for a long time, thereby creating a significant advantage for the adversaries. There exists a critical gap in research needed to develop resource-constrained approaches for effectively identifying and mitigating important organization-specific security vulnerabilities to protect against adversarial exploitation and minimize damage from cyber-attacks.A typical cyber vulnerability management process starts with the scanning of the software and hardware components of an organization's network with a vulnerability scanner (such as Tenable, Qualys, or IBM) to find vulnerabilities reported in the NVD. The generated vulnerability report contains all vulnerability instances found in the network along with their attributes, which include the common vulnerability exposure (CVE) code, host name, description, and the common vulnerability scoring system (CVSS) severity rating, among others. The security teams at the cybersecurity operations centers (CSOCs) then assign resources to mitigate the vulnerability instances based on certain schemes. Examples of actions taken by security personnel are applying patches (vendor-supplied or CSOC-designed), upgrading software, disabling services, and adding IP filters, among others. The current approaches for vulnerability management, which include methods employed at the CSOCs and proposed in recently published literature, use rule-based mechanisms or static (one-time) optimization models Farris et al.[ ], Shah et al. [2019, Hore et al. [2022] to prioritize the selection of vulnerabilities for mitigation, given the number of resources available at a particular time-step (for instance, a week or a month).
Introduction
Adversaries are actively looking to exploit unpatched vulnerabilities in the computer and network systems to cause significant damage to public and private organizations. Recently, the United States White House issued a memo urging organizations to promptly identify and remediate vulnerabilities in their systems, among other recommendations to bolster cybersecurity against the adversaries WH [2021]. Major challenges faced by the organizations to implement this recommendation result from a significant recent increase in the number of new vulnerabilities that are reported in the National Vulnerability Database NVD [2022], as well as the lack of security personnel (resources) available to mitigate them. This has resulted in vulnerabilities persisting in the computer and network systems of the organizations for a long time, thereby creating a significant advantage for the adversaries. There exists a critical gap in research needed to develop resource-constrained approaches for effectively identifying and mitigating important organization-specific security vulnerabilities to protect against adversarial exploitation and minimize damage from cyber-attacks.

There are many shortcomings in the current approaches. First, the vulnerability selection process does not include a comprehensive list of factors associated with the host machine and the respective organizational environment to determine the true priority of a vulnerability instance found in a scan report. For instance, a CSOC security team performs many functions, which include intrusion detection system (IDS) alert management along with vulnerability management. An IDS alert log can identify host machines with possible intrusion attempts, and integrating this information, along with other factors such as the CVSS severity score, into prioritizing vulnerability instances found on such machines can help better protect against potential attacks. Second, recently proposed optimization models have focused on selecting vulnerability instances from dense reports to maximize their cumulative vulnerability utility or exposure score, given a limited number of available resources. Such an approach does not result in the selection of all important vulnerabilities, as these mathematical formulations focus on the value of selecting a vulnerability instance based on the time it takes to patch or mitigate it. These methods will select a larger number of less important vulnerabilities if their mitigation time is considerably lower when compared to an important vulnerability with a significantly higher mitigation time. Third, the current approaches assume a deterministic environment for solving this problem, in which the number and type of vulnerability arrivals are considered to be known and are uniformly distributed across the time horizon. They do not take into account the uncertainty in vulnerability arrivals and consider a pre-determined (often, an equal) number of resources distributed across all the individual decision-making time-steps to prioritize the selection of vulnerabilities for mitigation.
Cyber vulnerability management is a continuous process aimed at strengthening the security posture of an organization within an infinite time horizon. This requires sequential decision-making, and to make it robust against the uncertainties in the process, it is imperative that (i) the number of resources to be allocated at each time-step is optimized and (ii) the important vulnerabilities are identified and prioritized for mitigation, given the optimized allocation of resources. Our research objective is to fill the current gap in the cyber vulnerability management process by proposing a novel artificial intelligence (AI) enabled framework, powered by a deep reinforcement learning (DRL) agent and an integer programming method for effective vulnerability triage and mitigation.
The main contributions of the paper are as follows. First, we developed a novel dynamic cyber vulnerability triage framework, Deep VULMAN, which is designed to combat the uncertainty in the vulnerability management process and select the most important vulnerability instances for mitigation from a dense list of vulnerabilities identified in the network. Unlike other methods in recent literature, we pose the problem as a sequential decision-making problem and segregate the vulnerability management process in our proposed framework into two parts: (i) determining the near-optimal amount of resources required for mitigation, given the observed state of the system and (ii) determining the optimal set of prioritized vulnerability instances for mitigation which has the maximizing average cumulative attribute score among all the vulnerability instances. Second, we developed a DRL agent based on a policy gradient approach that learns to make near-optimal resource allocation decisions under uncertainty in vulnerability arrivals. The agent continuously interacts with a simulated CSOC operations environment built using real-world vulnerability data and gets feedback from a novel reward signal engineered from (i) the mitigation of important vulnerabilities and (ii) the number of resources utilized at each time-step. Third, we formulated and solved a combinatorial mathematical model with an integer programming method for vulnerability prioritization and selection for mitigation with the allocated resource decision from the DRL agent. Unlike the recent methods in the literature, we present a unique formulation that generates an optimal set of prioritized vulnerability instances for mitigation, which has the maximum average cumulative attribute score among all the vulnerability instances. Fourth, to the best of our knowledge, this study is the first to propose a framework that integrates alert information from IDS to vulnerability data to improve the vulnerability management process at a CSOC. This is a major step toward building a robust defense system against adversaries. Our experiment results demonstrated that with this added information from the alert logs, through prioritized vulnerability instances, we were able to find machines that had very old or expired versions of software making them easier targets for the adversaries. Finally, we provided valuable insights obtained using our proposed framework by comparing our results with recent vulnerability prioritization and selection methods from the literature. Our experimental results using real-world vulnerability data show that our approach is more efficient and effective in terms of selecting important organization-specific vulnerabilities in comparison with the other methods.
The paper is organized as follows. Section 2 presents the related literature. Section 3 presents the proposed Deep VULMAN framework, which consists of the CSOC operations simulation environment and the AI-enabled decisionsupport component that recommends near-optimal decisions for vulnerability management. Section 4 presents the numerical experiments performed using real-world vulnerability scan data. Section 5 presents the experimental results and comparisons with recent methods from the literature. Lastly, in Section 6, we provide conclusions.
Related Literature
We organized the literature review by dividing the related literature into two topics: (i) vulnerability scoring systems and triage methods, and (ii) DRL approaches in solving sequential decision-making problems under uncertainty.
Vulnerability Scoring Systems and Triage Methods
To gauge the severity or threat of a vulnerability, it is important to have a mechanism for scoring the attributes or impacts of the vulnerability. In 2006, Mell et al. [2006] proposed the common vulnerability scoring system (CVSS) to provide a base score to quantify the vulnerability severity. Later, in 2007, the same authors proposed CVSS version 2 to cover the shortcomings of CVSS version 1 by reducing inconsistencies, providing additional granularity, and increasing the capability to reflect a wide variety of vulnerabilities Mell et al. [2007]. The CVSS framework is managed by the Forum of Incident Response and Security Teams (FIRST), and the latest version of CVSS in use today is version 3.1. The CVSS metric consists of eight base metrics, three temporal metrics, and four environmental metrics FIR [2020]. However, the computation of environmental metrics is complicated and not well proven Gallon [2010]. The NVD omits the temporal and environmental metrics and considers only the base metrics when calculating the CVSS severity of reported vulnerabilities Fruhwirth and Mannisto [2009]. CVSS base metric group is a common choice of application among most organizations to gauge the severity of the vulnerabilities present in their network. However, anecdotal and literary evidences suggest that the CVSS base score alone is not sufficient to measure the impact of a vulnerability in a particular organization due to the absence of organizational context Fruhwirth and Mannisto [2009], Farris et al.
[2018], Holm et al. [2011, 2012]. There have been many contributions from researchers to bridge this gap. Some of the important contributions are by McQueen et al. [2009], who proposed two metrics, Median Active Vulnerabilities (MAV) and Vulnerability-Free Days (VFD), based on the report time of the vulnerability and the time when the patch is issued by the vendor; Allodi and Massacci [2014], who considered black-market exploit data to boost the statistical significance of the indication pertaining to the true severity of a vulnerability; Farris et al. [2018], who proposed two performance metrics, Total Vulnerability Exposure (TVE), which scores the density of unmitigated vulnerabilities per month, and Time-to-Vulnerability Remediation (TVR), based on the maximum amount of time (in months) an organization is willing to tolerate the presence of a certain vulnerability in their system; and Hore et al. [2022], who presented a novel Vulnerability Priority Scoring System (VPSS) that takes into account the context of the vulnerability along with the CVSS score by considering relevant host machine information (positional significance of the host machine, level of importance of the host machine, and protection level of the host machine).
Deep Reinforcement Learning (DRL) Approaches
DRL is one of the most promising solution methods for obtaining near-optimal policies under uncertain (stochastic) conditions. DRL was first applied by Mnih et al. in 2013 to successfully learn a control policy from sensory inputs with high dimensions Mnih et al. [2013]. Today, DRL has been used in various application domains such as autonomous vehicles, stock trading, robotics, cyber-security, and marketing, among others Bogyrbayeva et al. [2021], Kirtas et al. [2020], Liang [2020]. The model-free DRL methods in published literature can be broadly classified in two parts: value-based and policy-based. In value-based DRL approaches, we try to estimate the Q-value or a state-action pair by employing a deep neural network estimator. Policy based methods aim to directly learn the stochastic or deterministic policies, where the action is generated by sampling from the policy. Mnih et. al proposed a novel method, Deep Q Learning (DQN), which is a value-based method with superior performance demonstrated on Atari 2600 games. Some of the notable advancements made in the area of value-based DRL methods include the works by: Van hasselt et. al, who proposed DRL with double q-learning (DDQN) to overcome the overestimation suffered by DQN Van Hasselt et al. [2016]; and Wang et. al, who proposed the dueling network architectures for DRL with two identical but separate neural network estimators for estimating the state value function and action advantage function Wang et al. [2016], among others. One of the popular advancements in policy-based methods includes the work by Mnih et. al, who presented asynchronous methods for DRL with parallel actor learners, asynchronous advantage actor critic (A3C), and outperformed others on Atari 2600 games Mnih et al. [2016]. Vanilla policy gradient algorithms generally suffer from high variance, poor sample efficiency, and slow convergence. Schulman et al. [2015] presented Trust Region Policy Optimization (TRPO) that limits the policy update with a certain KL-divergence constraint and also guarantees monotonic improvement. In 2017, Schulman et al. [2017] proposed Proximal policy Optimization (PPO) that has all the advantages of TRPO, and in addition, it is simpler, faster, and more sample efficient. PPO uses a clipped surrogate objective function that prevents large changes in the policy. The clipped surrogate objective is also a lightweight replacement of the KL-divergence constraint in TRPO. Due to its simplicity, sample efficiency, and robustness to hyper-parameter tuning, PPO is a promising approach to solving dynamic sequential decision-making problems.
There is a clear gap in the literature for cyber vulnerability prioritization and selection, as most of the work has been focused on formulating one-time (static) strategies for selecting vulnerabilities from dense vulnerability reports by considering a fixed amount of resource availability and without taking future vulnerability arrivals into account. To the best of our knowledge, no research has addressed the vulnerability management problem as a sequential decisionmaking problem under the uncertainty of vulnerability arrivals and with resource fluctuations. This paper focuses on strengthening the security posture of the CSOCs by generating robust vulnerability management policies for real-world uncertain environments. Next, we present the proposed framework for dynamic vulnerability management under uncertainty. We propose the development of a sequential decision-making framework that provides a dynamic resource allocation strategy along with an optimal selection of vulnerabilities that are prioritized for mitigation. Figure 1 shows a schematic representation of the proposed Deep VULMAN framework. The framework consists of two key components: (i) a CSOC operations environment, where relevant computer and network data are collected and aggregated using various software applications, and (ii) a decision-support component, in which (a) a DRL agent is trained using a policy gradient algorithm to make near-optimal resource allocation decisions under uncertainty and (b) an integer programming model is developed to generate the set of vulnerabilities, which are prioritized for mitigation with the amount of resources allocated by the DRL agent. We first describe the CSOC operations environment, in which we propose a simulator to overcome the data insufficiency issues for training the DRL agent, followed by the decision-support component.
Deep Reinforcement Learning-enabled Cyber Vulnerability Management (Deep VULMAN) Framework
CSOC Operations Simulation
Obtaining a real and large data set for a research study is a major challenge for cybersecurity researchers. Very few studies in published literature, such as Farris et al. [2016], Xu et al. [2018], and Shah et al. [2019], have investigated the process of cyber-incident or vulnerability emergence using historical data. However, these have been small and/or private data sets. The unavailability of data sets is due to a lack of complete information in a cyber environment or confidentiality reasons. Cyber-incident data have been studied in Haldar and Mishra [2017] and Kuypers and Paté-Cornell [2016] for large cyber breaches and it has been found that a Poisson distribution provides the best fit for describing the arrivals in these data sets. It is imperative that the DRL agent interacts with an environment that closely resembles the real CSOC operations to learn the best policies that can be implemented in real-world conditions. Hence, to overcome the challenges, such as having insufficient data to properly train a DRL agent or learning in a slow-moving real-world environment Dulac-Arnold et al.
[2019], we built a simulator from the large amount of real data that we collected by working with a CSOC. We developed an agent-based discrete event simulation (DES) algorithm with fixed-increment time progression to model the vulnerability management process at a CSOC. The agent-based approach is added to the traditional DES to accommodate the interaction between the DRL-agent (explained in the next section) and the simulation environment. The inputs to the algorithm are the various vulnerability scan reports and other relevant network related information obtained from applications such as Nessus, Lansweeper, and IDS. The uncertainty in the vulnerability arrival process is captured in the simulator by randomly generating vulnerability arrival patterns at each time-step. For example, there could be a high, medium, or a low number of vulnerability arrivals in a given week. Vulnerability instances with varying characteristics and related host machine data are randomly sampled from the historical data sets at each time-step. The arrivals are generated using a Poisson distribution with varying mean, which can be obtained from the historical data.
The cyber vulnerability instances, the respective host machine information, and the resources available are then passed on to the decision-support component (see Figure 1) as the state of the system at the given time-step. The action pertaining to this system state is then taken as input by the simulated environment from the decision-support component. The simulation algorithm executes this action, which contains the set of vulnerabilities selected for mitigation. A scalar reward is computed in the simulator, which consists of two terms, one related to the mitigation of important vulnerabilities and another for the number of resources utilized. The details of the action selection and the reward function are presented in the next section. The cumulative time taken to mitigate the selected vulnerabilities is then deducted from the total available time at the beginning of the time-step. The environment is then stepped forward to the next time-step with the remaining resources. A new set of vulnerability instances is then generated, and this process continues for the entire episode (e.g., a month). The simulator adds the new set of vulnerabilities (arrivals) to the unmitigated set of vulnerabilities from the previous time-step.
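For concreteness, the weekly arrival step of such a simulator could look like the following sketch (illustrative only; the medium arrival rate, the attribute layout, and the helper names are our assumptions, not taken from the CSOC's actual implementation):

```python
import numpy as np

ARRIVAL_RATES = {"low": 40, "medium": 300, "high": 600}   # assumed mean arrivals per week

def weekly_arrivals(pool, level, rng):
    """Sample one week of vulnerability arrivals: a Poisson count with the chosen mean,
    then that many instances drawn (with replacement) from the historical pool."""
    n_new = rng.poisson(ARRIVAL_RATES[level])
    idx = rng.integers(0, len(pool), size=n_new)
    return [pool[i] for i in idx]

rng = np.random.default_rng(7)
# each historical record: (attribute vector over I attributes, mitigation time S_j in minutes)
pool = [(rng.uniform(0.1, 1.0, size=5), int(rng.integers(15, 240))) for _ in range(1000)]

backlog = []                                               # unmitigated instances carry over
for level in ["high", "high", "low", "low"]:               # one example monthly arrival pattern
    backlog += weekly_arrivals(pool, level, rng)
    print(f"{level:>6} week -> backlog size: {len(backlog)}")
```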
Decision Support for Vulnerability Management
The objective of this research is to identify and prioritize important cyber vulnerabilities for mitigation under uncertainty of future vulnerability arrivals in a resource-constrained system. It is to be noted that the decision-making problem can be broken down into obtaining two decisions: (i) determining the near-optimal resources to be allocated and (ii) determining the set of vulnerability instances for mitigation given these resources, which reduces the vulnerability exposure of the organization in the long run. The former decision of allocating the appropriate amount of resources is affected by the uncertainty in the environment and the CSOC can enhance their security with a dynamic resource allocation strategy. Once the decision on the amount of resources allocated is made, the mathematical model can be invoked to optimally select the set of important vulnerabilities for mitigation. We first describe the DRL problem formulation for optimizing the resource allocation strategy, followed by the formulation of the mathematical model, which outputs the vulnerability selection decision.
DRL Formulation
The problem of making sequential decisions for resource allocation to mitigate important vulnerabilities and thereby reducing the vulnerability exposure and strengthening the security posture of an organization in the long run can be formulated as a Markov decision process (MDP). The key elements of the MDP formulation are as follows:
• State, s t , represents the information that is visible to the agent at time t, which consists of the vulnerability instances, their respective attributes, and the total amount of resources available. The state space is N * (M + 1) dimensional, where M is the number of attributes and N is the maximum number of vulnerabilities historically found in the vulnerability scan reports. We use the concept of zero padding to fill empty rows (N -J number of vulnerabilities found at each scan) with zeros Lin et al. [2020]. The state space provides the DRL agent with the information needed to make the resource allocation decision for vulnerability selection.
• Action, a t , represents the control. The action is the amount of resources to be allocated at time t, given a state, s t . The action space is continuous for this problem.
• State transition function determines the probability with which a system will transition from state s t to s t+1 under action a t . The state transition probabilities for this problem are unknown and the possible number of state transitions are very high (state space explosion). Hence, it is infeasible to determine the state transition probabilities.
• Reward, r t , is a measure of the goodness of an action, a t , taken in a given state, s t , at time t. The agent's goal is to maximize the long-term cumulative reward. Hence, setting up the reward signal is critical to train the agent to achieve the research objective. In this research, we engineer a novel reward function, which consists of two weighted terms. The reward is obtained from: (i) the mitigation of important vulnerabilities (r 1 ) and (ii) the number of resources utilized (r 2 ). The reward function, at time t, is given by Equation 1 where w 1 and w 2 are weights associated with the reward terms and whose sum must be equal to 1.
r_t = w_1 · r^1_t + w_2 · r^2_t    (1)
The importance of a vulnerability instance is determined by taking into consideration the following attributes: the asset criticality, level of protection, and organizational relevance of the host machine, the CVSS severity of the vulnerability instance, and whether the host machine has been identified in any IDS alerts. These attributes are obtained using various applications from the organization's computer and network systems. Categorical attributes are transformed into numerical values based on certain rules from the literature. We use the same scheme as in Hore et al. [2022] and Farris et al. [2018] to identify various categories for each attribute and assign normalized numerical values. For completeness, we describe this scheme here. The attributes asset criticality, level of protection, and organizational relevance of the host machine are each assigned one of three categories: high, medium, or low. It is to be noted that more categories could be added to this list, such as a critical priority category. The categorical attribute with the highest priority is assigned a numerical value of 1 and the lowest is assigned a value of 0.1. The ordered categories in between the highest and lowest priorities are then assigned numerical values based on a linear scale. For instance, if the asset criticality associated with a certain machine is of the highest priority (critical), then it takes a value of 1, and if it is of the lowest priority (low), then it is assigned a value of 0.1. The CVSS severity score for a vulnerability is obtained from the NVD through the application and is then normalized between 0 and 1; for instance, NESSUS provides this score as part of the scan report, with values ranging from 1 to 10. If the machine is identified in the IDS alert logs for a possible intrusion, then the attribute is assigned a value of 1, else 0. All these factors are considered equally important. There is a positive reward for selecting vulnerabilities, which is calculated by taking the average of all the attribute values of the selected vulnerabilities. If there are J selected vulnerabilities and v_ij represents the value of attribute i of vulnerability instance j, then the positive reward can be calculated as r^1_t = (Σ_{j=1}^{J} Σ_{i=1}^{I} v_ij) / (I · J). Since the CSOC operations environment is resource constrained, there exists a trade-off between the number of vulnerabilities selected for mitigation and the number of resources that remain available for vulnerability selection in the next time-step. Hence, we assign a small cost to the utilization of the resources in this formulation. For the J vulnerability instances selected for mitigation, with S_j representing the time required to mitigate vulnerability j and C representing the cost per unit resource utilized, the resource utilization penalty is calculated as r^2_t = −Σ_{j=1}^{J} C · S_j. (A small numerical sketch of this reward computation is given after this list.)
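To make Equation (1) concrete, the following small sketch (ours; the example attribute values are made up) computes the reward for a candidate selection using equal weights and a small per-unit resource cost, as used later in the experiments:

```python
import numpy as np

def reward(v, S, selected, w1=0.5, w2=0.5, cost_per_unit=1e-5):
    """v: (J, I) attribute matrix with entries in [0, 1]; S: (J,) mitigation times;
    selected: indices of the vulnerability instances chosen for mitigation."""
    selected = np.asarray(selected)
    if selected.size == 0:
        return 0.0
    r1 = v[selected].sum() / (v.shape[1] * selected.size)   # average cumulative attribute score
    r2 = -cost_per_unit * S[selected].sum()                  # resource utilization penalty
    return w1 * r1 + w2 * r2

v = np.array([[1.0, 0.55, 1.0, 0.9, 1.0],    # a critical instance on a host seen in IDS alerts
              [0.1, 0.1, 0.55, 0.3, 0.0]])   # a low-priority instance
S = np.array([60.0, 15.0])                   # minutes needed to mitigate each instance
print(reward(v, S, selected=[0]))            # reward for selecting only the critical instance
```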
The large state space and continuous action space make this problem infeasible to solve using conventional reinforcement learning approaches. To overcome the issue of not being able to calculate and store the action-value (or Q value) for all possible state-action pairs due to state space explosion, we propose a deep neural network-based learning model with a policy gradient algorithm for efficiently solving this problem Silver et al. [2014]. Vanilla policy gradient algorithms have disadvantages such as poor data efficiency, lack of robustness, and are often subjected to large changes in policies resulting in unstable learning. Hence, we propose the proximal policy optimization (PPO) approach Schulman et al. [2017] for solving this problem, which is an on-policy algorithm that overcomes the aforementioned challenges. PPO ensures smoother learning of the policies with the objective clipping feature. Additionally, PPO is easy to implement and tune, and provides better sample efficiency.
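For reference, the clipped surrogate objective at the core of PPO can be written in a few lines of PyTorch; this is a generic sketch of the standard loss (the clipping parameter ε = 0.2 is a common default and not a value reported in this paper):

```python
import torch

def ppo_clipped_loss(new_log_probs, old_log_probs, advantages, eps=0.2):
    """All inputs are 1-D tensors over a batch of (state, action) samples."""
    ratio = torch.exp(new_log_probs - old_log_probs)          # pi_theta / pi_theta_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()              # minimize the negative surrogate

# Example with dummy numbers
new_lp = torch.tensor([-1.0, -0.5, -2.0])
old_lp = torch.tensor([-1.1, -0.7, -1.9])
adv = torch.tensor([0.8, -0.3, 1.2])
print(ppo_clipped_loss(new_lp, old_lp, adv))
```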
Vulnerability Prioritization and Selection Model
The prioritization and selection of cyber vulnerability instances is achieved by solving a mathematical model, whose solution provides us with the set of prioritized vulnerabilities selected for mitigation by the available resources (decision made by the DRL agent). The vulnerability selection problem is posed as a combinatorial optimization problem and solved using integer programming. The static vulnerability prioritization and selection models in Farris et al. [2018], Shah et al. [2019], Hore et al. [2022] directly maximize the cumulative utility or exposure scores of their respective factors to obtain the sets of prioritized vulnerability instances. Such a set of vulnerabilities may not contain all the important vulnerabilities, as their formulations do not maximize the average value of the selected vulnerabilities. In our proposed formulation, we counter this issue by maximizing the average of the cumulative value of all the attributes across all selected vulnerability instances subject to the total time available for mitigation in any given time-period. In addition, we take into consideration the largest set of attributes associated with any vulnerability and its respective host machine in published literature. Below, we present the input parameters, decision variables, objective function, constraints, and the output of the vulnerability selection model.
Input parameters:
• The attribute scores for all vulnerability instances, v ij ∀i, j.
• Expected time taken to mitigate a vulnerability instance j, S j .
• Total number of vulnerability instances in the scan report, J.
• Total resources available at time t (action from the DRL agent), a t .
Decision variables:
• z j = 1 if vulnerability instance j is selected, and 0 otherwise.
Objective function:
The objective of the model is to select the set of vulnerability instances prioritized for mitigation that maximizes the average of the cumulative value of the attribute scores across all selected vulnerability instances. The objective function is given by:
y = Max ( Σ_{j=1}^{J} Σ_{i=1}^{I} v_ij · z_j ) / ( Σ_{j=1}^{J} z_j )    (2)

Constraint:
The constraint for the model is the availability of resource time, a_t, at any given time t, which is obtained from the DRL agent. The requirement that the total time taken to mitigate the selected vulnerability instances does not exceed the total resource time available at time t is expressed as:

Σ_{j=1}^{J} S_j · z_j ≤ a_t    (3)
Output: The output of the vulnerability prioritization and selection model is the set of prioritized vulnerability instances selected for mitigation.
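The objective (2) is a 0-1 linear-fractional program rather than a plain linear one because of the ratio. One way to solve it exactly, sketched below under our own assumptions (this is not the authors' implementation), is to apply Dinkelbach-style iterations in which each subproblem is a small integer program solved with the open-source CBC solver through PuLP:

```python
import pulp

def select_vulnerabilities(scores, times, budget, tol=1e-6, max_iter=50):
    """Maximize the average cumulative attribute score of the selected instances subject
    to the time budget a_t. scores[j] = sum_i v_ij, times[j] = S_j, budget = a_t."""
    J = len(scores)
    q = 0.0                                   # current guess of the optimal average
    chosen = []
    for _ in range(max_iter):
        prob = pulp.LpProblem("vuln_selection", pulp.LpMaximize)
        z = [pulp.LpVariable(f"z_{j}", cat="Binary") for j in range(J)]
        prob += pulp.lpSum((scores[j] - q) * z[j] for j in range(J))        # Dinkelbach subproblem
        prob += pulp.lpSum(times[j] * z[j] for j in range(J)) <= budget     # Eq. (3)
        prob += pulp.lpSum(z) >= 1                                          # select at least one
        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        chosen = [j for j in range(J) if z[j].value() > 0.5]
        new_q = sum(scores[j] for j in chosen) / len(chosen)
        if abs(new_q - q) < tol:
            break
        q = new_q
    return chosen

# Toy example: four candidate instances, 120 minutes of allocated analyst time.
scores = [4.45, 1.05, 3.90, 2.20]     # cumulative attribute scores sum_i v_ij
times = [60, 15, 90, 30]              # expected mitigation times S_j (minutes)
print(select_vulnerabilities(scores, times, budget=120))
```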
Numerical Experiments
We worked closely with a CSOC to collect the vulnerability data and other relevant computer network information. Our conversations with the security analysts helped us determine various parameter values that were used in setting up the environment, and training and testing the proposed Deep VULMAN framework.
Data Collection and Simulation Environment for CSOC Operations
We developed a simulator from the real-world data set that we collected by working with a CSOC. We used two applications: Tenable's Nessus vulnerability scanner and Lansweeper to collect the vulnerability data. We collected a total of 98,842 vulnerability instances over a span of two years. We also collected relevant host machine data and alert data generated by the IDS. The Lansweeper report contained information about the host machines in the network, which included the software versions of the operating system and SQL server, among others. In this research study, we also integrated information from the IDS alert logs to obtain the intrusion status of the host machine. If a host machine with the reported vulnerability was identified in the IDS alert log for the respective time-period (say, between time t − 1 and t), then this information was recorded and the intrusion status attribute of a vulnerability instance was set accordingly. All the machine-specific information for the host machines on which the vulnerabilities were found was added to the consolidated data set. The aggregated data set contained information about the host machine and vulnerability instances such as the host IP, CVE code, the description, the CVSS severity score, the importance of the machine in the network, the versions of software running on the host machine, and the estimated personnel-hours required to mitigate the vulnerability instance Farris et al. [2018], among other known information. We then applied vulnerability data preprocessing techniques, which included quantification of the attributes: asset criticality, level of protection, and organizational relevance of the host machine, along with the CVSS severity of the vulnerability. We used the same categories and the quantification process as used in Farris et al. [2018] and Hore et al. [2022].
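A minimal sketch of the attribute quantification described in this subsection is shown below (the "medium" value of 0.55 is simply the linear midpoint between 0.1 and 1, and the field names are illustrative rather than the CSOC's actual schema):

```python
# Map ordered categories onto [0.1, 1] with a linear scale and normalize CVSS to [0, 1].
CATEGORY_VALUES = {"low": 0.1, "medium": 0.55, "high": 1.0}

def quantify(record):
    return {
        "asset_criticality": CATEGORY_VALUES[record["asset_criticality"]],
        "protection_level": CATEGORY_VALUES[record["protection_level"]],
        "org_relevance": CATEGORY_VALUES[record["org_relevance"]],
        "cvss": record["cvss"] / 10.0,                     # Nessus reports severity on 1-10
        "ids_alert": 1.0 if record["ids_alert"] else 0.0,  # host seen in IDS alert logs
    }

print(quantify({"asset_criticality": "high", "protection_level": "low",
                "org_relevance": "medium", "cvss": 9.8, "ids_alert": True}))
```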
We created an agent-based DES that mimics the arrival and mitigation process of vulnerability instances in a CSOC.
With the help of a simulation model, we generated diverse patterns of new vulnerability arrivals to expose the DRL agent to uncertainty it may find in a real-world environment. From our discussions with the CSOC security personnel and historical evidence, along with the information published in literature Farris et al.
[2018], we modeled the vulnerability instance arrival process using a Poisson distribution and varied the average number of arrivals from 40 to 600 per week (indicating a very large network). We segregated our arrivals into three different categories, namely, high, medium, and low. Different patterns of arrivals, based on the aforementioned average numbers per week, were simulated for training the DRL agent. Some examples of arrival patterns for four consecutive weeks in a month include [high, high, low, low], [medium, medium, medium, low], and [low, high, medium, high], among others. Vulnerability instances were randomly sampled from the data set based on the arrival pattern (Poisson distribution with the respective average number of arrivals) at each time-step (i.e., t = 1 week) emulating the arrival process in the CSOC. All the information about the vulnerability instances is then passed to the decision-support component (explained in the next sub-section) to obtain an action indicating the set of vulnerability instances that are selected for mitigation. Upon receiving this information, the simulation algorithm is stepped forward and the selected vulnerability instances are mitigated utilizing the time assigned to each of the vulnerability instances in the consolidated data set. The total mitigation time of the selected vulnerability instances is then deducted from the available resource time from the previous time-step and the remaining resource time is carried forward to the next time-step. Each week is represented as a time-step in the simulator. Next, we describe the training and testing phases of the proposed Deep VULMAN framework.
Training Phase
We conducted our experiments with some of the hyper-parameter values taken from published literature Schulman et al. [2017] and Carvalho Melo and Omena Albuquerque Máximo [2019], and tuned the remaining ones by trial and error, which involved running experiments with different sets of values. The PPO approach is known to be more forgiving of sub-optimal initialization of hyper-parameter values. We conducted the experiments on a machine with an 11th Gen Intel Core i7-12700H processor and an NVIDIA GeForce RTX 2080 graphics card (16 GB RAM).
We used a multi-layer perceptron (MLP) model with two hidden layers, each containing 68 perceptrons with Tanh activation functions, for the actor and critic networks. It is to be noted that we implemented various architectures with a larger number of hidden layers and perceptrons but did not find any significant improvement in the performance of the DRL agent, and therefore selected the two-hidden-layer model, which was the most computationally efficient among them. The DRL agent took actions using the policy network. A standard deviation (exploration noise) was applied to these actions, which started at a value of 0.65 and decayed to 0.01 at a rate of 0.025. The decay rate and decay frequency are problem-specific, and hence we had to tune them with a trial-and-error approach. To avoid getting stuck in a local optimum and to encourage exploration, we used 0.01 as the entropy coefficient value, which was multiplied by the entropy and subtracted from the loss function. The value of the entropy factor was adopted from the literature Schulman et al. [2017]. We set the maximum number of time-steps for training to 200M. At each time-step, the output of the DRL agent is provided as an input to the vulnerability prioritization and selection mathematical model, the vulnerability instances are selected for mitigation, and the selection is then passed on to the CSOC operations environment. Based on these actions, a scalar reward value is calculated, which is derived from the two terms in the reward function (as shown in Equation 1). We considered equal values for the weights of the two reward terms in the reward function (in Equation 1) and assigned a value of 10^-5 to the cost per unit resource utilized (C).
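A minimal PyTorch sketch of the actor/critic networks and the exploration-noise schedule described above is shown below; the state and action dimensions are placeholders, and the snippet is an illustrative sketch rather than the authors' implementation.

import torch.nn as nn

class MLP(nn.Module):
    """Two hidden layers of 68 units with Tanh activations, as described above."""
    def __init__(self, in_dim, out_dim, hidden=68):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

actor = MLP(in_dim=8, out_dim=1)    # policy network (state/action sizes are placeholders)
critic = MLP(in_dim=8, out_dim=1)   # value network

# Exploration-noise schedule: standard deviation starts at 0.65 and decays to 0.01
# in steps of 0.025 (decay frequency tuned separately).
STD_MIN, STD_DECAY = 0.01, 0.025

def decay_action_std(std):
    return max(STD_MIN, std - STD_DECAY)

ENTROPY_COEF = 0.01   # entropy bonus subtracted from the PPO loss (Schulman et al. [2017])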
Testing Phase
We evaluated the DRL-enabled Deep VULMAN framework with the real-world vulnerability data from the collaborating CSOC. We set the standard deviation to zero during the testing phase to avoid any further exploration by the DRL agent when taking actions using the policy network. We compared our method with two recent vulnerability selection methods from published literature, namely, VPSS Hore et al. [2022] and VULCON Farris et al. [2018]. We did not consider the CVSS-value based selection method in our comparison due to its limitation in taking the context of an organization into consideration. To compare the three approaches, we recorded the vulnerabilities that were selected for mitigation from (a) high value assets, (b) machines with a lower level of protection, (c) organizationally relevant machines (i.e., web and database servers), and (d) machines with intrusion detection alert signals. Next, we describe and analyze the results obtained from the aforementioned experiments.
Analysis of Results
In this section, we present the evaluation results obtained using the real-world CSOC data, i.e., the vulnerability data set collected from the collaborating CSOC over a period of one year. As shown in Figure 2(a) and Figure 2(c), our proposed approach prioritizes more vulnerabilities for mitigation from the important machines, i.e., web and database servers. We observed similar results on the previously unseen simulated data, and these results matched the requirements we had gathered from the security personnel at the CSOC. Another interesting result obtained using our method, shown in Figure 2(d), is the prioritization of vulnerabilities found on machines identified in potential attacks using the IDS alert data. Further investigation of these machines revealed that the majority of them had a lower level of protection (old software versions with no or limited vendor support), as shown in Figure 2(b), which indicates that they were an easier target for adversaries and that vulnerabilities found on them must be prioritized. The results point towards a high degree of robustness an organization can achieve by employing the proposed DRL-enabled Deep VULMAN framework in the vulnerability triage process. Figure 3 shows a particular episode (month) in which the vulnerability arrival pattern fluctuates between high, medium, and low among the four time-steps (weeks). The orange bar shows the total expected mitigation time (in minutes) required to mitigate all the vulnerabilities identified in the network, and the blue bar shows the amount of resources allocated by the DRL agent. We have highlighted in red the expected mitigation time of vulnerabilities whose average cumulative normalized attribute value is high; in particular, we considered the vulnerability instances with (1/I) Σ_{i=1}^{I} v_{i,j} ≥ 0.75 to show the effectiveness of our proposed approach in allocating resources to mitigate these critical vulnerabilities. The dotted line in the figure represents the even distribution of resources, which is a commonly employed practice at CSOCs and is utilized by the other two methods (VPSS and VULCON).
It can be seen that the DRL agent allocates a lower-than-average amount of resources in the first two weeks to match the arrival pattern of vulnerabilities, thereby saving more resources in anticipation of a larger number of new vulnerability arrivals in the last two weeks of the month. In this case, the DRL agent demonstrates that it is able to react appropriately in the first two time-steps by allocating fewer resources, and it has learned the non-trivial decision of utilizing conserved resources to counter an anticipated future event (of high arrivals). Accordingly, the prioritization and selection model is able to prioritize the selection of vulnerabilities across all the factors. This episode example, among many others, shows that the DRL agent has learned to make better decisions in the wake of uncertain vulnerability arrivals.
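As a small illustration of the criticality threshold used in Figure 3, the sketch below flags an instance as critical when the mean of its normalized attribute values is at least 0.75; the example attribute values are hypothetical.

import numpy as np

def is_critical(v, threshold=0.75):
    """v : 1-D array of normalized attribute values v_{1,j}, ..., v_{I,j} for one instance."""
    return np.mean(v) >= threshold

# Example (hypothetical normalized values for criticality, protection, relevance, CVSS):
print(is_critical(np.array([0.9, 0.8, 0.7, 0.85])))   # True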
Conclusions
The paper presented a novel cyber vulnerability management framework, Deep VULMAN, to identify and prioritize important vulnerabilities for mitigation in the wake of uncertain vulnerability arrivals in a resource-constrained environment. We first trained a state-of-the-art DRL agent using a simulated CSOC operations environment, which was built using real-world CSOC data, to learn the near-optimal policy of allocating resources for selecting vulnerabilities for mitigation. Next, a mathematical model for vulnerability prioritization and selection was formulated and solved using the integer programming method, which generated the set of important vulnerabilities prioritized for mitigation based on the resources allocated. We conducted our experiments on both simulated and real-world vulnerability data for a one-year period. The results showed that our proposed framework outperformed the current methods by prioritizing the selection of the maximum number of vulnerability instances from high-value assets, organizationally relevant machines (web and database servers), machines identified in intrusion detection alert signals, and machines with a lower level of protection. The DRL agent learned non-trivial decisions in the wake of uncertain vulnerability arrival patterns. For instance, the agent was able to anticipate future events of high vulnerability arrivals, and accordingly adjusted (conserved) the allocation of resources in earlier time-steps to counter the important vulnerabilities during those events.
The proposed DRL-enabled cyber vulnerability management framework, Deep VULMAN, can strengthen the security posture of an organization by generating robust policies in uncertain and resource-constrained real-world environments. In this study, we also determined the optimal allocation of the limited number of resources available in a CSOC across different time-steps under uncertainty. An interesting follow-up work or future research direction can include the development of data-driven models to determine an optimal number of security personnel needed to achieve the performance goal of a vulnerability management team. Furthermore, a trade-off study can be conducted comparing the impact of budget on the staffing and performance of vulnerability management teams.
Figure 1: Deep VULMAN Framework for Cyber Vulnerability Management.
Figure 2: Comparison of the total number of vulnerabilities selected from real-world data (one year) from (a) high value assets, (b) machines with low level of protection, (c) organization-specific relevant machines, and (d) machines with intrusion alert signals.
Figure 3: Comparison between expected mitigation time of critical vulnerabilities and mitigation time allocated by the DRL agent.
This work has been submitted to Elsevier for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
Acknowledgments
Executive Order on Improving the Nation's Cybersecurity (Presidential Actions, May 12, 2021). https://www.whitehouse.gov/briefing-room/presidential-actions, 2021. [Online; accessed 1-May-2022].
Katheryn A Farris, Ankit Shah, George Cybenko, Rajesh Ganesan, and Sushil Jajodia. Vulcon: A system for vulnerability prioritization, mitigation, and management. ACM Transactions on Privacy and Security (TOPS), 21(4):1-28, 2018.
Ankit Shah, Katheryn A Farris, Rajesh Ganesan, and Sushil Jajodia. Vulnerability selection for remediation: An empirical analysis. The Journal of Defense Modeling and Simulation, page 1548512919874129, 2019.
Soumyadeep Hore, Fariha Moomtaheen, Ankit Shah, and Xinming Ou. Towards optimal triage and mitigation of context-sensitive cyber vulnerabilities. IEEE Transactions on Dependable and Secure Computing, 2022.
Peter Mell, Karen Scarfone, and Sasha Romanosky. Common vulnerability scoring system. IEEE Security & Privacy, 4(6):85-89, 2006.
Peter Mell, Karen Scarfone, Sasha Romanosky, et al. A complete guide to the common vulnerability scoring system version 2.0. In Published by FIRST-Forum of Incident Response and Security Teams, volume 1, page 23, 2007.
Common Vulnerability Scoring System version 3.1: Specification Document. https://www.first.org/cvss/specification-document, 2020. [Online; accessed 18-May-2022].
Laurent Gallon. On the impact of environmental metrics on cvss scores. In 2010 IEEE Second International Conference on Social Computing, pages 987-992. IEEE, 2010.
Christian Fruhwirth and Tomi Mannisto. Improving cvss-based vulnerability prioritization and response with context information. In 2009 3rd International Symposium on Empirical Software Engineering and Measurement, pages 535-544. IEEE, 2009.
Hannes Holm, Teodor Sommestad, Jonas Almroth, and Mats Persson. A quantitative evaluation of vulnerability scanning. Information Management & Computer Security, 2011.
Hannes Holm, Mathias Ekstedt, and Dennis Andersson. Empirical analysis of system-level vulnerability metrics through actual attacks. IEEE Transactions on Dependable and Secure Computing, 9(6):825-837, 2012.
Miles A McQueen, Trevor A McQueen, Wayne F Boyer, and May R Chaffin. Empirical estimates and observations of 0day vulnerabilities. In 2009 42nd Hawaii International Conference on System Sciences, pages 1-12. IEEE, 2009.
Luca Allodi and Fabio Massacci. Comparing vulnerability severity and exploits using case-control studies. ACM Transactions on Information and System Security (TISSEC), 17(1):1-20, 2014.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
Aigerim Bogyrbayeva, Sungwook Jang, Ankit Shah, Young Jae Jang, and Changhyun Kwon. A reinforcement learning approach for rebalancing electric vehicle sharing systems. IEEE Transactions on Intelligent Transportation Systems, 2021.
M Kirtas, Konstantinos Tsampazis, Nikolaos Passalis, and Anastasios Tefas. Deepbots: A webots-based deep reinforcement learning framework for robotics. In IFIP International Conference on Artificial Intelligence Applications and Innovations, pages 64-75. Springer, 2020.
Haiqing Liang. A precision advertising strategy based on deep reinforcement learning. Ingénierie des Systèmes d'Information, 25(3), 2020.
Hado Van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double q-learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30, 2016.
Ziyu Wang, Tom Schaul, Matteo Hessel, Hado Hasselt, Marc Lanctot, and Nando Freitas. Dueling network architectures for deep reinforcement learning. In International Conference on Machine Learning, pages 1995-2003. PMLR, 2016.
Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pages 1928-1937. PMLR, 2016.
John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In International Conference on Machine Learning, pages 1889-1897. PMLR, 2015.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
Katheryn A Farris, Sean R McNamara, Adam Goldstein, and George Cybenko. A preliminary analysis of quantifying computer security vulnerability data in "the wild". In Sensors, and Command, Control, Communications, and Intelligence (C3I) Technologies for Homeland Security, Defense, and Law Enforcement Applications XV, volume 9825, page 98250T. International Society for Optics and Photonics, 2016.
Maochao Xu, Kristin M Schweitzer, Raymond M Bateman, and Shouhuai Xu. Modeling and predicting cyber hacking breaches. IEEE Transactions on Information Forensics and Security, 13(11):2856-2871, 2018.
Kaushik Haldar and Bimal Kumar Mishra. Mathematical model on vulnerability characterization and its impact on network epidemics. International Journal of System Assurance Engineering and Management, 8(2):378-392, 2017.
Marshall Kuypers and Elisabeth Paté-Cornell. Department of energy cyber security incidents.
Gabriel Dulac-Arnold, Daniel Mankowitz, and Todd Hester. Challenges of real-world reinforcement learning. arXiv preprint arXiv:1904.12901, 2019.
Xingyu Lin, Yufei Wang, Jake Olkin, and David Held. Softgym: Benchmarking deep reinforcement learning for deformable object manipulation. arXiv preprint arXiv:2011.07215, 2020.
Ankit Shah, Rajesh Ganesan, Sushil Jajodia, and Hasan Cam. A two-step approach to optimal selection of alerts for investigation in a csoc. IEEE Transactions on Information Forensics and Security, 14(7):1857-1870, 2018.
David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In International Conference on Machine Learning, pages 387-395. PMLR, 2014.
Luckeciano Carvalho Melo and Marcos Ricardo Omena Albuquerque Máximo. Learning humanoid robot running skills through proximal policy optimization. In 2019 Latin American Robotics Symposium (LARS), 2019 Brazilian Symposium on Robotics (SBR) and 2019 Workshop on Robotics in Education (WRE), pages 37-42, 2019. doi:10.1109/LARS-SBR-WRE48964.2019.00015.
| []
|
[
"Needs-aware Artificial Intelligence: AI that 'serves [human] needs'",
"Needs-aware Artificial Intelligence: AI that 'serves [human] needs'"
]
| [
"Ryan Watkins \nGeorge Washington University\nG Street NW20052WashingtonDCUSA\n",
"Soheil Human [email protected] \nGeorge Washington University\nG Street NW20052WashingtonDCUSA\n\nSustainable Computing Lab, Institute for Information Systems and New Media, Vienna University of Economics and Business\nWelthandelsplatz 1, Vienna, A-1020Austria, EU\n\nDepartment of Philosophy & Vienna Cognitive Science Hub\nUniversity of Vienna\nUniversitätsstraße 7, Vienna, A-1010EUAustria\n"
]
| [
"George Washington University\nG Street NW20052WashingtonDCUSA",
"George Washington University\nG Street NW20052WashingtonDCUSA",
"Sustainable Computing Lab, Institute for Information Systems and New Media, Vienna University of Economics and Business\nWelthandelsplatz 1, Vienna, A-1020Austria, EU",
"Department of Philosophy & Vienna Cognitive Science Hub\nUniversity of Vienna\nUniversitätsstraße 7, Vienna, A-1010EUAustria"
]
| []
| By defining the current limits (and thereby the frontiers), many boundaries are shaping, and will continue to shape, the future of Artificial Intelligence (AI). We push on these boundaries in order to make further progress into what were yesterday's frontiers. They are both pliable and resilient-always creating new boundaries of what AI can (or should) achieve. Among these are technical boundaries (such as processing capacity), psychological boundaries (such as human trust in AI systems), ethical boundaries (such as with AI weapons), and conceptual boundaries (such as the AI people can imagine). It is within | 10.1007/s43681-022-00181-5 | [
"https://arxiv.org/pdf/2202.04977v3.pdf"
]
| 246,706,125 | 2202.04977 | d0cf7f99531aaeae121de94aa29805f2e2cff255 |
Needs-aware Artificial Intelligence: AI that 'serves [human] needs'
26 May 2022
Ryan Watkins
George Washington University
G Street NW20052WashingtonDCUSA
Soheil Human [email protected]
George Washington University
G Street NW20052WashingtonDCUSA
Sustainable Computing Lab, Institute for Information Systems and New Media, Vienna University of Economics and Business
Welthandelsplatz 1, Vienna, A-1020Austria, EU
Department of Philosophy & Vienna Cognitive Science Hub
University of Vienna
Universitätsstraße 7, Vienna, A-1010EUAustria
Needs-aware Artificial Intelligence: AI that 'serves [human] needs'
26 May 2022
* Corresponding author. Contributing authors: [email protected]
Keywords: needs, needs-aware, sociotechnical, interdisciplinary
By defining the current limits (and thereby the frontiers), many boundaries are shaping, and will continue to shape, the future of Artificial Intelligence (AI). We push on these boundaries in order to make further progress into what were yesterday's frontiers. They are both pliable and resilient-always creating new boundaries of what AI can (or should) achieve. Among these are technical boundaries (such as processing capacity), psychological boundaries (such as human trust in AI systems), ethical boundaries (such as with AI weapons), and conceptual boundaries (such as the AI people can imagine). It is within
this final category 1 that we find the construct of needs and the limitations that our current concept of need places on the future AI.
Serve [Human] Needs
Multiple AI advocates (including Kai-Fu Lee [1] and Ben Shneiderman [2,3]), among many others, have posited that a primary goal of AI (and Human-centric AI 2 ) is to serve human needs. A laudable goal for sure, but there is a great deal of history, controversy, and complexity packed into both the word need and the overarching construct of needs [4]. Thus, if serving needs is to remain an ambition of our AI systems, further attention (i.e., dialogue, research, guidelines, policies) and collaboration across multiple disciplines is required to develop the construct of needs into a pragmatic tool that can be applied to shape the very goals of what future AI can and should achieve.
Need is a commonplace word (such as, "I need coffee"), making it easy to overlook that the term has specific meaning, definition, connotation, and power. Its power, for example, stems from the connotation that the object of the statement (such as, coffee in the example above) seems to be absolutely necessary and without alternative. In other words, coffee is required to satisfy the implied need. Coffee may not be sufficient, but tea or water alone definitely won't do. 3 Most of us routinely leverage this power (as do politicians and advertisers) when we use the word need to effectively eliminate other options (such as, "Cryptocurrency companies need national regulations", when, e.g., international regulations, market-based instruments, co-regulation, self-regulation, education [5], and end-user empowerment [6] might be other viable options to be considered). 4 We do this because need statements typically induce the desired associated behaviors (such as, choosing national regulations rather than other alternatives), though typically creating ethical difficulties both for those defining the need and those tasked with satisfying the need. Defining needs, after all, is not just about an academic concept; rather it can determine whose needs are prioritized, who gets resources and who does not, and how inequalities are considered in meeting the basics of the human condition. In these cases, need is a very powerful construct, and yet it remains one that we have little understanding of or agreement on. Those who define needs (whether they be individuals for themselves or for others, institutions such as companies or governments, or, in the future, AI systems) have both implicit and explicit power, and yet we rarely recognize that power since it is routinely lost in the common usage of the term. If an AI system, for example, were permitted to determine [and prioritize] a patient's needs [and the satisfiers of those needs], the power of the tool is substantially greater than if it only offers options for medical care.
1 While it can play a fundamental role in all other boundaries.
2 HCAI.
3 A useful exercise can be to go a day, or week, without using the word "need" at all; quickly allowing each of us to recognize just how often we use the power of the term in our daily activities.
4 Here, we suspend our judgment regarding the national regulations of cryptocurrency companies since this is out of the scope of this article; the point here is that by using "need" we imply necessity [without evidence] and infer that any action must include national regulations when other options should also be considered.
It is worth emphasizing that being in need (and accordingly serving needs) is not limited to individual humans. Needs can be associated with different types of systems (e.g., life forms, organizations, societies). Therefore, needs-aware AI systems [7] should ideally consider different systems' needs (plural) at different levels and in different contexts, sustainably.
What Are Needs?
Distinguishing between what is necessary (i.e., needs) and what is desired (i.e., transitory wants, cravings, motivators) has multiple ethical implications for AI and AI developers. This distinction is easily lost, for example, when put into the context of determining what potential clients or customers will purchase (where people might elect to spend their own money on what they desire over what is necessary). While ascertaining peoples' desires is not always an easy task, it is relatively much easier than identifying and prioritizing their needs (i.e., the goal of a needs assessment [8]). Different scholars, such as the philosopher Stephen McLeod, have even questioned if people are capable of knowing their needs at all [9].
For AI developers, for instance, the challenges of this distinction (i.e., needs from wants) 5 lead to an ethical difficulty that spans the continuum stretching from creating systems that merely meet consumers' stated desires at the moment, to systems that assist in resolving [human] needs even when people may be unaware of the benefits at the time. Moving from basic perspectives of needs (e.g., needs are what people say they need, or needs are only what motivates an individual to take action [10]) to a more robust and multidimensional definition and understanding of needs (e.g., needs are gaps between desired accomplishments and current achievements at multiple interdependent levels [8]) brings many benefits, but also introduces complexity for AI developers creating (or co-creating) Sustainable Human-centric, Accountable, Lawful, and Ethical AI (Sustainable HALE AI [11]) systems (for instance, balancing individual, organizational, and societal needs that are routinely in conflict).
What are needs? What are not needs? How do we prioritize among needs? How do my needs relate to your needs, and how do our needs relate to the needs of others? How do we measure needs? How can we utilize needs? What will satisfy a need, and how will we know if the need has been satisfied? How can AI serve needs and still be economically viable? How can/will different sociopolitical, socio-economic, socio-technical and socio-cognitive aspects influence the co-creation of needs-aware AI systems, and how can/will such aspects be appropriately considered in a Sustainable HALE co-creation of such systems? These, and many other, questions have been and are still debated within and across multiple disciplines (e.g., philosophy, ethics, law, social work, education, business, economics, political science, sociology, management, cognitive science, psychology, and engineering). These debates have not, however, reached a resolution; and we suggest that this does, and will continue to, create pragmatic boundaries on what AI can and should achieve. Likewise, without answers to these questions (or at least many/most of them) it might be ethically challenging to ask (or expect) AI developers (or AI systems) to assess the needs of others, and then to use the results of those assessments to create AI systems that meet ethical standards.
Roles for Needs
AI developers are often placed in a so-called social dilemma, with societal good on one side and commercial pressures on the other [12]. Part of the solution to these dilemmas (beyond ethical, legal, and regulatory frameworks) could be the introduction of well-defined and measurable needs 6 . For example, by identifying and measuring needs (i.e., societal, organizational, and individual needs) we can contribute to building the foundations for finding an appropriate equilibrium that serves needs in meaningful and balanced ways, while providing tools capable of guiding AI ethics. As an integrated component of Human-centric, Accountable, Lawful, and Ethical AI (or HALE AI) [11], the construct of needs can, we suggest, add value and push the boundaries of AI development from chasing wants, to serving needs 7 .
Needs can thereby contribute in multiple roles in the development of AI. HCAI developers, for example, can utilize needs to identify and prioritize both what the systems can and should achieve; meeting peoples' desires and also serving their needs. AI systems, for instance, can use measurable needs to evaluate their own performance in resolving needs, while at the same time assisting people in making decisions where the complex relationships among needs must be weighed. Meanwhile, policymakers can utilize well-defined societal needs to craft effective policy, regulatory, and ethical frameworks. As such, precise, comprehensive, and transparent constructs of needs can play many vital roles in the future development of AI (and our digital societies).
What Next?
If AI is going to serve our needs, then we have to answer some of these questions, and discover new questions that are waiting below the surface. From our perspective this is an urgent matter since these questions will not be answered quickly and without debate, and AI researchers and developers must be part of the professional dialogues in order for useful guidance to be achieved. No single discipline or field can come to resolution on these matters, and thereby needs are illustrative of the types of broad interdisciplinary challenges (bringing together STEM, social science, and humanities scholars and practitioners) that will be the hallmark of future decades of AI research and development. At the same time, the development of new AI systems will not necessarily wait for academic debates, as history shows.
6 Calling for well-defined and measurable needs (or needs satisfaction) does not mean that we are advocating absolutist perspectives on needs. With that in mind, we propose that, among others, considering disagreements [13] should be an important aspect of needs-aware AI systems (see [7] for a more detailed discussion on measuring, explicitizing, utilizing, or enactizing needs).
7 Considering that meeting different systems' interrelated (and sometimes conflicting) needs in a sustainable manner is crucially important for our societies, re-thinking needs (and needs satisfaction) into AI can not only contribute toward the development of HALE AI but Sustainable HALE AI [11].
Needs, both as a construct and professional term, can (and should) be a fundamental element of ethical (and sociotechnical) frameworks and the tools that are derived from those frameworks. We must use the word with the same precision and with the same care as we accord to terms such as "values" or "rights". We must also work to create a shared understanding of what needs are, defining them in manners that can transcend disciplinary boundaries and allow us to align individual, organizational, and societal needs [14].
If we give up, however, and choose not to become precise in our construct of need (our language when discussing needs), and the operational definitions required for future Needs-aware AI systems, then we will be left with AI that merely helps us meet our transitory wants, desires, cravings, motivations, or passions 8 . All of which may be profitable and favorable at times, but none of which are sufficient (nor necessary) for meeting our ideal of future AI that has the capacity to serve [human] needs.
The path to needs-aware AI will take time. Truly interdisciplinary dialogue and collaboration requires time. 9 From philosophy to computer science, and cognitive science to social science, many disciplines have contributions to offer, and yet there is much to learn about those potential contributions as we prepare for the future. For instance, many scholars who study the psychology of need do not also follow current developments in computer science and AI; and the reverse is true as well. We therefore suggest that the process of interdisciplinary collaboration on needs-aware AI must begin soon, to ensure that the distinction of needs isn't lost (or assumed) as technologies develop over the next decade(s). This can begin here, with responses to this initial editorial, and then grow through cross-disciplinary dialogue. Whether it is maintaining needs as a distinct concept in [re]presentations, highlighting the unique role of needs as systemic or algorithmic features, or applying needs in design and co-creation processes, the role of needs in the future of AI depends on recognizing the power and value of this frequently misunderstood construct.
Statements and Declarations
No funding was received to assist with the preparation of this manuscript. The authors have no relevant financial or non-financial interests to disclose.
5 Though we recognize that colleagues in multiple disciplines have also proposed typologies for "needs", we will not address those in this article. Typologies are one of many topics we hope will be taken up in future interdisciplinary dialogues/debates.
8 ...and maybe only as a by-product some of our "needs", though we would have a hard time knowing it.
9 While developers might not wait for it.
Lee, K.-F., OReilly, T.: Meet the Expert: How AI Will Change Our World by 2041. OReilly Media, Inc. (2021)
Shneiderman, B.: Design lessons from ai's two grand goals: Human emulation and useful applications. IEEE Transactions on Technology and Society 1(2), 73-82 (2020)
Shneiderman, B.: Human-Centered AI. Oxford University Press, Oxford, UK (2022)
Human, S., Fahrenbach, F., Kragulj, F., Savenkov, V.: Ontology for Representing Human Needs. In: Różewski, P., Lange, C. (eds.) Knowledge Engineering and Semantic Web. Communications in Computer and Information Science, pp. 195-210. Springer International Publishing, Cham (2017)
OECD Report: Regulation, alternatives traditional. https://www.oecd.org/gov/regulatory-policy/42245468.pdf
Human, S., Gsenger, R., Neumann, G.: End-user empowerment: An interdisciplinary perspective. In: Proceedings of the 53rd Hawaii International Conference on System Sciences, Hawaii, United States, pp. 4102-4111 (2020)
Human, S., Watkins, R.: Needs and Artificial Intelligence. arXiv preprint arXiv:2202.04977 [cs.AI] (2022). https://doi.org/10.48550/arXiv.2202.04977
Watkins, R., Meiers, M.W., Visser, Y.: A Guide to Assessing Needs: Essential Tools for Collecting Information, Making Decisions, and Achieving Development Results. World Bank Publications, Washington, D.C., USA (2012)
McLeod, S.K.: Knowledge of need. International Journal of Philosophical Studies 19(2), 211-230 (2011)
Maslow, A.H.: A theory of human motivation. Psychological Review 50(4), 370 (1943)
Human, S.: THE HALE WHALE: A Framework for the Co-creation of Sustainable, Human-centric, Accountable, Lawful, and Ethical Digital Sociotechnical Systems. Sustainable Computing Paper Series (2022/01) (2022)
Strümke, I., Slavkovik, M., Madai, V.I.: The Social Dilemma in Artificial Intelligence Development and Why We Have to Solve It (2021)
Human, S., Bidabadi, G., Savenkov, V.: Supporting Pluralism by Artificial Intelligence: Conceptualizing Epistemic Disagreements As Digital Artifacts. PT-AI 2017: Philosophy and Theory of Artificial Intelligence 2017, pp. 190-193. Springer, Leeds (2018)
Kaufman, R.: Alignment and success: Applying the hierarchy of planning and the needs-assessment hierarchy. Performance Improvement 58(7), 24-28 (2019)
| []
|
[
"Segmentation of Photovoltaic Module Cells in Uncalibrated Electroluminescence Images",
"Segmentation of Photovoltaic Module Cells in Uncalibrated Electroluminescence Images"
]
| [
"Sergiu Deitsch [email protected] ",
"Claudia Buerhop-Lutz ",
"Evgenii Sovetkin ",
"Ansgar Steland ",
"Andreas Maier ",
"Florian Gallwitz ",
"Christian Riess ",
"S Deitsch ",
"\nPattern Recognition Lab University of Erlangen-Nuremberg Martensstr\n\n",
"\n91058ErlangenGermany\n",
"\nIntroduction\n\n"
]
| [
"Pattern Recognition Lab University of Erlangen-Nuremberg Martensstr\n",
"91058ErlangenGermany",
"Introduction\n"
]
| []
| High resolution electroluminescence (EL) images captured in the infrared spectrum allow to visually and non-destructively inspect the quality of photovoltaic (PV) modules. Currently, however, such a visual inspection requires trained experts to discern different kinds of defects, which is time-consuming and expensive. Automated segmentation of cells is therefore a key step in automating the visual inspection workflow.In this work, we propose a robust automated segmentation method for extraction of individual solar cells from EL images of PV modules. This enables controlled studies on large amounts of data to understanding the effects of module degradation over time-a process not yet fully understood.The proposed method infers in several steps a highlevel solar module representation from low-level ridge edge features. An important step in the algorithm is to formulate the segmentation problem in terms of lens calibration by exploiting the plumbline constraint. We evaluate our method on a dataset of various solar modules types containing a total of 408 solar cells with various defects. Our method robustly solves this task with a median weighted Jaccard index of 94.47 % and an F 1 score of 97.62 %, both indicating a high sensitivity and a high similarity between automatically segmented and ground truth solar cell masks. | 10.1007/s00138-021-01191-9 | [
"https://export.arxiv.org/pdf/1806.06530v4.pdf"
]
| 235,187,477 | 1806.06530 | 560fdb3d0c7881af50e35b7e271746cec8f368d9 |
Segmentation of Photovoltaic Module Cells in Uncalibrated Electroluminescence Images
Sergiu Deitsch [email protected]
Claudia Buerhop-Lutz
Evgenii Sovetkin
Ansgar Steland
Andreas Maier
Florian Gallwitz
Christian Riess
S Deitsch
Pattern Recognition Lab University of Erlangen-Nuremberg Martensstr
91058ErlangenGermany
Segmentation of Photovoltaic Module Cells in Uncalibrated Electroluminescence Images
Received: date / Accepted: date
Keywords: PV modules, EL imaging, visual inspection, lens distortion, solar cell extraction, pixelwise classification
High resolution electroluminescence (EL) images captured in the infrared spectrum allow to visually and non-destructively inspect the quality of photovoltaic (PV) modules. Currently, however, such a visual inspection requires trained experts to discern different kinds of defects, which is time-consuming and expensive. Automated segmentation of cells is therefore a key step in automating the visual inspection workflow.In this work, we propose a robust automated segmentation method for extraction of individual solar cells from EL images of PV modules. This enables controlled studies on large amounts of data to understanding the effects of module degradation over time-a process not yet fully understood.The proposed method infers in several steps a highlevel solar module representation from low-level ridge edge features. An important step in the algorithm is to formulate the segmentation problem in terms of lens calibration by exploiting the plumbline constraint. We evaluate our method on a dataset of various solar modules types containing a total of 408 solar cells with various defects. Our method robustly solves this task with a median weighted Jaccard index of 94.47 % and an F 1 score of 97.62 %, both indicating a high sensitivity and a high similarity between automatically segmented and ground truth solar cell masks.
Introduction
Visual inspection of solar modules using EL imaging allows one to easily identify damage inflicted on solar panels either by environmental influences such as hail, during the assembly process, or due to prior material defects or material aging [65,10,90,91,5,93]. The resulting defects can notably decrease the photoelectric conversion efficiency of the modules and thus their energy yield. This can be avoided by continuous inspection of solar modules and maintenance of defective units. For an introduction and review of non-automatic processing tools for EL images, we refer to Mauk [59].
An important step towards an automated visual inspection is the segmentation of individual cells from the solar module. An accurate segmentation allows the extraction of spatially normalized solar cell images. We already used the proposed method to develop a public dataset of solar cell images [12], which are highly accurate training data for classifiers to predict defects in solar modules [18,60]. In particular, the Convolutional Neural Network (CNN) training is greatly simplified when using spatially normalized samples, because CNNs are generally able to learn representations that are only equivariant to small translations [35, pp. 335-336]. The learned representations, however, are not naturally invariant to other spatial deformations such as rotation and scaling [35,44,52].
The identification of solar cells is additionally required by the international technical specification IEC TS 60904-13 [42, Annex D] for further identification of defects on cell level. Automated segmentation can also ease the development of models that predict the performance of a PV module based on detected or identified failure modes, or by determining the operating voltage of each cell [70]. The data describing the cell characteristics can be fed into an electric equivalent model that allows to estimate or simulate the current-voltage characteristic (I-V) curve [72,13,46] or even the overall power output [47].
In this work, we assume that EL images are captured in a manufacturing setting or under comparable conditions in a test laboratory where field-aged modules are analyzed either regularly or after hazards like hailstorms. Such laboratories oftentimes require agile work processes where the equipment is frequently remounted. In these scenarios, the EL irradiation of the solar module predominates the background irradiation, and the solar modules are captured facing the EL camera without major perspective distortion. Thus, the geometric distortions that are corrected by the proposed method are radial lens distortion, in-plane rotation, and minor perspective distortions. This distinguishes the manufacturing setting from acquisitions in the field, where PV modules may be occluded by cables and parts of the rack, and the perspective may be strong enough to require careful correction. However, perspective distortion also makes it more difficult to identify defective areas (e.g., microcracks) due to the foreshortening effect [4]. Therefore, capturing EL images from an extreme perspective is generally not advisable. Specifically for manufacturing environments, however, the proposed method yields a robust, highly accurate, and completely automatic segmentation of solar modules into solar cells from high resolution EL images of PV modules.
Independently of the setting, our goal is to allow for some flexibility for the user to freely position the camera or use zoom lenses without the need to recalibrate the camera.
With this goal in mind, a particular characteristic of the proposed segmentation pipeline is that it does not require an external calibration pattern. During the detection of the grid that identifies individual solar cells, the busbars and the inter solar cell borders are directly used to estimate lens distortion. Avoiding the use of a separate calibration pattern also avoids the risk of an operator error during the calibration, e.g., due to inexperienced personnel.
A robust and fully automatic PV module segmentation can help understanding the influence of module degradation on module efficiency and power generation. Specifically, this allows to continuously and automatically monitor the degradation process, for instance, by observing the differences in a series of solar cell images captured over a certain period of time. The segmentation also allows to automatically create training data for learning-based algorithms for defect classification and failure prediction.
Contributions
To the best of our knowledge, the proposed segmentation pipeline is the first work to enable a fully automatic extraction of solar cells from uncalibrated EL images of solar modules (cf., Fig. 1b). Within the pipeline, we seek to obtain the exact segmentation mask of each solar cell through estimation of non-linear and linear transformations that warp the EL image into a canonical view. To this end, our contributions are three-fold: 1. Joint camera lens distortion estimation and PV module grid detection for precise solar cell region identification. 2. A robust initialization scheme for the employed lens distortion model. 3. A highly accurate pixelwise classification into active solar cell area on monocrystalline and polycrystalline PV modules robust to various typical defects in solar modules. Moreover, our method operates on arbitrary (unseen) module layouts without prior knowledge on the layout.
Outline
The remainder of this work is organized as follows. Section 2 discusses the related work. In Section 3, the individual stages of the segmentation pipeline are presented. In Section 4, we evaluate the presented segmentation approach on a number of different PV modules with respect to the segmentation accuracy. Finally, the conclusions are given in Section 5.
Related Work
The segmentation of PV modules into individual solar cells is related to the detection of calibration patterns, Row 1 . . . Figure 1: (a) An EL image of a PV module overlaid by a rectangular grid ( ) and parabolic curve grid ( ) including the busbars ( ) determined using our approach. The intersections of the rectangular grid were registered to curve grid intersections to accurately align both grids. Notice how the rectangular grid is still not able to capture the curved surface of the solar module induced by the (weak) lens distortion that increases especially towards the image border. Using the curve grid, we estimate the lens distortion, rectify the image and finally extract the individual cells using the estimated module topology (b). The segmented solar cells can be used for further analysis, such as automatic defect classification or failure prediction in PV modules. The solar cells are approximately 15.60 cm × 15.60 cm with a standard 60 cell PV module with overall dimensions of 1 m × 1.65 m.
such as checkerboard patterns commonly used for calibrating intrinsic camera and lens parameters [79,69,29,41,36]. However, the appearance of calibration patterns is typically perfectly known, whereas detection of solar cells is encumbered by various defects that are a priori unknown. Additionally, the number of solar cells in a PV module and their layout can vary. We also note that existing lens models generally assume wide angle lenses. However, their application to standard lenses is to our knowledge not widely studied.
To estimate the parameters of a lens distortion model, the plumbline constraint is typically employed [11]. The constraint exploits the fact that the projection of straight lines under radial and tangential distortion will not be truly straight. For example, under radial distortion, straight lines are imaged as curves. For typical visual inspection tasks, a single image is sufficient to estimate the lens distortion parameters [20,25,16,2,17,78]. This can be achieved by decoupling the intrinsic parameters of the camera from the parameters of the lens distortion model [20].
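To make the plumbline idea concrete, the Python sketch below undistorts edge points with a simple one-parameter division model and scores how straight the resulting point sets are; this particular distortion model and cost are illustrative assumptions and not necessarily the formulation used in [20] or in the proposed pipeline.

import numpy as np

def undistort(points, k, center):
    """Map distorted image points to undistorted points with a one-parameter division model."""
    d = points - center
    r2 = np.sum(d**2, axis=1, keepdims=True)
    return center + d / (1.0 + k * r2)

def straightness_cost(k, lines, center):
    """Sum of squared distances of undistorted points to their best-fit straight line."""
    cost = 0.0
    for pts in lines:                                   # each entry: (N, 2) edge points of one line
        u = undistort(pts, k, center)
        u = u - u.mean(axis=0)
        _, _, vt = np.linalg.svd(u, full_matrices=False)
        normal = vt[-1]                                 # direction of least variance = line normal
        cost += np.sum((u @ normal) ** 2)
    return cost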
Novel methodologies employ CNNs for various segmentation tasks. Existing CNN-based segmentation tasks can be categorized into (1) object detection, (2) semantic segmentation, and (3) instance-aware segmentation. One of the first CNN object detection architectures is Regions with CNN features (R-CNN) [32] to learn features that are subsequently classified using a class-specific linear Support Vector Machine (SVM) to generate region proposals. R-CNN learns to simultaneously classify object proposals and refine their spatial locations. The predicted regions, however, provide only a coarse estimation of object's location in terms of bounding boxes. Girshick [31] proposed Fast Regionbased Convolutional Neural Network (Fast R-CNN) by accelerating training and testing times while also increasing the detection accuracy. Ren et al. [75] introduced Region Proposal Network (RPN) that shares full-image convolutional features with the detection network enabling nearly cost-free region proposals. RPN is combined with Fast R-CNN into a single network that simultaneously predicts object bounds and estimates the probability of an object for each proposal. For semantic segmentation, Long et al. [56] introduced Fully Convolutional Networks (FCNs) allowing for pixelwise inference. The FCN is learned end-to-end and pixels-to-pixels requiring appropriately labeled training data. Particularly, in medical imaging the U-Net network architecture by Ronneberger et al. [77] has been successfully applied for various segmentation tasks. In instance segmentation, Li et al. [51] combined segment proposal and object detection for Fully Convolutional Instance Segmentation (FCIS) where the general idea is to predict the locations in a fully convolutional network.
He et al. [39] proposed Mask R-CNN, which extends Faster R-CNN by a branch that additionally predicts segmentation masks for each detected object.
The work by Mehta et al. [62] introduces a CNN for the prediction of power loss. Their system additionally localizes and classifies the type of soiling. Their work is based on RGB images of whole PV modules and addresses the additional geometric challenges of acquisitions in the field. In contrast, this work operates on EL images of individual cells of a PV module, and in particular focuses on their precise segmentation in a manufacturing setting.
The main limitation of learning-based approaches is the requirement of a considerable number of appropriately labeled images for training. However, pixelwise labeling is time-consuming and, in the absence of data, not possible at all. Also, such learning-based approaches require training data that is statistically representative of the test data, which oftentimes requires re-training a model on data with different properties. In contrast, the proposed approach can be readily deployed to robustly segment EL images of PV modules without notable requirements of labeled training data.
The closest work related to the proposed method was presented by Sovetkin and Steland [86]. This method proposes a robust PV module grid alignment for the application on field EL images, where radial and perspective distortion, motion blur, and disturbing background may be present. The method uses an external checkerboard calibration for radial distortion correction, and prior knowledge on the solar cell topology in terms of the relative distances of the grid lines separating the busbars and cell segments. In contrast, EL images taken under manufacturing conditions may be cropped or rotated, and the camera is not always precalibrated. Hence, the proposed method performs an automated on-line calibration for every EL image. This is particularly useful for EL images of PV modules from various sources, for which the camera parameters may not be available, or when zoom lenses are used. Additionally, the proposed method performs a pixelwise classification of pixels belonging to the active cell area and therefore is able to provide masks tailored to a specific module type. Such masks allow to exclude unwanted background information and to simplify further processing.
In this work, we unify lens distortion estimation and grid detection by building upon ideas of Devernay and Faugeras [20]. However, instead of using independent line segments to estimate lens distortion parameters, we constrain the problem using domain knowledge by operating on a coherent grid. This joint methodology allows to correct errors through feedback from the optimization loop used for estimating lens model parameters. The proposed approach conceptually differs from Sovetkin and Steland [86], where both steps are decoupled and an external calibration is required.
Methodology
The proposed framework uses a bottom-up pipeline to gradually infer a high-level representation of a solar module and its cells from low-level ridge edge features in an EL image. Cell boundaries and busbars are represented as parabolic curves to robustly handle radial lens distortion which causes straight lines to appear curved in the image. Once we estimated the lens distortion parameters, the parabolas are rectified to obtain a planar cell grid. This rectified representation is used to segment the solar cells.
Overview
The general framework for segmenting the solar cells in EL images of PV modules is illustrated in Fig. 2 and consists of the following steps. First, we locate the busbars and the inter solar cell borders by extracting the ridge edges. The ridge edges are extracted at subpixel accuracy and approximated by a set of smooth curves defined as second-degree polynomials. The parametric representation is used to construct an initial grid of perpendicularly arranged curves that identify the PV module. Using this curve grid, we estimate the initial lens distortion parameters and hypothesize the optimal set of curves by further excluding outliers in a RANdom SAmple Consensus (RANSAC) scheme. Then we refine the lens distortion parameters that we eventually use to rectify the EL image. From the final set of curves we infer the PV module configuration and finally extract the size, perspective, and orientation of solar cells.
Preprocessing
First, the contrast of an EL image is enhanced to account for possible underexposure. Then, low-level edge processing is applied to attenuate structural variations that might stem from cracks or silicon wafer texture, with the goal of preserving larger lines and curves.
Contrast Enhancement
Here, we follow the approach by Franken et al. [28]. A copy I bg of the input EL image I is blurred with a Gaussian kernel, and a morphological closing with a disk-shaped structure element is applied. Dividing each pixel of I by I bg attenuates unwanted background noise while emphasizing high contrast regions. Then, histogram equalization [34, pp. 134 sqq.] is applied to increase its overall contrast. Figure 5b shows the resulting image I.
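The background-division step can be sketched in a few lines of Python using OpenCV. The snippet below is only a minimal illustration of the idea; the blur sigma and the size of the disk-shaped structuring element are illustrative assumptions, not the values used in the pipeline.

```python
import cv2
import numpy as np

def enhance_contrast(img_u8, blur_sigma=25, disk_size=35):
    """Background-division contrast enhancement followed by histogram equalization."""
    # Estimate the slowly varying background: Gaussian blur + morphological closing.
    bg = cv2.GaussianBlur(img_u8, (0, 0), blur_sigma)
    disk = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (disk_size, disk_size))
    bg = cv2.morphologyEx(bg, cv2.MORPH_CLOSE, disk)
    # Divide the image by its background to attenuate low-frequency intensity variations.
    norm = cv2.divide(img_u8.astype(np.float32), bg.astype(np.float32) + 1e-6)
    norm = cv2.normalize(norm, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Histogram equalization increases the overall contrast.
    return cv2.equalizeHist(norm)
```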
Gaussian Scale-Space Ridgeness
The high-level grid structure of a PV module is defined by inter-cell borders and busbars, which correspond to ridges in the image. Ridge edges can be determined from second-order partial derivatives summarized by a Hessian. To robustly extract line and curve ridges, we compute the second-order derivative of the image at multiple scales [54,55]. The responses are computed in a Gaussian pyramid constructed from an input EL image [53]. This results in several layers of the pyramid at varying resolutions commonly referred to as octaves. The eigendecomposition of the Hessian computed afterwards provides information about line-like structures. More in detail, let u := (u, v) denote discrete pixel coordinates, O ∈ N the number of octaves in the pyramid, and P ∈ N the number of sublevels in each octave. At the finest resolution, we set σ to the golden ratio σ = (1 + √5)/2 ≈ 1.6. At each octave o ∈ {0, . . . , O − 1} and each sublevel in {0, . . . , P − 1}, we compute the Hessian by convolving the image with the derivatives of the Gaussian kernel. To obtain the eigenvalues, the symmetric Hessian is diagonalized by annihilating the off-diagonal elements using the Jacobi method, which iteratively applies Givens rotations to the matrix [33]. This way, its eigenvalues and the corresponding eigenvectors can be simultaneously extracted in a numerically stable manner. Let H = VΛV^⊤ denote the eigendecomposition of the Hessian H, where Λ := diag(λ_1, λ_2) ∈ R^{2×2} is a diagonal matrix of eigenvalues λ_1 > λ_2 and V := (v_1, v_2) are the associated eigenvectors. Under a Gaussian assumption, the leading eigenvector dominates the likelihood if the associated leading eigenvalue is spiked. In this sense, the local ridgeness describes the likelihood of a line segment in the image at position u, and the orientation of the associated eigenvector specifies the complementary angle β(u) of the most likely line segment orientation at position u. The local ridgeness R(u) is obtained as the maximum positive eigenvalue λ_1(u) across all octaves and sublevels. Both the ridgeness R(u) and the angle β(u) provide initial cues for ridge edges in the EL image (see Fig. 5c).
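As an illustration of the ridgeness computation at a single scale, the following sketch evaluates the Hessian with Gaussian derivative filters and takes the positive leading eigenvalue. The scale normalization by σ² and the per-pixel closed-form eigendecomposition (in place of the Jacobi method named above) are simplifying assumptions made here for brevity.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ridgeness_single_scale(img, sigma):
    """Leading positive Hessian eigenvalue (ridgeness) and ridge angle at one scale."""
    img = img.astype(np.float64)
    # Second-order Gaussian derivatives (Hessian entries), scale-normalized by sigma^2.
    Hxx = gaussian_filter(img, sigma, order=(0, 2)) * sigma**2
    Hyy = gaussian_filter(img, sigma, order=(2, 0)) * sigma**2
    Hxy = gaussian_filter(img, sigma, order=(1, 1)) * sigma**2
    # Closed-form eigenvalues of the symmetric 2x2 Hessian at every pixel.
    tr = Hxx + Hyy
    disc = np.sqrt((Hxx - Hyy) ** 2 + 4.0 * Hxy**2)
    lam1 = 0.5 * (tr + disc)            # leading eigenvalue
    ridge = np.maximum(lam1, 0.0)       # ridgeness: positive leading eigenvalue only
    # Orientation of the leading eigenvector (perpendicular to the ridge direction).
    beta = 0.5 * np.arctan2(2.0 * Hxy, Hxx - Hyy)
    return ridge, beta
```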
Contextual Enhancement via Tensor Voting
Ridgeness can be very noisy (cf., Fig. 5c). To discern noise and high curvatures from actual line and curve features, R( u) is contextually enhanced using tensor voting [61].
Tensor voting uses a stick tensor voting field to model the likelihood that a feature in the neighborhood belongs to the same curve as the feature in the origin of the voting field [27]. The parameter ς > 0 controls the proximity of the voting field, and ν determines the angular specificity that we set to ν = 2 in our experiments.
Following Franken et al. [27], the stickness R̃(u) = λ̃_1 − λ̃_2 is computed as the difference between the two eigenvalues λ̃_1, λ̃_2 of the tensor field, where λ̃_1 > λ̃_2. β̃(u) = ∠ẽ_1 is the angle of the eigenvector ẽ_1 ∈ R^2 associated with the largest eigenvalue λ̃_1, analogously to β(u).

We iterate tensor voting two times, since one pass is not always sufficient [28]. Unlike Franken et al., however, we do not thin out the stickness immediately after the first pass to avoid too many disconnected edges. Given the high resolution of the EL images in our dataset of approximately 2500 × 2000 pixels, we use a fairly large proximity of ς_1 = 15 in the first tensor voting step, and ς_2 = 10 in the second. Figure 5d shows a typical stickness output R̃(u). The stickness along the orientation β̃(u) is used to extract curves at subpixel accuracy in the next step of the pipeline.
Curve Extraction
We seek to obtain a coherent grid which we define in terms of second-degree curves. These curves are traced along the previously extracted ridges by grouping centerline points by their curvature. We then fit second-degree polynomials to these points, which yields a compact high-level curve representation while simultaneously allowing to discard point outliers.
Extraction of Ridges at Subpixel Accuracy
To ensure a high estimation accuracy of lens distortion parameters, we extract ridge edges at subpixel accuracy. This also makes the segmentation more resilient in out-of-focus scenarios, where images may appear blurry and the ridge edges more difficult to identify due to their smoother appearance. Blurry images can be caused by slight camera vibrations during the long exposure time of several seconds that is required for imaging. Additionally, focusing in a dark room can be challenging, hence blur cannot always be avoided. Nevertheless, it is beneficial to be able to operate also on blurry images, as they can still be useful for defect classification and power yield estimation in cell areas that do not irradiate.
To this end, we perform non-maximum suppression by Otsu's global thresholding [67] on the stickness R̃(u) followed by skeletonization [80]. Afterwards, we collect the points that represent the centerline of the ridges through edge linking [48]. The discrete coordinates can then be refined by setting the centerline to the mean of a Gaussian function fitted to the edge profile [23] using the Gauss-Newton (GN) optimization algorithm [66]. The 1-dimensional window of the Gaussian is empirically set to 21 pixels, with four sample points per pixel that are computed via bilinear interpolation. The GN algorithm is initialized with the sample mean and standard deviation in the window, and multiplicatively scaled to the stickness magnitude at the mean. The mean of the fitted Gaussian is then reprojected along the edge profile oriented at β̃(u) to obtain the edge subpixel position. Figure 3 visualizes these steps.
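A minimal sketch of the subpixel refinement could look as follows. It fits the Gaussian with SciPy's curve_fit (a Levenberg-Marquardt-style solver) rather than a hand-written Gauss-Newton loop, and assumes the sampling window stays inside the image; the function name and parameter defaults are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def refine_subpixel(stickness, center, direction, half_window=10, samples_per_px=4):
    """Refine a discrete ridge centerline point along the profile direction
    (perpendicular to the ridge) to subpixel accuracy."""
    # Sample the stickness along the profile using bilinear interpolation.
    ts = np.linspace(-half_window, half_window, 2 * half_window * samples_per_px + 1)
    xs = center[0] + ts * direction[0]
    ys = center[1] + ts * direction[1]
    x0, y0 = np.floor(xs).astype(int), np.floor(ys).astype(int)
    fx, fy = xs - x0, ys - y0
    vals = ((1 - fx) * (1 - fy) * stickness[y0, x0] + fx * (1 - fy) * stickness[y0, x0 + 1]
            + (1 - fx) * fy * stickness[y0 + 1, x0] + fx * fy * stickness[y0 + 1, x0 + 1])

    def gauss(t, a, mu, s):
        return a * np.exp(-0.5 * ((t - mu) / s) ** 2)

    # Initialize with the strongest sample, scaled to the stickness magnitude.
    p0 = (vals.max(), float(ts[np.argmax(vals)]), 2.0)
    (a, mu, s), _ = curve_fit(gauss, ts, vals, p0=p0, maxfev=200)
    # Reproject the fitted mean along the profile to obtain the subpixel position.
    return center[0] + mu * direction[0], center[1] + mu * direction[1]
```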
A non-parametric alternative to fitting a Gaussian to the ridge edge profile constitutes fitting a parabola instead [19]. Such an approach is very efficient since it involves a closed-form solution. On the downside, however, the method suffers from oscillatory artifacts which require additional treatment [30].
Connecting Larger Curve Segments
A limitation of the edge linking method is that it does not prioritize curve pairs with similar orientation. To address this, we first reduce the set of points that constitute a curve to a sparse representation using the non-parametric variant of the Ramer-Douglas-Peucker algorithm [73,21] introduced by Prasad et al. [71]. Afterwards, edges are disconnected if the angle between the corresponding line segments is nonzero. In a second pass, two line segments are joined if they are nearby, of approximately the same length, and pointing into the same direction within an angle range ϑ = 5°. Figure 4 illustrates the way two curve segments are combined.
In the final step, the resulting n_i points of the i-th curve of a line segment form a matrix Q̂^(i) ∈ R^{2×n_i}. For brevity, we denote the j-th column of Q̂^(i) by q̂_j ∈ R^2. Q̂^(i) is used to find the parametric curve representation.

Figure 4: When considering combining two adjacent curve segments, one with the end line segment AB and the other with the start line segment B'A', we evaluate the angles α_1, α_2, and α_3 and ensure they are below the predefined threshold ϑ with α_1, α_2 ≥ α_3 ≥ π − ϑ. This way, the combined curve segments are ensured to have a consistent curvature.
Parametric Curve Representation
Projected lines are represented as second-degree polynomials to model radial distortion. The curve parameters are computed via linear regression on the curve points.
More specifically, let

f(x) = a_2 x^2 + a_1 x + a_0    (1)

denote a second-degree polynomial in horizontal or vertical direction. The curve is fitted to line segment points q̂_j ∈ {(x_j, y_j) | j = 1, . . . , n_i} ⊆ Q̂^(i) of the i-th curve Q̂^(i) by minimizing the Mean Squared Error (MSE)

MSE(f) = (1/n_i) \sum_{j=1}^{n_i} (f(x_j) − y_j)^2    (2)
using RANSAC iterations [24]. In one iteration, we randomly sample three points to fit Eq. (1), and then determine which of the remaining points support this curve model via MSE. Outlier points are discarded if the squared difference between the point and the parabolic curve value at its position exceeds ρ = 1.5. To keep the computational time low, RANSAC is limited to 100 iterations, and stopped early once sufficiently many inliers at a 99 % confidence level are found [38, ch. 4.7]. After discarding the outliers, each curve is refitted to supporting candidate points using linear least squares [33]. To ensure a numerically stable and statistically robust fit, the 2-D coordinates are additionally normalized [37].
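The RANSAC parabola fit can be summarized as follows. This sketch follows the description above (three-point minimal samples, squared-residual threshold ρ, adaptive early stopping at a 99 % confidence level, final least-squares refit), but omits the coordinate normalization applied before the final fit.

```python
import numpy as np

def fit_parabola_ransac(x, y, rho=1.5, max_iter=100, conf=0.99, rng=None):
    """Fit f(x) = a2*x^2 + a1*x + a0 to points, rejecting outliers with RANSAC."""
    rng = np.random.default_rng(rng)
    n = len(x)
    best_inliers, needed = np.zeros(n, bool), max_iter
    for it in range(max_iter):
        idx = rng.choice(n, 3, replace=False)
        coeffs = np.polyfit(x[idx], y[idx], 2)              # minimal model from 3 points
        inliers = (np.polyval(coeffs, x) - y) ** 2 <= rho   # squared residual threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
            # Adaptive stopping: iterations needed for the desired confidence level.
            w = inliers.sum() / n
            needed = np.log(1.0 - conf) / np.log(1.0 - w**3 + 1e-12)
        if it + 1 >= needed:
            break
    # Final least-squares refit on all supporting inliers.
    return np.polyfit(x[best_inliers], y[best_inliers], 2), best_inliers
```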
Curve Grid Model Estimation
The individual curves are used to jointly form a grid, which allows to further discard outliers, and to estimate lens distortion. To estimate the lens distortion, we employ the plumbline constraint [11]. The constraint models the assumption that curves in the image correspond to straight lines in real world. In this way, it becomes possible to estimate distortion efficiently from a single image, which allows to use this approach also post hoc on cropped, zoomed or similarly processed images.
Representation of Lens Distortion
Analogously to Devernay and Faugeras [20], we represent the radial lens distortion by a function L : R_{≥0} → R_{≥0} that maps the distance of a pixel from the distortion center to a distortion factor. This factor can be used to radially displace each normalized image coordinate x̃. Image coordinates are normalized by scaling down coordinates x := (x, y) horizontally by the distortion aspect ratio s_x (corresponding to the image aspect ratio decoupled from the projection on the image plane), followed by shifting the center of distortion c := (c_x, c_y) to the origin and normalizing the resulting 2-D point to the unit range using the dimensions M × N of the image of width M and height N. Homogeneous coordinates allow to express the normalization conveniently using a matrix product. By defining the upper-triangular matrix

K = ⎡ s_x M   0   c_x ⎤
    ⎢   0     N   c_y ⎥
    ⎣   0     0    1  ⎦    (3)

the normalizing mapping n : Ω → [−1, 1]^2 is

n(x) = π(K^{−1} π^{−1}(x)),    (4)

where π : R^3 → R^2 projects homogeneous to inhomogeneous coordinates,

π : (x, y, z) ↦ (1/z)(x, y), for z ≠ 0,    (5)

and the inverse operation π^{−1} : R^2 → R^3 backprojects inhomogeneous to homogeneous coordinates:

π^{−1} : (x, y) ↦ (x, y, 1).    (6)
Note that the inverse mapping n −1 converts normalized image coordinates to image plane coordinates.
The Field-of-View Lens Distortion Model
To describe the radial lens distortion, we use the first-order Field-of-View (FOV) lens model by Devernay and Faugeras that has a single distortion parameter ω. While images can also suffer from tangential distortion, this type of distortion is often negligible [92]. The sole parameter 0 < ω ≤ π denotes the opening angle of the lens. The corresponding radial displacement function L is defined in terms of the distortion radius r ≥ 0 as

L(r) = (1/ω) arctan(2r tan(ω/2)), for ω ≠ 0.    (7)

One advantage of the model is that its inversion has a closed-form solution with respect to the distortion radius r.
Similar to Devernay and Faugeras, we decouple the distortion from the projection onto the image plane, avoiding the need to calibrate for intrinsic camera parameters. Instead, the distortion parameter ω is combined with the distortion center c ∈ Ω and distortion aspect ratio s x which are collected in a vector θ := ( c, s x , ω).
Normalized undistorted image coordinates x̃_u = δ^{−1}(x̃_d) can be directly computed from distorted coordinates x̃_d as

δ^{−1}(x̃_d) = (L^{−1}(r_d)/r_d) x̃_d, for r_d ≠ 0,    (8)

where r_d = ‖x̃_d‖_2 is the distance of x̃_d from the origin. L^{−1}(r) is the inverse of the lens distortion function in Eq. (7), namely

L^{−1}(r) = tan(rω) / (2 tan(ω/2)), for ω ≠ 0.    (9)

The function that undistorts a point x ∈ Ω is thus

u(x) = n^{−1}(δ^{−1}(n(x))).    (10)

Note that Eq. (8) exhibits a singularity at r_d → 0 for points close to the distortion center. By inspecting the function's limits, one obtains

lim_{r_d → 0^+} δ^{−1}(x̃_d) = (ω / (2 tan(ω/2))) x̃_d.    (11)

Analogously, Eq. (9) is singular at ω = 0 but approaches lim_{ω → 0^+} L^{−1}(r) = r at the limit. In this case, Eq. (8) is an identity transformation which does not radially displace points.
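A possible implementation of the point undistortion defined by Eqs. (4) and (8)-(11) is sketched below. It assumes the distortion center c is given in pixel coordinates, vectorizes the radial displacement factor, and falls back to the stated limits at r_d → 0 and ω → 0; the function name and interface are illustrative.

```python
import numpy as np

def undistort_points(pts, omega, sx, c, width, height):
    """Map distorted pixel coordinates to undistorted ones with the first-order FOV model."""
    pts = np.asarray(pts, dtype=np.float64)
    # Normalize: scale by the distortion aspect ratio and shift the distortion
    # center to the origin (Eq. (4)).
    x = (pts[:, 0] - c[0]) / (sx * width)
    y = (pts[:, 1] - c[1]) / height
    r_d = np.hypot(x, y)
    if abs(omega) < 1e-12:
        factor = np.ones_like(r_d)                 # identity transformation at omega -> 0
    else:
        # L^{-1}(r_d)/r_d with the limit omega/(2 tan(omega/2)) at r_d -> 0 (Eq. (11)).
        factor = np.where(
            r_d > 1e-12,
            np.tan(r_d * omega) / (2.0 * np.tan(omega / 2.0) * np.maximum(r_d, 1e-12)),
            omega / (2.0 * np.tan(omega / 2.0)),
        )
    xu, yu = x * factor, y * factor
    # Map normalized coordinates back to the image plane (inverse of Eq. (4)).
    return np.stack([xu * sx * width + c[0], yu * height + c[1]], axis=1)
```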
Estimation of Initial Lens Distortion Model Parameters
Lens distortion is specified by the distortion coefficient ω, the distortion aspect ratio s_x, and the distortion center c. A naive solution leads to a non-convex objective function with several local minima. Therefore, we first seek an initial set of parameters close to the optimum, and then proceed using a convex optimization to refine the parameters. We propose the following initialization scheme for the individual parameters of the FOV lens model.
Distortion Aspect Ratio and Center
We initialize the distortion aspect ratio to s x = 1, and the distortion center to the intersection of two perpendicular curves with smallest coefficients in the highest order polynomial term. Such curves can be assumed to have the smallest curvature and are thus located near the distortion center.
To find the intersection of two perpendicular curves, we denote the coefficients of a horizontal curve by a 2 , a 1 , a 0 , and the coefficients of a vertical curve by b 2 , b 1 , b 0 . The position x of a curve intersection is then the solution to
a_2^2 b_2 x^4 + 2 a_1 a_2 b_2 x^3 + (2 a_0 a_2 b_2 + a_1^2 b_2 + a_2 b_1) x^2 + (2 a_0 a_1 b_2 + a_1 b_1 − 1) x + a_0^2 b_2 + a_0 b_1 + b_0 = 0.    (12)
The real roots of the quartic (12) can be found with the Jenkins-Traub Rpoly algorithm [45] or a specialized quartic solver [26].
The corresponding values f (x) are determined by inserting the roots back into Eq. (1).
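The intersection of a horizontal and a vertical parabola can, for example, be computed by handing the coefficients of Eq. (12) to a generic polynomial root finder; the sketch below uses NumPy's companion-matrix solver instead of the specialized quartic solvers cited above.

```python
import numpy as np

def intersect_parabolas(a, b):
    """Intersection of a horizontal curve y = a2*x^2 + a1*x + a0 and a vertical
    curve x = b2*y^2 + b1*y + b0 by solving the quartic from Eq. (12)."""
    a2, a1, a0 = a
    b2, b1, b0 = b
    coeffs = [
        a2**2 * b2,
        2 * a1 * a2 * b2,
        2 * a0 * a2 * b2 + a1**2 * b2 + a2 * b1,
        2 * a0 * a1 * b2 + a1 * b1 - 1,
        a0**2 * b2 + a0 * b1 + b0,
    ]
    roots = np.roots(coeffs)
    xs = roots[np.abs(roots.imag) < 1e-9].real        # keep real roots only
    # Insert the roots back into Eq. (1) to obtain the corresponding y values.
    return [(x, a2 * x**2 + a1 * x + a0) for x in xs]
```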
Distortion Coefficient

Estimation of the distortion coefficient ω from a set of distorted image points is not straightforward because the distortion function L(r) is non-linear. One way to overcome this problem is to linearize L(r) with Taylor polynomials, and to estimate ω with linear least squares.
To this end, we define the distortion factor
k := L(r)/r, for k ∈ R_{>0},    (13)

which maps undistorted image points {p_j}_{j=1}^{n} lying on the straight lines to distorted image points {q_j}_{j=1}^{n} lying on the parabolic curves. Both point sets are then related by

p k = q.    (14)

The distorted points q_j are straightforward to extract by evaluating the second-degree polynomial of the parabolic curves. To determine p_j, we define a line with the first and the last point in q_j, and select points from this line. Collecting these points in the vectors p ∈ R^{2n} and q ∈ R^{2n} yields an overdetermined system of 2n linear equations in one unknown. k̂ is then estimated via linear least squares as

k̂ = argmin_k ‖q − p k‖_2^2,    (15)

where the solution is found via the normal equations [33] as

k̂ := (p^⊤ q) / (p^⊤ p).    (16)

The points q_j, p_j refer to the columns of the two matrices Q^(i), P^(i) ∈ R^{2×n_i}, respectively, where n_i again denotes the number of points, which are used in the following step of the pipeline.
To determine ω from the relation k = L(r)/r, L(r) is expanded around ω_0 = 0 using Taylor series. More specifically, we use a second-order Taylor expansion to approximate

arctan(x) = x + O(x^2),    (17)

and a sixth-order Taylor expansion to approximate

tan(y) = y + y^3/3 + 2y^5/15 + O(y^6).    (18)

Let L(r) = (1/ω) arctan(x) with x = 2r tan(y), and y = ω/2. We substitute the Taylor polynomials from Eqs. (17) and (18), and x, y into Eq. (13) to obtain a biquadratic polynomial Q(ω) independent of r:

L(r)/r ≈ 1 + (1/12) ω^2 + (1/120) ω^4 =: Q(ω).    (19)

By equating the right-hand side of Eq. (19) to k,

Q(ω) = k,    (20)

we can estimate ω from the four roots of the resulting polynomial Q(ω). These roots can be found by substituting z = ω^2 into Eq. (19), solving the quadratic equation with respect to z, and substituting back to obtain ω. This eventually results in the four solutions ±√z_{1,2}.
The solution exists only if k ≥ 1, as complex solutions are not meaningful, and thus corresponds to the largest positive real root. We evaluated the accuracy of the approximation (19) with the results shown in Fig. 6. For large radii, the approximation significantly deviates from the exact solution. Consequently, this means that the selected points for the estimation must ideally be well distributed across the image. Otherwise, the lens distortion parameter will be underestimated. In practice, however, this constraint does not pose an issue due to the spatial distribution of the solar cells across the captured EL image.
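Solving Eq. (20) for ω then amounts to a quadratic in z = ω²; a small helper might look as follows, returning 0 when no meaningful real solution exists (k̂ < 1).

```python
import numpy as np

def omega_from_k(k_hat):
    """Initial distortion coefficient from the distortion factor via Eqs. (19)/(20).

    Substituting z = omega^2 turns Q(omega) = k into z^2/120 + z/12 + (1 - k) = 0."""
    if k_hat < 1.0:
        return 0.0                                  # no meaningful (real) solution
    z = np.roots([1.0 / 120.0, 1.0 / 12.0, 1.0 - k_hat])
    z = z[np.isreal(z)].real
    z = z[z >= 0.0]
    # The estimate corresponds to the largest positive real root.
    return float(np.sqrt(z.max())) if z.size else 0.0
```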
Minimization Criterion for the Refinement of Lens Distortion Parameters
The Levenberg-Marquardt algorithm [50,57] is used to refine the estimated lens distortion parameters θ. The objective function is

θ̂ := argmin_θ (1/2) \sum_{i=1}^{n} χ^2(P^(i), θ),    (21)

where P^(i) ∈ R^{2×m} is a matrix of m 2-D points of the i-th curve. The distortion error χ^2 quantifies the deviation of the points from the corresponding ideal straight line [20]. The undistorted image coordinates p_j := (x_j, y_j) ∈ Ω are computed as p_j = u(q_j) by applying the inverse lens distortion given in Eq. (10) to the points q_j of the i-th curve Q^(i). In a similar manner, the obtained points p_j form the columns of P^(i) ∈ R^{2×n_i}. Following Devernay and Faugeras, we iteratively optimize the set of lens parameters θ. In every step t, we refine these parameters and then compute the overall error ε_t := \sum_{i=1}^{n} χ^2(P^(i), θ) over all curve points. Afterwards, we undistort the curve points and continue the optimization until the relative change in error (ε_{t−1} − ε_t)/ε_t falls below the threshold ε = 10^{−6}.
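A compact sketch of the refinement is given below. It measures χ² as the squared orthogonal distances of the undistorted curve points to their total-least-squares line and hands the stacked residuals to SciPy's Levenberg-Marquardt solver. The staged optimization over parameter subsets and the iterative re-undistortion loop are omitted for brevity, and undistort_points refers to the hypothetical helper sketched earlier.

```python
import numpy as np
from scipy.optimize import least_squares

def line_residuals(pts):
    """Orthogonal distances of 2-D points to their total-least-squares line."""
    centered = pts - pts.mean(axis=0)
    # The normal of the best-fit line is the singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[-1]

def refine_distortion(curves, theta0, width, height):
    """Refine (omega, sx, cx, cy) by minimizing the straight-line deviation of
    undistorted curve points, in the spirit of Eq. (21)."""
    def residuals(theta):
        omega, sx, cx, cy = theta
        res = []
        for q in curves:                                 # q: (n_i, 2) distorted points
            p = undistort_points(q, omega, sx, (cx, cy), width, height)
            res.append(line_residuals(p))
        return np.concatenate(res)

    return least_squares(residuals, theta0, method="lm").x
```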
Minimizing the objective function (21) for all parameters simultaneously may cause the optimizer to be trapped in a local minimum. Hence, following Devernay and Faugeras [20], we optimize the parameters θ = (ω, s_x, c) in subsets starting with ω only. Afterwards, we additionally optimize the distortion center c. Finally, the parameters θ are jointly optimized.

Figure 7: Estimation of the solar module topology requires determining the number of subdivisions (i.e., rectangular segments) in a solar cell. Common configurations include no subdivisions at all (i.e., one segment) (a), three segments (b) and four segments (c). Notice how the arrangement of rectangular segments is symmetric and segment sizes increase monotonically towards the center, i.e., Δ_1 < · · · < Δ_n. In particular, shape symmetry can be observed not only along the vertical axis of the solar cell but also along the horizontal one as well.
Obtaining a Consistent Parabolic Curve Grid Model
The layout of the curves is constrained to a grid in order to eliminate outlier curves. Ideally, each horizontally oriented parabola should intersect each vertically oriented parabola exactly once. This intersection can be found using Eq. (12). Also, every parabolic curve should not intersect other parabolic curves of the same orientation within the image plane. This set of rules eliminates most of the outliers.

Robust Outlier Elimination

Locally Optimized RANdom SAmple Consensus (LO-RANSAC) [15] is used to remove outlier curves. In every LO-RANSAC iteration, the grid constraints are imposed by randomly selecting two horizontal and two vertical curves to build a minimal grid model. Inliers are all curves that (1) intersect the model grid lines of perpendicular orientation exactly once, (2) do not intersect the model grid lines of parallel orientation, and (3) whose MSE of the reprojected undistorted points is not larger than one pixel.
Remaining Curve Outliers

Halos around the solar modules and holding mounts (such as in Fig. 5) can generate additional curves outside of the cells. We apply Otsu's thresholding [67] on the contrast-normalized image and discard outer curves that generate additional grid rows or columns with an average intensity in the enclosed region below the automatically determined threshold.
Estimation of the Solar Module Topology
A topology constraint on the solar cell can be employed to eliminate remaining non-cell curves in the background of the PV module, and the number and layout of solar cells can be subsequently estimated. However, outliers prevent a direct estimation of the number of solar cell rows and columns in a PV module. Additionally, the number and orientation of segments dividing each solar cell are generally unknown. Given the aspect ratio of solar cells in the imaged PV module, the topology can be inferred from the distribution of parabolic curves. For instance, in PV modules with equally long horizontal and vertical cell boundary lines, the solar cells have a square (i.e., 1 : 1) aspect ratio.
The number of curves crossing each square image area of solar cell is constant. Clustering the distances between the curves allows to deduce the number of subdivisions within solar cells.
Estimation of the Solar Cell Subdivisions and the Number of Rows and Columns
The solar cells and their layout are inferred from the statistics of the line segment lengths in horizontal and vertical direction. We collect these lengths separately for each dimension and cluster them. DBSCAN clustering [22] is used to simultaneously estimate cluster membership and the number of clusters. Despite the presence of outlier curves, clusters are representative of the distribution of segment dimensions within a cell. For example, if a solar cell consists of three vertically arranged segments (as in Fig. 7b) with heights of 20 : 60 : 20 pixels, the two largest clusters will have the medians 60 and 20. With the assumption that the segment arrangement is typically symmetric, the number of segments is estimated as the number of clusters times two minus one. If clustering yields a single cluster, we assume that the solar cells consist of a single segment. Outlier curves or segments, respectively, are rejected by only considering the largest clusters, with the additional constraint that the sizes of the used clusters are proportional to each other, and that not more than two different segments (as in Fig. 7c) can be expected in a cell. The number of rows and columns of solar cells is determined by dividing the overall size of the curve grid by the estimated cell side lengths.
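As an illustration, the clustering of curve spacings could be sketched with scikit-learn's DBSCAN as follows. The eps value and the minimum cluster size are illustrative assumptions, and the additional checks from the text (proportional cluster sizes, at most two distinct segment sizes) are reduced to a simple cap.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def estimate_cell_subdivisions(segment_lengths, eps=5.0):
    """Estimate the number of segments per solar cell from curve spacings."""
    lengths = np.asarray(segment_lengths, dtype=float).reshape(-1, 1)
    labels = DBSCAN(eps=eps, min_samples=3).fit_predict(lengths)
    # Number of clusters, ignoring DBSCAN's noise label (-1).
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    if n_clusters <= 1:
        return 1                       # cells consist of a single segment
    # With a symmetric layout: number of segments = clusters * 2 - 1,
    # considering at most two distinct segment sizes per cell.
    return 2 * min(n_clusters, 2) - 1
```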
Curve Grid Outlier Elimination
The estimated proportions are used to generate a synthetic planar grid that is registered against the curve grid intersections. Specifically, we use the rigid point set registration of Coherent Point Drift (CPD) [64] because it is deterministic and allows us to account for the proportion of outliers using a parameter 0 ≤ w ≤ 1. We can immediately estimate w as the fraction of points in the synthetic planar grid and the total number of intersections in the curve grid.
To ensure CPD convergence, initial positions of the synthetic planar grid should be sufficiently close to the curve grid intersections. We therefore estimate the translation and rotation of the planar grid to closely pre-align it with the grid we are registering against. The initial translation can be estimated as the curve grid intersection point closest to the image plane origin. The 2-D in-plane rotation is estimated from the average differences of two consecutive intersection points along each curve grid row and column. This results in two 2-D vectors which are approximately orthogonal to each other. The 2-D vector with the larger absolute angle is rotated by 90°such that both vectors become roughly parallel. The estimated rotation is finally obtained as the average angle of both vectors.
Undistortion and Rectification
The PV module configuration is used to undistort the whole image using Eq. (10). After eliminating the lens distortion, we use Direct Linear Transform (DLT) [38] to estimate the planar 2-D homography using the four corners of the curve grid with respect to the corners of the synthetic planar grid. The homography is used to remove perspective distortion from the undistorted curve grid.
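For illustration, the homography step can be expressed with OpenCV; the function below is a sketch whose inputs (curve grid corners, synthetic planar grid corners, the undistorted image, and the output size) are assumed to be available from the preceding steps.

```python
import cv2
import numpy as np

def rectify_module(undistorted_image, grid_corners, planar_corners, out_size):
    """Remove perspective distortion by warping the undistorted EL image so that the
    four curve grid corners map onto the synthetic planar grid corners.

    grid_corners, planar_corners: (4, 2) arrays of corresponding corner points.
    out_size: (width, height) of the rectified module image."""
    H, _ = cv2.findHomography(np.asarray(grid_corners, np.float32),
                              np.asarray(planar_corners, np.float32), method=0)
    return cv2.warpPerspective(undistorted_image, H, out_size), H
```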
The intersections of the perspective corrected curve grid may not align exactly with respect to the synthetic planar grid because individual solar cells are not always accurately placed in a perfect grid but rather with a margin of error. The remaining misalignment is therefore corrected via affine Moving Least Squares (MLS) [81], which warps the image using the planar grid intersections as control points distorted using the estimated lens parameters, and curve grid intersections are used as their target positions.
Estimation of the Active Solar Cell Area
We use solar cell images extracted from individual PV modules to generate a mask that represents the active solar cell area. Such masks allow to exclude the background and the busbars of a solar cell (see Fig. 8). In particular, active cell area masks are useful for detection of cell cracks since they allow to mask out the busbars, which can be incorrectly identified as cell cracks due to high similarity of their appearance [87,89].
Estimation of solar cell masks is related to the image labeling problem, where the goal is to classify every pixel into several predefined classes (in our case, the background and the active cell area). Existing approaches solve this problem using probabilistic graphical models, such as a Conditional Random Field (CRF) which learns the mapping in a supervised manner through contextual information [40]. However, since the estimated curve grid already provides a global context, we tackle the pixelwise classification as a combination of adaptive thresholding and prior knowledge with regard to the straight shape of solar cells. Compared to CRFs, this approach does not require a training step and is easy to implement.
To this end, we use solar cells extracted from a PV module to compute a mean solar cell (see Figs. 8a to 8b). Since intensities within a mean solar cell image can exhibit a large range, we apply locally adaptive thresholding [68] on 25 × 25 pixels patches using their mean intensity, followed by a 15 × 15 morphological opening and flood filling to close any remaining holes. This leads to an initial binary mask.
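A minimal sketch of the initial mask computation is shown below. OpenCV's mean-based adaptive threshold is used as a stand-in for the locally adaptive thresholding of [68], with the 25 × 25 window and 15 × 15 opening from the text; hole filling is done with SciPy instead of explicit flood filling.

```python
import cv2
import numpy as np
from scipy.ndimage import binary_fill_holes

def initial_cell_mask(mean_cell_u8):
    """Initial active-area mask from the mean solar cell image."""
    # Locally adaptive (mean) thresholding on 25x25 neighborhoods.
    mask = cv2.adaptiveThreshold(mean_cell_u8, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 25, 0)
    # 15x15 morphological opening removes small spurious foreground regions.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 15))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # Fill any remaining holes (e.g., at the busbars) inside the cell area.
    return binary_fill_holes(mask > 0).astype(np.uint8) * 255
```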
Ragged edges at the contour are removed using vertical and horizontal cell profiles (Fig. 8b). The profiles are computed as pixelwise median of the initial mask along each image row or column, respectively. We combine the backprojection of these profiles with the convex hull of the binary mask determined with the method of Barber et al. [6] to account for cut-off corners using bitwise AND (cf., Fig. 8c). To further exclude repetitive patterns in the EL image of a solar cell, e.g., due to low passivation efficiency in the contact region (see Fig. 8d), we combine the initial binary mask and the augmented mask via bitwise XOR.
We note that solar cells are usually symmetric about both axes. Thus, the active solar cell area mask estimation can be restricted to only one quadrant of the average solar cell image to enforce mask symmetry. Additionally, the convex hull of the solar cell and its extra geometry can be approximated by polygons [1] for a more compact representation.
Parameter Tuning
The proposed solar cell segmentation pipeline relies on a set of hyperparameters that directly affect the segmentation robustness and accuracy. Table 1 provides an overview of all parameters with their values used in this work.
Manual Search
Since the parameters of the proposed segmentation are intuitive and easily interpretable, it is straightforward to select them based on the setup used for EL image acquisition.
Main influence factors that must be considered when choosing the parameters are image resolution and physical properties of the camera lens.
The provided parameter values were found to work particularly well for high resolution EL images and standard camera lenses, as in our dataset (cf., Section 4.1). For low resolution EL images, however, the number of pyramid octaves and sublevels will need to be increased to avoid missing important image details. The tensor voting proximity, on the contrary, will need to be lowered, since the width of ridge edges in low resolution images tends to be proportional to the image resolution. This immediately affects the size of the 1-D sampling window for determining the Gaussian-based subpixel position of curve points.
Curve extraction parameters correlate with the fieldof-view of the EL camera lens. In particular for wide angle lenses, the merge angle ϑ must be increased.
Parabolic curve fit error ρ balances between robustness and accuracy of the segmentation result. The window size for locally adaptive thresholding used for estimation of solar cell masks correlates both with the resolution of EL images, but also with the amount of noise and texture variety in solar cells, e.g., due to cell cracks.
Automatic Search
The parameters can also be automatically optimized in an efficient manner using the random search [74,83,82,58,85,7] or Bayesian optimization [49,63,8,9,84,3] classes of algorithms. Since this step involves supervision, pixelwise PV module annotations are needed. In certain cases, however, it may not be possible to provide such annotations because individual defective PV cells can be hard to delineate, e.g., when they appear completely dark. Also, the active solar cell area of defective cells is not always well-defined. Therefore, we refrained from automatically optimizing the hyperparameters in this work.
Evaluation
We evaluate the robustness and accuracy of our approach against manually annotated ground truth masks. Further, we compare the proposed approach against the method by Sovetkin and Steland [86] on simplified masks, provide qualitative results and runtimes, and discuss limitations.
Dataset
We use a dataset consisting of 44 unique PV modules with various degrees of defects to manually select the parameters for the segmentation pipeline and validate the results. These images served as a reference during the development of the proposed method. The PV modules were captured in a testing laboratory setting at different orientations and using varying camera settings, such as exposure time. Some of the EL images were post-processed by cropping, scaling, or rotation. The dataset consists of 26 monocrystalline and 18 polycrystalline PV modules. In total, these 44 solar modules consist of 2,624 solar cells, out of which 715 are definitely defective with defects ranging from microcracks to completely disconnected cells and mechanically induced cracks (e.g., electrically insulated or conducting cracks, or cell cracks due to soldering [88]). 106 solar cells exhibit smaller defects that are not with certainty identifiable as completely defective, and 295 solar cells feature miscellaneous surface abnormalities that are no defects. The remaining 1,508 solar cells are categorized as functional without any perceivable surface abnormalities. The solar cells in the imaged PV modules have a square aspect ratio (i.e., are quadratic).
The average resolution of the EL images is 2,779.63× 2,087.35 pixels with a standard deviation of image width and height of 576.42 and 198.30 pixels, respectively. The median resolution is 3,152 × 2,046 pixels.
Additional eight test EL images (i.e., about 15 % of the dataset) are used for the evaluation. Four modules are monocrystalline and the remaining four are polycrystalline. Their ground truth segmentation masks consist of hand-labeled solar cell segments. The ground truth additionally specifies both the rows and columns of the solar cells, and their subdivisions. These images show various PV modules with a total of 408 solar cells. The resolution of the test EL images varies around 2,649.50 ± 643.20 × 2,074 ± 339.12 with a median image resolution of 2,581.50 × 2,046.
Three out of four monocrystalline modules consist of 4 × 9 cells and the remaining monocrystalline module consists of 6 × 10 cells. All of their cells are subdivided by busbars into 3 × 1 segments.
The polycrystalline modules consist of 6 × 10 solar cells each. In two of the modules, every cell is subdivided into 3 × 1 segments. The cells of the other two modules are subdivided into 4 × 1 segments.
Evaluation Metrics
We use two different metrics, pixelwise scores and the weighted Jaccard index to evaluate both the robustness and the accuracy of the proposed method and to compare our method against related work. In the latter case, we additionally use a third metric, the Root Mean Square Error (RMSE), to compute the segmentation error on simplified masks.
Root Mean Square Error
The first performance metric is the RMSE given in pixels between the corners of the quadrilateral mask computed from the ground truth annotations and the corners estimated by the individual modalities. The metric provides a summary of the method's accuracy in absolute terms across all experiments.
Pixelwise Classification
The second set of performance metrics are precision, recall, and the F 1 score [76]. These metrics are computed by considering cell segmentation as a multiclass pixelwise classification into background and active area of individual solar cells. A typical 60 cell PV module will therefore contain up to 61 class labels. A correctly segmented active area pixel is a true positive, the remaining quantities are defined accordingly. Pixelwise scores are computed globally with respect to all the pixels. Therefore, the differences between the individual results for these scores are naturally smaller than for metrics that are computed with respect to individual solar cells, such as the Jaccard index.
Weighted Jaccard Index
The third performance metric is the weighted Jaccard index [14,43], a variant of the metric widely known as Intersection-over-Union (IoU). This metric extends the common Jaccard index by an importance weighting of the input pixels. As the compared masks are not strictly binary either due to antialiasing or interpolation during mask construction, we define importance of pixels by their intensity. Given two non-binary masks A and B, the weighted Jaccard similarity is
J_w = (Σ_{u∈Ω} min{A(u), B(u)}) / (Σ_{u∈Ω} max{A(u), B(u)}).    (22)
The performance metric is computed on pairs of segmented cells and ground truth masks. A ground truth cell mask is matched to the segmented cell with the largest intersection area, thus taking structural coherence into account. We additionally compute the Jaccard index of the background, which corresponds to the accuracy of the method to segment the whole solar module. Solar cell misalignment or missed cells will therefore penalize the segmentation accuracy to a high degree. Therefore, the solar module Jaccard index provides a summary of how well the segmentation performs per EL image.
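The weighted Jaccard index of Eq. (22) is straightforward to compute; a reference implementation for a single pair of matched masks might look as follows.

```python
import numpy as np

def weighted_jaccard(a, b):
    """Weighted Jaccard similarity (Eq. (22)) between two non-binary masks."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    denom = np.maximum(a, b).sum()
    return np.minimum(a, b).sum() / denom if denom > 0 else 1.0
```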
Quantitative Results
We evaluate the segmentation accuracy and the robustness of our approach using a fixed set of parameters as specified in Table 1 on EL images of PV modules acquired in a material testing laboratory.
Comparison to Related Work with Simplified Cell Masks
The method by Sovetkin and Steland focuses on the estimation of the perspective transformation of the solar module and the extraction of solar cells. Radial distortion is corrected with a lens model from an external checkerboard calibration. The grid structure is fitted using a priori knowledge of the module topology. For this reason, we refer to the method as Perspective-corrected Grid Alignment (PGA). The method makes no specific proposal for mask generation and therefore yields rectangular solar cells.
In order to perform a comparison, the exact masks (cf., Fig. 9a) are restricted to quadrilateral shapes (cf., Fig. 9b). The quadrilateral mask is computed as the minimum circumscribing polygon with four sides, i.e., a quadrilateral, using the approach of Aggarwal et al. [1]. The quadrilateral exactly circumscribes the convex hull of the solar cell mask with all the quadrilateral sides flush to the convex hull.
PGA assumes that radial distortion is corrected by an external checkerboard calibration. This can be a limiting factor in practice. Hence, the comparison below considers both practical situations by running PGA on distorted images and on undistorted images using the distortion correction of this work.

Root Mean Square Error

Table 2 provides the RMSE in pixels between the corners of the quadrilaterals computed by the respective modality and the quadrilateral mask estimated from the ground truth. The metric is provided for monocrystalline and polycrystalline solar wafers separately, and for both types combined. In all cases, the proposed approach outperforms both PGA variants. We particularly notice that PGA greatly benefits from lens distortion estimation. This underlines our observation that the latter is essential for highly accurate segmentation.
Pixelwise Classification

Pixelwise scores for the simplified masks of both methods are given in Table 3. For monocrystalline PV modules, PGA generally achieves higher scores. However, the highest scores are achieved only for images for which the lens distortion has been removed. The proposed method fails to segment a row of cells in a solar module, resulting in a lower recall. However, for polycrystalline PV modules, the proposed method consistently outperforms PGA. In the overall score, the proposed method also outperforms the best-case evaluation for PGA on undistorted images. However, PGA has the highest recall, which is due to the lower number of parameters of PGA.
Weighted Jaccard Index

The Jaccard scores summarized as boxplots in Fig. 10 support the pixelwise classification scores, showing that the proposed method is more accurate than PGA. The latter, however, is slightly more robust. For complete modules, the considerable spread of PGA is partially attributed to one major outlier. Overall, the proposed segmentation pipeline is highly accurate. Particularly once a cell is detected, the cell outline is accurately and robustly segmented.
Ablation Study
We ablate the lens distortion parameters and the post hoc application of affine MLS to investigate their effect on the accuracy and the success rate of the segmentation process. The ablation is performed both on original (i.e., distorted) EL images and undistorted ones.
Distorted vs. Undistorted EL Images

For the ablation study, we consider two main cases. In the undistorted case, both reference and predicted masks are unwarped using estimated lens distortion parameters. Then, quadrilaterals are fitted to individual cell masks to allow a comparison against PGA which always yields such quadrilateral cell masks. For a fair comparison, PGA is also applied to undistorted images.
In the distorted case, however, the comparison is performed in the original image space. Since the proposed method yields a curved grid after applying the inverse of lens distortion, we synthesize a regular grid from backwarped cell masks. Specifically, we extract the contours of estimated solar cell masks to obtain the coordinates of the quadrilateral in the unwarped image, and then apply the inverse of the estimated geometric transformations to the rectangle coordinates. Afterwards, we fit lines to each side of the backwarped quadrilaterals along grid rows and columns. From their intersections we finally obtain the corner coordinates of each solar cell in the distorted image, which we can use for comparison against distorted PGA results.

Figure 9: Example of an exact mask (a) of solar cells estimated using the proposed approach and a quadrilateral mask (b) determined from the exact mask. The latter is used for comparison against the method of Sovetkin and Steland [86]. Both masks are shown as color overlays. Different colors denote different instances of solar cells.

Parameterization

First, we reduce the lens distortion model to a single radial distortion parameter ω and assume both a square aspect ratio (i.e., s_x = 1) and the center of distortion to be located in the image center. During optimization, these two parameters are kept constant. In this experiment, we also do not correct the curve grid using affine MLS. The comparison against PGA shows that such a simplistic lens model is still more accurate than PGA both in the distorted and undistorted cases (cf., Table 2). However, while the precision is high, the recall and therefore the F_1 score drops considerably (cf., Table 3). The reason for this is that such a lens parametrization is too rigid. As a consequence, this weakens the grid detection: correctly detected curves are erroneously discarded because of inaccuracies of the lens model with only a single parameter ω instead of four parameters (ω, s_x, c).
For the next comparison, we increase the number of degrees of freedom. Both the distortion aspect ratio s x and the center of distortion c are refined in addition to the radial distortion parameter ω. Curve grid correction via affine MLS is again omitted. This parametrization achieves much improved RMSE and segmentation success rates.
Finally, we use the full parametrization, i.e., we refine all lens distortion parameters (ω, s_x, c) and apply post hoc correction via affine MLS. This model is denoted as w/ MLS.
Discussion
We summarize the results of the ablation study in Tables 2 and 3. Here, w/ MLS denotes the full model that includes the correction step via affine MLS. The full model with post hoc affine MLS grid correction performs in many instances best. However, applying MLS is not always beneficial. Particularly, for monocrystalline PV modules, grid correction does not always improve the results.
We conclude that the proposed joint lens model estimation with full parametrization and grid detection is essential for robustness and accuracy of the segmentation. Since the subsequent grid correction using affine MLS only marginally improves the results, its application can be seen as optional.
Segmentation Performance with Exact Cell Masks
To allow an exact comparison of the segmentation results to the ground truth, we inverse-warp the estimated solar cell masks back to the original image space by using the determined perspective projection and lens distortion parameters. This way, the estimated solar module masks will as exactly as possible overlay the hand-labeled ground truth masks.

Pixelwise Classification

Table 4 summarizes the pixelwise classification scores for the exact masks estimated using the proposed method. The method is more robust on polycrystalline PV modules than on monocrystalline modules. However, for both module types, the method achieves a very high overall accuracy beyond 97 % for all metrics. Investigation of failure cases for monocrystalline modules reveals difficulties on cells where large gaps coincide with cell cracks and ragged edges.
Weighted Jaccard Index

Jaccard scores for exact masks are given in Fig. 11. The scores confirm the results of the pixelwise metrics. Notably, the interquartile range (IQR) of individual cells has a very small spread, which indicates a highly consistent segmentation. The IQR of whole modules is slightly larger. This is, however, not surprising since the boxplots summarize the joint segmentation scores across multiple modules.

Qualitative Results

Figure 12 shows the qualitative results of the segmentation pipeline on four test images. The two results in the left column are computed on monocrystalline modules, the two results in the right column on polycrystalline modules. The estimated solar module curve grids are highly accurate. Even in presence of complex texture intrinsic to the material, the accuracy of the predicted solar module curve grid is not affected.
Runtime

Figure 13 breaks down the average time taken by the individual steps of the segmentation pipeline. Figure 14 summarizes the contribution of individual pipeline steps to the overall processing time for all 44 images. The timings were obtained on a consumer system with an Intel i7-3770K CPU clocked at 3.50 GHz and 32 GB of RAM. The first three stages of the segmentation pipeline are implemented in C++ whereas the last stage (except for MLS image deformation) is implemented in Python. For this benchmark, EL images were processed sequentially running only on the CPU. Note, however, that the implementation was not optimized in terms of the runtime and only parts of the pipeline utilize all available CPU cores. To this end, additional speedup can be achieved by running parts of the pipeline in parallel or even on a GPU.
On average, it takes 1 min and 6 s to segment all solar cells in a high resolution EL image (cf., Fig. 14). Preprocessing is computationally most expensive, curve and cell extraction are on average cheapest. The standard deviation of the model estimation step is highest (see Fig. 13), which is mostly due to dependency upon the total number of ridge edges and the number of resulting curves combined with the probabilistic nature of LO-RANSAC.
Interestingly, processing EL images of monocrystalline solar modules takes slightly longer on average than processing polycrystalline solar modules. This is due to large gaps between ridges caused by cut-off corners that produce many disconnected curve segments which must be merged first. Conversely, curve segments in polycrystalline solar modules are closer, which makes it more likely that several curve segments are combined early on. An average processing time of 1 min and 6 s is substantially faster than manual processing, which takes at least several minutes. For on-site EL measurements with in-situ imaging of PV modules, the processing times must be further optimized, likely by at least a factor of ten. However, in other imaging environments, for example material testing laboratories, the runtime is fully sufficient, given that the handling of each module for EL measurements and the performance evaluation impose much more severe scheduling bottlenecks.
Limitations
Mounts that hold PV modules may cause spurious ridge edges. Early stages of the segmentation focus on ridges without analyzing the whole image content, which may occasionally lead to spurious edges and eventually to an incorrect segmentation. Therefore, automatic image cropping prior to PV module segmentation could help reduce segmentation failures due to visible mounts.
While the algorithm is able to process disconnected (dark) cells, rows or columns with more than 50 % of disconnected cells pose a difficulty in correctly detecting the grid due to insufficient edge information. However, we observed that also human experts have problems to determine the contours under such circumstances.
We also observed that smooth edges can result in segmentation failures. This is because the stickness of smooth edges is weak and may completely fade away after non-maximum suppression. This problem is also related to situations where the inter-cell borders are exceptionally wide. In such cases, it is necessary to adjust the parameters of the ridgeness filter and the proximity of the tensor voting.
Conclusions
In this work, we presented a fully automatic segmentation method for precise extraction of solar cells from high resolution EL images. The proposed segmentation is robust to underexposure, and works robustly in presence of severe defects on solar cells. This can be attributed to the proposed preprocessing and the ridgeness filtering, coupled with tensor voting to robustly determine the inter-cell borders and busbars. The segmentation is highly accurate, which allows to use its output for further inspection tasks, such as automatic classification of defective solar cells and the prediction of power loss.
We evaluated the segmentation with the Jaccard index on eight different PV modules consisting of 408 hand-labeled solar cells. The proposed approach is able to segment solar cells with an accuracy of 97.80 %. With respect to classification performance, the segmentation pipeline reaches an F 1 score of 97.62 %.
Additionally, we compared the proposed method against the PV module detection approach by Sovetkin and Steland [86], which is slightly more robust but less accurate than our method. The comparison also shows that our joint lens distortion estimation and grid detection approach achieves a higher accuracy than a method that decouples both steps.
Beyond the proposed applications, the method can serve as a starting point for bootstrapping deep learning architectures that could be trained end-to-end to directly segment the solar cells. Future work may include investigating the required adaptations and geometric relaxations for using the method not only in a manufacturing setting but also in the field. Such relaxations could be achieved, for instance, by performing the grid detection end-to-end using a CNN.
Given that grid structure is pervasive in many different problem domains, the proposed joint lens estimation and grid identification may also find other application fields, for example the detection of PV modules in aerial imagery of solar power plants, building facade segmentation, and checkerboard pattern detection for camera calibration.
Figure 2: The proposed PV module segmentation pipeline consists of four stages. In the preprocessing stage (a), local ridge features are extracted. In the curve extraction stage (b), candidate parabolic curves are determined from ridges. In the model estimation stage (c), a coherent grid and the lens distortion are jointly estimated. In the cell extraction stage (d) the cell topology is determined and the cells are extracted.
Figure 3: Extraction of ridge edges from stickness at subpixel accuracy. (a) shows a stickness patch with its initial centerline ( ) at discrete coordinates obtained by skeletonization. The refined ridge centerline at subpixel accuracy is estimated by fitting a Gaussian function ( ) to the cross-section profile of the ridge edge in (b) to equidistantly sampled stickness values within a predefined sampling window ( ).
Figure 5: Visualization of the preprocessing, curve extraction, and model estimation stages for the PV module from Fig. 1. (c) Ridgeness image R(u) from the filter responses at multiple scales. (d) Stickness of ridgeness contextually enhanced using tensor voting. (e) Extracted line segments grouped by their curvature. (f) Horizontal ( ) and vertical ( ) parabolic curves filtered using the intersection constraint.
Figure 6: Approximation of the distortion coefficient ω using Eq. (19) compared to the exact solution with respect to varying radii r. For large radii outside the range of normalized coordinates (i.e., the radius of the half-unit circle r > 1/√2), the estimate is not accurate. This implies that the ideal sampled points must be both at some distance from the image border and also from the distortion center. As a side note, the estimation error becomes unacceptable for wide lenses where ω > π/4. However, the EL images in this work are well below this threshold.
Figure 8: Intermediate steps of the solar mask estimation process.
Figure 10: Boxplots of Jaccard scores for the three evaluated modalities. The Jaccard scores are computed against hand-labeled ground truth masks. In (a), the scores are computed for the individual solar cells. In (b), the scores are evaluated against the whole solar modules. The two left-most groups in each figure correspond to boxplots with respect to different solar wafers, whereas the right-most group summarizes the performance of both solar wafer types combined.

Figure 11: Boxplots of Jaccard scores for exact masks.
Figure 12: Qualitative segmentation results of four test images depicting the estimated curve grid superimposed over the contrast-normalized input EL image. For visualization purposes, the original EL images were cropped.

Figure 13: Average time taken by individual steps of the segmentation pipeline, in seconds. The error bars denote the upper range of the standard deviation.
Figure 14 :
14Relative contribution of the average processing time for individual pipeline steps to the overall runtime with respect to different solar module types and both types combined.
Table 1: Overview of segmentation pipeline parameters and their values used in this work (columns: §, Symbol, Description).
Table 2: Root Mean Square Error (RMSE), in pixels, of the distance between the corners of the quadrilateral mask determined from the ground truth annotations and the corners determined by the respective method in all eight test images. Bold face denotes smallest error.

                       Distorted                               Undistorted
                       PGA     Proposed                        PGA     Proposed
                               ω       (ω, s_x, c)  w/ MLS             ω       (ω, s_x, c)  w/ MLS
    Monocrystalline    6.09    2.71    2.61         2.64       4.00    2.88    2.68         2.61
    Polycrystalline    5.32    2.56    2.52         2.45       2.76    2.33    1.91         1.77
    Overall            5.65    2.62    2.56         2.53       3.32    2.55    2.26         2.15
Table 3: Pixelwise classification scores for quadrilateral masks estimated using PGA and the proposed approach. Bold face denotes the best performing method.

(a) Monocrystalline

                       Distorted                                        Undistorted
                       PGA       Proposed                               PGA       Proposed
    Metric                       ω         (ω, s_x, c)  w/ MLS                    ω         (ω, s_x, c)  w/ MLS
    Precision          97.55 %   97.77 %   99.18 %      98.98 %         98.43 %   98.34 %   99.67 %      99.53 %
    Recall             98.37 %   82.08 %   97.53 %      97.71 %         98.87 %   81.56 %   96.94 %      97.14 %
    F1 score           97.95 %   86.57 %   98.24 %      98.24 %         98.65 %   86.46 %   98.18 %      98.21 %
    Accuracy           98.19 %   88.43 %   98.34 %      98.32 %         98.82 %   88.55 %   98.33 %      98.38 %

(b) Polycrystalline

                       Distorted                                        Undistorted
                       PGA       Proposed                               PGA       Proposed
    Metric                       ω         (ω, s_x, c)  w/ MLS                    ω         (ω, s_x, c)  w/ MLS
    Precision          97.22 %   97.70 %   98.82 %      98.77 %         98.36 %   97.40 %   99.52 %      99.59 %
    Recall             97.70 %   88.35 %   99.29 %      99.36 %         99.29 %   87.08 %   99.17 %      99.35 %
    F1 score           97.45 %   91.32 %   99.05 %      99.06 %         98.82 %   90.42 %   99.35 %      99.47 %
    Accuracy           97.13 %   92.44 %   98.89 %      98.90 %         98.66 %   92.06 %   99.25 %      99.39 %

(c) Overall

                       Distorted                                        Undistorted
                       PGA       Proposed                               PGA       Proposed
    Metric                       ω         (ω, s_x, c)  w/ MLS                    ω         (ω, s_x, c)  w/ MLS
    Precision          97.37 %   97.30 %   99.00 %      98.88 %         98.38 %   97.09 %   99.58 %      99.56 %
    Recall             97.78 %   82.36 %   98.27 %      98.39 %         99.01 %   81.15 %   97.97 %      98.18 %
    F1 score           97.57 %   87.45 %   98.60 %      98.60 %         98.69 %   86.60 %   98.73 %      98.83 %
    Accuracy           97.74 %   90.15 %   98.58 %      98.57 %         98.75 %   90.06 %   98.72 %      98.81 %
Table 4: Pixelwise classification scores for exact masks estimated using the proposed approach.

    Metric       Monocrystalline   Polycrystalline   Overall
    Precision    97.47 %           97.49 %           97.53 %
    Recall       96.93 %           98.90 %           97.77 %
    F1 score     97.09 %           98.19 %           97.62 %
    Accuracy     97.67 %           97.97 %           97.80 %
[Figure 14 data: stacked-bar chart of the relative runtime share of the four pipeline stages (Preprocessing, Curve Extraction, Model Estimation, Cell Extraction) for monocrystalline, polycrystalline, and both module types combined; each bar sums to 100 %.]
Acknowledgements This work was funded by Energy Campus Nuremberg (EnCN) and partially supported by the Research Training Group 1773 "Heterogeneous Image Systems" funded by the German Research Foundation (DFG).
| []
|
[
"SEMI-SPARSITY ON PIECEWISE CONSTANT FUNCTION SPACES FOR TRIANGULAR MESH DENOISING",
"SEMI-SPARSITY ON PIECEWISE CONSTANT FUNCTION SPACES FOR TRIANGULAR MESH DENOISING"
]
| [
"Junqing Huang ",
"Haihui Wang ",
"Michael Ruzhansky "
]
| []
| []
| We present a semi-sparsity model for 3D triangular mesh denoising, motivated by the success of semi-sparsity regularization in image processing applications. We demonstrate that such a regularization model can also be applied to graphics processing, where it yields similar simultaneous-fitting results, preserving sharp features and piecewise smoothing surfaces. Specifically, we first describe the piecewise constant function spaces associated with the differential operators on triangular meshes and then show how to extend the semi-sparsity model to mesh denoising. To verify its effectiveness, we present an efficient iterative algorithm based on the alternating direction method of multipliers (ADMM) and report experimental results on synthetic and real scanning data against the state of the art, both visually and quantitatively.
"https://export.arxiv.org/pdf/2305.04834v1.pdf"
]
| 258,557,493 | 2305.04834 | 810beef2ab19d93f3e6245385d73d54462f90f9b |
SEMI-SPARSITY ON PIECEWISE CONSTANT FUNCTION SPACES FOR TRIANGULAR MESH DENOISING
Junqing Huang
Haihui Wang
Michael Ruzhansky
SEMI-SPARSITY ON PIECEWISE CONSTANT FUNCTION SPACES FOR TRIANGULAR MESH DENOISING
We present a semi-sparsity model for 3D triangular mesh denoising, which is motivated by the success of semi-sparsity regularization in image processing applications. We demonstrate that such a regularization model can be also applied for graphic processing and gives rise to the similar simultaneous-fitting results in preserving sharp features and piece-wise smoothing surfaces. Specifically, we first describe the piecewise constant function spaces associated with the differential operators on triangular meshes and then show how to extend the semi-sparsity model to meshes denoising. To verify its effectiveness, we present an efficient iterative algorithm based on alternating direction method of multipliers (ADMM) technique and show the experimental results on synthetic and real scanning data against the state-of-the-arts both visually and quantitatively.
INTRODUCTION
Mesh denoising is a long-standing fundamental research topic in geometry processing. With the rapid development of 3D scanning devices, it has become increasingly common to acquire and reconstruct meshes from the real world automatically. In many practical scenarios, it is inevitable for the acquired meshes to be contaminated by various kinds of noise because of local measurement errors when scanning complex geometries and computational errors in the reconstruction algorithms. As a result, it is highly desirable to develop an effective denoising method that recovers high-quality geometric structures from the corrupted acquired data. This is a challenging problem because geometric features and oscillating noise share similar high-frequency characteristics.
In the literature, many techniques have been investigated to remove noise while preserving geometric features, including filtering methods [3,13,4], variational methods [12,8], and higher-order variants [6,7]. For example, the bilateral filter [3] and the guided filter [13] have been widely used in practical geometry processing due to their simplicity and ease of implementation, but they may cause over-smoothing around sharp edges, which limits their ability to produce high-quality results. Variational methods have attracted great attention for mesh denoising, as they can preserve sharp features well while suppressing noise significantly. Unfortunately, they may lead to staircase artifacts on polynomial-smoothing surfaces. Recently, higher-order variational extensions such as total generalized variation (TGV) have been proposed to remedy these staircase artifacts, but they may still blur geometric features in the presence of strong noise and complex geometric details.
In general, existing methods have made great progress in removing weak or small-scale edges while retaining strong or large-scale edges. However, there is still much room for improvement in removing local noise while preserving locally complex geometric features. In this paper, we present a higher-order model for 3D triangular mesh denoising based on semi-sparsity regularization, which preserves sharp features and piecewise smoothing surfaces. Specifically, we first describe the piecewise constant function spaces associated with the differential operators on triangular meshes and then show how to extend the semi-sparsity model to mesh denoising. To verify its effectiveness, we present an efficient iterative algorithm based on the alternating direction method of multipliers (ADMM). The proposed method is compared on synthetic and real scanning data against state-of-the-art methods, both visually and quantitatively.
PRELIMINARIES
In this section, we briefly introduce the notation and definitions of piecewise constant function spaces used in the proposed semi-sparsity mesh denoising model. The reader is also referred to [1,2,6,7,9,12] for more details.
2.1. Notation. Let M be a non-degenerate triangulated surface with vertices, edges, and triangles denoted as v_i (i = 0, 1, ..., I−1), e_j (j = 0, 1, ..., E−1) and τ_k (k = 0, 1, ..., T−1), respectively. We use the following incidence notation: v ≺ e means that v is an endpoint of an edge e; e ≺ τ means that e is an edge of a triangle τ; and v ≺ τ means that v is a vertex of a triangle τ.
We further introduce the relative orientation of an edge e to a triangle τ, denoted by sgn(e, τ), as follows. We assume that all triangles carry the counter-clockwise orientation and that all edges have randomly chosen but fixed orientations. If an edge e and a triangle τ have the same orientation, then sgn(e, τ) = 1; otherwise, sgn(e, τ) = −1.

2.2. Piece-wise Linear Function Spaces and Differential Operators. Piecewise constant function spaces have achieved great success in computer graphics applications [6,7,12], because these spaces, together with the associated differential operators, form a basic and easily handled setting for processing graphic data such as triangular meshes. We introduce the spaces and show how to derive the associated differential operators on triangulated meshes. We define the space U = R^T, which is isomorphic to the piecewise constant function space on a triangulated mesh M. Let u = (u_0, ..., u_{T−1}) ∈ U and let u_τ denote the value restricted to triangle τ, sometimes written as u|_τ for convenience. For example, u_τ can be the outward-facing normal vector restricted to triangle τ, which, as shown in Fig. 1(a), is perpendicular to the plane defined by triangle τ. According to [6,7], the jump function of u over an edge e is defined as
\[
[u]_e = \begin{cases} \sum_{e \prec \tau} u|_{\tau}\,\operatorname{sgn}(e,\tau), & e \not\subset \partial M, \\ 0, & e \subset \partial M. \end{cases} \tag{2.1}
\]
Here, the jump function [u]_e is illustrated in Fig. 1(b).
The space U has the standard (area-weighted) inner product and norm,
\[
\langle u^1, u^2 \rangle_U = \sum_{\tau} u^1_{\tau}\, u^2_{\tau}\, s_{\tau}, \qquad \|u\|_U = \sqrt{\langle u, u \rangle_U}, \tag{2.2}
\]
where u^1, u^2, u ∈ U, and s_τ is the area of triangle τ. Let V = R^E be the edge function space and let v_e ∈ V (or v|_e) denote the value restricted to edge e. It is then natural to define the first-order differential operator D : U → V on M as
\[
Du|_e = [u]_e, \quad \forall e, \ \text{for } u \in U. \tag{2.3}
\]
By definition, the space V is also equipped with an inner product and norm:
\[
\langle v^1, v^2 \rangle_V = \sum_{e} v^1_{e}\, v^2_{e}\, \operatorname{len}(e), \qquad \|v\|_V = \sqrt{\langle v, v \rangle_V}, \tag{2.4}
\]
where v^1, v^2, v ∈ V, and len(e) is the length of the edge e.
As explained in [6,7], the adjoint operator of D, that is, D^{*} : V → U, is given by
\[
(D^{*} v)|_{\tau} = -\frac{1}{s_{\tau}} \sum_{\substack{e \prec \tau \\ e \not\subset \partial M}} v|_{e}\, \operatorname{sgn}(e,\tau)\, \operatorname{len}(e), \quad \forall \tau. \tag{2.5}
\]
Eq. 2.3 and Eq. 2.5 define the first-order differential operator and the associated adjoint operator on the triangulated mesh M.
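To make the discrete operators concrete, the following Python/SciPy sketch assembles D as a sparse matrix acting on per-triangle values and forms its adjoint D^* according to Eq. 2.5. It assumes the interior-edge connectivity (the two incident triangles per edge and their relative orientations sgn(e, τ)), edge lengths and triangle areas have already been extracted from the mesh; all variable names are illustrative and this is not the authors' Matlab implementation.

```python
import numpy as np
import scipy.sparse as sp

def jump_operator(edge_tri, edge_sign, n_tri):
    """Sparse first-order jump operator D : U -> V of Eq. (2.3).

    edge_tri  : (E, 2) integer array, the two triangles sharing each interior edge
    edge_sign : (E, 2) array of relative orientations sgn(e, tau) in {+1, -1}
    n_tri     : number of triangles T
    """
    E = edge_tri.shape[0]
    rows = np.repeat(np.arange(E), 2)        # one row per interior edge
    cols = edge_tri.ravel()                  # the two incident triangles
    vals = edge_sign.ravel().astype(float)   # signed contributions of Eq. (2.1)
    return sp.csr_matrix((vals, (rows, cols)), shape=(E, n_tri))

def adjoint_operator(D, edge_len, tri_area):
    """Adjoint D* : V -> U with respect to the weighted inner products, Eq. (2.5)."""
    Le = sp.diags(edge_len)                  # edge-length weights of V
    St_inv = sp.diags(1.0 / tri_area)        # inverse triangle areas of U
    return -St_inv @ D.T @ Le
```

For a vector u of per-triangle values, D @ u returns the jump [u]_e on every interior edge; boundary edges are simply omitted from the rows, which realizes the zero jump of Eq. 2.1.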
By analogy, it is easy to define higher-order differential operators. Let l be the line connecting the barycenter and one vertex of the triangle τ. As depicted in Fig. 1(c), the two edges e^{+} and e^{-} share the common vertex of l, and the two triangles sharing the edges e^{+} and e^{-} are denoted as τ^{+} and τ^{-}, respectively. We then define the jump difference over the line l as [[u]]_{l,τ} (or [[u]]_{l} for short),
\[
[[u]]_{l,\tau} = [u]_{e^{+}}\operatorname{sgn}(e^{+},\tau^{+}) + [u]_{e^{-}}\operatorname{sgn}(e^{-},\tau^{-}) = (u_{\tau^{+}} - u_{\tau}) - (u_{\tau} - u_{\tau^{-}}) = u_{\tau^{+}} - 2u_{\tau} + u_{\tau^{-}}. \tag{2.6}
\]
It is clear that Eq. 2.6 can be viewed as a second-order difference operator with respect to u. For any u ∈ U with the Neumann boundary condition, we actually have
\[
[[u]]_{l} = \begin{cases} u_{\tau^{+}} - 2u_{\tau} + u_{\tau^{-}}, & e^{+}, e^{-} \not\subset \partial M, \\ 0, & e^{+} \text{ or } e^{-} \subset \partial M. \end{cases} \tag{2.7}
\]
We can see that [[u]] l is invariant under the choice of orientation of edges.
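As a rough illustration, the second-order differences [[u]]_l of Eq. 2.6, with the Neumann convention of Eq. 2.7, can be collected into a sparse matrix with one row per (triangle, direction) pair, which is convenient for the later optimization steps. The sketch below assumes a precomputed face-adjacency array; the exact pairing of the two neighbouring triangles τ^+ and τ^- per direction depends on the local edge indexing convention of Fig. 1(c), so the indexing used here is only indicative.

```python
import numpy as np
import scipy.sparse as sp

def second_order_operator(tri_adj):
    """Sparse second-order operator stacking [[u]]_{l, tau} (Eq. 2.6) for all triangles.

    tri_adj : (T, 3) array; tri_adj[t, k] is the neighbouring triangle across the
              k-th edge of triangle t, or -1 for a boundary edge.
    """
    T = tri_adj.shape[0]
    rows, cols, vals = [], [], []
    for tau in range(T):
        for d in range(3):
            # neighbours across two adjacent edges (cf. Fig. 1(c));
            # the (d, d+1) pairing is an assumption of this sketch
            tp, tm = tri_adj[tau, d], tri_adj[tau, (d + 1) % 3]
            if tp < 0 or tm < 0:             # Neumann boundary condition of Eq. (2.7)
                continue
            r = 3 * tau + d
            rows += [r, r, r]
            cols += [tp, tau, tm]
            vals += [1.0, -2.0, 1.0]         # u_{tau+} - 2 u_tau + u_{tau-}
    return sp.csr_matrix((vals, (rows, cols)), shape=(3 * T, T))
```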
In the discrete case, for each triangle τ there are three first-order differences over the edges along three different directions. Thus, we have the gradient operator in τ as
\[
\nabla u|_{\tau} = \big( Du|_{e_{1},\tau},\ Du|_{e_{2},\tau},\ Du|_{e_{3},\tau} \big), \tag{2.8}
\]
where e_{i,τ} ≺ τ, i = 1, 2, 3. We may write the discrete gradient as ∇u = (∂_1 u, ∂_2 u, ∂_3 u) for convenience.
Similarly, it is natural to define the second-order gradient of u restricted to τ as
\[
\nabla^{2} : U \to W, \quad u \mapsto \nabla^{2}u, \quad \nabla^{2}u|_{\tau} = \big( [[u]]_{l_{0},\tau},\ [[u]]_{l_{1},\tau},\ [[u]]_{l_{2},\tau} \big), \ \forall \tau, \ \text{for } u \in U,
\]
where W = R^{T} × R^{T} × R^{T}. For convenience, we may also write it in the form
\[
\nabla^{2}u|_{\tau} = \begin{pmatrix} \partial_{1}\partial_{1}u & \partial_{1}\partial_{2}u & \partial_{1}\partial_{3}u \\ \partial_{2}\partial_{1}u & \partial_{2}\partial_{2}u & \partial_{2}\partial_{3}u \\ \partial_{3}\partial_{1}u & \partial_{3}\partial_{2}u & \partial_{3}\partial_{3}u \end{pmatrix}, \tag{2.9}
\]
where the diagonal entries ∂_i∂_i u, i ∈ {1, 2, 3}, are the second-order directional derivatives in the same direction, while the off-diagonal entries ∂_i∂_j u, i ≠ j, are the second-order directional derivatives in two different directions. It is also possible to define higher-order differential operators for the edge e, as shown in Fig. 1(d).

2.3. Piecewise Constant Function Spaces. To handle vectorial data, we extend the above concepts to the vectorial spaces \mathbf{U}, \mathbf{V} and \mathbf{W} defined as follows:
\[
\mathbf{U} = \underbrace{U \times \cdots \times U}_{N}, \qquad \mathbf{V} = \underbrace{V \times \cdots \times V}_{N}, \qquad \mathbf{W} = \underbrace{W \times \cdots \times W}_{N}, \tag{2.10}
\]
for N-channel data. The inner products and norms in \mathbf{U}, \mathbf{V} and \mathbf{W} are defined channel-wise:
\[
\begin{aligned}
\langle \mathbf{u}^{1}, \mathbf{u}^{2} \rangle_{\mathbf{U}} &= \sum_{1 \le i \le N} \langle u^{1}_{i}, u^{2}_{i} \rangle_{U}, & \|\mathbf{u}\|_{\mathbf{U}} &= \sqrt{\langle \mathbf{u}, \mathbf{u} \rangle_{\mathbf{U}}}, \\
\langle \mathbf{v}^{1}, \mathbf{v}^{2} \rangle_{\mathbf{V}} &= \sum_{1 \le i \le N} \langle v^{1}_{i}, v^{2}_{i} \rangle_{V}, & \|\mathbf{v}\|_{\mathbf{V}} &= \sqrt{\langle \mathbf{v}, \mathbf{v} \rangle_{\mathbf{V}}}, \\
\langle \mathbf{w}^{1}, \mathbf{w}^{2} \rangle_{\mathbf{W}} &= \sum_{1 \le i \le N} \langle w^{1}_{i}, w^{2}_{i} \rangle_{W}, & \|\mathbf{w}\|_{\mathbf{W}} &= \sqrt{\langle \mathbf{w}, \mathbf{w} \rangle_{\mathbf{W}}},
\end{aligned} \tag{2.11}
\]
where \mathbf{u}^{1}, \mathbf{u}^{2}, \mathbf{u} ∈ \mathbf{U}, \mathbf{v}^{1}, \mathbf{v}^{2}, \mathbf{v} ∈ \mathbf{V}, and \mathbf{w}^{1}, \mathbf{w}^{2}, \mathbf{w} ∈ \mathbf{W}.
We mention that ∇u, ∇ 2 u and their adjoint operators can be computed channel by channel.
SEMI-SPARSITY REGULARIZATION FOR MESH DENOISING
Similar to many filtering methods that were first proposed for image processing and later applied to geometry processing [3,4,13], it is straightforward to extend the semi-sparsity model [5] to 3D geometry, because 3D meshes exhibit the same kind of piecewise constant and piecewise smoothing surfaces with discontinuous boundaries. The semi-sparsity model is a higher-order variant of sparse regularization that enables us to smooth 3D meshes without causing staircase artifacts.
3.1. Problem Formulation. According to [5], the semi-sparsity prior of a signal can be formulated as a higher-order L_0 regularization model within an optimization framework of the general form
\[
\min_{u}\ \frac{\beta}{2}\|u - f\|_{2}^{2} + \alpha_{1} \sum_{k=1}^{n-1} \|\nabla^{k}u - \nabla^{k}f\|_{p}^{p} + \alpha_{2}\|\nabla^{n}u\|_{0}, \tag{3.1}
\]
where u and f are the target output and the observed signal (an image, a 3D mesh, etc.), respectively, and β, α_1 and α_2 weigh the balance of the three terms. The first term in Eq. 3.1 is a data-fidelity term in the least-squares sense. The second term measures the L_p (p ≥ 1) similarity of the higher-order gradients ∇^k u and ∇^k f, in consideration of piecewise polynomial surfaces. The third term ∥∇^n u∥_0 favors a fully sparse highest-order gradient ∇^n u. The idea of Eq. 3.1 is straightforward: the sparsity-inducing L_0 constraint is imposed only on the highest-order (n-th) gradient, since the gradients of order lower than n are not fully sparse but are still expected to have small L_p error.
The above claims remain valid for the piecewise smoothing surfaces of 3D meshes. Given a noisy mesh M with normal field N^0, the semi-sparsity regularization for the normal filter reads
\[
\min_{N}\ \frac{\beta}{2}\|N - N^{0}\|_{U}^{2} + \alpha_{1}\|\nabla N - \nabla N^{0}\|_{V} + \alpha_{2}\|\nabla^{2} N\|_{0,W}. \tag{3.2}
\]
Here, as indicated in [5], we set the highest order to n = 2 and p = 1 in Eq. 3.1 for the sake of simplicity and computational efficiency.
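Assuming the sparse operators from the previous sketches, the discretized energy of Eq. 3.2 can be evaluated for a candidate per-triangle normal field as shown below. The concrete weighting of the first-order term (a length-weighted sum of per-edge jump magnitudes, consistent with p = 1) and the counting of non-vanishing second-order jumps for the L_0 term are one natural discretization and not necessarily the exact choices of the paper; all names are illustrative.

```python
import numpy as np

def semi_sparsity_energy(N, N0, D, D2, edge_len, tri_area,
                         beta, alpha1, alpha2, eps=1e-12):
    """Discretized energy of Eq. (3.2) for per-triangle normals N, N0 of shape (T, 3)."""
    fidelity = 0.5 * beta * np.sum(tri_area[:, None] * (N - N0) ** 2)
    P = D @ (N - N0)                                    # first-order jumps, shape (E, 3)
    first_order = alpha1 * np.sum(edge_len * np.linalg.norm(P, axis=1))
    Q = D2 @ N                                          # second-order jumps, shape (3T, 3)
    sparsity = alpha2 * np.count_nonzero(np.linalg.norm(Q, axis=1) > eps)
    return fidelity + first_order + sparsity
```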
3.2. The Efficient ADMM Solver. Due to the non-smooth and non-convex objective function of Eq. 3.2, a direct solution is not available. Instead, we propose to solve the problem with the alternating direction method of multipliers (ADMM), which has achieved great success on related problems [6,7]. By introducing the auxiliary variables P and Q, we reformulate Eq. 3.2 as the constrained optimization problem
\[
\min_{N, P, Q}\ \frac{\beta}{2}\|N - N^{0}\|_{U}^{2} + \alpha_{1}\|P\|_{V} + \alpha_{2}\|Q\|_{0,W} + \Psi(N), \quad \text{s.t.}\ P = \nabla N - \nabla N^{0},\ Q = \nabla^{2} N, \tag{3.3}
\]
where
\[
\Psi(N) = \begin{cases} 0, & \|N_{\tau}\| = 1,\ \forall \tau, \\ +\infty, & \text{otherwise}. \end{cases} \tag{3.4}
\]
Accordingly, we introduce the augmented Lagrangian of the above constrained optimization problem,
\[
\begin{aligned}
\mathcal{L}(N, P, Q, \lambda_{P}, \lambda_{Q}) ={}& \frac{\beta}{2}\|N - N^{0}\|_{U}^{2} + \alpha_{1}\|P\|_{V} + \alpha_{2}\|Q\|_{0,W} + \Psi(N) \\
&+ \langle \lambda_{P},\ (\nabla N - \nabla N^{0}) - P \rangle_{V} + \frac{\rho_{1}}{2}\|(\nabla N - \nabla N^{0}) - P\|_{V}^{2} \\
&+ \langle \lambda_{Q},\ \nabla^{2} N - Q \rangle_{W} + \frac{\rho_{2}}{2}\|\nabla^{2} N - Q\|_{W}^{2},
\end{aligned} \tag{3.5}
\]
where λ_P and λ_Q are the Lagrange multipliers, and ρ_1 and ρ_2 are positive penalty weights. A variable-splitting technique is then applied to update the variables iteratively in an alternating fashion, giving the following sub-problems:
3.2.1. The N-subproblem:
\[
\min_{N}\ \frac{\beta}{2}\|N - N^{0}\|_{U}^{2} + \frac{\rho_{1}}{2}\Big\|(\nabla N - \nabla N^{0}) - P + \frac{\lambda_{P}}{\rho_{1}}\Big\|_{V}^{2} + \frac{\rho_{2}}{2}\Big\|\nabla^{2} N - Q + \frac{\lambda_{Q}}{\rho_{2}}\Big\|_{W}^{2} + \Psi(N). \tag{3.6}
\]
It is clear that Eq. 3.6 is a quadratic optimization problem with unit-normal constraints. As suggested in [7], an approximation strategy is employed: we first solve the unconstrained quadratic program and then project the solution N onto the unit sphere. Specifically, the corresponding Euler-Lagrange equation based on the first-order optimality conditions has the form
\[
\beta(N - N^{0}) - \rho_{1}\,\nabla^{*}\Big(\nabla N - \nabla N^{0} - P + \frac{\lambda_{P}}{\rho_{1}}\Big) + \rho_{2}\,(\nabla^{2})^{*}\Big(\nabla^{2} N - Q + \frac{\lambda_{Q}}{\rho_{2}}\Big) = 0, \tag{3.7}
\]
where ∇^{*} and (∇^{2})^{*} are the adjoint operators of the first-order and second-order differential operators, respectively. The above equation is a sparse, positive semi-definite linear system, which can be solved by efficient sparse linear solvers.
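The sketch below illustrates this N-update: the quadratic problem of Eq. 3.6 yields a sparse, symmetric positive semi-definite system (the matrix form of Eq. 3.7), which is factorized once and solved channel by channel, after which the normals are renormalized to unit length as the projection step. The W inner product is taken with uniform weights here, and the operators and names are those of the earlier hedged sketches, not the authors' implementation.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def update_normals(N0, D, D2, P, Q, lamP, lamQ,
                   beta, rho1, rho2, edge_len, tri_area):
    """Approximate N-subproblem: solve the quadratic of Eq. (3.6), then renormalize."""
    Wu = sp.diags(tri_area)   # U inner-product weights (triangle areas)
    Wv = sp.diags(edge_len)   # V inner-product weights (edge lengths)
    # normal equations of Eq. (3.6); uniform weights assumed for the W inner product
    A = (beta * Wu + rho1 * D.T @ Wv @ D + rho2 * D2.T @ D2).tocsc()
    rhs = (beta * (Wu @ N0)
           + rho1 * D.T @ (Wv @ (D @ N0 + P - lamP / rho1))
           + rho2 * D2.T @ (Q - lamQ / rho2))
    solve = spla.factorized(A)                     # one factorization, reused per channel
    N = np.column_stack([solve(np.asarray(rhs[:, c]).ravel()) for c in range(3)])
    return N / np.linalg.norm(N, axis=1, keepdims=True)   # project onto the unit sphere
```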
3.2.2. The P-subproblem:
\[
\min_{P}\ \alpha_{1}\|P\|_{V} + \frac{\rho_{1}}{2}\Big\|P - (\nabla N - \nabla N^{0}) + \frac{\lambda_{P}}{\rho_{1}}\Big\|_{V}^{2}, \tag{3.8}
\]
Eq. 3.8 is a classical Lasso-type problem that can be solved efficiently by treating each variable P_e independently; each P_e has the closed-form solution
\[
P_{e} = \mathcal{S}\Big( (\nabla N - \nabla N^{0})\big|_{e} - \frac{\lambda_{P}}{\rho_{1}},\ \frac{\alpha_{1}}{\rho_{1}} \Big), \tag{3.9}
\]
with the soft shrinkage operator \mathcal{S}(x, T) defined as
\[
\mathcal{S}(x, T) = \operatorname{sign}(x)\,\max(0,\ |x| - T).
\]
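The closed-form P-update thus reduces to an elementwise soft shrinkage, as in the following minimal sketch (operator and variable names are illustrative):

```python
import numpy as np

def soft_threshold(x, t):
    """Soft shrinkage S(x, T) = sign(x) * max(0, |x| - T), applied elementwise."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def update_P(N, N0, D, lamP, alpha1, rho1):
    """Closed-form P-update of Eq. (3.9)."""
    return soft_threshold(D @ (N - N0) - lamP / rho1, alpha1 / rho1)
```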
3.2.3. The Q-subproblem:
\[
\min_{Q}\ \alpha_{2}\|Q\|_{0,W} + \frac{\rho_{2}}{2}\Big\|Q - \nabla^{2} N + \frac{\lambda_{Q}}{\rho_{2}}\Big\|_{W}^{2}. \tag{3.10}
\]
The Q-subproblem 3.10 is an L_0-norm minimization problem with a separable structure similar to that of 3.8; each variable Q_l is given by
\[
Q_{l} = \mathcal{H}\Big( \nabla^{2} N\big|_{l} - \frac{\lambda_{Q}}{\rho_{2}},\ \frac{\alpha_{2}}{\rho_{2}} \Big), \tag{3.11}
\]
with the hard-threshold operator defined as
\[
\mathcal{H}(x, T) = \begin{cases} 0, & |x| \le T, \\ x, & \text{otherwise}. \end{cases}
\]
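Analogously, the Q-update is an elementwise hard threshold; a minimal sketch (again with illustrative names) is:

```python
import numpy as np

def hard_threshold(x, t):
    """Hard threshold H(x, T): zero out entries with |x| <= T, keep the rest."""
    return np.where(np.abs(x) <= t, 0.0, x)

def update_Q(N, D2, lamQ, alpha2, rho2):
    """Closed-form Q-update of Eq. (3.11)."""
    return hard_threshold(D2 @ N - lamQ / rho2, alpha2 / rho2)
```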
Finally, the Lagrange multipliers λ_P and λ_Q are updated as
\[
\lambda_{P} \leftarrow \lambda_{P} + \rho_{1}\big( (\nabla N - \nabla N^{0}) - P \big), \qquad \lambda_{Q} \leftarrow \lambda_{Q} + \rho_{2}\big( \nabla^{2} N - Q \big). \tag{3.12}
\]
In summary, the semi-sparsity model of Eq. 3.2 for normal filtering is solved by iterating over the ADMM sub-problems and the Lagrange multiplier updates. The procedure terminates when one of the stopping criteria is met. The scheme is also verified by the numerical results in the next section.
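To tie the pieces together, the outer loop below alternates the three sub-problem updates with the multiplier updates of Eq. 3.12, using the hedged helper sketches from the previous subsections. The default weights, stopping rule and iteration count are illustrative placeholders rather than the settings used in the experiments.

```python
import numpy as np

def semi_sparsity_admm(N0, D, D2, edge_len, tri_area,
                       beta=1.0, alpha1=0.5, alpha2=0.1,
                       rho1=1.0, rho2=1.0, max_iter=100, tol=1e-6):
    """Schematic ADMM loop for the normal-filtering model of Eq. (3.2)."""
    N = N0.copy()
    P = np.zeros((D.shape[0], 3))
    Q = np.zeros((D2.shape[0], 3))
    lamP, lamQ = np.zeros_like(P), np.zeros_like(Q)
    for _ in range(max_iter):
        N_prev = N
        N = update_normals(N0, D, D2, P, Q, lamP, lamQ,
                           beta, rho1, rho2, edge_len, tri_area)
        P = update_P(N, N0, D, lamP, alpha1, rho1)
        Q = update_Q(N, D2, lamQ, alpha2, rho2)
        lamP = lamP + rho1 * ((D @ (N - N0)) - P)      # multiplier updates, Eq. (3.12)
        lamQ = lamQ + rho2 * (D2 @ N - Q)
        if np.linalg.norm(N - N_prev) <= tol * max(np.linalg.norm(N_prev), 1.0):
            break
    return N
```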
EXPERIMENTAL RESULTS
We have described the definitions of the differential operators on triangular meshes in the previous section. Once they are computed, it is straightforward to substitute them into the semi-sparsity model and solve it with the ADMM algorithm above. To further illustrate the proposed semi-sparsity model, we compare it with existing mesh denoising methods, including the feature-aware mesh filter [10], the bilateral filter (BF) [3], the guided normal filter (GNF) [13], cascaded normal regression (CNR) [11], L_0 minimization [4], and high-order TGV regularization [7]. We carefully tune the parameters of each competing method so that satisfactory results are produced. Our Matlab implementation runs on a PC with an Intel Core 2 Duo CPU at 2.13 GHz and 32 GB of RAM.
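The concrete error measure used for the quantitative comparison is not restated here; a metric commonly reported for normal-filtering results is the mean angular deviation between denoised and ground-truth face normals, sketched below purely as an illustration (it is not necessarily the measure used in this paper).

```python
import numpy as np

def mean_angular_error(N_est, N_gt):
    """Mean angular deviation, in degrees, between two unit normal fields of shape (T, 3)."""
    cos = np.clip(np.sum(N_est * N_gt, axis=1), -1.0, 1.0)
    return np.degrees(np.mean(np.arccos(cos)))
```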
As shown in Fig. 2, the original surface contains corners, edges and polynomial-smoothing regions. The BF method removes noise in the smooth areas but also seriously blurs sharp features; the GNF and CNR methods produce much better smoothing results but slightly blur the strong edges; and L_0 minimization retains sharp features but leads to slanted artifacts in the smooth regions. Our method, in contrast, not only produces a result similar to that of the cutting-edge high-order (HO) regularization [6] in polynomial-smoothing regions but also preserves the sharp corners and edges. This is further demonstrated by the results in Fig. 3 and the real scanned surfaces in Fig. 4. The experiments demonstrate the extension of our semi-sparsity model to triangular meshes. We define the differential operators over the edges and only update the vertices for mesh denoising. It is also possible to take a two-stage strategy for both vertices and normal vectors, as explained in [6]. We refer the interested reader to [6,7,9] for more details.
Figure 1: The illustration of discrete operators on meshes.

Figure 2: Mesh denoising of a surface corrupted by Gaussian noise in random directions with standard deviation σ = 0.3L (L is the average edge length). (a) Noisy input, (b) bilateral filter (BF) [3], (c) guided normal filter (GNF) [13], (d) cascaded normal regression (CNR) [11], (e) L_0 minimization [4], (f) TGV regularization [6], (g) our result, and (h) ground truth (GT).

Figure 3: Mesh denoising results. (a) Input (Gaussian noise σ = 0.3L), (b) bilateral filter (BF) [3], (c) guided normal filter (GNF) [13], (d) cascaded normal regression (CNR) [11], (e) L_0 minimization [4], (f) TGV regularization [7], (g) our result, and (h) ground truth (GT).

Figure 4: Mesh denoising on scanned surfaces. (a) Noisy input, (b) cascaded normal regression [11], (c) L_0 minimization [4], (d) our result.
M. Botsch, L. Kobbelt, M. Pauly, P. Alliez, and B. Lévy. Polygon mesh processing. CRC Press, 2010.
K. Crane. Discrete differential geometry: An applied introduction. Notices of the AMS, Communication, pages 1153-1159, 2018.
S. Fleishman, I. Drori, and D. Cohen-Or. Bilateral mesh denoising. In ACM Transactions on Graphics (TOG), volume 22, pages 950-953. ACM, 2003.
L. He and S. Schaefer. Mesh denoising via L0 minimization. ACM Transactions on Graphics (TOG), 32(4):64, 2013.
FIGURE 4. Mesh denoising on scanned surfaces. (a) Noisy input, (b) Cascaded normal regression [11], (c) L 0 minimization [4], (d) Our result.
J. Huang, H. Wang, X. Wang, and M. Ruzhansky. Semi-sparsity for smoothing filters. IEEE Transactions on Image Processing, 32:1627-1639, 2023.
Z. Liu, R. Lai, H. Zhang, and C. Wu. Triangulated surface denoising using high order regularization with dynamic weights. SIAM Journal on Scientific Computing, 41(1):B1-B26, 2019.
Z. Liu, Y. Li, W. Wang, L. Liu, and R. Chen. Mesh total generalized variation for denoising. IEEE Transactions on Visualization and Computer Graphics, 28(12):4418-4433, 2021.
Z. Liu, W. Wang, S. Zhong, B. Zeng, J. Liu, and W. Wang. Mesh denoising via a novel Mumford-Shah framework. Computer-Aided Design, page 102858, 2020.
M. Meyer, M. Desbrun, P. Schröder, and A. H. Barr. Discrete differential-geometry operators for triangulated 2-manifolds. In Visualization and Mathematics III, pages 35-57. Springer, 2003.
X. Sun, P. L. Rosin, R. Martin, and F. Langbein. Fast and effective feature-preserving mesh denoising. IEEE Transactions on Visualization and Computer Graphics, 13(5):925-938, 2007.
P.-S. Wang, Y. Liu, and X. Tong. Mesh denoising via cascaded normal regression. ACM Trans. Graph., 35(6):232-1, 2016.
H. Zhang, C. Wu, J. Zhang, and J. Deng. Variational mesh denoising using total variation and piecewise constant function space. IEEE Transactions on Visualization and Computer Graphics, 21(7):873-886, 2015.
W. Zhang, B. Deng, J. Zhang, S. Bouaziz, and L. Liu. Guided mesh normal filtering. In Computer Graphics Forum, volume 34, pages 23-34. Wiley Online Library, 2015.
| []
|
[
"CLASSIFICATION OF CONDITIONAL MEASURES ALONG CERTAIN INVARIANT ONE-DIMENSIONAL FOLIATIONS",
"CLASSIFICATION OF CONDITIONAL MEASURES ALONG CERTAIN INVARIANT ONE-DIMENSIONAL FOLIATIONS"
]
| [
"M Noriega ",
"G Ponce ",
"R Varão "
]
| []
| []
| Let (M, A, µ) be a probability space and f : M → M a homeomorphism preserving the Borel ergodic probability measure µ. Given F a continuous one-dimensional f -invariant foliation of M with C 1 leaves, we show that if f preserves a continuous F -arc length system, then we only have three possibilities for the conditional measures of µ along F , namely:• they are atomic for almost every leaf, or • for almost every leaf their support is a Cantor subset of the leaf or • for almost every leaf they are equivalent to the measure λ x induced by the invariant arc-length system over F . This trichotomy classifies, for example, the possible disintegrations of ergodic measures along foliations over which f acts as an isometry, and also disintegrations of ergodic measures along the center foliation preserved by transitive partially hyperbolic diffeomorphisms with topological neutral center direction.CONTENTS | null | [
"https://export.arxiv.org/pdf/1812.00057v3.pdf"
]
| 252,734,699 | 1812.00057 | 143643aff5e1efff0857b5b1af6e7608dc940076 |
CLASSIFICATION OF CONDITIONAL MEASURES ALONG CERTAIN INVARIANT ONE-DIMENSIONAL FOLIATIONS
M Noriega
G Ponce
R Varão
CLASSIFICATION OF CONDITIONAL MEASURES ALONG CERTAIN INVARIANT ONE-DIMENSIONAL FOLIATIONS
arXiv:1812.00057v3 [math.DS] 6 Oct 2022
Let (M, A, µ) be a probability space and f : M → M a homeomorphism preserving the Borel ergodic probability measure µ. Given F a continuous one-dimensional f -invariant foliation of M with C 1 leaves, we show that if f preserves a continuous F -arc length system, then we only have three possibilities for the conditional measures of µ along F , namely: they are atomic for almost every leaf; or for almost every leaf their support is a Cantor subset of the leaf; or for almost every leaf they are equivalent to the measure λ x induced by the invariant arc-length system over F . This trichotomy classifies, for example, the possible disintegrations of ergodic measures along foliations over which f acts as an isometry, and also disintegrations of ergodic measures along the center foliation preserved by transitive partially hyperbolic diffeomorphisms with topological neutral center direction.
Given a topological space X, B its Borel sigma-algebra and µ a probability measure on X, it is well known that for any sub-sigma-algebra E ⊂ B, µ may be disintegrated over the partition induced by E in the sense that we may find a system of probability measures {µ x } x∈X such that µ x ( ⋂ {E ∈ E : x ∈ E} ) = 1, x → µ x is Borel measurable, and µ(B) = ∫ µ x (B) dµ for any B ∈ B. This system is referred to as the disintegration of µ along E. This fact was later extended to the more general context of Lebesgue spaces, where V. Rokhlin (see [17]) proved that µ may be
The mentioned theorem of Rokhlin is extensively used in dynamical systems and was further extended to more general contexts (see for example [19], [15]). In general, given a certain dynamics, in many contexts this dynamics admits certain invariant partitions which are dynamically defined and the study of the disintegration of the invariant measures along these partitions usually yields important properties of the dynamics. In some cases these partitions are naturally given by invariant foliations of the dynamics.
In smooth ergodic theory, for example, Anosov and partially hyperbolic diffeomorphisms admit a pair of invariant foliations called the stable and unstable foliations, which we will denote here by F s and F u . In these contexts, measure disintegration techniques have been an essential tool to obtain ergodicity and rigidity of certain properties, such as regular conjugacy between certain C 1 -close Anosov maps.
In his seminal work, D. Anosov proved [2] that the disintegration of the volume measure along the unstable (resp. stable) foliation of a volume preserving Anosov diffeomorphism is absolutely continuous with respect to the leaf measure. This result was generalized to the stable and unstable foliations of partially hyperbolic diffeomorphisms (see [4] for example). This is clearly not the general case for an arbitrary foliation; an example for which absolute continuity does not occur was given by A. Katok [11], and there are now several examples of other natures in the literature. More specifically, Katok's example shows a foliation by analytic curves of (0, 1) × R/Z such that there exists a full Lebesgue measure set which intersects each leaf in exactly one point. In this case we say that the conditional measures along the leaves are atomic, or that the foliation is atomic with respect to the reference measure, which in the example of Katok is the standard two-dimensional Lebesgue measure.
Atomicity and absolute continuity are two extremes among the possibilities that one could expect when studying the conditional measures along a foliation. The first one is equivalent to saying that the conditional measures are atomic measures and the last one implies that the conditional measures are absolutely continuous with respect to the Riemannian measure of the leaf, a property which is usually called leafwise absolute continuity or Lebesgue disintegration of measure. Although these are two extreme behaviors among, a priori, many possibilities for the disintegration of a measure, recent results have indicated that this dichotomy is more frequent than one would at first expect. In [18] D. Ruelle and A. Wilkinson proved that for a certain skew-product type of partially hyperbolic dynamics, if the fiberwise Lyapunov exponent is negative then the disintegration of the preserved measure along the fibers is atomic. Later, A. Homburg [9] proved that for some of the examples treated in [18] the disintegration is in fact composed of a single Dirac measure. A. Avila, M. Viana and A. Wilkinson [3] proved that for C 1 -volume preserving perturbations of the time-1 map of geodesic flows on negatively curved surfaces, the disintegration of the volume measure along the center foliation is either atomic or absolutely continuous, and that in the latter case the perturbation must itself be the time-1 map of an Anosov flow. Also inside the class of derived from Anosov diffeomorphisms, G. Ponce, A. Tahzibi and R. Varão [14] exhibited an open class of volume preserving diffeomorphisms which have (mono) atomic disintegration along the center foliation and, recently, A. Tahzibi and J. Zhang [20] proved that non-hyperbolic measures of derived from Anosov diffeomorphisms on T 3 also must have atomic disintegration along the center foliation, answering a question from [13].
In this paper our main goal is to better understand the disintegration of an invariant measure along an invariant foliation of the dynamics, without requiring hyperbolicity or partial hyperbolicity of f , but assuming that the invariant foliation has some type of metric rigidity with respect to f . In other words, we aim to investigate the possible characterizations of the conditional measures obtained when we disintegrate µ over a foliation F , assuming that the behavior of f along F is very far from being hyperbolic.
Setting and statement of results.
A continuous m-dimensional foliation F of a smooth manifold M by C r -submanifolds is a partition of M into C r -submanifolds which can be locally trivialized by local charts, that is, for each
x ∈ M one can find open sets U ⊂ M, V ⊂ R m , W ⊂ R n−m , n = dim M, and a homeomorphism ϕ : U → W × V, such that for every c ∈ W the set ϕ −1 ({c} × V)
, which is called a plaque of F , is a connected component of L ∩ U for a certain L ∈ F . Given a foliation F of M, we denote by F (x) the element of F which contains x and call such elements the leaves of F .
The following is the main result of this work.
Theorem A. Let f : M → M be a homeomorphism over a compact smooth manifold and F be a f -invariant one dimensional continuous foliation of M by C 1 -submanifolds and {l x } a F -arc length system. If f is ergodic with respect to a f -invariant measure µ then one of the following holds:
a) the disintegration of µ along F is atomic.
b) for almost every x ∈ M, the conditional measure on F (x) is equivalent to the measure λ x defined on simple arcs of F (x) by λ x (γ([0, 1])) = l x (γ), where γ is a simple arc.
c) for almost every x ∈ M, the conditional measure on F (x) is supported in a Cantor subset of F (x).
The existence of invariant systems of metrics was obtained in [6] for the context of transitive partially hyperbolic diffeomorphisms with topological neutral center, meaning that f and f −1 have Lyapunov stable center direction (see [16, Section 7.3.1]), i.e., given any ε > 0 there exists δ > 0 for which, given any C 1 path γ tangent to the center direction, one has length(γ) < δ ⇒ length( f n (γ)) < ε, ∀n ∈ Z.
For these diffeomorphisms, the center direction integrates to a continuous foliation F c of M ([16, Corollary 7.6]). In particular, an immediate consequence of Theorem A is the following.
Theorem B. Let f : M → M be a transitive C 1 partially hyperbolic diffeomorphism with one-dimensional topological neutral center direction. If f is ergodic with respect to a f -invariant measure µ then one of the following holds:
a) the disintegration of µ along F c is atomic. b) for almost every x ∈ M, the conditional measure on F c (x) is equivalent to the measure λ x defined on simple arcs of F c (x) by: λ x (γ([0, 1])) = l x (γ), where γ is a simple arc.
c) for almost every x ∈ M, the conditional measure on F c (x) is supported in a Cantor subset of F c (x).
1.2. Organization of the paper. In Section 2 we give some preliminaries on measure theory and disintegration of measures along a foliation. In Section 3 we introduce the definition of F -arc length system with respect to a dynamics in (M, A, µ) and the construction of the measures induced by this system. In Section 4 we prove several technical lemmas concerning the continuity/measurability of certain functions such as the evaluation of conditional measures on certain balls inside the leaves of F . Finally, in Section 5 we give the proof of Theorem A.
BASICS ON CONDITIONAL MEASURES
All along the paper (M, A, µ) will be a probability space, where M is a compact Riemannian manifold, with dimension at least two, µ is a non-atomic Borel measure and A is a completion of the Borel σ-algebra B of M with respect to the measure µ. In other words, (M, A, µ) is measurably isomorphic to ([0, 1], A [0,1] , Leb [0,1] ) where Leb [0,1] is the standard Lebesgue measure on [0, 1] and A [0,1] is the σ-algebra of Lebesgue measurable sets of [0, 1]. We will denote by µ(·|U) the restriction of µ to a subset U ⊂ M, that is, it denotes the measure given by:
µ(·|U) = µ(U) −1 · µ(B ∩ U).

Given a sub-σ-algebra E ⊂ B generated by a countable family {E n } n∈N , the atom of x is the set given by [x] := ⋂ {E ∈ E : x ∈ E}. Since {E n } n∈N generates E we may also write [x] = ⋂ x∈E n E n . Consequently, [x] is a Borel set for every x ∈ M and {[x] : x ∈ M} is a partition of M. Given E ⊂ B a countably generated sub-σ-algebra, a family of measures {µ x } x∈M is called a system of conditional measures of µ associated to E if
i) for every B ∈ B, x → µ x (B) is E-measurable,
ii) for every x ∈ X, µ x ([x]) = 1,
iii) µ(B) = ∫ y∈B µ y (B) dµ(y).
As it is well known, every Borel measure µ admits a system of conditional measures with respect to any countably generated sub-σ algebra E ⊂ B and such system is essentially unique, see for example [8,Theorem 5.14].
In our context it is usually convenient to consider systems of conditional measures along certain partitions by C 1 submanifolds, as we detail in the sequel.
We say that F is a continuous foliation of dimension m by C 1 -manifold if F = {F (x)} x∈M is a partition of M into C 1 submanifolds of dimension m, such that for every x ∈ M there exist open sets U ⊂ M, V ⊂ R m , W ⊂ R n−m and a homeomorphism ϕ : U → V × W, called a local chart, such that for every c ∈ W the set ϕ −1 (V × {c}), which is called a plaque of F is a connected component of L ∩ U for a certain L ∈ F . Given a foliation F of M, we denote by F (x) the element of F which contains x and call such elements the leaves of F . Whenever U is a foliated chart, we denote by F |U the continuous foliation of U given by plaques of F restricted to U.
Given a one-dimensional continuous foliation F of M, consider a local chart of F
ϕ : U → (0, 1) × B n−1 1 (0),
we also call U a foliated box. Let { E q,k } be a countable collection of sets defined by
E q,k = (0, 1) × B(q, 1/k) ⊂ R × R n−1 , q ∈ B n−1 1 (0) ∩ Q n−1 , k ∈ N.
Let E q,k = ϕ −1 ( E q,k ) and E ⊂ B the sub-σ-algebra generated by the family of Borelian sets E q,k . Notice that for every y ∈ M the atom [y] is the connected component of F (y) ∩ U that contains y; in this case we also call the system of conditional measures of µ associated to E, {µ U y } y∈U , the disintegration of µ along F restricted to U. We say that the disintegration of µ along F is atomic if for any local chart U, for almost every y ∈ U there exists a(y) in the plaque F |U(y) with µ U y (a(y)) > 0. As we may observe, the disintegration of µ along F is always done inside a local chart. However, the following well known result states that if two local charts intersect, the disintegrations in both local charts are equal up to a multiplicative constant when restricted to the intersection.

Proposition 2.1 (see for example [7, Proposition 5.17]). If U 1 and U 2 are domains of two local charts ϕ 1 and ϕ 2 of F , then for almost every x the conditional measures µ U 1 x and µ U 2 x coincide up to a constant on U 1 ∩ U 2 .
As observed in [3], this allows us to define a family of classes of measures {x ∈ M : Ω x }, such that
• ω x (M \ F (x)) = 0, for any representant ω x of Ω x , • any two represents of Ω x are equal modulo multiplication by a constant • for any local chart U of M and {µ U x } x∈U a disintegration of µ(·|U) along F |U, for almost every point x ∈ U we have
µ U x = ω x (·|F |U(x)),
where ω x denotes a representant of Ω x .
The family {Ω x } x∈M will be called a disintegration of µ along F .
INVARIANT ARC-LENGTH AND INVARIANT METRIC SYSTEMS
Given f : M → M and a foliation F of M, we say that f preserves
F , or that F is f -invariant if for x ∈ M F ( f (x)) = f (F (x)).
From now on, F will denote a continuous and f -invariant one dimensional foliation. By a simple arc γ on a leaf F (x), we mean a C 1 curve γ :
[0, 1] → F (x) for which γ(t) ≠ γ(s) for all t ≠ s with (t, s) ∉ {(0, 1), (1, 0)}.
In the space of simple arcs we define an equivalence relation by saying that γ ∼ σ if σ is a reparametrization of γ. We say that a sequence of simple arcs γ n converges to γ (in the C 0 -topology) if γ n converges pointwise to γ. By convention, by a degenerate arc we mean a point.
The following definition is inspired in the concept of center metric given in [6].
Definition 3.1. We will call {l x } a F -arc length system, if for x ∈ M, l x is defined on the simple arcs on F (x), and l x satisfies the following properties:
(1) strictly positive on the non-degenerate arcs, and vanishing on degenerate arcs,
(2) l x (γ) = l x (σ) if γ ∼ σ, (3) let γ : [0, 1] → F (x)
be a simple arc and a ∈ (0, 1) then
l x (γ[0, a]) + l x (γ[a, 1]) = l x (γ[0, 1]), (4) let γ : [0, 1] → F (x) a simple arc, then l x (γ[0, 1]) = l f (x) ( f (γ[0, 1])). (5) given a sequence of simple arcs γ n : [0, 1] → F (x n ), converging to a simple arc γ : [0, 1] → F (x), then l x n (γ n ) → l x (γ), as n → +∞.
In general, it is easy to give examples of systems preserving some continuous foliation of dimension one F and admitting some F -arc length system.
Examples:
a) Let M = T d , d ≥ 2, and L : R d → R d a linear map given by a matrix with integer entries and for which 1 is an eigenvalue. Let v be an eigenvector associated to 1 and take E = R · v. The linear map L induces a linear function f L :
T d → T d and E induces a foliation F of T d which is one-dimensional and f L -invariant.
Clearly f L is an isometry along F . In particular, the family of standard arc-lengths on the leaves of F constitutes a F -arc length system. For this case we obtain that any ergodic measure invariant by f L must have either atomic conditionals, conditionals supported on Cantor subsets of the leaves, or conditionals equivalent to the Lebesgue measure on the leaves. Question 1. Does there exist an ergodic measure µ preserved by some f L whose conditional measures are supported on a Cantor subset of the leaves?
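To make example a) concrete, here is a small Python sketch with the hypothetical choice L = [[1, 1], [0, 1]] on T 2 : the eigenvector for the eigenvalue 1 is v = (1, 0), the induced foliation has the horizontal circles as leaves, and f L restricted to each leaf is a rigid rotation, so the standard arc length along the leaves is preserved.

import numpy as np

# Hypothetical instance of example a): integer matrix with eigenvalue 1,
# eigenvector v = (1, 0); the leaves are the horizontal circles {y = const}.
L = np.array([[1, 1],
              [0, 1]])

def f_L(p):
    # Induced map on the torus T^2 = R^2 / Z^2.
    return (L @ p) % 1.0

def leaf_distance(p, q):
    # Standard arc length along a horizontal leaf: distance on the circle R/Z.
    d = abs(p[0] - q[0]) % 1.0
    return min(d, 1.0 - d)

# Two points on the same horizontal leaf {y = 0.37}.
p = np.array([0.10, 0.37])
q = np.array([0.55, 0.37])

# f_L maps the leaf to itself by a rotation, hence preserves leaf arc length.
print(leaf_distance(p, q), leaf_distance(f_L(p), f_L(q)))  # equal values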
b) Let ϕ : R × M → M be any C 1 flow. The foliation F given by the orbits of ϕ is a ϕ t - invariant C 1 -foliation of M for any fixed t ∈ R.
There is a natural F -arc length system in this case given by:
l x (γ) := l, with ϕ(l, γ(0)) = γ(1).
Assume that almost every x ∈ M is not a periodic point of ϕ. Given any ϕ t -ergodic invariant measure µ, it follows from [10, Example 7.4] that the disintegration of µ along F is either Lebesgue or atomic. c) Some skew-products also provide interesting examples. For example take f :
T d × S 1 → T d × S 1 given by f (x, y) = (g(x), R α (y)),
where g : T d → T d is any homeomorphism and R α : S 1 → S 1 denotes a rotation of angle α.
In this case, the foliation F whose leaves are {x} × S 1 , x ∈ T d , is f -invariant and, by taking l x on {x} × S 1 to be given by the usual arc length on S 1 , we conclude that {l x } is a F -arc length system. In this example it is easy to determine the measurable properties of F in the sense that, given a Borel g-invariant measure ν, the measure ν × λ S 1 is f -invariant (here λ S 1 denotes the standard Lebesgue measure on S 1 ) and a direct application of the Fubini Theorem shows that the disintegration of ν × λ S 1 along F has the Lebesgue measures λ S 1 as its conditional measures.
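A minimal numerical sketch of example c) follows (Python; the choice of g as an irrational rotation of the base, the angles and the sample size are assumptions made only for illustration): pushing samples of ν × λ S 1 forward by f leaves the fiber coordinate uniformly distributed, which is the Fubini argument in empirical form.

import numpy as np

rng = np.random.default_rng(0)
alpha = np.sqrt(2) % 1.0   # fiber rotation angle (illustrative choice)
beta = np.sqrt(3) % 1.0    # base rotation angle, so nu = Lebesgue is g-invariant

def f(x, y):
    # Skew product f(x, y) = (g(x), R_alpha(y)) with g a circle rotation.
    return (x + beta) % 1.0, (y + alpha) % 1.0

# Sample from nu x lambda_{S^1} (here simply Lebesgue on the 2-torus).
x = rng.random(100_000)
y = rng.random(100_000)
fx, fy = f(x, y)

# The fiber coordinate of the pushed-forward sample is still uniform on S^1,
# reflecting that the conditional measures along the fibers are Lebesgue.
hist_before, _ = np.histogram(y, bins=20, range=(0.0, 1.0), density=True)
hist_after, _ = np.histogram(fy, bins=20, range=(0.0, 1.0), density=True)
print(np.max(np.abs(hist_before - 1.0)), np.max(np.abs(hist_after - 1.0)))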
d) Another, more interesting, case is provided by recent results of Bonatti-Zhang [6]. A C 1 diffeomorphism f : M → M, on a compact Riemannian manifold M, is said to be partially hyperbolic if there is a nontrivial splitting TM = E s ⊕ E c ⊕ E u , with D f (x)E τ (x) = E τ ( f (x)) for τ ∈ {s, c, u}, and a Riemannian metric for which there are continuous positive functions µ, µ̄, ν, ν̄, γ, γ̄ with
ν(p), ν̄(p) < 1, and µ(p) < ν(p) < γ(p) < γ̄(p) −1 < ν̄(p) −1 < µ̄(p) −1 ,
such that for any vector v ∈ T p M,
µ(p)||v|| < ||D f (p) · v|| < ν(p)||v||, if v ∈ E s (p),
γ(p)||v|| < ||D f (p) · v|| < γ̄(p) −1 ||v||, if v ∈ E c (p),
ν̄(p) −1 ||v|| < ||D f (p) · v|| < µ̄(p) −1 ||v||, if v ∈ E u (p).
We say that f has topological neutral center if, for any ε > 0, there exists δ > 0 for which: given any smooth curve γ :
[0, 1] → M with γ ′ (t) ∈ E c (γ(t)), 0 ≤ t ≤ 1, if length(γ) < δ then length( f n (γ)) < ε, for all n ∈ Z.
For partially hyperbolic diffeomorphisms with neutral center, the center distribution E c integrates to a f -invariant foliation F c (see [16, Corollary 7.6]) called the center foliation of f . In [6] the authors proved that if f : M → M is a C 1 partially hyperbolic diffeomorphism with neutral center direction, then f admits a continuous F -arc length system. Understanding the measurable properties of the center foliation preserved by such maps was one of the motivations of this work. As a consequence of our results, the disintegration of any f -invariant ergodic probability measure of such maps falls into one of three possible cases. When the conditional measures have full support, the second author proves in [12] the occurrence of an invariance principle. Further, if the measure is smooth, full support of the conditional measures implies the Bernoulli property for f . If, moreover, f is locally accessible, then F c is as regular as the map f itself.
Definition 3.2.
Given y, z, w ∈ F (x) we say that y is between z and w, if there exists a simple arc γ
: [0, 1] → F (x) such that γ(0) = z, γ(1) = w, γ(t) = y for some t ∈ (0, 1), and l x (γ) = min{l x (α) : α : [0, 1] → F (x), α(0) = z, α(1) = w}. Definition 3.3. Let {l x } be a F -arc length system. For x ∈ M we define a metric d x on F (x) by d x (y, z) := min{l x (γ) : γ : [0, 1] → F (x) is simple with γ(0) = y, γ(1) = z}.
We call the family {d x } x the F -metric system associated to the F -
arc length system {l x } x .
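For a leaf homeomorphic to S 1 , with l x the usual arc length, the metric d x of Definition 3.3 is simply the length of the shorter of the two arcs joining two points. A tiny Python sketch (identifying the leaf with R/Z of total length one, an assumption made only for the illustration) also checks the additivity property established in Lemma 3.4 below.

def d_circle(y, z):
    # d_x on a circular leaf identified with R/Z, with l_x the usual arc length:
    # the minimum of the lengths of the two simple arcs joining y and z.
    a = abs(y - z) % 1.0
    return min(a, 1.0 - a)

# Additivity when y lies between z and w (on the shorter arc from z to w):
z, y, w = 0.10, 0.20, 0.35
print(d_circle(z, w), d_circle(z, y) + d_circle(y, w))  # both 0.25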
In what follows, we prove that indeed d x is a metric over F (x) and, moreover, the metric system is f -invariant.
Lemma 3.4.
Let {d x } be a F -metric system given by the Definition 3.3, then i) d x is an additive metric, that is, given y, z, w ∈ F (x) such that y is between z and w then
d x (z, w) = d x (z, y) + d x (y, w); ii) d x is invariant by f , that is, d f (x) ( f (z), f (y)) = d x (z, y).
Proof. Observe that the second item is trivial by the definiton of {d x } x , thus we only have to prove that d x is indeed a metric on F (x).
It is easy to see that d x (x, y) = 0 if, and only if,
x = y and that d x (x, y) = d x (y, x). Since F (x) is an one-dimensional submanifold of M, F (x) is homeomorphic either to R or S 1 . If F (x) is homeomorphic to R then, for z, w ∈ F (x) there is only one simple connected path γ : [0, 1] → F (x), modulo reparametrization, with γ(0) = z and γ(1) = w.
Given y between z and w there exists a ∈ (0, 1) such that γ(a) = y. In particular, γ | [0,a] and γ | [a,1] are the only simple connected paths, modulo reparametrization, from z to y, and from y to w respectively. Thus by Definition 3.1 we have
d x (z, w) = l x (γ) =l x (γ | [0,a] ) + l x (γ | [a,1] ) =d x (z, y) + d x (y, w). Now, if F (x) is homeomorphic to S 1 then there are two paths γ 1 , γ 2 : [0, 1] → F (x) that connect the points z and w. Assume, without loss of generality, that γ 1 satisfies d x (z, w) = l x (γ 1 ) and γ 1 (a) = y for some a ∈ (0, 1). Let us prove that d x (z, y) = l x (γ 1 | [0,a] ).
If this is not the case, there is another path
γ 3 : [0, 1] → F (x) for which γ 3 (0) = z, γ 3 (1) = y and d x (z, y) = l x (γ 3 ). In this case we have l x (γ 3 ) > l x (γ 2 ), since γ 3 is the concatenation of γ 2 with −γ 1 |[−1,−a] , where −γ 1 denotes the curve −γ 1 : [−1, 0] → F (x), −γ 1 (t) := γ 1 (−t). Thus, l x (γ 1 | [0,a] ) > d x (z, y) = l x (γ 3 ) ≥ l x (γ 2 ) ≥ l x (γ 1 ),
which is a contradiction. Therefore, l x (γ 1 | [0,a] ) = d x (z, y) and, analogously, l x (γ 1 | [a,1] ) = d x (y, w). By the second item of Definition 3.1 we have,
d x (z, w) = l x (γ 1 ) = l x (γ 1 | [0,a] ) + l x (γ 1 | [a,1] ) = d x (z, y) + d x (y, w),
concluding that d x is an additive metric as we wanted to show.
It is not true that {d x } x∈M is continuous in the sense that we may have sequences
x n → x, y n → y, with y n ∈ F (x n ), y ∈ F (x), but d x n (x n , y n ) does not converge to d x (x, y). Indeed this happens, for example, for compact foliations where the leaves do not have uniformly bounded length. It is true, however, that this family of metrics is continuous when restricted to plaques inside local charts. We make this property more precise below.
Definition 3.5. Consider F a continuous foliation of M. A function F : x∈M F (x) × F (x) → [0, ∞)
will be called plaque-continuous if given any p ∈ M, there exists a local chart p ∈ U of F , such that for any sequences x n → x, y n → y with y n ∈ F |U(x n ), x ∈ U and y ∈ F |U(x), we have lim n→∞ F(x n , y n ) = F(x, y).
Any such local chart U will be called a continuity-domain of F. Definition 3.6. We say that a family of metrics {d x :
x ∈ M}, each d x defined on F (x), is plaque- continuous if F : x∈M F (x) × F (x) → [0, ∞) defined by F(x, y) := d x (x, y),
is plaque continuous. In this case if U is a continuity-domain of F we will also say that U is a continuitydomain of {d x }.
Proposition 3.7. The metric system given in Definition 3.3 is plaque-continuous.

Proof. Let ϕ : U → (0, 1) × V ⊂ R n be a local chart of F where ϕ −1 ((0, 1) × {c}), c ∈ V, are the plaques of F in U. For any p ∈ U, consider ξ : W ⊂ U → (0, 1) × K ⊂ R n another local chart centered in p such that z ∈ W ⇒ l z (F |U(z)) > 3 · l z (F |W(z)).
This can be done by the continuity of {l x }. In particular, for any
x ∈ W, y ∈ F |W(x) = ξ −1 ((0, 1), c ′ ), the simple curve γ(t) = ξ −1 ((1 − t)ξ(x) + tξ(y), c ′ ) minimizes the l x -length connecting x and y, that is, d x (x, y) = l x (γ).
On that account, consider x ∈ W, y ∈ F |W(x) = ξ −1 ((0, 1), c ′ ) and sequences x n ∈ W, y n ∈ F |W(x n ) = ξ −1 ((0, 1), c n ) with x n → x and y n → y. Let γ be defined as in the previous paragraph and γ n (t) := ξ −1 ((1 − t)ξ(x n ) + tξ(y n ), c n ). By the convergence of the sequences we have γ n → γ. But by the previous discussion on the choice of the local chart W we have d x n (x n , y n ) = l x n (γ n ) and d x (x, y) = l x (γ).
We then conclude by the continuity of {l x } that lim n→∞ d x n (x n , y n ) = lim n→∞ l x n (γ n ) = l x (γ) = d x (x, y).
Therefore {d x } is plaque-continuous as we wanted to show.
Lemma 3.8.
Given any local open transversal T to F , for any r small enough, the set
S := x∈T B d x (x, r)
is open.
Proof. Assume that T is a local transversal associated to a certain local chart (U, ϕ). For r > 0 small enough we have B d x (x, r) ⊂ U for every x ∈ T. In particular U \ T has two open connected components U 1 and U 2 with U 1 ∩ U 2 = T.
Since U is a local chart, we may consider an orientation on the F |U-plaques. Assume that S is not open. Then, there exists y ∈ S and a sequence y k / ∈ S, with y k → y.
Consider x ∈ T such that y ∈ B d x (x, r) and denote by ϕ the flow on the F |U-plaques induced by the orientation fixed before and such that
d p (ϕ t (p), p) = |t|, whenever ϕ t (p) is defined. Let t 0 be such that x = ϕ t 0 (y). As y ∈ B d x (x, r), there exists δ > 0 for which ϕ t (y) ∈ S, t ∈ [t 0 − δ, t 0 + δ].
Now, by the plaque continuity and the fact that y k → y, we have
ϕ t 0 −δ (y k ) → ϕ t 0 −δ (y), ϕ t 0 +δ (y k ) → ϕ t 0 +δ (y).
Observe that ϕ t 0 −δ (y) and ϕ t 0 +δ (y) belong to different connected components, thus, for k large enought the same happens for ϕ t 0 −δ (y k ) and ϕ t 0 +δ (y k ). Since γ k :
= {ϕ t (y k ) : t ∈ [t 0 − δ, t 0 + δ]}
is an arc with points in the interior and in the exterior of U 1 , it must intersect its boundary, namely T. This implies that y k ∈ S for large k, yielding a contradiction. That is, S is open as we wanted to show.

Proposition 3.9. Given a finite open cover U of M by local charts of F , there exists r > 0 such that for all x ∈ M, there is U ∈ U with B d x (x, r) ⊂ U.
Proof. For each x ∈ M, take any U x ∈ U with x ∈ U x . There exists r x > 0 for which
B d x (x, r x ) ⊂ U x . By plaque continuity of {d x } there exists a neighborhood x ∈ V x ⊂ U x for which y ∈ V x ⇒ B d y (y, r x ) ⊂ U x .
Since M is compact we may cover M with a finite number of neighborhoods V x i , 1 ≤ i ≤ l. Take
r = min{r x i : 1 ≤ i ≤ l}.
In the sequel we will prove a technical result which will be used along the proof of the main theorem. Namely, we prove that the continuous translation of a measurable set along the foliation F is also a measurable set.
Lemma 3.10. There exists t 0 > 0 such that for every 0 ≤ t ≤ t 0 and every Borel subset A ⊂ M the set
(1) Φ t (A) := {x ∈ M : d x (x, A) < t},
is a measurable set.
Proof. Let U be a finite cover of M by local charts which are continuity-domains of {d x }. Consider r the number given by Proposition 3.9. In particular the family {U r/2 : U ∈ U }, defined by
U r/2 = {x ∈ U : d x (x, ∂U) ≥ r/2}, is still a cover of M. Let A ⊂ M be a Borel subset. Observe that Φ t (A ∩ U r/2 ) ⊂ U, U ∈ U , t < r/2.
We will prove that for U ∈ U , the subset Φ t (A ∩ U r/2 ) is measurable. Let ϕ U : U → B n−1 1 (0) × (0, 1) be a local chart of F , and inside U consider the orientation in the plaques F (x) induced by the orientation in the line segments of the form {x} × (0, 1) ⊂ R n−1 × R. This orientation induces, at each plaque, an order relation which we will denote by ≺ (the plaque being implicit in the context).
Now for s ∈ [−t, t], with 0 ≤ t < r/2 fixed, we define φ U s : U r/2 → U by: • for s > 0, φ U s (x) is the only point of the plaque F |U(x) such that d x (x, φ U s (x)) = s and x ≺ φ U s (x); • for s < 0, φ U s (x) is the unique point of the plaque F |U(x) such that d x (x, φ U s (x)) = −s and φ U s (x) ≺ x. Observe that φ U s is continuous for every |s| < t since U is a continuity-domain of {d x } and, consequently, it is a homeomorphism. Thus φ U s (A ∩ U r/2 ) is a measurable subset of M for every s ∈ [−t, t]. Now, for each 1 ≤ i ≤ n take Φ U t (A) := q∈Q q<t φ U q (A ∩ U r/2 ), 0 ≤ t < r/2.
Notice that Φ U t (A) is a measurable set, since each set in the countable union is measurable as we have proved before. Consequently,
Φ t (A) = U∈U Φ U t (A),
is a measurable set, as we wanted to show.
Definition 3.11.
Let {l x } be a F -arc length system. Then, we have a well defined homeomorphism
h x : F (x) → F, where F = R or F = S 1 , h x (x) = 0 2 ,
and such that, for any simple arc
γ : [0, 1] → F (x) we have l x (γ[0, 1]) = λ(h x (γ[0, 1])),
where λ denotes the Lebesgue measure on F. In particular λ(h x (γ[0, 1])) is the size of the interval h x (γ[0, 1]). We now define the measure λ x on F (x) given by:
λ x = (h −1 x ) * λ.
Note that if γ[0, 1] is a simple arc in F (x) then,
λ x (γ[0, 1]) = λ(h x (γ[0, 1])) = l x (γ[0, 1]).
Consequently, the measure λ x is a doubling measure 3 .
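As a concrete instance of Definition 3.11, the following Python sketch assumes the leaf is a circle of total length one embedded in R 2 and parametrized by arc length based at x, so that h x is the arc-length coordinate; pushing Lebesgue samples through h x −1 and measuring a simple arc recovers its l x -length, in line with λ x = (h x −1 ) * λ.

import numpy as np

rng = np.random.default_rng(1)
r = 1.0 / (2.0 * np.pi)      # radius so that the embedded leaf has total length 1

def h_inv(s):
    # h_x^{-1}: arc-length coordinate s (based at x) -> point on the embedded leaf.
    theta = s / r
    return np.stack([r * np.cos(theta), r * np.sin(theta)], axis=-1)

def arc_length_coord(p):
    # Recover the arc-length coordinate of a point on the leaf (inverse of h_inv).
    return (np.arctan2(p[..., 1], p[..., 0]) % (2.0 * np.pi)) * r

# lambda_x = (h_x^{-1})_* lambda: push Lebesgue samples on [0, 1) to the leaf.
pts = h_inv(rng.random(200_000))

# The simple arc with arc-length coordinates in [0.25, 0.65) has l_x-length 0.40,
# and the empirical lambda_x-mass of that arc matches it.
s = arc_length_coord(pts)
print(np.mean((s >= 0.25) & (s < 0.65)), 0.65 - 0.25)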
PROPERTIES OF NON-ATOMIC DISINTEGRATIONS OVER A CONTINUOUS ONE-DIMENSIONAL
FOLIATION
The proof of the main result of this paper follows from understanding the topological structure of supp µ U x , for a disintegration {µ U x } x∈U of µ(·|U) of µ on a local chart U. To this end we first need to understand the behavior, in terms of measure theory, of the map
(x, r) → µ U x (B d x (x, r)),
defined for a certain subset of U × R. This is the goal of this Section. Along the rest of the paper we assume the following:
• F is a f -invariant one dimensional continuous foliation, • U is a finite cover of M by local charts U of F such that U is still inside a local chart of F ,
• each U ∈ U is a continuity-domain of {d x }, • for each U ∈ U , {µ U x } is a disintegration of µ(·|U)
along the plaques F |U, • the disintegration of µ along F is not atomic, in particular, for each U ∈ U there exists a subset A U ⊂ U with µ(A U ) = 0 and for which
x / ∈ A U ⇒ µ U x is not atomic,
• r > 0 is any constant given by Proposition 3.9,
• {Ω x } x∈M a disintegration of µ along F and we denote ω x any representative of Ω x , x ∈ M.
We also fix the following notation: for any subset X ⊂ M, we denote by B X the Borel sigma algebra of X given by the topology induced by that of M. It is important to observe that, by definition, for any U ∈ U , the set A U is F |U-saturated in U.
Lemma 4.1.
For each 0 < r < r, U ∈ U and x ∈ U \ A U , the map
y → µ U x (B d x (y, r)),
is continuous when restricted to the subset F r ⊂ F |U(x) given by 3 Recall that given a metric space (X, d), a measure ν on X is said to be a doubling measure if there exists a constant Ω > 0 such that for any x ∈ X and any r > 0 we have ν(B(x, 2r)) ≤ Ω · ν (B(x, r)).
F r (x) = {y ∈ F |U(x) : B d x (y, r) ⊂ U}.
Proof. Take x ∈ U, U ∈ U . Let y n → y, with y n , y ∈ F r (x). We want to show that for r > 0,
lim n→∞ µ U x (B d x (y n , r)) = µ U x (B d x (y, r)).
Given any k ∈ N, since µ U x is not atomic, we have that
µ U x (∂B d x (y, r)) = 0 and µ U x (∂B d x (y n , r)) = 0, ∀n ∈ N,
where ∂B d x denotes the boundary of the set inside the leaf F (x). Now, let B n := B d x (y n , r)∆B d x (y, r), where Y∆Z denotes the symmetric difference of the sets Y and Z. From standard measure theory,
lim sup n→∞ µ U x (B n ) ≤ µ U x (lim sup n→∞ B n ).
Thus
lim sup n→∞ µ U x (B n ) ≤ µ U x ( ⋂ ∞ m=1 ⋃ n≥m B n ) ≤ µ U x (∂B d x (y, r)) = 0.
Therefore lim n→∞ µ U x (B d x (y, r) \ B d x (y n , r)) = lim n→∞ µ U x (B d x (y n , r) \ B d x (y, r)) = 0 and consequently lim n→∞ µ U x (B d x (y n , r)) = µ U x (B d x (y, r)), as we wanted to show.
Proposition 4.2.
Let U ∈ U and 0 < r < r. Consider (V, ϕ) a local chart inside U such that
x ∈ V ⇒ B d x (x, r) ⊂ U. Then, restricted to V \ A U the map x → µ U x (B d x (x, r)), is B V\A U -measurable, and consequently B U\A U -measurable as V ⊂ U.
Proof. If x and y belong to the same F -plaque in U then µ U x = µ U y . We already know that for all
Borel subset W ⊂ U x ∈ V → µ U x (W), is Borel measurable.
Consider the local chart given by the homeomorphism ϕ : V → (0, 1) × G, where G ⊂ R n−1 is an open subset. Setting g r : V → [0, ∞), g r (x) = µ U x (B d x (x, r)), we have g r • ϕ −1 (x 1 , x 2 ) = µ U ϕ −1 (x 1 ,x 2 ) (B d ϕ −1 (x 1 ,x 2 ) (ϕ −1 (x 1 , x 2 ), r)). Let us prove that this function is continuous in x 1 and Borel measurable in x 2 . By Lemma 4.1, restricted to V \ A U we already have the continuity of g r • ϕ −1 in the first coordinate, since fixing the second coordinate means we are evaluating the function on a single plaque where the conditional measure is non-atomic. Now, fix the first coordinate x 1 and consider the transversal T = {x 1 } × B n−1 1 (0). By Lemma 3.8, the set
S := x∈ϕ −1 (T) B d x (x, r),
is an open subset of M. Thus, y → µ U y (S) is a Borel measurable function in V, which implies that
g r • ϕ −1 (x 1 , ·) is B T -measurable. In particular, its restriction to T ∩ ϕ(V \ A U ) = T \ ϕ(A U ) is a B T\ϕ(A U ) -measurable map. But observe that for x ∈ T we have µ U x (S) = µ U x (B d x (x, r)).
Therefore, for x 1 fixed the map
x 2 ∈ G \ π 2 (ϕ(A U )) → µ U ϕ −1 (x 1 ,x 2 ) (B d ϕ −1 (x 1 ,x 2 ) (ϕ −1 (x 1 , x 2 ), r)) is B G\π 2 (ϕ(A U )) -measurable, where π 2 : (0, 1) × G → G is the projection onto the second coordinate. Consequently, g r • ϕ −1 restricted to ((0, 1) × G) \ ϕ(A U )
is a jointly measurable function with respect to the product sigma-algebra B (0,1) × B G\π 2 (ϕ(A U )) (see for example [1,Lemma 4.51]). As ϕ is a homeomorphism, we conclude that g r is B V\A U -measurable, as we wanted to show.
In the following Lemma, we prove that the subset of M consisting of all points x ∈ M for which there is a ball in F (x) with null µ U x measure, is a relatively Borel set.
Lemma 4.3. For each U ∈ U , the set
Z U = ⋃ x∈U\A U F |U(x) \ supp µ U x
is a B U\A U -measurable set.
Proof. First let us give a better formulation for the definition of Z U . Observe that
Z U = {x ∈ U \ A U : µ U x (I) = 0 for some open ball x ∈ I ⊂ F (x)}. Consider {q 1 , q 2 , . . .} an enumeration of Q ∩ [0, 1] and let U be the given finite family of local charts covering M. For each U ∈ U and i ∈ N, define φ U i : U i \ A U → R by φ U i (x) = µ U x (B d x (x, q i )), where U i = {x ∈ U : B d x (x, q i ) ⊂ U}.
Observe that we may cover U i with a countable number of local charts V j i ⊂ U i , j ∈ N and, by Proposition
4.2, φ U i |V j i is B V j i \A U -measurable for every j. In particular φ U i is B U i \A U -measurable for every i. On that account we have that Z U i := (φ U i ) −1 ({0}) ⊂ M is a B U i \A U -measurable subset, in particular, a B U\A U -measurable subset. It is not difficult to see that
(2) Z U = ⋃ ∞ i=1 Z U i .
Therefore Z U is a B U\A U -measurable subset as we wanted to show. Moreover, µ(Z U ) = 0.
Consider P the set given by
P = {x : ∃U, V ∈ U , x ∈ U ∩ V, µ U x (·|U ∩ V) ≁ µ V x (·|U ∩ V)},
that is, P is the set of points x for which there exists two local charts U and V in U , both containing x, where the respective conditional measures at the plaque of x, µ U x and µ V x , are not equivalent on the intersection F |U(x) ∩ F |V(x). In particular this set has zero measure by Proposition 2.1. Set
M := M \ ( ⋃ U∈U (Z U ∪ A U ) ∪ P ). Let M 0 := ⋂ n∈Z f n ( M).
As µ( M) = 1, we have µ(M 0 ) = 1. For each x ∈ M 0 , we will denote by µ x the measure on B d x (x, r) given by the conditional µ U x , for some U ∈ U with x ∈ U, normalized to give weight exactly one to B d x (x, r), that is, for a measurable F ⊂ F |U(x)
(3) µ x (F) = µ U x (F|B d x (x, r)).
Given any y ∈ B d x (x, r) ∩ M 0 , the measures µ y and µ x are proportional to each other at the intersection B d x (x, r) ∩ B d y (y, r) by Proposition 2.1, that is, there exists a constant β for which µ y = β · µ x restricted to B d x (x, r) ∩ B d y (y, r).
In particular, evaluating both sides at B d x (x, r) ∩ B d x (y, r) we see that
β · µ x (B d x (x, r) ∩ B d x (y, r)) = µ y (B d x (x, r) ∩ B d x (y, r)) ⇒ β = µ y (B d x (x, r) ∩ B d x (y, r)) µ x (B d x (x, r) ∩ B d x (y, r)
) .
Corollary 4.4. For each 0 < r < r and x ∈ M 0 , the map
y ∈ B d x (x, r) ∩ M 0 → µ y (B d x (y, r)),
is continuous.
Proof. For a certain 0 < r < r fixed, take any x ∈ M 0 . Let y ∈ B d x (x, r) ∩ M 0 and U ∈ U with B d x (y, r) ⊂ U, take y n ∈ B d x (x, r) ∩ M 0 with y n → y as n → ∞ and B d x (y n , r) ⊂ U. By definition, µ y = µ U y (·|B d x (y, r)), µ y n = µ U y n (·|B d x (y n , r)).
Therefore for n ∈ N such that B d x (y, r) ⊂ B d x (y n , r) and B d x (y n , r) ⊂ B d x (y, r), (y n , r))
(4) µ y n (B d x (y n , r)) = µ U y n (B d x (y n , r)) µ U y n (B d x (y n , r)) = µ U y (B d x (y n , r)) µ U y (B d x
. (y, r)) and µ U y (B d x (y n , r)) → µ U y (B d x (y, r)) as n → ∞. Therefore µ y n (B d x (y n , r)) → µ y (B d x (y, r)) as we wanted to show.
By Lemma 4.1 we have µ U y (B d x (y n , r)) → µ U y (B d x
Corollary 4.5.
For each x ∈ M 0 , the map
r ∈ [0, r] → µ x (B d x (x, r)),
is continuous. Furthermore the map
(5) (x, r) ∈ M 0 × [0, r] → µ x (B d x (x, r)), is jointly measurable. Proof. Let x ∈ M 0 , first, let us prove that r ∈ [0, r] → µ x (B d x (x, r)) is a continuous function. Let 0 = r < r and r n ∈ [0, r] ց r (if r = r the argument is analogous), hence µ x (B d x (x, r n )) = µ x (B d x (x, r)) + µ x (B d x (x, r n ) \ B d x (x, r)). As µ x is non-atomic we have lim n→∞ µ x (B d x (x, r n ) \ B d x (x, r)) = 0. Then, µ x (B d x (x, r n )) → µ x (B d x (x, r)),
showing the first part of the statement. Let us show the second statement. For each x ∈ M 0 , let x ∈ V x a local chart with
y ∈ V x ⇒ B d y (y, r) ⊂ U x , for some U x ∈ U .
As M is compact we may cover M with a finite number of such local charts, say V 1 , V 2 , . . . , V l and call U 1 , U 2 , . . . , U l the associated local charts in U . For any j, consider y (y, r)).
y ∈ V j → µ y (B d
Observe that
µ y (B d y (y, r)) = µ U j y (B d y (y, r)) µ U j y (B d y (y, r)) . Therefore, by Proposition 4.2, y ∈ V j ∩ M 0 → µ y (B d y (y, r)) is a B U j \A U j -measurable map. As j is arbitrary, y ∈ M 0 → µ y (B d x (y, r)) is a B U∩M 0 -measurable map.
Thus, the map given by (5) is jointly measurable as it is continuous in the first coordinate and B M 0 -measurable in the second.
PROOF OF THE MAIN THEOREM
First of all, we can assume that (M, µ) is an atom-less probability space and that the disintegration of µ along F is not atomic, otherwise there would be nothing to do. We also consider the same objects fixed in the beginning of Section 4. Now we define the distortion of the disintegration of µ relative to the metric system in each leaf F (x) by the following.

Definition 5.1. Let (M, µ) be a probability space and F be a continuous one-dimensional foliation of M. Let {µ x } be the system of conditional measures along F given by (3), for x ∈ M 0 , and let d = {d x } be the F -metric system induced by the arc-length system {l x } as in Definition 3.3. We define the µ-distortion of the F -metric system by
∆(x) = lim sup ε→0 µ x (B d x (x, ε)) / (2ε) if x ∈ M 0 , and ∆(x) = 0 if x ∉ M 0 .
Recall that B d x (x, ε) is the ball inside F (x), centered in the point x and with radius ε with respect to the metric d x . Observe that, a priori, ∆(x) is a measurable function but it is not immediately true that ∆(x) < ∞ for µ-almost every x. Also note that,
f * µ x = µ f (x) and f (B d x (x, ε)) = B d f (x) ( f (x), ε), since d f (x) ( f (x), f (y)) = d x (x, y),
we conclude that ∆(x) is f -invariant map. By ergodicity of f it follows that ∆(x) is constant almost everywhere, let us call that constant by ∆, that is for almost every x:
(6) ∆(x) = ∆.
Let M ⊂ M 0 be a Borel f -invariant full measure set of points x for which (6) occurs.
5.1. Technical Lemmas for the case ∆ = ∞.

Lemma 5.2. If ∆ = ∞, there exists a sequence ε k → 0, as k → +∞, and a full measure subset R ∞ ⊂ M such that
i) R ∞ is f -invariant;
ii) for all x ∈ R ∞ we have
(7) µ x (B d x (x, ε k )) / (2ε k ) ≥ k.
Proof. Let k ∈ N * arbitrary. Since ∆(x) = ∆ for every x ∈ M, define
ε k (x) := sup { ε ≤ 1 : µ x (B d x (x, ε)) / (2ε) ≥ k }, x ∈ M.
Claim:, The function ε k (x) is measurable for all k ∈ N.
Proof. Define
w(x, ε) = µ x (B d x (x, ε)) 2ε .
By Corollary 4.5, for any x ∈ M 0 the function w(x, ·) : (0, r) → (0, ∞) is continuous and, for 0 < ε < r fixed the function w(·, ε) : M 0 → (0, ∞) is measurable function by Proposition 4.2. Given any k ∈ N, β > 0, the continuity of w(x, ·) implies that
ε −1 k ((0, β)) ={x : ε k (x) ∈ (0, β)} = β≤r≤1 w(·, r) −1 ([0, k)) = β≤r≤1,r∈Q w(·, r) −1 ([0, k)).
Therefore ε −1 k ((0, β)) is measurable, as it is a countable intersection of measurable subsets of M 0 , and consequently ε k is a measurable function for every k.
Note that ε k (x) is f -invariant. Thus, by ergodicity, for every k ∈ N the function ε k is constant almost everywhere, let R ∞ k be a full measure set such that ε k (x) is constant equal to ε k . It is easy to see that the sequence ε k goes to 0 as k goes to infinity. Take R ∞ := +∞ k=1 R ∞ k . Since each R ∞ k has full measure, R ∞ has full measure and clearly satisfies what we want for the sequence {ε k } k . Finally, take R ∞ = i∈Z f i ( R ∞ ). The set R ∞ is f -invariant, has full measure and satisfies (i) and (ii).
We now set, for each U ∈ U , x ∈ U \ (Z U ∪ A U ),
Π ∞ x,U := y ∈ F |U(x) \ Z U : 1 2ε k · µ U y (B d x (y, ε k )) µ U y (B d x (y, r)) ≥ k, ∀k with B d x (y, ε k ) ⊂ U , and Π ∞ U := x∈U\(Z U ∪A U ) Π ∞ x,U . Observe that if x ∈ R ∞ then x ∈ Π ∞ x,U therefore R ∞ ∩ U ⊂ Π ∞ U . In particular U \ Π ∞ U ⊂ U \ R ∞ . Since µ(R ∞ ) = 1 then Π ∞ U is measurable.
Also we can clearly assume that ε k is strictly decreasing and ε 1 < r.
Lemma 5.3. For every x ∈ R ∞ ∩ U, consider δ = δ(x) > 0 for which B d x [x, 2 · δ + r] ⊂ U. The set Π ∞ x,U ∩ B d x [x, δ]
is a closed subset on the plaque F |U(x).
Proof. Let y n → y, y n ∈ Π ∞ x,U ∩ B d x [x, δ], y ∈ F |U(x). In particular, B d x (y n , r) ⊂ B d x (x, δ + r) ⊂ U and by taking the limit over n we also have B d x (y, r) ⊂ B d x (x, δ + r) ⊂ U. Furthermore, it is clear that y ∈ B d x [x, δ] since this is a closed set. By Lemma 4.1, for each k ∈ N the map
y ∈ B d x [x, δ] ⊂ F ε k (x) → µ U y (B d x (y, ε k )),
is continuous and the same holds for
y ∈ B d x [x, δ] ⊂ F r (x) → µ U y (B d x (y, r)).
Thus,
lim n→∞ µ U y n (B d x (y n , ε k )) µ U y n (B d x (y n , r)) = µ U y (B d x (y, ε k )) µ U y (B d x (y, r))
, k ≥ 1.
which implies that for all k ≥ 1 we have
µ U y (B d x (y, ε k )) 2ε k · µ U y (B d x (y, r)) = lim n→∞ µ U y n (B d x (y n , ε k )) 2ε k · µ U y n (B d x (y n , r)) ≥ k,
that is, y ∈ Π ∞ x,U as we wanted.
Now we consider the following sets
D ∞ U := F |U(Π ∞ U ) \ (F |U)(Z U ).
We claim that D ∞ U is a measurable subset. In fact, consider the natural projection π : U → U/F , as U is an open subset of a manifold (in particular it is a Polish space) we have that U/F is a Polish space with the quotient topology.
Since Z U = χ U ∩ (U \ A U ), where χ U is a Borel subset and U \ A U is F |U-saturated, then π(Z U ) = π(χ U ) ∩ π(U \ A U ),
where π(χ U ) is a Souslin set 5 by [5, Corollary 1.10.9], therefore
F |(Z U ) = π −1 (π(χ U ) ∩ π(U \ A U )) = π −1 (π(χ U )) ∩ (U \ A U ), is a measurable set. Since R ∞ ∩ U ⊂ F |U(Π ∞ U ) and µ(R ∞ ) = 1 we have that F |U(Π ∞ U ) is a measurable subset of U, this implies that D ∞ U = F |U(Π ∞ U ) \ F |U(Z U )
is a measurable set as we wanted to show. Since D ∞ U is a measurable, by ergodicity of f the f -invariant set:
D ∞ := n∈Z,U∈U f n D ∞ U ,
must satisfy either µ(D ∞ ) = 0 or µ(D ∞ ) = 1.
5.2. Technical Lemmas for the case ∆ < ∞.
Lemma 5.4. If ∆ < ∞, there exists a sequence ε k → 0, as k → +∞, and a full measure subset R ⊂ M such that
i) R is f -invariant;
ii) for every x ∈ R,
(8) | µ x (B d x (x, ε k )) / (2ε k ) − ∆ | ≤ 1/k;
Proof. The proof is very similar to the proof of Lemma 5.2. Let k ∈ N * arbitrary. Since ∆(x) = ∆ for every x ∈ M define
ε k (x) := sup ε : µ x (B d x (x, ε)) 2ε − ∆ ≤ 1 k .
Observe that such ε k exists because since the lim sup is ∆ we can take a sequence ε l → 0 such that the ratio given approaches ∆.
Claim:
The function ε k (x) is measurable for all k ∈ N.
Proof. Define
w(x, ε) = µ x (B d x (x, ε)) 2ε .
As observed in the proof of Lemma 5.2, r → w(x, r) is continuous and x → w(x, r) is measurable. Given any k ∈ N, k > 0, the continuity of w(x, ·) implies that
ε −1 k ((0, β)) ={x : ε k (x) ∈ (0, β)} = β≤r≤1 w(·, r) −1 ∆ + 1 k , ∞ ∪ w(·, r) −1 0, ∆ − 1 k = β≤r≤1,r∈Q w(·, r) −1 ∆ + 1 k , ∞ ∪ w(·, r) −1 0, ∆ − 1 k .
Therefore ε −1 k ((0, β)) is measurable, as it is a countable intersection of measurable sets, and consequently ε k is a measurable function for every k.
As ε k (x) is f -invariant, by ergodicity we may take the full measure set R k where ε k (x) is constant equal to ε k . The sequence ε k goes to 0 as k goes to infinity, so we setR := +∞ k=1 R k . Since each R k has full measure,R has full measure and clearly satisfies what we want for the sequence {ε k } k . The set R = i∈Z f i ( R) is f -invariant, has full measure and satisfies (i) and (ii) as we wanted.
Similar to the definitions made in section 5.1 we set
Π U := x∈U\(Z U ∪A U ) Π x,U . where Π x,U := y ∈ F |U(x) \ Z U : 1 2ε k · µ U y (B d x (y, ε k )) µ U y (B d x (y, r)) − ∆ ≤ 1 k , ∀k with B d x (y, r) ⊂ U . Lemma 5.5. For every x ∈ R ∩ U, consider δ(x) > 0 for which B d x [x, 2 · δ + r] ⊂ U. The set Π x,U ∩ B d x [x, δ]
is a closed subset on the plaque F |U(x).
Proof. Identical to the proof of Lemma 5.3.
Similar to the definition made in section 5.1, we consider the set
D U := F |U(Π U ) \ (F |U)(Z U ), and D := n∈Z,U∈U f n D U .
Similarly D U is measurable for all U ∈ U and again by ergodicity we have µ(D) = 0 or µ(D) = 1.
After proving the auxiliary lemmas for the cases ∆ = ∞ and ∆ < ∞, and obtaining the sets D ∞ and D, we divide the next part of the proof into four cases.

5.3. Case 1: ∆ < ∞ and µ(D) = 0. In this case we will show that the support of the conditional measures is a Cantor set for almost every x ∈ M.
As fixed in the beginning , consider {ω x } x the disintegration of µ along F and consider G a full measure F -saturated set of points where
f j * ω x = ω f j (x) , ∀j ∈ Z. Let G U := {x ∈ U : µ U x = ω x (·|F |U(x))} ∩ {x : µ U
x is non-atomic}. Consider:
• Φ U 1/n (Z U ) := {x ∈ U : d x (x, Z U ) < 1/n}, • E n = j G ∩ f j (Φ U 1/n (Z U ) ∩ G U ). As Z U = χ U ∩ (U \ A U ) we have Φ U 1/n (Z U ) = {x ∈ M : d x (x, χ U ) < 1/n} ∩ (U \ A U ),
which is measurable for n ≥n, for somen ∈ N not depending on U, by Lemma 3.10 since χ U is a Borel subset of U. Therefore the set of the second item is a f -invariant measurable subset of M, thus it either has full or null measure. Now, again we separate two cases:
• Case 3.1: Assume that for all n ≥n, µ(E n ) = 1. Then,
E U n = G U ∩ j G ∩ f j (Φ U 1/n (Z U ) ∩ G U ) ,
has full measure in U for every n ≥n. For z ∈ E U = n≥n E U n , let n 0 > 0 such that for n ≥ n 0 ≥n we have B d x (z, 1/n 0 ) ⊂ U. For n ≥ n 0 , let j with
f −j (z) ∈ Φ U 1/n (Z U ) ∩ G U , and p ∈ Z U with d x (p, f −j (z)) < 1/n. Then d x ( f j (p), z) < 1/n, which implies f j (p) ∈ U, and if µ U p (B d x (p, δ)) = 0, for δ small, then since µ U p ∼ ω f −j (z) (because f −j (z) ∈ G U ) and z ∈ G, it follows that ω z ( f −j (I p )) = ω f j (z) (I p ) = 0 and z ∈ G u , thus µ U z ( f −j (I p )) = 0. Therefore, f j (p) ∈ Z U and z ∈ Φ 1/n (Z U ). That is, the set Z U ∩ F |U(z) is dense in F |U(z), for almost every z ∈ E U . Claim: For x ∈ E U then C x := F |U(x) Z U is a Cantor set in F |U(x).
Proof. To prove that C x is a Cantor set, we will show that this set is a nowhere dense and perfect set. Since Z U ∩ F |U(x) is a dense and open set in F |U(x) we have that C x is a nowhere dense and closed set. Now let us see that C x has no isolated points. Let y ∈ C x , suppose that there exist r > 0 with B d x (y, r) ⊂ U and B d x (y, r) ∩ C x = {y}. Since y ∈ C x ,
0 < µ U x (B d x (y, r)) = µ U x (B d x (y, r) \ C x ) + µ U x ({y}) = µ U x ({y}),
thus µ U x ({y}) > 0, which is a contradiction since x / ∈ A U . Therefore C x is indeed a Cantor subset.
Thus, for almost every x ∈ M and any local chart U the conditional measures µ U x is supported in the Cantor set C x . We remark that for the Claim we did not use the fact that ∆ = ∞, thus the same argument will work when ∆ < ∞.
• Case 3.2: If, on the other hand, there exists N 0 ∈ N with N 0 ≥n such that µ(E N 0 ) = 0, then µ(E U N 0 ) = 0 and moreover µ(E U N ) = 0, for any N ≥ N 0 . Since
G U ∩ G ∩ ϕ 1/N (Z U ) ⊂ E U N 0 ,
we have
µ(Φ U 1/N (Z U )) = µ(G U ∩ G ∩ Φ 1/N (Z U )) ≤ µ(E U n ) = 0, ∀N ≥ N 0 .
In particular, for almost every x ∈ U we have (9) µ U x (Φ U 1/N (Z U ∩ F |U(x))) = 0.
As Φ U 1/N (Z U ∩ F |U(x))) is an open subset of F |U(x), by (9) we have, for almost every x ∈ U, Φ U 1/N (Z U ∩ F |U(x))) ⊂ Z U ∩ F |U(x)). But clearly the other continence holds, thus Z U ∩ F |U(x) = Φ U 1/N (Z U ∩ F |U(x))) and this implies Z U ∩ F |U(x)) = F |U(x). As this happens for almost every x ∈ U we fall in contradiction with the fact that µ(Z U ∩ U) = 0. Therefore this case does not happen. 5.4. Case 2: ∆ = ∞ and µ(D ∞ ) = 0. In particular for every U ∈ U we must have µ(D ∞ U ) = 0 which implies µ(F |U(Z U )) = µ(U). In this case we will proceed very similarly to the previous Case. For U ∈ U , consider the sets E n and E U n defined in Case 1. Again, if µ(E n ) = 1 for every n ∈ Z, then there exists a full measure subset of U, namely E U , such that z ∈ E U implies Z U ∩ F |U(z) is dense is F |U(z). Hence, as showed by the Claim in Case 1, it follows that the support of µ U x is a Cantor subset of the plaque F |U(x) for almost every x ∈ U. Otherwise, if µ(E N 0 ) = 0 for some N 0 ∈ N, then as in Case 1 we conclude that Z U ∩ F |U(x)) = F |U(x) contradicting the fact that µ(Z U ∩ U) = 0, thus this case does not occur. 5.5. Case 3: ∆ = ∞ and µ(D ∞ ) = 1. Let us prove that this case cannot occur. Since µ(D ∞ ) = 1, there exists U ∈ U with µ(D ∞ U ) > 0. Since µ(F |U(Π ∞ U )) = µ(U), for almost every pointx ∈ D ∞ U we have (10) µ Ū x (Π ∞ U ∩ F |U(x)) = 1.
Take any such typicalx and consider x ∈ F |U(x) ∩ Π ∞ U , in particular B d x (x, r) ⊂ U. Also,
Π ∞ x,U ∩ B d x [x, δ] is closed in F |U(x) for some δ > 0 small, if there exists z ∈ B d x [x, δ] \ Π ∞ x,U ∩ B d x [x, δ] then for some δ 2 > 0 we have B d x (z, δ 2 ) ⊂ B d x [x, δ] \ Π ∞ x,U ∩ B d x [x, δ]
and µ U x (B d x (z, δ 2 )) = 0 by (10). But this cannot happens since this would imply z ∈ Z U and, consequently, F |U(Z U ) ∩ D ∞ U = ∅, falling in contradiction with the definition of D ∞ U . Therefore
Π ∞ x,U ∩ B d x [x, δ] = B d x [x, δ].
Consider 0 < r 0 < δ small enough so that B d x (x, r + 2 · r 0 ) ⊂ U. By hypothesis, for k with ε k < r 0 , by Lemma 3.4, we can take ⌊r 0 /ε k ⌋ disjoint balls of radius ε k inside B d x (x, r 0 ), say with center a 1 , a 2 , . . . , a ⌊r 0 /ε k ⌋ . Then
⌊r 0 /ε k ⌋ ∑ i=1 µ U x (B d x (a i , ε k )) ≤ µ U x (B d x (x, r 0 )) ⇒ ⌊r 0 /ε k ⌋ ∑ i=1 µ U x (B d x (a i , ε k )) µ U x (B d x (x, r)) ≤ µ U x (B d x (x, r 0 )) µ U x (B d x (x, r))
B d x (x, r) with balls of radius ε k . Let α i := Then, β 1 ∆ · λ x ≤ µ x , λ x a.e y ∈ I.
In particular, if µ U x (E) = 0 then, for any compact subset I ⊂ {y ∈ F |U(x) : d x (y, ∂F |U(x)) ≥ r} we have λ x (E ∩ I) = 0. Since F (x) may be written as a countable union of increasing compact subsets, we conclude that λ x (E ∩ {y ∈ F |U(x) : d x (y, ∂F |U(x)) ≥ r}) = 0. Again, since r may be taken to be arbitrarily small we conclude that λ x (E) = 0, thus λ x << µ U x as we wanted to show.
FUNDING
Gabriel Ponce has received research support from FAPESP, grants #2018/25624-0 and #2022/07762-2. Régis Varão also received research support from FAPESP, grant #2016/22475-9. Marcielis Espitia Noriega received a Ph.D. fellowship from CAPES, Finance Code 001. This study was also partially supported by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) and CNPq, Finance Code 001.
Here we are using the identification S 1 = [0, 1]/ ∼ where 0 ∼ 1, thus the point 0 stands for the equivalence class of 0 in S 1 .
see Lemma 4.3 and recall that µ(M 0 ) = 1.
A subset of a Polish space Y is called a Souslin set, or an analytical set, if it is the image of a Polish space X by a continuous map from X to Y.
By Lemma 4.1 it is continuous, hence there exists η > 0 such that. Taking k → ∞, the left side goes to infinity, from where we conclude that µ U x (B d x (x, r 0 )) = ∞, yielding a contradiction. Thus, this case does not occur.

5.6. Case 4: ∆ < ∞ and µ(D) = 1. We will prove that if this case occurs then for almost every x ∈ U, U ∈ U , the conditional measure in F |U(x) is equivalent to the measure λ x given in Definition 3.11.

Lemma 5.6. The constant ∆ is bounded away from zero and

Proof. Let y ∈ D. Then, for some n 0 ∈ Z and U ∈ U we have f n 0 (y) ∈ D U . Call x = f n 0 (y). As

Therefore we conclude (by the same argument used in Case 3, Section 5.5) that

For any given k ∈ N * , U ∈ U and x ∈ Π U we have

Given ε > 0 take k 0 ∈ N such that k −1 0 < ε. Again, since {d x } is a F -metric system, given a constant r > 0 we need at most s(k) = ⌊r/ε k ⌋ + 1 points, say a 1 , a 2 , ..., a s(k) , to cover the ball B d x (x, r). Again by continuity (see Lemma 4.1) there exists β > 0 such that α i ≤ β for all i. Therefore

Since lim s(k)ε k = r, we have that β · s(k) ε k k goes to zero as k → ∞, and we have

Therefore µ U x << λ x when restricted to {y ∈ F |U(x) : d x (y, ∂F |U(x)) ≥ r} and ∆ > 0. As r can be taken to be arbitrarily small in the beginning, it follows that µ U x << λ x as we wanted to show.

Next we are able to conclude that µ U x is equivalent to the measure λ x .

Proof. By Lemma 5.6 we know that µ U x << λ x . Since λ x is a doubling measure, the Radon-Nikodym derivative dµ U x /dλ x exists and is given at λ x -almost every point y ∈ {y ∈ F |U(x) : d x (y, ∂F |U(x)) ≥ r} by

In particular, by taking the limit along the subsequence ε k , k → ∞, we conclude that, λ x -a.e. y ∈ {y ∈ F |U(x) : d x (y, ∂F |U(x)) ≥ r}, which implies dµ U x /dλ x (y) = β(y) · ∆, λ x -a.e. y ∈ {y ∈ F |U(x) : d x (y, ∂F |U(x)) ≥ r}, where β(y) = µ U x (B d x (y, r)). Since β is a continuous function, given any compact I ⊂ {y ∈ F |U(x) : d x (y, ∂F |U(x)) ≥ r} we have β 1 ∆ ≤ dµ x /dλ x (y) ≤ β 2 ∆, λ x -a.e. y ∈ I.
DEPARTAMENTO DE MATEMÁTICA, ESTATÍSTICA E COMPUTAÇÃO CIENTÍFICA, IMECC-UNICAMP, CAMPINAS-SP, BRAZIL. Email address: [email protected]
DEPARTAMENTO DE MATEMÁTICA, ESTATÍSTICA E COMPUTAÇÃO CIENTÍFICA, IMECC-UNICAMP, CAMPINAS-SP, BRAZIL. Email address: [email protected]
DEPARTAMENTO DE MATEMÁTICA, ESTATÍSTICA E COMPUTAÇÃO CIENTÍFICA, IMECC-UNICAMP, CAMPINAS-SP, BRAZIL. Email address: [email protected]
| []
|
[
"A THEORY TO DESCRIBE EMERGENT PROPERTIES OF COMPOSITE F-ACTIN AND VIMENTIN NETWORKS",
"A THEORY TO DESCRIBE EMERGENT PROPERTIES OF COMPOSITE F-ACTIN AND VIMENTIN NETWORKS"
]
| [
"Horacio Lopez-Menendez ",
"Libardo Gonzalez-Torres "
]
| []
| []
| Synthetic biopolymer gels attract great interest as biomaterials that mimic many biological scaffolding structures, and can contribute to a better understanding of cytoskeleton-like structural building blocks and soft nanotechnology. In particular, semiflexible F-actin and vimentin intermediate filaments (IF) form complex networks and are key regulators of cellular stiffness. While the mechanics of F-actin networks or IF have already been characterised, the interaction between these two networks is largely unknown. Experimental studies using large-deformation rheology show that co-polymerisation of F-actin and IF can produce composite networks either stronger or weaker than pure F-actin networks. We verify these effects theoretically by developing a model within the framework of nonlinear continuum mechanics, in which we define a free energy functional considering the entropic elasticity of semiflexible networks with transient crosslinks together with an energetic term describing the interaction parameter that couples the two networks. We validate the theoretical model against measurements performed by Jensen et al. in large-deformation rheological experiments with different concentrations of actin and vimentin. | 10.1016/j.jmps.2019.03.017 | [
"https://arxiv.org/pdf/1811.05576v1.pdf"
]
| 53,687,472 | 1811.05576 | 74fe3c869e0e962137974c009efe412c61f2f122 |
A THEORY TO DESCRIBE EMERGENT PROPERTIES OF COMPOSITE F-ACTIN AND VIMENTIN NETWORKS
Horacio Lopez-Menendez
Libardo Gonzalez-Torres
A THEORY TO DESCRIBE EMERGENT PROPERTIES OF COMPOSITE F-ACTIN AND VIMENTIN NETWORKS
Synthetic biopolymer gels attract great interest as biomaterials that mimic many biological scaffolding structures, and can contribute to a better understanding of cytoskeleton-like structural building blocks and soft nanotechnology. In particular, semiflexible F-actin and vimentin intermediate filaments (IF) form complex networks and are key regulators of cellular stiffness. While the mechanics of F-actin networks or IF have already been characterised, the interaction between these two networks is largely unknown. Experimental studies using large-deformation rheology show that co-polymerisation of F-actin and IF can produce composite networks either stronger or weaker than pure F-actin networks. We verify these effects theoretically by developing a model within the framework of nonlinear continuum mechanics, in which we define a free energy functional considering the entropic elasticity of semiflexible networks with transient crosslinks together with an energetic term describing the interaction parameter that couples the two networks. We validate the theoretical model against measurements performed by Jensen et al. in large-deformation rheological experiments with different concentrations of actin and vimentin.
Introduction
The mechanical scaffolding of the cell, the cytoskeleton, is defined by biopolymeric structures such as F-actin, microtubules, and intermediate filaments (IF) that create networks which are critical in determining the mechanical properties of cells. The cytoskeleton thus carries out many mechanical duties such as mechanosensing, motility, contraction, division and extrusion, and its dysfunctions are strongly associated with several pathological conditions. In vivo, its organisation ranges from dense amorphous networks to well-organised bundled arrays. These assemblies are very dynamic, evolving by non-equilibrium actin polymerisation/depolymerisation and by active forces such as those generated by myosin motors. An ideal system for such studies is the in vitro network, as it provides a well-controlled environment. Previous in vitro studies have reported the mechanics of either single filaments [1,2] or networks comprised of a single biopolymer species [3,4,5]. Complementing the advances obtained with in vitro networks, a large number of theoretical and computational models have been developed, providing new ways of thinking about cellular mechanics. In this sense, microstructural approaches based on the worm-like chain model give an excellent description of actin mechanics [6,7,8,9,10]. Furthermore, several computational efforts evaluating large-scale fibre models have recently made substantial progress, but they are still limited to passive situations that do not consider the internal stresses due to entanglements and polymerisation dynamics [11,12]. On the other hand, in the context of hydrogels, much relevant work has been done on interpenetrating polymer networks (IPN), which consist of two or more polymer networks, at least one of which is polymerised and/or crosslinked in the immediate presence of the other; the polymer networks are interlaced on a molecular scale but not covalently bonded to each other. Above the glass transition temperature, IPN can achieve large deformations and manifest high toughness, the Mullins effect and necking instabilities [13,14,15]. To improve the understanding of the micromechanics of IPN, constitutive models of interpenetrating networks have also been proposed [16,17].
Nevertheless, studies combining F-actin and IF are few, considering that together they represent the majority of the intracellular network [18]. Such studies are particularly interesting because the co-polymerisation of the two networks shapes a resultant structural state strongly modified by alterations in assembly kinetics and steric constraints, where the presence of an IF network is likely to alter the actin assembly [19,20]. Thus, a deep understanding of the emergent behaviour of composite networks will provide better ways to control and build complex structures. In this regard, Jensen et al. prepared a crosslinked F-actin network interpenetrated with a vimentin IF network and used bulk rheology to investigate the composite network mechanics in both the linear and nonlinear regimes. They found that co-polymerisation with vimentin strengthens F-actin networks when actin crosslinks are abundant, as expected from the overall increase in the amount of polymer in the network. Unexpectedly, they also found that the mechanical response of the F-actin networks is weakened by co-polymerisation with vimentin when the F-actin crosslinking density is low compared to the network mesh size. Based on the changes in the network elasticity, the yield stress and the strain stiffening, they suggest that this surprising emergent response comes from steric constraints on F-actin by vimentin (IF), promoting a lower degree of F-actin crosslinking in the final network.
The aim of this work is to develop a mechanical model capable of explaining the rheological experiments performed by Jensen et al. [21]. Interestingly, for the range of concentrations explored in the reported experiments, the vimentin network plays a small role in setting the mechanical properties of the composite network, showing a high flexibility; nevertheless, it plays a significant role in setting physical crosslinks or steric constraints on the actin network. Accordingly, the main component of the structural mechanics is the crosslinked F-actin network, and we therefore develop a model that defines an effective actin network condensing the alteration of its structure into its main physical variables. To do so, we formulate a mathematical model within the framework of nonlinear continuum mechanics, describing the semiflexible filament by a worm-like chain following the Blundell-Terentjev formalism [22], and homogenising the F-actin network with the 3-chain model as implemented by Meng et al. [23,8]. On the basis of this model, we introduce the dynamic effect of the crosslinks in order to capture the strengthening-weakening transition manifested by the network [7,24,25]. Next, to capture the effects associated with the F-actin/vimentin interaction, we propose an energy term based on the Landau model of a phenomenological continuous phase transition, defining an interaction parameter that captures the alteration of the F-actin network due to the interaction [26,25]. Finally, we validate the model with the experimental data of Jensen et al. [21], and discuss the results and future work.
Methods
In order to describe the theoretical constitutive model we first introduce the basic results of the framework of non-linear continuum mechanics.

Basic results of continuum mechanics. Let B_0 be a continuum body defined as a set of points in an assumed reference configuration. Denote by χ : B_0 → R³ the continuously differentiable, one-to-one mapping (with inverse χ⁻¹) which puts B_0 into correspondence with some region B, the deformed configuration, in Euclidean space. This one-to-one mapping χ transforms a material point X ∈ B_0 to a position x = χ(X) ∈ B in the deformed configuration.
The deformation gradient F is defined as

F := ∂χ(X)/∂X,  (1)

with J(X) = det(F) > 0 the local volume ratio. It is sometimes useful to consider the multiplicative split of F,

F = (J^{1/3} 1) F̄,  (2)

into dilatational and distortional (isochoric) parts, where 1 is the second-order identity tensor. Note that det(F̄) = 1. From this, it is now possible to define the right and left Cauchy-Green deformation tensors, C and b respectively, and their corresponding isochoric counterparts C̄ and b̄:

C = F^T F = J^{2/3} C̄,  C̄ = F̄^T F̄,  b = F F^T = J^{2/3} b̄,  b̄ = F̄ F̄^T.  (3)
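As a purely illustrative check of these kinematic definitions (not part of the original paper), the short Python sketch below builds the deformation gradient of a simple shear of amount γ, the loading used later in the Results, forms its isochoric part, the Cauchy-Green tensors, and the invariants introduced just below in Eq. 4; the helper name and the numerical value of γ are our own choices.

```python
import numpy as np

def shear_kinematics(gamma):
    """Deformation gradient of a simple shear and the derived quantities of Eqs. 1-4."""
    F = np.array([[1.0, 0.0, gamma],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
    J = np.linalg.det(F)                # local volume ratio (= 1 for simple shear)
    F_bar = J ** (-1.0 / 3.0) * F       # isochoric (distortional) part, det(F_bar) = 1
    C_bar = F_bar.T @ F_bar             # isochoric right Cauchy-Green tensor
    b_bar = F_bar @ F_bar.T             # isochoric left Cauchy-Green tensor
    I1 = np.trace(C_bar)
    I2 = 0.5 * (np.trace(C_bar) ** 2 - np.trace(C_bar @ C_bar))
    I3 = np.linalg.det(C_bar)
    return F_bar, C_bar, b_bar, (I1, I2, I3)

# For simple shear one recovers I1 = I2 = 3 + gamma^2 and I3 = 1.
print(shear_kinematics(0.5)[-1])
```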
For a hyperelastic material, the stress at a point x = χ(X) is only a function of the deformation gradient F at that point; a change in stress arises only from a change in configuration. In addition, for isothermal and reversible processes there exists a scalar function, a strain energy function (SEF) Ψ, from which the hyperelastic constitutive equations at each point X can be derived. For materials with a particular symmetry group, the dependence of Ψ on the deformation gradient is affected by the symmetry group itself. Further, Spencer [27] showed that the irreducible integrity bases for the symmetric second-order tensors C and a_0 ⊗ a_0 correspond to four invariants:

I_1 = tr C,  I_2 = (1/2)[(tr C)² − tr C²],  I_3 = det C = 1,  (4)

Invariants I_1, I_2, I_3 are the standard invariants of the Cauchy-Green deformation tensor and are associated with the isotropic material behaviour. Invariant I_4 arises from the anisotropy introduced by the remodelling. Next, a representation of quasi-incompressible elasticity was proposed in which the SEF takes an uncoupled form whose dilatational and deviatoric parts are such that
Ψ(X, C, a_0) = U(J) + Ψ̄(X, Ī_1, Ī_2, Ī_4),  (5)

where Ī_k, k = 1, . . . , 4, are the invariants of the isochoric Cauchy-Green tensor C̄ (note that Ī_3 = 1). In the developments of the next section we use a SEF of the form given in Eq. 5. For a hyperelastic material with a SEF Ψ defined as above, the second Piola-Kirchhoff stress can be written as

S = 2 ∂Ψ/∂C = p J C⁻¹ + 2 J^{−2/3} DEV[∂Ψ̄/∂C̄],  (6)

where p = U′(J) is the hydrostatic pressure and DEV[·] is the deviatoric projection operator in the material description,

DEV[·] ≡ [·] − (1/3)([·] : C̄) C̄⁻¹.  (7)
The Cauchy stress tensor is found by the weighted push-forward of Eq. 6,

σ = J⁻¹ F S F^T = p 1 + 2 J⁻¹ dev[F̄ (∂Ψ̄/∂C̄) F̄^T],  (8)

where

dev[·] ≡ [·] − (1/3)([·] : 1) 1.  (9)
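The two projection operators of Eqs. 7 and 9 can be transcribed directly; the sketch below is only a numerical illustration with our own helper names, and it checks that the spatial projection dev[·] returns a trace-free tensor.

```python
import numpy as np

def DEV(A, C_bar):
    """Material deviatoric projection of Eq. 7: DEV[A] = A - (1/3)(A : C_bar) C_bar^{-1}."""
    return A - ((A * C_bar).sum() / 3.0) * np.linalg.inv(C_bar)

def dev(a):
    """Spatial deviatoric projection of Eq. 9: dev[a] = a - (1/3) tr(a) 1."""
    return a - (np.trace(a) / 3.0) * np.eye(3)

rng = np.random.default_rng(0)
a = rng.random((3, 3))
print(np.isclose(np.trace(dev(a)), 0.0))   # True: the projection removes the trace
```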
Free Energy. In a first approximation we consider a Helmholtz free energy that accounts for the strain energy of the crosslinked F-actin network and of the intermediate-filament network made of vimentin, with the mechanical deformation described by the Cauchy-Green tensor C. A last energy term is associated with the interaction between the networks; this term depends on the ratio c between the vimentin and actin concentrations, and this potential allows us to define an interaction parameter Γ:

Ψ(C, Γ, c) = Ψ_IF(C, Γ) + Ψ_actin(C, Γ) + Ψ_inter(c, Γ) + U(J).  (10)

In the following we first describe the strain energy functions without the effects of the interaction defined by Γ; these effects are introduced later for clarity. The first term in the free energy refers to the intermediate filaments (IF). They have a long contour length and a low bending stiffness, and are therefore much more flexible than the F-actin network. To describe the soft mechanics of the intermediate filaments we consider an isotropic neo-Hookean strain energy function,

Ψ_IF(C) = (c_1/2)(Ī_1 − 3),  (11)
where c_1 > 0 is a stiffness parameter. The second term in the free energy represents the strain energy function for the crosslinked actin network. This is modelled by means of a strain energy function (SEF) based on the worm-like chain model for semiflexible filaments, following the Blundell-Terentjev formalism [22]. The two main physical parameters are the contour length of the filament, L_c, and the persistence length l_p, which is a measure of the bundle stiffness and compares the bending energy with the thermal energy, l_p = EI/(k_B T). The chain is considered semiflexible when L_c ∼ l_p. Combining the enthalpic contribution arising from bending with the entropy of conformational fluctuations, the closed form of the single-chain free energy can be expressed as a function of its end-to-end factor, x = ξ/L_c:

ψ_chain = k_B T π² (l_p/L_c)(1 − x²) + k_B T/(1 − x²).  (12)

Next, we build the continuum elastic free energy of the network. Several homogenisation schemes have been proposed in this context, such as the rubber- and biopolymer-based eight-chain models [28,29,24,7] or micro-sphere integration [10]. Here we apply the three-chain scheme, as proposed by Meng et al. [23], because it allows the correct calculation of the normal stress. The primitive cube for the homogenisation is constructed with lattice points representing the crosslink sites, and its edges are aligned along the principal directions of the deformation tensor C. Three chains are linked with their end-to-end vectors along the edges, with equilibrium mesh size ξ. On deformation, the lengths of the perpendicular edges at a lattice point become λ_1ξ, λ_2ξ and λ_3ξ, respectively. The free energy density of the semiflexible network can then be expressed as

Ψ_3c(λ_{i=1,2,3}) = (n/3) Σ_{i=1,2,3} ψ_chain(λ_i ξ),  (13)

Ψ_3c = (n k_B T/3) [ π² (l_p/L_c)(3 − x² I_1) + (3 − 2 I_1 x² + I_2 x⁴)/(1 − I_1 x² + I_2 x⁴ − I_3 x⁶) ],  (14)

with x = ξ/L_c.
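To make the homogenisation step concrete, the sketch below evaluates the single-chain free energy of Eq. 12 and assembles the three-chain network energy of Eq. 13 for a given set of principal stretches. The numerical values of l_p, L_c, ξ and n are placeholders chosen only to stay in the semiflexible regime (L_c ∼ l_p); they are not the fitted values of the paper.

```python
import numpy as np

kBT = 4.11e-21   # J, thermal energy at room temperature
lp  = 10e-6      # m, persistence length (placeholder)
Lc  = 12e-6      # m, contour length between crosslinks (placeholder, Lc ~ lp)
xi  = 10e-6      # m, equilibrium mesh size (placeholder)
n   = 1.0e18     # 1/m^3, chain number density (placeholder)

def psi_chain(r):
    """Single-chain free energy of Eq. 12 as a function of the end-to-end distance r."""
    x = r / Lc
    bending  = kBT * np.pi**2 * (lp / Lc) * (1.0 - x**2)
    entropic = kBT / (1.0 - x**2)
    return bending + entropic

def Psi_3chain(stretches):
    """Three-chain free energy density of Eq. 13: (n/3) * sum_i psi_chain(lambda_i * xi)."""
    return (n / 3.0) * sum(psi_chain(lam * xi) for lam in stretches)

# Example: an isochoric uniaxial stretch of 10% along the first principal direction.
lam = 1.1
print(Psi_3chain([lam, lam**-0.5, lam**-0.5]))   # J/m^3
```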
If the stress tensor is expressed as a function of the strain invariants for an incompressible material (I_3 = 1), it can be written as

σ = 2 [ (∂Ψ/∂Ī_1 + Ī_1 ∂Ψ/∂Ī_2) C̄ − (1/3)(Ī_1 ∂Ψ/∂Ī_1 + 2 Ī_2 ∂Ψ/∂Ī_2) 1 − ∂Ψ/∂Ī_2 C̄·C̄ ] − p 1.  (15)

So far we have described the constitutive model for a semiflexible network with rigid crosslinks (covalent bonds). In the following we address the modifications needed to capture network fluidisation due to transient crosslinks.

Network fluidisation: The network is built up by the interaction between the actin filaments and the crosslinks, and this defines the mechanical properties of the structure. If these interactions are stable (for the stress and time scales of the experiments), gelation is strong and the network shows a solid-like behaviour under deformation. Nevertheless, many biological crosslink molecules have at least two properties that can cause network fluidisation: (i) force-induced unbinding and (ii) unfolding of the multiple internal protein domains, which elongates the molecules. Previous computational studies of the protein structure have shown that crosslinkers have flexible terminal regions which can twist and extend under mechanical stress without unbinding, leading to a loss of degrees of freedom [30]. If the crosslinks are not completely stable but undergo a reaction that can proceed in both directions (folding/unfolding, flexible/rigid, binding/unbinding), we speak of weak gelation: the network shows a fluid-like behaviour, potentially without complete unbinding. As mentioned previously, the pre-strain in the structure produces an internal load on the bundle [31]; this also affects the crosslinks, since the level of pre-strain leaves them closer to the transition. To describe within the model the interaction between the crosslinks and the size of the mesh, i.e. the contour length L_c, we propose, in a similar manner to Lopez-Menendez et al. [7], the following expression:

L_c = L_c^min + δL_c^cl P_ub,  (16)
where P_ub is the unfolding probability, encompassing the unfolded or flexible crosslink states, L_c^min is the contour length when P_ub = 0 (folded crosslinks), and δL_c^cl is the average increment of the contour length when the unbinding probability is one.

In this sort of network the chemical crosslinks are not covalent bonds with high adhesion energy; their adhesion energy is of the order of tens of k_B T and they have a transient dynamics [32]. In general terms, such gels with chemical crosslinks (proteins such as α-actinin) behave as physical gels [33]. These interactions can be modelled as a reversible two-state equilibrium process [34,35,36]. Moreover, since the shear velocity is much slower than the internal crosslink dynamics, we can consider the interaction at steady state. The process can then be described as

P_ub/P_b = exp[−(E_b − w_ext)/(k_B T)],  (17)

where P_b is the binding probability, encompassing the folded or rigid crosslink states. Since only these two states are possible, P_ub + P_b = 1. The two-state model has the folded state as the preferred low free-energy equilibrium state at zero force and the unfolded state as the high free-energy equilibrium state at zero force; E_b is the difference in free energy between these states, and w_ext is the external mechanical work that induces the deformation of the crosslink.
As we are developing a mesoscale model, in the following we write an expression for the unbinding probability with the shear strain as the main driving force, using scaling arguments [37,33]. We re-write the external work as w_ext = f·a, where a is a length scale of the order of the monomer size. The force can be expressed as f ∼ Gγξ², in which γ is the shear strain, G is the shear modulus, which can be estimated as G ∼ l_p k_B T/(L_c ξ³), and ξ is the network mesh size. We also take into account that the unbinding transition due to the bundle strain happens in the semiflexible regime, when ξ ∼ L_c. Reorganising the terms, we arrive at an expression for P_ub as a function of the shear strain:

P_ub = 1/(1 + exp[κ(γ_0 − γ)]),  with  γ_0 ∼ E_b ξ²/(k_B T l_p a),  (18)

where the parameter κ ∼ l_p a/ξ² sets the sharpness of the transition between states and γ_0 is the characteristic strain, proportional to the adhesion energy; it defines the point at which the probability of unbinding is 0.5. If γ_0 ≪ γ, the network is easily remodelled, showing a fluid-like behaviour. On the contrary, if γ_0 ≫ γ, the crosslinks are stable, the probability of transition is low, and the network behaves as a solid-like structure. Moreover, the characteristic strain γ_0 scales with the adhesion energy E_b and with the mesh size ξ, and increases when the bundle stiffness l_p becomes smaller.
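The strain-controlled unbinding law of Eq. 18 is a logistic function of the shear strain, and it feeds back into the contour length through Eq. 16. The sketch below evaluates both; κ, γ_0, L_c^min and δL_c^cl are illustrative placeholders, not fitted quantities.

```python
import numpy as np

kappa   = 30.0     # sharpness of the unbinding transition (placeholder)
gamma0  = 0.5      # characteristic strain, proportional to the adhesion energy (placeholder)
Lc_min  = 10e-6    # m, contour length with all crosslinks folded/bound (placeholder)
dLc_cl  = 3e-6     # m, average length released when crosslinks unbind (placeholder)

def P_ub(gamma):
    """Unbinding probability of Eq. 18."""
    return 1.0 / (1.0 + np.exp(kappa * (gamma0 - gamma)))

def Lc_of_gamma(gamma):
    """Strain-dependent contour length of Eq. 16."""
    return Lc_min + dLc_cl * P_ub(gamma)

gammas = np.linspace(0.0, 1.0, 5)
print(P_ub(gammas))          # ~0 well below gamma0, 0.5 at gamma0, ~1 well above
print(Lc_of_gamma(gammas))   # the contour length grows as crosslinks unbind
```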
In order to illustrate the behaviour of the coupled set of equations under changes in the pre-strain and in the adhesion energy of the crosslinks, we evaluate them in the regime of semiflexible response, i.e. L_c ∝ l_p. Figure 2a shows the effect of an increment of the pre-strain (1 + ε) on the network response, with the remaining parameters kept constant. As the pre-strain increases, the network stiffness increases and the network can reach a higher level of stress (higher yield point); however, the yielding point (fluidisation of the network) occurs earlier, reducing the solid-like regime of the network. Figure 2b shows the network response for different values of γ_0. Contrary to the pre-strain, as γ_0 increases the initial stiffness of the network remains unaltered while the yielding stress and strain increase, extending the solid-like regime. This implies that as γ_0 increases the crosslinks become more stable.
Interaction between actin and vimentin.
To describe the interaction between the networks we expect that for very low concentrations the change in the mechanical response of the actin network is almost negligible, but once a certain value is exceeded the effects associated with the interaction become dominant, up to some asymptotic value. This behaviour can be interpreted using arguments from phase transitions; accordingly, we propose an interaction energy Ψ_int by means of a Landau functional that couples the two networks [26,25]. This energy is written in terms of an interaction parameter Γ = Γ(c), where c is the ratio between the vimentin and actin concentrations. As we are interested in when the effect of the vimentin (IF) becomes relevant for the mechanical response, we focus on the critical phenomena, when the concentration ratio c is near the critical point and the interaction parameter Γ is very small. This allows us to expand the free energy in even powers of Γ and retain only the lowest-order terms.
Then we re-write the Helmholtz free energy as follows:
Ψ(C̄, Γ) = α Γ² + β Γ⁴ + Ψ_actin(C̄, Γ) + Ψ_IF(C̄),  (19)

where the first two terms define the Landau energy associated with the interaction parameter, and the third term is the strain energy of the network written as a function of the isochoric Cauchy strain tensor and of the interaction parameter. Since the equilibrium position (minimum) of Ψ(Γ, c) changes at α = 0, we identify α = 0 with the critical point c = c_cr. This allows us to take α proportional to ĉ = (c − c_cr)/c_cr, the deviation of the concentration ratio from the critical point normalised by c_cr, which we call the reduced concentration ratio. The simplest choice is α = −m ĉ, with m a positive constant, so that α < 0 above the critical point and α > 0 below it. The dependence of β on ĉ does not qualitatively affect the behaviour of the free energy in the vicinity of the critical point, and we therefore take β constant. Minimising the free energy with respect to the interaction parameter Γ yields the equilibrium condition

∂Ψ/∂Γ ≈ 2αΓ + 4βΓ³ = 0.  (20)

Thus, the equilibrium value of the interaction parameter Γ is
Γ ≈ (−α/(2β))^{1/2} = [m(c − c_cr)/(2β c_cr)]^{1/2}  for c > c_cr,   Γ = 0  for c < c_cr.  (21)
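Equation 21 can be transcribed directly as a piecewise function of the concentration ratio; in the sketch below the Landau coefficients m, β and the critical ratio c_cr are arbitrary placeholders.

```python
import numpy as np

m, beta, c_cr = 1.0, 1.0, 0.1   # Landau coefficients and critical ratio (placeholders)

def interaction_parameter(c):
    """Equilibrium interaction parameter of Eq. 21, with c the vimentin/actin ratio."""
    c = np.asarray(c, dtype=float)
    c_hat = (c - c_cr) / c_cr                      # reduced concentration ratio
    arg = np.clip(m * c_hat / (2.0 * beta), 0.0, None)
    return np.where(c > c_cr, np.sqrt(arg), 0.0)

print(interaction_parameter([0.05, 0.1, 0.2, 0.5]))   # 0 below c_cr, then ~ (c - c_cr)^(1/2)
```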
The interaction parameter vanishes when the concentration ratio c ≤ c_cr and, above the critical value, scales as Γ ∼ (c − c_cr)^{1/2}. Once the interaction parameter has been defined, we describe next the internal variables that encode the interplay between the two networks and how they are driven by Γ(c). We consider the following hypotheses:
Interaction-induced strengthening: i. The increment in the concentration of IF promotes an increment of physical crosslinks along the F-actin bundles, reducing the contour length and the degree of fluctuation of the actin. As L_c is reduced, the ratio r/L_c tends to one and the composite network shows a rise in stress. Nevertheless, as the IF filaments are very flexible, the increment in the density of physical crosslinks due to the interaction with F-actin does not produce a relevant change in the stress sustained by the IF themselves. Therefore, to simplify the model we neglect the effect of the physical crosslinks on the IF and focus only on their role on the F-actin, as illustrated in figure 3a (top). The effective contour length L_c, accounting for the alterations due to vimentin, then becomes

L_c(Γ) = L_c⁰ − δL_c^Γ Γ + δL_c^cl P_ub,  (22)

where L_c⁰ is the contour length of the mesh without vimentin and the term δL_c^Γ Γ is the effective reduction in length associated with the formation of physical crosslinks promoted by the vimentin (figure 3a).
ii. The effective network represents a network built up by two kinds of transient crosslinks: the chemical interactions given by the neutravidin crosslinks, and the physical crosslinks due to the interaction between F-actin and vimentin. We therefore expect the effective γ_0 to be smaller, because it represents a lower effective adhesion energy, a weighting between the physical and chemical crosslinks: the adhesion energy promoted by the physical crosslinks, given by friction among filaments (without strong entanglements), is lower than that of the chemical crosslinks, E_b [33]. Moreover, another effect that reduces the yielding strain is that the rise of the internal stress associated with the physical crosslinks is propagated to the chemical crosslinks, lowering the characteristic strain γ_0, as shown in figure 3a (bottom), where the red and black dots illustrate the effect of the pre-stress on the P_ub of the chemical crosslinks [38,31].

Hence, from the perspective of the proposed model, the changes induced by the increment of the density of IF in the F-actin network are encoded as decreases in L_c and in γ_0. We write them as the parameters of a network without IF plus a perturbation associated with the interaction parameter Γ, as follows:

γ_0(Γ) = γ̄_0 − δγ_0 Γ,  (23)
Interaction-induced weakening: i. Surprisingly, at high actin concentrations the additional polymer results in an unexpectedly weaker composite network, with a lower elasticity and yield stress. The crossover between the strengthening and weakening regimes observed in the composite network occurs when the estimated F-actin network mesh size is comparable to the distance between F-actin crosslinking sites. When the actin concentration increases while the concentrations of crosslinks and vimentin are the same as in the strengthening experiments, the ratio χ = [crosslinks]/[actin] is lowered and the resultant mesh size increases, as does the level of thermal fluctuations; consequently, the probability of bond formation is lower. In addition, when the networks are co-polymerised at this crosslink/actin ratio and within this range of vimentin concentrations, the interaction disturbs the crosslinking process: the additional steric constraint imposed by the vimentin IF results in a loss of F-actin crosslinking. We therefore write the parameters as those of a network without IF plus a perturbation associated with the interaction parameter Γ, as follows:

L_c(Γ) = L_c⁰ + δL_c^Γ Γ + δL_c^cl P_ub,  (24)

where, as previously described, L_c⁰ is the contour length of the mesh without vimentin and the term δL_c^Γ Γ now represents the effective increase in length associated with the steric interaction promoted by the vimentin.
ii. The rise of the internal stress is propagated to the chemical crosslinks, affecting the characteristic strain γ_0, as shown in figure 3, where the red and black dots describe the effect of the pre-stress on the P_ub of the chemical crosslinks [38,31]. Hence, from the perspective of the proposed model, the changes induced by the increment of the density of IF in the F-actin network are encoded in this regime as

γ_0(Γ) = γ̄_0 + δγ_0 Γ.  (25)
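In the model the two regimes differ only in the sign with which Γ enters the effective contour length and the characteristic strain (Eqs. 22-25). A compact way to encode this is sketched below; the default numerical constants are illustrative placeholders.

```python
def effective_parameters(Gamma, P_ub, regime,
                         Lc0=12e-6, dLc_G=2e-6, dLc_cl=3e-6,
                         gamma0_bar=0.5, dgamma0=0.2):
    """Effective contour length and characteristic strain, Eqs. 22-25.

    regime = 'strengthening': physical crosslinks shorten the mesh (Eqs. 22-23).
    regime = 'weakening': steric hindrance lengthens it (Eqs. 24-25).
    """
    sign = -1.0 if regime == 'strengthening' else 1.0
    Lc = Lc0 + sign * dLc_G * Gamma + dLc_cl * P_ub
    gamma0 = gamma0_bar + sign * dgamma0 * Gamma
    return Lc, gamma0

print(effective_parameters(0.3, 0.1, 'strengthening'))
print(effective_parameters(0.3, 0.1, 'weakening'))
```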
Results
The proposed theory is now used to describe the experiments conducted by Jensen et al. [21] on co-polymerised F-actin/vimentin networks. We evaluate the model for the set of parameters identified by a nonlinear least-squares fit to the monotonic shear tests, in the large-deformation regime, reported in [21]. Solving the following coupled set of equations we obtain the stress-strain relation for the different networks analysed:

γ_0(c) = γ̄_0 ± δγ_0 [m(c − c_cr)/(2β c_cr)]^{1/2},  (26)

where the sign ±, as explained above, depends on the actin concentration. Next, the contour length is

L_c(c, γ) = L_c⁰ ± δL_c^Γ [m(c − c_cr)/(2β c_cr)]^{1/2} + δL_c^cl/(1 + exp[κ(γ_0 − γ)]),  (27)
where the updated mesh for the reference configuration becomes
x(L_c) = (1 + ε)[1 − 2 L_c(c, γ)/(l_p π^{3/2})]^{1/2}.  (28)
Finally, rewriting Eq. 8 and Eq. 15, noting that incompressibility is satisfied automatically and that the remaining invariants are I_1 = I_2 = 3 + γ², we obtain the expression for the shear stress:

σ_xz(γ) = c_1 γ + (2/3) n k_B T γ x² [ (1 − x²)(1 + x²)/(1 − (2 + γ²) x² + x⁴)² − π² l_p/L_c ].  (29)
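Putting Eqs. 26-29 together gives an explicit stress-strain curve that can be evaluated directly. The sketch below composes them in that order (interaction parameter, characteristic strain, unbinding probability, contour length, end-to-end factor, shear stress); all numerical constants are illustrative placeholders rather than the fitted values quoted below, and the reading of Eq. 28 follows the reconstruction given above.

```python
import numpy as np

# Illustrative placeholder constants (not the fitted values of the paper).
c1, nkBT = 5.0, 400.0          # Pa, vimentin stiffness and n*kB*T prefactor
lp, Lc0  = 10e-6, 14e-6        # m, persistence length and reference contour length
dLc_G, dLc_cl = 2e-6, 4e-6     # m, contour-length couplings
kappa, gamma0_bar, dgamma0 = 30.0, 0.5, 0.2
eps = 0.03                     # pre-strain
m, beta, c_cr = 1.0, 1.0, 0.1  # Landau coefficients and critical ratio

def shear_stress(gamma, c, regime='strengthening'):
    """Shear stress of Eq. 29 with the couplings of Eqs. 18, 21 and 26-28 (sketch)."""
    sign = -1.0 if regime == 'strengthening' else 1.0
    c_hat = max(c - c_cr, 0.0) / c_cr
    Gamma = np.sqrt(m * c_hat / (2.0 * beta))                 # Eq. 21
    gamma0 = gamma0_bar + sign * dgamma0 * Gamma              # Eq. 26
    P_ub = 1.0 / (1.0 + np.exp(kappa * (gamma0 - gamma)))     # Eq. 18
    Lc = Lc0 + sign * dLc_G * Gamma + dLc_cl * P_ub           # Eq. 27
    x = (1.0 + eps) * np.sqrt(1.0 - 2.0 * Lc / (lp * np.pi**1.5))   # Eq. 28
    Q = 1.0 - (2.0 + gamma**2) * x**2 + x**4
    actin = (2.0 / 3.0) * nkBT * gamma * x**2 * ((1.0 - x**4) / Q**2 - np.pi**2 * lp / Lc)
    return c1 * gamma + actin                                 # Eq. 29

for g in np.linspace(0.0, 0.8, 5):
    print(round(g, 2), round(shear_stress(g, c=0.25), 1))
```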
The parameters of the model are of two types: (i) the worm-like-chain parameters L_c⁰, l_p, δL_c^Γ and δL_c^cl, which are of the order of magnitude of the values used to describe in-vitro F-actin networks and keep the model in the regime of semiflexible entropic elasticity [29,23,7]; and (ii) the parameters associated with the remodelling dynamics of the crosslinks, κ and γ_0, together with the parameters that describe the interaction parameter Γ(c). The latter encode the transitions that induce the fluidisation of the network and represent an indirect measure of the adhesion force of the crosslinks. These values were identified by fitting the experimental data of [21]. The model captures the general trend of the experimental results: the strengthening, the increment of σ_max and the reduction of γ_c as the concentration of vimentin (intermediate filaments) increases. Moreover, to better illustrate the increment of the linear modulus due to the presence of vimentin, figure 4b plots the modulus K = dσ/dγ; it can clearly be observed that G_0 ≈ K|_{γ=0} rises with the concentration of vimentin.
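The parameter identification mentioned here can in principle be reproduced with a standard nonlinear least-squares routine. The sketch below only shows the pattern, using scipy.optimize.curve_fit on synthetic data and a deliberately simplified two-parameter stress model; the experimental curves of [21] and the full model of Eqs. 26-29 are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def toy_stress(gamma, c1, k):
    """Simplified stand-in for the full model: a linear term plus a stiffening term."""
    return c1 * gamma + k * gamma / (1.0 - 0.5 * gamma**2)

# Synthetic "measurements" standing in for a monotonic shear test.
rng = np.random.default_rng(1)
gamma_exp = np.linspace(0.05, 1.0, 20)
sigma_exp = toy_stress(gamma_exp, 2.0, 8.0) + rng.normal(0.0, 0.2, gamma_exp.size)

popt, _ = curve_fit(toy_stress, gamma_exp, sigma_exp, p0=[1.0, 1.0])
print(popt)   # recovered (c1, k), close to the generating values (2.0, 8.0)
```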
Furthermore, to characterise the alterations of the mesh size of the effective network, figure 4c shows the changes of the contour length L_c with the strain γ and the interaction parameter Γ(c). As described by Eq. 27, L_c is reduced by the term δL_c^Γ Γ, which expresses the increment of the density of physical crosslinks; this reduction can be of the order of 30% with respect to the contour length L_c⁰ without vimentin. In addition, the second term in Eq. 27 describes the increment of the contour length with γ, driven by the unbinding probability P_ub(γ). As can also be observed in the figure, if the concentration of vimentin increases, the effect of crosslink fluidisation becomes more relevant; this is due to the negative coupling between Γ and γ_0 in Eq. 26.
The scaled material parameters for the simulation are: γ̄_0/γ_max = 0.9; δγ_0/γ_max = 0.4; c_1/σ_0 = 0.2; κ = 30; m/(2β) = 0.5; ε = 0.03.
Discussion and Conclusions
In summary, we provide in this work a first constitutive model for composite networks of crosslinked F-actin and vimentin. It was motivated by the fact that previous rheological measurements on composite biopolymer networks, such as F-actin/microtubules, showed that the composite networks always induce strain strengthening in comparison with the single F-actin network. Nevertheless, the experiments of Jensen et al. demonstrated that composite semiflexible networks of F-actin/vimentin can drive either mechanical strengthening or weakening during the co-polymerisation of the two semiflexible species.

The model successfully reproduces the experimental observations. More importantly, it can readily be implemented in a field theory and used to calculate the behaviour of composite actin/vimentin networks under complex loading conditions. Our theory was developed within the framework of nonlinear continuum mechanics, in which we define a free energy functional considering the entropic elasticity of semiflexible networks with weak crosslinks, together with an energetic term describing the interaction parameter that couples the two networks. Surprisingly, our phenomenological approach provides a very simple and useful constitutive model, which captures the two described mechanisms, strengthening and softening, simply as a change in the sign of the interaction parameter Γ(c).
This effect leads us to think that the formation of the cytoskeletal scaffolding elements can give rise to a broad phase diagram for the cellular mechanical properties. Figure 6 condenses our interpretation of the process. The effects of strengthening and weakening can be regarded as the action of two concentration ratios (which could also be described as chemical potentials), one defined as c = [vimentin]/[actin] and the other as χ = [crosslinks]/[actin]. The first one, c, defines the intensity of the interaction, with a phase transition above a certain critical value beyond which the coupling between the networks becomes dominant. The second concentration ratio controls the sort of interaction. At χ = χ_c there is a crossover between the two regimes. On the one hand, below the crossover (χ < χ_c) we have strengthening, where the interaction creates physical crosslinks and the effective network has a reduced contour length; moreover, the effective adhesion energy (∝ γ_0) is a weighting between the chemical crosslinks and the new physical crosslinks, so it seems plausible to expect the yielding strain to decrease. In addition, the rise of the physical crosslinks increases the network pre-strain, and consequently the mechanical stress on the chemical crosslinks, which reduces γ_0. On the other hand, above the crossover (χ > χ_c) the formation of the transient chemical crosslinks becomes scarce, which increases the mesh size. In that condition, the only way to raise the chance of crosslink formation is to raise the level of fluctuations; nevertheless, the co-polymerisation with vimentin reduces the level of internal fluctuations and consequently the effective mesh size becomes smaller. Furthermore, as the mesh size becomes larger, the level of pre-strain on the crosslinks becomes smaller and the interactions do not reduce the adhesion energy, which explains why the yielding strain becomes higher.

Taking all the observations together, we propose a phase diagram in which the coupling between χ and Γ could have a functional form ∼ tanh(χ − χ_c) (see figure 6); this allows the sign of the interaction parameter to change depending on whether the value is above or below the crossover χ_c. Future experiments will provide better arguments to validate this speculative relation. Essentially, we propose the use of an effective crosslinked F-actin network, which incorporates all the associated actin/vimentin interactions driving the microstructural remodelling effects via alterations of the contour length L_c and the characteristic strain γ_0. From a broader perspective, several models have described composite materials in which one component is considered the most relevant and the other is treated as a surrounding matrix; generally, in all these models the coupling between the two components enhances strengthening, but never weakening [39]. This approach has also been used in studies of composite materials addressing the mechanical interactions between filaments or fibres and the surrounding matrix. In this sense, the formalism developed by Winkler accounts for coupling with an elastic foundation that resists a lateral displacement of a slender structure; it was used to describe the alterations of the mechanical response of microtubules due to the surrounding actin network [40,41], an interaction studied through the modification of the buckling modes. Lee and Terentjev describe this interplay by means of a partition function that considers a Hamiltonian with the bending energy of the microtubule plus the Winkler interaction energy due to the physical constraints introduced by the actin filaments [40]. Future works exploring the interaction between intermediate filaments and F-actin could gain novel insights from similar techniques.
Similarly, the definition of an effective actin network that accounts for the interplay with vimentin can be seen as a Winkler-like model. Our methodology can be thought of as a balance between microstructural and phenomenological formulations. The Landau phase-transition formalism provides a phenomenological description that allows us to introduce the remodelling effect exerted by vimentin without the details of the microstructural origin of the steric interaction, which would demand a much more detailed description. Our aim is to provide a useful model that improves the characterisation of this kind of experiment, helping to define better metrics based on a complete description of the nonlinear elasticity inherent to the mechanical response.

As future work we plan experimental and theoretical studies of these combined composite networks, with the aim of better characterising the role of the phase transitions controlled by the concentration ratios described above, and of predicting the susceptibility of the emergent network to alterations of the concentrations. Such studies will provide a very relevant ability to predict the mechanical properties of these sorts of synthetic networks, cells and tissues.
Date: November 15, 2018 1 Institut Jacques Monod (IJM), CNRS UMR 7592 et Université Paris Diderot, 75013 Paris, France. To whom correspondence should be addressed. E-mail: [email protected] 2 Institute of Science and Technology, Federal University of the Valleys of Jequitinhonha and Mucuri, 39100-000, Minas Gerais, Brazil.
Figure 1. (a) Three-chain homogenisation lattice. (b) Filament and crosslink energy landscape. (c) Unbinding probability P_ub.

Figure 2. (a) Network response for different levels of pre-strain, showing the increment of the slope at low values of network deformation. (b) Network response for increasing values of γ_0, showing an extension of the solid-like regime.
Figure 3. (a) Strengthening promoted by the formation of physical crosslinks. (b) Weakening promoted by the steric interaction that disturbs the formation of chemical crosslinks, which increases the contour length.

Figure 4. Strengthening effect. (a) Stress-strain plots under shear strain, showing the nonlinear inelastic effects; a good agreement between the model predictions and the experimental measurements can be observed. (b) The effect of the initial strengthening is illustrated by K = dσ/dγ; the blue arrow points in the direction of increasing strengthening. (c) L_c/L_c⁰ for different concentrations of vimentin and shear strain; the panel condenses the effects associated with the interaction parameter Γ and with crosslink fluidisation. (d) Interaction parameter Γ(c) as a function of the concentration ratio c = [vimentin]/[F-actin].

1.0.1. Strengthening phase: The strengthening effect is a consequence of the formation of physical crosslinks. In this case the concentration of F-actin is kept constant at 6 µM and the vimentin concentration spans the range 0 µM, 0.3 µM, 1.5 µM, 3 µM. In figure 4a we plot the model predictions and the experimental measurements from Jensen et al. for the stress-strain curve of the composite actin-vimentin network under the application of a simple shear.
Finally, figure 4d shows the functional form of the interaction parameter Γ(c) as a function of the concentration ratio c; the points mark the values of the ratio c = [vimentin]/[F-actin] and of Γ that produce the stress-strain curves described above.
1.0.2. Weakening phase: In the following we describe the results provided by the model for the experimentally reported emergent softening of the composite F-actin/vimentin networks, in which the co-polymerisation promotes a steric interaction that blocks the formation of crosslinks. The concentration of F-actin is kept constant at 18 µM and the vimentin concentration spans the range 0 µM, 0.3 µM, 1.5 µM, 3 µM. To show the results of the weakening, in figure 5a we plot the model predictions and the experimental measurements from Jensen et al. for the stress-strain curve of the composite actin-vimentin network. The model captures the general trend of the experimental results: the weakening, the reduction of σ_max and the increment of γ_c as the concentration of vimentin (intermediate filaments) increases.
Figure 5. Weakening effect in composite F-actin/vimentin networks, where the co-polymerisation promotes steric interactions that reduce the formation of crosslinks. In this case the concentration of actin is kept constant and the vimentin concentration rises over the range 0 µM, 0.3 µM, 1.5 µM, 3 µM. (a) Stress-strain curves under shear loading, showing the nonlinear elastic effects; a good agreement between the model predictions and the experimental measurements can be observed. (b) The effect of the initial weakening is illustrated by K = dσ/dγ; the blue arrow points in the direction of decreasing stiffness. (c) Alterations of the contour length due to the steric interaction that reduces the chance of crosslink formation, increasing the mesh size (blue arrow). (d) Interaction parameter Γ(c) as a function of the concentration ratio.

The scaled material parameters for the simulation are:
Figure 6. Proposed phase diagram to describe the effect of strengthening and weakening.
Acknowledgments. We thank Prof. Eugene Terentjev from the University of Cambridge for his valuable feedback.
[1] Frederick Gittes, Brian Mickey, Jilda Nettleton, and Jonathon Howard. Flexural rigidity of microtubules and actin filaments measured from thermal fluctuations in shape. The Journal of Cell Biology, 120(4):923-934, 1993.
[2] N. Mücke, L. Kreplak, R. Kirmse, T. Wedig, H. Herrmann, U. Aebi, and J. Langowski. Assessing the flexibility of intermediate filaments by atomic force microscopy. Journal of Molecular Biology, 335(5):1241-1250, 2004.
[3] Paul A. Janmey, Ursula Euteneuer, Peter Traub, and Manfred Schliwa. Viscoelastic properties of vimentin compared with other filamentous biopolymer networks. The Journal of Cell Biology, 113(1):155-160, 1991.
[4] M. L. Gardel, Jennifer Hyunjong Shin, F. C. MacKintosh, L. Mahadevan, P. Matsudaira, and D. A. Weitz. Elastic behavior of cross-linked and bundled actin networks. Science, 304(5675):1301-1305, 2004.
[5] M. L. Gardel, K. E. Kasza, C. P. Brangwynne, Jiayu Liu, and D. A. Weitz. Mechanical response of cytoskeletal networks. Methods in Cell Biology, 89:487-519, 2008.
[6] Chase P. Broedersz and Fred C. MacKintosh. Modeling semiflexible polymer networks. Reviews of Modern Physics, 86(3):995, 2014.
[7] Horacio López-Menéndez and José Félix Rodríguez. Microstructural model for cyclic hardening in F-actin networks crosslinked by α-actinin. Journal of the Mechanics and Physics of Solids, 91:28-39, 2016.
[8] Fanlong Meng and Eugene M. Terentjev. Theory of semiflexible filaments and networks. Polymers, 9(2):52, 2017.
[9] Franck J. Vernerey. Transient response of nonlinear polymer networks: A kinetic theory. Journal of the Mechanics and Physics of Solids, 115:230-247, 2018.
[10] J. P. S. Ferreira, M. P. L. Parente, and R. M. Natal Jorge. Continuum mechanical model for cross-linked actin networks with contractile bundles. Journal of the Mechanics and Physics of Solids, 110:100-117, 2018.
[11] T. Kim, W. Hwang, H. Lee, and R. Kamm. Computational analysis of viscoelastic properties of crosslinked actin networks. PLoS Computational Biology, 5(7), 2009.
[12] Carlos Borau, Taeyoon Kim, Tamara Bidone, José Manuel García-Aznar, and Roger D. Kamm. Dynamic mechanisms of cell rigidity sensing: insights from a computational model of actomyosin networks. PLoS One, 7(11):e49174, 2012.
[13] Tasuku Nakajima, Takayuki Kurokawa, Saika Ahmed, Wen-li Wu, and Jian Ping Gong. Characterization of internal fracture process of double network hydrogels under uniaxial elongation. Soft Matter, 9(6):1955-1966, 2013.
[14] Xuanhe Zhao. Multi-scale multi-mechanism design of tough hydrogels: building dissipation into stretchy networks. Soft Matter, 10(5):672-687, 2014.
[15] E. Ducrot, Y. Chen, M. Bulters, R. Sijbesma, and C. Creton. Toughening elastomers with sacrificial bonds and watching them break. Science, 344(6180):186-189, 2014.
[16] Zhigang Suo and Jian Zhu. Dielectric elastomers of interpenetrating networks. Applied Physics Letters, 95(23):232909, 2009.
[17] Xuanhe Zhao. A theory for large deformation and damage of interpenetrating polymer networks. Journal of the Mechanics and Physics of Solids, 60(2):319-332, 2012.
[18] Katherine Luby-Phelps. Cytoarchitecture and physical properties of cytoplasm: volume, viscosity, diffusion, intracellular surface area. In International Review of Cytology, volume 192, pages 189-221. Elsevier, 1999.
[19] Vincent Pelletier, Naama Gal, Paul Fournier, and Maria L. Kilfoil. Microrheology of microtubule solutions and actin-microtubule composite networks. Physical Review Letters, 102(18):188303, 2009.
[20] Jona Kayser, Heinrich Grabmayr, Markus Harasim, Harald Herrmann, and Andreas R. Bausch. Assembly kinetics determine the structure of keratin networks. Soft Matter, 8(34):8873-8879, 2012.
[21] Mikkel H. Jensen, Eliza J. Morris, Robert D. Goldman, and David A. Weitz. Emergent properties of composite semiflexible biopolymer networks. BioArchitecture, 4(4-5):138-143, 2014.
[22] J. R. Blundell and E. M. Terentjev. Stretching semiflexible filaments and their networks. Macromolecules, 42(14):5388-5394, 2009.
[23] Fanlong Meng and Eugene M. Terentjev. Nonlinear elasticity of semiflexible filament networks. Soft Matter, 12(32):6749-6756, 2016.
[24] Horacio López-Menéndez and José Félix Rodríguez. Towards the understanding of cytoskeleton fluidisation-solidification regulation. Biomechanics and Modeling in Mechanobiology, 16(4):1159-1169, 2017.
[25] Horacio Lopez-Menendez and Joseph D'Alessandro. Unjamming and nematic flocks in endothelial monolayers during angiogenesis: theoretical and experimental analysis. arXiv preprint arXiv:1809.03824, 2018.
[26] Hidetoshi Nishimori and Gerardo Ortiz. Elements of Phase Transitions and Critical Phenomena. OUP Oxford, 2010.
[27] A. J. M. Spencer. Continuum Mechanics. Longman Scientific & Technical, Essex, 1980.
[28] E. Arruda and M. Boyce. A three-dimensional constitutive model for the large stretch behaviour of rubber elastic materials. Journal of the Mechanics and Physics of Solids, 41:389-412, 1993.
[29] J. Palmer and M. Boyce. Constitutive modeling of the stress-strain behavior of F-actin filament networks. Acta Biomaterialia, 4(3):597-612, 2008.
[30] J. Golji, R. Collins, and M. Mofrad. Molecular mechanics of the α-actinin rod domain: bending, torsional, and extensional behavior. PLoS Computational Biology, 5, 2009.
[31] O. Lieleg, J. Kayser, G. Brambilla, L. Cipelletti, and A. Bausch. Slow dynamics and internal stress relaxation in bundled cytoskeletal networks. Nature Materials, 10(3):236-242, 2011.
[32] Jorge M. Ferrer, Hyungsuk Lee, Jiong Chen, Benjamin Pelz, Fumihiko Nakamura, Roger D. Kamm, and Matthew J. Lang. Measuring molecular rupture forces between single actin filaments and actin-binding proteins. Proceedings of the National Academy of Sciences, 105(27):9221-9226, 2008.
Scaling concepts in polymer physics. Pierre-Gilles De Gennes, Cornell university pressPierre-Gilles De Gennes. Scaling concepts in polymer physics. Cornell university press, 1979.
Multiscale mechanics of fibrin polymer: gel stretching with protein unfolding and loss of water. A Brown, R Litvinov, D Discher, P Purohit, J Weisel, Science. 3255941A. Brown, R. Litvinov, D. Discher, P. Purohit, and J. Weisel. Multiscale mechanics of fibrin polymer: gel stretching with protein unfolding and loss of water. Science, 325(5941):741-4, 2009.
Structure and dynamics of cross-linked actin networks. Oliver Lieleg, Mae Mireille, Andreas R Claessens, Bausch, Soft Matter. 62Oliver Lieleg, Mireille MAE Claessens, and Andreas R Bausch. Structure and dynamics of cross-linked actin networks. Soft Matter, 6(2):218-225, 2010.
Protein unfolding accounts for the unusual mechanical behavior of fibrin networks. P Purohit, Litvinov, Brown, Discher, Acta biomaterialia. 76P Purohit, R Litvinov, A Brown, D Discher, and J Weisel. Protein unfolding accounts for the unusual mechanical behavior of fibrin networks. Acta biomaterialia, 7(6):2374-2383, 2011.
Models for the specific adhesion of cells to cells. G Bell, Science. 2004342G. Bell. Models for the specific adhesion of cells to cells. Science, 200(4342):618-627, 1978.
Cytoskeletal polymer networks: viscoelastic properties are determined by the microscopic interaction potential of cross-links. O Lieleg, K Schmoller, M Claessens, A Bausch, Biophysical journal. 9611O. Lieleg, K. Schmoller, M. Claessens, and A. Bausch. Cytoskeletal polymer networks: viscoelastic properties are determined by the microscopic interaction potential of cross-links. Biophysical journal, 96(11):4725-32, 2009.
A stochastic-structurally based three-dimensional finitestrain damage model for fibrous soft tissue. J F Rodriguez, F Cacho, J A Bea, M Doblare, Journal of the Mechanics and Physics of Solids. 54J. F. Rodriguez, F. Cacho, J. A. Bea, and M. Doblare. A stochastic-structurally based three-dimensional finite- strain damage model for fibrous soft tissue. Journal of the Mechanics and Physics of Solids, 54:564-886, 2006.
Microtubule buckling in an elastic matrix with quenched disorder. Tai Cheng, Eugene M Lee, Terentjev, The Journal of chemical physics. 14914145101Cheng-Tai Lee and Eugene M Terentjev. Microtubule buckling in an elastic matrix with quenched disorder. The Journal of chemical physics, 149(14):145101, 2018.
Microtubules can bear enhanced compressive loads in living cells because of lateral reinforcement. Clifford P Brangwynne, C Frederick, Sanjay Mackintosh, Kumar, A Nicholas, Jennifer Geisse, Talbot, Kevin K Mahadevan, Donald E Parker, David A Ingber, Weitz, J Cell Biol. 1735Clifford P Brangwynne, Frederick C MacKintosh, Sanjay Kumar, Nicholas A Geisse, Jennifer Talbot, L Ma- hadevan, Kevin K Parker, Donald E Ingber, and David A Weitz. Microtubules can bear enhanced compressive loads in living cells because of lateral reinforcement. J Cell Biol, 173(5):733-741, 2006.
Asymptotic bias reduction of maximum likelihood estimates via penalized likelihoods with differential geometry

Masayo Y. Hirose and Shuhei Mano

April 2023

Keywords: bias reduction; information geometry; Jeffreys prior; partial differential equation; plug-in estimator; shrinkage. 2020 Mathematics Subject Classification: Primary 62F12; Secondary 62B11, 62H12.

Abstract. A procedure for asymptotic bias reduction of maximum likelihood estimates of generic estimands was developed. The estimator is realized as a plug-in estimator, where the parameter maximizes the penalized likelihood with a penalty function that satisfies a quasi-linear partial differential equation of the first order. The integration of the partial differential equation with the aid of differential geometry is discussed. Applications to generalized linear models, linear mixed-effects models, and a location-scale family are presented.
Introduction
For a sample space X , consider a parametric model M := {p(·; ξ) : ξ ∈ Ξ}, or a family of probability measures p(·; ξ) with parameter ξ ∈ Ξ, where the parameter space Ξ is an open subset of R d , d ≥ 1. We assume that p(x; ξ) is a C ∞ -function, that is, an infinitely-differentiable function of ξ for each x ∈ X .
For a given function f of ξ, where f : Ξ → R, the parameterization, that is, the result φ of the mapping φ = f (ξ), is called an estimand. We call f the estimand function. In this paper, we discuss the estimation of the result φ by the mapping φ = f (ξ).
An estimator δ(x), x ∈ X of an estimand f (ξ), ξ ∈ Ξ is unbiased if E ξ δ(X) = f (ξ), ∀ξ ∈ Ξ, where the expectation E ξ is with respect to p(·; ξ). If an unbiased estimator of f (ξ) exists, the estimand f (ξ) is said to be Uestimable. The unbiased estimator δ of f (ξ) is the uniform minimum variance unbiased estimator (UMVUE) of f (ξ) if var ξ δ(x) ≤ var ξ δ ′ (x), ∀ξ ∈ Ξ, where δ ′ is any other unbiased estimator of f (ξ).
In this paper, we present a procedure for asymptotic bias reduction of maximum likelihood estimates. The resulting estimator asymptotically coincides with the UMVUE if a complete sufficient statistic exists. Our procedure may be regarded as a generalization of Firth's proposal [10] on bias reduction of maximum likelihood estimates of parameters. Before establishing a scene with differential geometry, we recall Firth's idea with minimal notation.
In a regular model with parameter ξ, the asymptotic bias of the maximum likelihood estimateξ MLE can be expressed as
$$b(\xi) := E_\xi\,\hat{\xi}_{\mathrm{MLE}} - \xi = \frac{b_1(\xi)}{n} + \frac{b_2(\xi)}{n^2} + \cdots, \tag{1}$$
where n is usually interpreted as the number of observations, or some other measure of the rate at which information accrues. Using d-dimensional parameter ξ = {ξ 1 , . . . , ξ d }, the maximum likelihood estimate is derived as the solution to the system of score equations:
u i (ξ; x) := ∂ i l(ξ; x) = 0, ∂ i := ∂ ∂ξ i , i ∈ {1, . . . , d},
where l(ξ; x) := log p(x; ξ) and u(ξ; x) are the log-likelihood and score functions, respectively. The indices follow those of tensors otherwise stated; the upper index should not be confused with power [22]. Firth [10] proposed a method to remove the O(n −1 ) term from (1). His bias-reduced estimator (p. 29 of [10]) is the solution to the modified score equations:
$$u^*_i(\xi; x) := u_i(\xi; x) - \sum_{j=1}^{d} \kappa_{i,j}(\xi)\, b_1^j(\xi) = 0, \qquad i \in \{1, \ldots, d\}, \tag{2}$$
where κ i,j (ξ) = n −1 E ξ [u i u j ]. Firth's [10] proposal was inspired by a geometrical interpretation of modified score equations (2) in one-dimensional exponential families with canonical parameterization. For the log-likelihood l(ξ; t) = tξ − ψ(ξ) with the canonical parameter ξ and sufficient statistic t, the score function is given by u(ξ; t) = t − ψ ′ (ξ). The bias b(ξ) ofξ MLE arises from the combination of the unbiasedness of the score function E ξ u(ξ; T ) = 0 and the curvature of the score function, u ′′ (ξ; t) = 0. Because 0 = u(ξ MLE ; t) = u(ξ; t) + u ′ (ξ; t)(ξ MLE − ξ) + 1 2 u ′′ (ξ; t)(ξ MLE − ξ) 2 + · · · , if u(ξ; t) is linear in ξ, then there is no bias. Otherwise,ξ MLE subjects to bias. Firth's idea is to shift the score function downward at each point ξ by an amount i(ξ)b(ξ), where −i(ξ) = u ′ (ξ; t) = −ψ ′′ (ξ) is the gradient of the score function at ξ (see Figure 1 of [10]); this defines a modified score function u * (ξ; t) := u(ξ; t) − i(ξ)b(ξ). Hence, a modified estimateξ is given as a solution to u * (ξ; t) = 0. Here, −i(ξ)b(ξ) corresponds to the second term on the right-hand side of (2) in one-dimensional exponential families. In Section 3.1 of [10], Firth argued that in multidimensional exponential families with canonical parameterization, his method may be regarded as a penalized maximum likelihood estimation with penalized likelihood l * (ξ; t) := l(ξ; t) +l(ξ), where the non-random penalty functionl(ξ) satisfies the following system of partial differential equations
$$\partial_i \tilde{l}(\xi) = -\sum_{j=1}^{d} \kappa_{i,j}(\xi)\, b_1^j(\xi) \quad \text{for all } i \in \{1, \ldots, d\}. \tag{3}$$
He obtainedl(ξ) = log deti(ξ) as a solution of (3), where i(ξ) is the information matrix. Firth pointed out that the penalty function coincides with the logarithm of the Jeffreys prior. Firth's geometric interpretation poses two questions: 1) What about the bias reduction for generic estimands? Not only the curvature of the score function but also the curvature of the estimand function is involved. 2) In multidimensional models, the system of partial differential equations (3) appears. However, a system of partial differential equations is not always integrable. Under what situations is the system integrable?
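Before turning to these questions, the following minimal sketch (not from the paper) illustrates the one-dimensional construction above for a Binomial(n, π) model with canonical parameter ξ = logit π. In this case Firth's modified score coincides with the score of the Jeffreys-penalized likelihood, and its root has the known closed form logit{(t + 1/2)/(n + 1)}; the sample size and data below are illustrative assumptions.

```python
# Minimal sketch: Firth-type bias reduction in a one-dimensional exponential family.
# Model: t ~ Binomial(n, pi) with canonical parameter xi = logit(pi).
# Score u(xi) = t - n*pi(xi); information i(xi) = n*pi*(1-pi);
# Jeffreys-penalized ("modified") score u*(xi) = u(xi) + (1/2)*(1 - 2*pi(xi)).
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def modified_score(xi, t, n):
    p = logistic(xi)
    return (t - n * p) + 0.5 * (1.0 - 2.0 * p)   # u(xi) + d/dxi of (1/2) log i(xi)

def solve_scoring(t, n, xi=0.0, tol=1e-10):
    for _ in range(100):
        p = logistic(xi)
        step = modified_score(xi, t, n) / (n * p * (1.0 - p) + 1e-12)  # Fisher scoring step
        xi += step
        if abs(step) < tol:
            break
    return xi

if __name__ == "__main__":
    t, n = 9, 10
    xi_mle = math.log(t / (n - t))                     # MLE: logit(t/n)
    xi_firth = solve_scoring(t, n)                     # root of the modified score
    xi_closed = math.log((t + 0.5) / (n - t + 0.5))    # logit((t + 1/2)/(n + 1))
    print(xi_mle, xi_firth, xi_closed)                 # xi_firth agrees with xi_closed and is shrunken toward 0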
In this paper, we address these two questions. To the best of the authors' knowledge, the system of partial differential equations (3) appeared in the literature many years ago, such as in [11]; however, its integrability has never been investigated. In this paper, we show that the system (3) is an unnecessarily stringent condition for bias reduction. A milder condition, represented by a single partial differential equation, is identified.
We consider a generic estimand function f represented as a scalar function of a multidimensional parameter ξ ∈ Ξ. We employ the plug-in estimator f (ξ(x)) = f •ξ(x), whereξ(x), x ∈ X maximizes the suitable penalized likelihood. In addition to its straightforward construction, the plug-in estimator has a practical advantage: the range of the plug-in estimator f (ξ) is in that of f as long as the range ofξ is in the parameter space Ξ. By contrast, a popular bias-corrected estimator, the additive bias-corrected estimator of an estimator δ of f (ξ), that is, δ(x) − b 1 (ξ)/n, may suffer from giving unrealistic estimate because it could be out of the range of f . The remainder of this paper is organized as follows. In Section 2, our procedure for asymptotic bias reduction of maximum likelihood estimates is presented, and the condition for second-order asymptotic unbiasedness (unbiased up to O(n −1 )) is given. The condition requires that the penalty function satisfies a quasi-linear partial differential equation. It is shown that the partial differential equation can be integrated for generic models and estimands. In Section 3, we discuss the special cases in which the model manifolds are flat. Several applications of our bias-reduction procedure are presented in Section 4, and Section 5 provides a discussion.
Geometry of asymptotic bias reduction
This section presents the main results of the study. After preparing geometric concepts described in Section 2.1, our procedure for asymptotic bias reduction is presented in Section 2.2. This is reduced to solving a quasi-linear partial differential equation for a suitable penalty function. In Section 2.3, we present a result in terms of the geodesic distances.
Differential geometric preliminaries
Multivariate statistical calculations can be simplified if tensors in differential geometry are used [22]. In the following, we adopt the summation convention for tensors, such that whenever an index appears in an expression as upper and lower, we sum over the index (see Chapter 1 of [22] for the index notation). In addition to tensors, standard concepts in differential geometry, such as connection, curvature, foliation, geodesic, and gradient, are useful for displaying the following results in concise forms. In this subsection, we review these concepts in order to state our results. For general background information on differential geometry, see [16]. A detailed account of integral manifolds and foliations is provided in Chapter 19 of [21]. A concise summary of the terminology in affine differential geometry is provided in Chapter I of [25]. Differential geometry of statistical model manifolds, known as information geometry, is discussed in a book [1] and a concise survey [19]. For a parametric model M = {p(·; ξ) : ξ ∈ Ξ}, mapping φ : M → R d with φ(p) → {ξ 1 , . . . , ξ d } for each point (i.e., a probability measure) p ∈ M can be regarded as a local coordinate of M. If we consider parameterizations that are C ∞ -diffeomorphic to each other as equivalent, then M may be considered as a C ∞ -differentiable manifold. In this sense, a parameterization of M is a local coordinate system of M.
For each point ξ ∈ M, we define a Riemannian metric called the Fisher metric, which is a tensor with components
$$g_{ij}(\xi) := E_\xi[u_i u_j], \qquad u_i(\xi; x) := \partial_i l(\xi; x), \quad \partial_i := \frac{\partial}{\partial \xi^i}, \qquad i, j \in \{1, \ldots, d\}, \tag{4}$$
in a local coordinate system {ξ 1 , . . . , ξ d }, where l(ξ; x) := log p(x; ξ) and expectation E ξ is with respect to the probability measure p(·; ξ). We assume the Fisher metric tensor is an invertible matrix everywhere. The inverse matrix of g ij is denoted by g ij , where g ij converts an upper index into a lower index, while g ij converts a lower index into an upper index. The determinant of the matrix g ij is denoted by g. The Fisher metric tensor is the null cumulant of derivatives of the log-likelihood denoted by κ i,j in Chapter 7 of [22].
A tangent vector (or simply, a vector) is represented as X = x i ∂ i , where x i are components with respect to the local coordinate system {ξ 1 , . . . , ξ d }.
The set of tangent vectors at p ∈ M, denoted by T p M, is called the tangent space of M at p. The metric tensor defines the inner product of vector fields
$X = x^i \partial_i$ and $Y = y^i \partial_i$, as in $\langle X, Y \rangle = \langle \partial_i, \partial_j \rangle x^i y^j := g_{ij} x^i y^j$.
An affine connection on a manifold M is a rule of covariant differentiation on M, denoted by ∇ (see Chapter III of [16] or Section I.3 of [25] for affine connections). We write
∇ i ∂ j = Γ k ij ∂ k ,(5)
where the system of functions Γ k ij are the Christoffel symbols for the affine connection relative to the local coordinate system (see Proposition III.7.4 of [16]), and ∇ also refers to an affine connection.
We consider a one-parameter family of affine connections called α-connections [1]. The covariant differentiation with respect to the α-connection, which is specifically denoted by ∇ (α) , is defined by (5) with the Christoffel symbols
$$\Gamma^{(\alpha)}_{ij,k} := E_\xi[(\partial_i \partial_j l)\, u_k] + \frac{1-\alpha}{2} S_{ijk}, \qquad \alpha \in \mathbb{R}, \tag{6}$$
for i, j, k ∈ {1, . . . , d}. Here, the symmetric tensor of order three, S ijk := E ξ [u i u j u k ] is called the skewness tensor. The skewness tensor is a null cumulant denoted by κ i,j,k in [22]. The vector obtained by the contraction S i = g jk S ijk appears. In particular, the 0-connection is called the Levi-Civita connection (also called the Riemannian connection). The covariant derivative of the metric tensor is
∇ (α) i g jk = ∂ i g jk − Γ (α) ik,j − Γ (α) ij,k = αS ijk .(7)
The α-connection and the (−α)-connection are called conjugate (or dual ) to each other with respect to the metric tensor (4); namely, we have identities
∂ k g ij = Γ (α) kj,i + Γ (−α) ki,j(8)
(see Sections I.4 and I.5 of [25] or Section 3.1 of [1]). Because of the expression (7), the covariant derivative of the metric with the 0-connection is 0; thus, the Christoffel symbols for the 0-connection can be represented by the derivatives of the metric:
$$\Gamma^{(0)}_{ij,k} = \frac{1}{2}\left(\partial_i g_{jk} + \partial_j g_{ik} - \partial_k g_{ij}\right). \tag{9}$$
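As a numerical aside (not part of the original derivation), the quantities in (4) and (6) can be approximated by Monte Carlo averaging whenever the log-likelihood derivatives are available. The following sketch does this for an illustrative two-parameter model X ~ N(μ, e^{2τ}); the parameter values, sample size, and choice of α are arbitrary assumptions.

```python
# Monte Carlo estimates of the Fisher metric g_ij = E[u_i u_j], the skewness tensor
# S_ijk = E[u_i u_j u_k], and the alpha-connection symbols
# Gamma^(alpha)_{ij,k} = E[(d_i d_j l) u_k] + (1-alpha)/2 * S_ijk
# for the toy model X ~ N(mu, exp(2*tau)).
import numpy as np

rng = np.random.default_rng(0)
mu, tau, alpha = 0.3, -0.2, -1.0
x = rng.normal(mu, np.exp(tau), size=200_000)

r = (x - mu) * np.exp(-2 * tau)            # residual scaled by the inverse variance
u = np.stack([r, -1.0 + (x - mu) * r])      # scores (u_mu, u_tau), shape (2, N)

h = np.empty((2, 2, x.size))                # second derivatives of the per-observation log-likelihood
h[0, 0] = -np.exp(-2 * tau)                 # d_mu d_mu l
h[0, 1] = h[1, 0] = -2 * r                  # d_mu d_tau l
h[1, 1] = -2 * (x - mu) * r                 # d_tau d_tau l

g = np.einsum('in,jn->ij', u, u) / x.size                # approx diag(exp(-2*tau), 2)
S = np.einsum('in,jn,kn->ijk', u, u, u) / x.size         # skewness tensor
Gamma = np.einsum('ijn,kn->ijk', h, u) / x.size + 0.5 * (1 - alpha) * S
print(np.round(g, 3))
print(np.round(Gamma, 3))
```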
In this paper, we discuss Riemannian manifolds identified with a parametric models equipped with the Fisher metric tensors (4) and α-connections (6). We call such manifolds model manifolds.
For each function f ∈ C 1 , the total derivative of f at p ∈ M is denoted as Xf = x i ∂ i f for X ∈ T p M. There exists a unique vector field denoted by gradf , called the gradient of f , satisfying gradf, X = Xf . In a local coordinate system, we have gradf = g ij ∂ j f ∂ i . The gradient of f is a normal vector field to each (d − 1)-dimensional hypersurface in the d-dimensional manifold M on which f is a constant. Such a hypersurface is called a level surface of f (see Section 4.1 of [24]).
An r-dimensional distribution D on a manifold M is an assignment to each point p ∈ M a subspace D p of the tangent space T p M. The distribution here is a collection of subspaces, and should not be confused with a probability distribution. A connected submanifold N of M is called an integral manifold of D if T p N = D p for all p ∈ N. If an integral manifold exists through each point of M, D is said to be completely integrable. The maximal integral manifolds do not intersect each other and completely cover the whole of M. We say that the maximal integral manifolds of D form the leaves of a foliation of M (see Chapter 2 of [24] or Chapter 19 of [21]).
Finding integral manifolds is to solve a system of partial differential equations. Suppose we seek a solution f for a system of linear first-order system of partial differential equations
∂ i f (ξ) = h i (ξ), i ∈ {1, . . . , d},(10)
where h i are smooth functions defined on an open subset of R d . If d > 1, the system is overdetermined, indicating that there are more equations than unknown function f . An overdetermined system has a solution only if the partial differential equations satisfy certain compatibility conditions. In fact,
because $\partial_i \partial_j f = \partial_j \partial_i f$ for all $i \neq j$, it is obvious that
$$\partial_j h_i(\xi) = \partial_i h_j(\xi) \quad \text{for all } i \neq j \tag{11}$$
is a necessary condition for (10) to have a solution in a neighborhood of any point with an arbitrary initial value. By virtue of the Frobenius theorem, we can show that (11) is sufficient (Proposition 19.17 of [21]). The condition of (11) is called the integrability condition for (10). Now, we introduce the concepts of α-flatness and α-harmonicity, which are special properties of a manifold and function, respectively, with respect to an α-connection.
The Riemann curvature tensor field R is defined through
$$R(X, Y)Z := \nabla_X \nabla_Y Z - \nabla_Y \nabla_X Z - \nabla_{[X,Y]} Z,$$
where $\nabla_X = x^i \nabla_i$ and $[X, Y] := (x^i \partial_i y^j - y^i \partial_i x^j)\partial_j$ for vector fields $X = x^i \partial_i$ and $Y = y^i \partial_i$. The components are introduced by $R(\partial_j, \partial_k)\partial_i =: R^l_{\;ijk}\partial_l$, where
$$R^l_{\;ijk} = \partial_j \Gamma^l_{ki} - \partial_k \Gamma^l_{ji} + \Gamma^m_{ki}\Gamma^l_{jm} - \Gamma^m_{ji}\Gamma^l_{km} \tag{12}$$
and $\Gamma^l_{ki} = g^{lj}\Gamma_{ki,j}$ (see Proposition III.7.6 of [16]). Note that this definition of the components is the same as that in [16, 25] and different from that in [1]. The Riemann curvature tensor with respect to the $\alpha$-connection, that is, the expression (12) with Christoffel symbols defined by (6), is specifically denoted by $R^{(\alpha)\,l}_{\;\;ijk}$. An affine connection on a manifold $M$ is said to be flat if and only if the curvature tensor field vanishes identically on $M$ (see Theorem II.9.1 of [16]). If $M$ is one-dimensional, any affine connection on $M$ is flat because the component (12) is anti-symmetric in the indices $j$ and $k$; namely, $R^1_{\;111} = -R^1_{\;111} = 0$. It is known that an affine connection on $M$ is flat if and only if a local coordinate system exists around each point such that $\Gamma^i_{jk} = 0$ for all $i$, $j$, and $k$ [25].
A manifold with a flat affine connection is said to be locally affine. A Riemannian manifold M is said to be flat (or locally Euclidean) if the 0connection on M is a flat affine connection. In addition, following [1], we introduce the following analogous concept for α-connections.
Definition 2.1. A manifold with a flat α-connection is locally affine. Specifically, a manifold with a flat α-connection is said to be α-flat. A local coordinate system such that Γ (α) i jk = 0 for all i, j, and k is called an α-affine coordinate system. Remark 2.2. We may work with arbitrary α-connection and local coordinate system, but the expressions depend on the choice of α and the coordinate system. If a manifold is α-flat for some α, it is convenient to work with the α-connection and the α-affine coordinate system, because the Christoffel symbols vanish. Example 1. An exponential family with canonical parameterization is a 1flat manifold. The log-likelihood is l(ξ; t) = t i ξ i − ψ(ξ) and the Christoffel symbols vanish: Γ (1) ij,k = −∂ i ∂ j ψE ξ [T k −∂ k ψ] = 0 for all i, j, and k. Therefore, ξ comprises an α-affine coordinate system.
Consider an affine connection ∇ on M. If there exists a parallel volume element, that is, a non-zero d-dimensional differential form ω satisfying ∇ω = 0 around each point of the manifold M, then the affine connection ∇ is said to be locally equiaffine (see Section I.3 of [25]). In other words, if an affine connection ∇ on M is locally equiaffine, the system of partial differential equations ∇ω = 0 for ω can be integrated everywhere.
For the relationship between the α-flatness of a manifold M and the locally equiaffineness of the affine connection on M, we have the following technical lemma. The proof is provided in Appendix A. (ii) If M is α 0 -flat for a non-zero α 0 ∈ R, then the α-connection on M is locally equiaffine for all α ∈ R.
(iii) The converse of (ii) is not true.
In this paper, following [27], we call the solution of the condition ∇ (α) ω = 0 obtained in the proof of Lemma 2.3, that is,
$$h(\alpha; \alpha_0) := g^{\frac{1}{2} - \frac{\alpha}{2\alpha_0}}, \qquad \alpha \in \mathbb{R} \text{ and } \alpha_0 \in \mathbb{R}\setminus\{0\}, \tag{13}$$
the density of the α-parallel volume element with respect to the α 0 -flat manifold. Note that h(0; α 0 ) = √ g is the Jeffreys prior irrespective of α 0 . Special forms of α-parallel volume elements have appeared in the statistical literature, as shown in the historical remark in Appendix A.
In this paper, we call the operator ∆ (α) acting on a scalar field given by the function f (ξ), ξ ∈ Ξ:
$$\Delta^{(\alpha)} f := \nabla^{(\alpha)i}\nabla^{(\alpha)}_i f = g^{ij}\partial_i\partial_j f - g^{ij}g^{kl}\Gamma^{(\alpha)}_{kl,i}\partial_j f \tag{14}$$
the α-Laplacian. If α = 0, the 0-Laplacian reduces to the Laplace-Beltrami operator that satisfies
∆ (0) f = 1 √ g ∂ i ( √ gg ij ∂ j f ).(15)
If a function f satisfies ∆ (α) f = 0 for some α, we say that the function f is α-harmonic. The authors did not find the notion of α-harmonicity in the literature, however, the α-Laplacian is an example of an extension of the Laplace-Beltrami operator proposed in [9]. From the definition of αconnections (6), we have a useful relationship among α-Laplacians:
$$\left(\Delta^{(\alpha_1)} - \Delta^{(\alpha_2)}\right) f = \frac{\alpha_1 - \alpha_2}{2} S^i \partial_i f \quad \text{for all } \alpha_1, \alpha_2 \in \mathbb{R}. \tag{16}$$
Example 2. For an α-flat manifold, a component of the α-affine coordinate system ξ, say
ξ i 0 , is α-harmonic, because ∆ (α) ξ i 0 = ∂ i ∂ i ξ i 0 = 0.
In a local coordinate system {ξ 1 , · · · , ξ d }, consider a curve γ = ξ(t),
$a < t < b$, where $-\infty \le a < b \le \infty$, of class $C^1$ in a manifold $M$. The length of $\gamma$ is defined as $\int_a^b \sqrt{g_{ij}\dot{\xi}^i\dot{\xi}^j}\,dt$.
A curve $\gamma$ is called a geodesic if the vector field $X = \dot{\xi}(t)$ defined along $\gamma$ is parallel along $\gamma$, that is, if $\nabla_X X$ exists and equals $0$ for all $t$, where $\dot{\xi}(t) := d\xi(t)/dt$ denotes the vector tangent to $\gamma$ at $\xi(t)$. In a local coordinate system, the geodesic equation is expressed as
$$\ddot{\xi}^i + \Gamma^i_{jk}\dot{\xi}^j\dot{\xi}^k = 0, \tag{17}$$
where $\ddot{\xi} = d^2\xi/dt^2$. The parameter $t$ is normalized such that $g_{ij}\dot{\xi}^i\dot{\xi}^j = 1$ and is called the canonical parameter of the geodesic $\gamma$. The canonical parameter of a geodesic should not be confused with that of an exponential family. The distance $\mathrm{dis}(\zeta, \xi)$ on $M$ is the infimum length of all piecewise differentiable curves of class $C^1$ joining $\zeta$ and $\xi$ in $M$. Let the neighborhood of $\zeta$ in $M$ be $U_\zeta$. With the 0-connection, it can be shown that every point $\xi \in U_\zeta$ can be joined to $\zeta$ by the unique geodesic lying in $U_\zeta$, and the length is equal to $\mathrm{dis}(\zeta, \xi)$ (Proposition IV.3.4 of [16]). In this sense, we call this distance the geodesic distance. In a Euclidean manifold, the geodesic is the straight line joining $\zeta$ and $\xi$.
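The geodesic equation (17) is an ordinary differential equation and can be integrated numerically once the Christoffel symbols are known. The following sketch (not from the paper) does this with a fourth-order Runge-Kutta scheme for the normal location-scale model N(μ, σ²), whose Fisher metric is ds² = (dμ² + 2dσ²)/σ²; the Christoffel symbols used below are the standard ones for this metric, and the initial condition is an illustrative choice.

```python
# Numerical integration of the geodesic equation (17) for the N(mu, sigma^2) model.
# Nonzero 0-connection Christoffel symbols for ds^2 = (dmu^2 + 2 dsigma^2)/sigma^2:
# Gamma^mu_{mu sigma} = -1/sigma, Gamma^sigma_{mu mu} = 1/(2 sigma), Gamma^sigma_{sigma sigma} = -1/sigma.
import numpy as np

def derivative(state):
    mu, sigma, dmu, dsigma = state
    ddmu = 2.0 * dmu * dsigma / sigma                    # -2 * Gamma^mu_{mu sigma} * dmu * dsigma
    ddsigma = -0.5 * dmu**2 / sigma + dsigma**2 / sigma  # -Gamma^sigma_{mu mu} dmu^2 - Gamma^sigma_{ss} dsigma^2
    return np.array([dmu, dsigma, ddmu, ddsigma])

def geodesic(state0, t_max=1.0, n_steps=1000):
    state, dt = np.array(state0, float), t_max / n_steps
    path = [state.copy()]
    for _ in range(n_steps):
        k1 = derivative(state)
        k2 = derivative(state + 0.5 * dt * k1)
        k3 = derivative(state + 0.5 * dt * k2)
        k4 = derivative(state + dt * k3)
        state = state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        path.append(state.copy())
    return np.array(path)

# start at (mu, sigma) = (0, 1) with unit-speed initial velocity along mu (canonical parameter)
path = geodesic([0.0, 1.0, 1.0, 0.0])
print(path[-1][:2])   # endpoint of the geodesic after unit canonical time
```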
Condition for asymptotic unbiasedness
The first assertion of the following lemma comes from the bias of the maximum likelihood estimator of a parameter. The result seems classic and can be found in the literature (e.g., Equation 20 of [6] and Section 7.3 of [22]), but the authors did not find a rigorous form. Therefore, we present the assertion with regularity conditions. The proof is provided in Appendix B.
Lemma 2.4. Under the regularity conditions, for large $n$,
(i) $E_\xi[(\hat{\xi} - \xi)^i] = g^{ij}\left(\partial_j \tilde{l} - \frac{1}{2}g^{kr}\Gamma^{(-1)}_{kr,j}\right) + o(n^{-1})$, and
(ii) $E_\xi[(\hat{\xi} - \xi)^i(\hat{\xi} - \xi)^j] = g^{ij} + o(n^{-1})$,
where $\hat{\xi}$ maximizes the penalized likelihood.
Assertion (i) reveals that Firth's [10] condition (3) for removing the O(n −1 ) term of the bias of the maximum likelihood estimateξ MLE is a sufficient condition. This condition is expressed as
∂ il − 1 2 g kr Γ (−1) kr,i = 0 for all i ∈ {1, . . . , d}.(18)
Now, let us discuss bias reduction of generic estimands. Suppose an estimand function f : Ξ → R is given. Under the regularity conditions, the bias of the plug-in estimator f (ξ) of an estimand f (ξ) is given as
$$E_\xi[f(\hat{\xi}) - f(\xi)] = E_\xi[(\hat{\xi}-\xi)^i]\,\partial_i f + E_\xi[(\hat{\xi}-\xi)^i(\hat{\xi}-\xi)^j]\,\frac{1}{2}\partial_i\partial_j f + E_\xi[(\hat{\xi}-\xi)^i(\hat{\xi}-\xi)^j(\hat{\xi}-\xi)^k]\,\frac{1}{3!}\partial_i\partial_j\partial_k f + o(n^{-4/3}).$$
Assertion (i) of Lemma 2.4 implies that if we expand the penalty functioñ l(ξ) = O(1) for large n in fractional powers of n and choose it to cancel out the bias of f (ξ) for each order in n, then we can remove the bias up to the desired order. We demonstrate this scheme up to O(n −1 ) in the following lemma. The proof is provided in Appendix C.
Lemma 2.5. Under the regularity conditions, the bias of the plug-in estimator $f(\hat{\xi})$, where $\hat{\xi}$ maximizes the penalized likelihood, is $o(n^{-1})$ if the penalty function $\tilde{l}$ satisfies
$$\langle \mathrm{grad}\, f, \mathrm{grad}\,\tilde{l}\rangle + \frac{1}{2}\Delta^{(-1)} f = o(n^{-1}). \tag{19}$$
Remark 2.6. An estimator with an asymptotic bias of o(n −1 ) is said to be second-order unbiased. Lemma 2.5 implies that the plug-in estimator f (ξ) with a penalty functionl satisfying the condition (19) is second-order unbiased. Moreover, Lemma 2.4 (ii) shows that both the plug-in estimators f (ξ MLE ) and f (ξ) are second-order efficient (saturates the Cramér-Rao bound up to O(n −1 )). A stronger result is Theorem 2.8.
Recall the Lehmann-Scheffé theorem:
Theorem 2.7 (Theorem 2.1.11 of [20]). Let a sample be distributed according to a parametric model M = {p(·; ξ) : ξ ∈ Ξ}, and suppose that T is a complete sufficient statistic for M.
(i) For every U-estimable estimand f (ξ), ξ ∈ Ξ, there exists a UMVUE.
(ii) The UMVUE in (i) is the unique unbiased estimator that is a function of T .
If we have a complete sufficient statistic, an optimality result of the plug-in estimator in terms of the asymptotic bias follows. The main results of this paper are stated in the following theorem. The proof is provided in Appendix D.

Theorem 2.8. Suppose a complete sufficient statistic for the parametric model $M = \{p(\cdot;\xi) : \xi \in \Xi\}$ exists. Under the regularity conditions of Lemma 2.5, the plug-in estimator $f(\hat{\xi})$ of an estimand $f(\xi)$, $\xi \in \Xi$, where $\hat{\xi}$ maximizes the penalized likelihood with a penalty function $\tilde{l}$ satisfying the condition (19), is unbiased up to $O(n^{-1})$ and coincides with the UMVUE up to $O(n^{-1})$.

For a given estimand function $f$, the condition (19) is fulfilled if we can find a penalty function $\tilde{l}$ that satisfies the following first-order quasi-linear partial differential equation for $\tilde{l}$:
$$\langle \mathrm{grad}\, f, \mathrm{grad}\,\tilde{l}\rangle + \frac{1}{2}\Delta^{(-1)} f = 0. \tag{20}$$
Hence, our task now is integrating (20) forl. Here, "quasi-linear" means that (20) is linear in the partial derivatives ∂ il . If f is (−1)-harmonic, ∆ (−1) f = 0, and (20) is linear. In this case, we achieve asymptotic unbiasedness without a penalty.
Corollary 2.9. If an estimand function f is (−1)-harmonic, then the plug- in estimator f (ξ MLE ) of an estimand f (ξ), ξ ∈ Ξ is unbiased up to O(n −1 )
and coincides with the UMVUE up to O(n −1 ) given that a complete sufficient statistic for the parametric model M = {p(·; ξ) : ξ ∈ Ξ} exists.
A well-known (−1)-harmonic function is the expectation parameter of an exponential family as a function of the canonical parameter.
Example 3. An exponential family with canonical parameterization is a 1-flat manifold (Example 1). It can be seen that
E ξ η i (ξ MLE ) = η i (ξ), ∀ξ ∈ Ξ for the expectation parameter η i := E ξ T i = ∂ i ψ (see Example 6.6.3 of [20]). We have g ij = ∂ i ∂ j ψ = ∂ i η j and Γ (−1) ij,k = ∂ i g jk − Γ (1) ik,j = ∂ i g jk by the conjugacy of ±1-connections (8). Then, η i 0 for an index i 0 ∈ {1, . . . , d} is a (−1)-harmonic function, because ∆ (−1) η i 0 = g ij ∂ i ∂ j η i 0 − g ij g kl ∂ k g li ∂ j η i 0 = g ij ∂ i g ji 0 − g kl ∂ k g li 0 = 0.
Therefore, Corollary 2.9 concludes that the estimator η i 0 (ξ MLE ) of the estimand η i 0 (ξ) coincides with the UMVUE up to O(n −1 ). In fact, since η i 0 (ξ MLE ) is an unbiased estimator of η i 0 (ξ) and is the complete sufficient statistic of the exponential family, Theorem 2.7 implies that η i 0 (ξ MLE ) is the UMVUE of η i 0 (ξ) exactly for any n ≥ 1.
Before considering the cases with ∆ (−1) f = 0, we revisit the bias reduction of maximum likelihood estimates of parameters proposed by Firth [10].
Remark 2.10. The condition of Firth (18) is sufficient for (20). Suppose we are interested in reducing the bias of the maximum likelihood estimatorξ MLE . We may consider the bias reduction for each component ofξ MLE separately. If we consider the estimand function f = ξ i 0 for an index i 0 ∈ {1, . . . , d}, the condition (20) for the penalty function, sayl (i 0 ) , becomes
g jk ∂ jl (i 0 ) ∂ k ξ i 0 − 1 2 g jk g lm Γ (−1) lm,j ∂ k ξ i 0 = g i 0 j (∂ jl (i 0 ) − 1 2 g kl Γ (−1) kl,j ) = 0.
This is a single partial differential equation forl (i 0 ) because all indices other than i 0 are summed over. The superscript (i 0 ) should not be confused with indices of components of a vector field. In contrast, if we want to reduce the biases of all the components ofξ MLE using a single penalty functionl simultaneously, the condition (20) requires
g ij (∂ jl − 1 2 g kr Γ (−1) kr,j ) = 0 for all i ∈ {1, . . . , d}.
Multiplying the left-hand side by the Fisher metric tensor (4) reproduces the condition (18) considered by Firth [10], which is the system of d partial differential equations for the single unknown functionl. Hence, we have to consider the integration of an overdetermined system of partial differential equations mentioned in Section 2.1, except for one-dimensional models (d = 1). Exponential families with canonical parameterization considered by Firth in Section 3.1 of [10] are a special case in which we know that this overdetermined system is integrable, thanks to the flatness of the model manifold (see Remark 3.5).
Let us discuss the cases with ∆ (−1) f = 0. It is known that a generic quasilinear partial differential equation of the first order is equivalent to a certain system of ordinary differential equations, and geometric interpretations are useful for elucidating this equivalence (see Sections I.4, I.5, II.2 of [4] and Chapter 17 of [21]).
We assume that the normal vector field to a level surface f in a model manifold M is not degenerated everywhere: |gradf | 2 := gradf, gradf > 0. Consider an embedding of M in M × R by introducing a local coordinate
system (ξ 1 , . . . , ξ d , ξ d+1 ), where ξ d+1 =l(ξ 1 , . . . , ξ d ).
Suppose that a solution of the partial differential equation (20) forl is given as an implicitization of ξ d+1 =l(ξ 1 , . . . , ξ d ) by the equation
φ(ξ 1 , . . . , ξ d+1 ) = φ 0(21)
for a constant φ 0 . This equation determines a d-dimensional level surface of φ in M × R, on which φ is the constant φ 0 , and is the integral manifold determined by the partial differential equation (20). In fact, the constancy gives
$$\frac{\partial \phi}{\partial \xi^i} + \frac{\partial \phi}{\partial \xi^{d+1}}\frac{\partial \tilde{l}}{\partial \xi^i} = 0, \qquad i \in \{1, \ldots, d\},$$
and multiplying each equation by (gradf ) i and summing them yields
$$(\mathrm{grad}\, f)^i \partial_i \phi + \frac{\partial \phi}{\partial \xi^{d+1}}(\mathrm{grad}\, f)^i \partial_i \tilde{l} = \langle \mathrm{grad}\, f, \mathrm{grad}\,\phi\rangle + \frac{\partial \phi}{\partial \xi^{d+1}}\langle \mathrm{grad}\, f, \mathrm{grad}\,\tilde{l}\rangle = \langle \mathrm{grad}\, f, \mathrm{grad}\,\phi\rangle - \frac{\partial \phi}{\partial \xi^{d+1}}\frac{\Delta^{(-1)} f}{2} = \left\langle \left(\mathrm{grad}\, f,\; -\frac{\Delta^{(-1)} f}{2}\right),\ \mathrm{grad}^*\phi \right\rangle = 0, \tag{22}$$
where $\mathrm{grad}^*\phi$ is the gradient of $\phi$ defined in $M\times\mathbb{R}$ with components $(\mathrm{grad}^*\phi)^i = g^{ij}\partial_j\phi$ for $i \in \{1, \ldots, d\}$ and $(\mathrm{grad}^*\phi)^{d+1} = \partial_{d+1}\phi$.
The second-last equality follows by (20). Here, grad * φ is the normal vector field to each integral manifold determined by (21). The last equality of (22) requires that at each point tangent planes of all hypersurfaces through the point belong to a single pencil of planes whose axis is given by the direction field (gradf, −∆ (−1) f /2) in M × R, called the Monge axis of the partial differential equation (20). The integral curves along this direction field are defined by a system of ordinary differential equations and are called the characteristic curves of the partial differential equation (20) (see Figure 1). In other words, a dparameter family of characteristic curves parameterized by s: satisfies the system of ordinary differential equations:
$$\{\xi^1(s), \ldots, \xi^d(s),\ \xi^{d+1}(s) = \tilde{l}(\xi^1(s), \ldots, \xi^d(s))\}, \qquad s \ge 0,$$
$$\frac{d\xi^i(s)}{ds} = (\mathrm{grad}\, f)^i, \quad i \in \{1, \ldots, d\}, \qquad \text{and} \qquad \frac{d\xi^{d+1}(s)}{ds} = \frac{d\tilde{l}(s)}{ds} = -\frac{\Delta^{(-1)} f}{2}, \tag{23}$$
where the initial values {ξ 1 (0), . . . , ξ d (0)} and s are the parameters of the characteristic curves. These parameters should not be confused with statistical parameters. By virtue of the theory of ordinary differential equations, the unique solution exists if the right-hand sides of (23) are Lipschitz continuous. Eliminating the parameters from the expression yields an explicit expression for the penalty functionl =l(ξ 1 , . . . , ξ d ). This procedure is discussed in Sections 2.1 and 2.2 of [4]. As in (22), we see that
$$\langle \mathrm{grad}\, f, \mathrm{grad}\,\phi\rangle - \frac{\partial \phi}{\partial \xi^{d+1}}\frac{\Delta^{(-1)} f}{2} = 0. \tag{24}$$
Thus, a solution of the system of ordinary differential equations (23) gives
$$\frac{d\phi}{ds} = \frac{\partial \phi}{\partial \xi^i}\frac{d\xi^i}{ds} + \frac{\partial \phi}{\partial \xi^{d+1}}\frac{d\xi^{d+1}}{ds} = \langle \mathrm{grad}\, f, \mathrm{grad}\,\phi\rangle - \frac{\partial \phi}{\partial \xi^{d+1}}\frac{\Delta^{(-1)} f}{2} = 0,$$
which means that along each characteristic curve φ(ξ 1 (s), . . . , ξ d+1 (s)), s ≥ 0 is a constant value, as expected from (21). In particular, if f is (−1)harmonic, the Monge axis is always parallel to the model manifold M and ξ d+1 =l(ξ 1 , . . . , ξ d ) has a constant value. This is a geometric interpretation of why we do not need a penalty, as seen in Corollary 2.9.
In principle, we can solve the partial differential equation (20) for generic model manifolds and estimand functions. To obtain an explicit expression, we have to eliminate parameters s and {ξ 1 (0), . . . , ξ d (0)} from the solution of the system of the ordinary differential equations (23). However, this process is tedious in practice. In fact, elimination of parameters to obtain an implicit representation of a manifold is an important topic in computational algebraic geometry and is called the implicitization problem (see Chapter 3 of [5]). Further investigation of the implicitization problem seems interesting, but in this study we developed the following trick to circumvent the implicitization problem.
Suppose that we seek a solution to the partial differential equation (24) for an integral of the system of ordinary differential equations (23) of the form
φ(ξ 1 , . . . , ξ d+1 ) = χ(f (ξ 1 , . . . , ξ d )) − ξ d+1 = χ(f (ξ 1 , . . . , ξ d )) −l(ξ 1 , . . . , ξ d )
with an injective map χ : R → R. This form of φ has a geometric interpretation. Note that on an integral manifold, that is, φ = const., f = const. if and only ifl = const. We have (d − 1)-dimensional level surfaces of the estimand function f in the d-dimensional model manifold M, namely, f gives a codimension-1 foliation of M, and the level surfaces of f are the leaves. Let the foliation be denoted by
{N w (f ) : w ∈ R} with N w (f ) = {(ξ 1 , . . . , ξ d ) ∈ M : f (ξ 1 , . . . , ξ d ) = w}.
On the other hand, on the d-dimensional integral surface φ = u for a constant u ∈ R, we have (d − 1)-dimensional level surfaces ofl. Let the foliation be denoted by
{N * u,v (l) : u, v ∈ R} with N * u,v (l) = {(ξ 1 , . . . , ξ d , v) ∈ M × R : φ(ξ 1 , . . . , ξ d , v) = u,l(ξ 1 , . . . , ξ d ) = v)}.
Let us consider the projection:
π(N * u,v (l)) := {(ξ 1 , . . . , ξ d ) : (ξ 1 , . . . , ξ d , ξ d+1 ) ∈ N * u,v (l)}.
Then, if we consider the integral manifold with $u = \chi(w) - v$, we have
$$\{N_w(f) : w \in \mathbb{R}\} = \{\pi(N^*_{\phi_0,\, \chi(w)-\phi_0}(\tilde{l})) : w \in \mathbb{R}\} = \{\pi(N^*_{\phi_0,\, v}(\tilde{l})) : v \in \mathbb{R}\},$$
which means that the foliation of the integral manifold $\phi = \phi_0$ by the level surfaces of $\tilde{l}$ projected onto $M$ constitutes the foliation of $M$ by the level surfaces of $f$ (Figure 2). The condition under which $\phi$ is an integral of the system of ordinary differential equations (23) can be expressed as
$$\frac{d\phi}{ds} = \frac{d\chi}{df}\frac{\partial f}{\partial \xi^i}\frac{d\xi^i}{ds} - \frac{d\xi^{d+1}}{ds} = \frac{d\chi}{df}|\mathrm{grad}\, f|^2 + \frac{\Delta^{(-1)} f}{2} = 0.$$
Therefore, by integrating the differential equation
$$\frac{d\chi}{df} = -\frac{1}{2}\frac{\Delta^{(-1)} f}{|\mathrm{grad}\, f|^2}, \tag{25}$$
we obtain the penalty functionl(ξ 1 , . . . , ξ d ) = χ(f (ξ 1 , . . . , ξ d )) + const. In particular, if the right-hand side of (25) is a function of f , the integration is straightforward. The constant can be chosen arbitrarily since the maximizer ξ of the penalized likelihood is our concern and does not depend on the constant. Therefore, the penalty function is not unique.
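When the right-hand side of (25) is available as a function r(f) of the estimand value alone, the penalty can be tabulated by a one-dimensional quadrature. The following short sketch (an illustration, with a hypothetical integrand r supplied by the user) does this by cumulative trapezoidal integration; the additive constant is irrelevant, as noted above.

```python
# Tabulate chi(f) on a grid by integrating dchi/df = r(f), up to an additive constant.
import numpy as np

def penalty_from_ode(r, f_grid):
    r_vals = np.asarray([r(f) for f in f_grid])
    increments = 0.5 * (r_vals[1:] + r_vals[:-1]) * np.diff(f_grid)
    return np.concatenate(([0.0], np.cumsum(increments)))

# toy usage: r(f) = -1/(2f) integrates to chi(f) = -log(f)/2 + const.
grid = np.linspace(0.1, 1.0, 901)
chi = penalty_from_ode(lambda f: -0.5 / f, grid)
print(np.allclose(chi, -0.5 * np.log(grid / grid[0]), atol=1e-5))
```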
In terms of geodesic distances
An explicit result can be obtained if an estimand is represented as a function of the squared geodesic distance. An orthonormal frame at ζ ∈ M defines a coordinate system in the tangent space T ζ M. The diffeomorphism of the neighborhood of ζ in T ζ M to the neighborhood of ζ, U ζ , defines a local coordinate system in U ζ . We denote the local coordinate system by {η 1 , . . . , η d } and call it a normal coordinate system at ζ. Note that {∂/∂η 1 , . . . , ∂/∂η d } forms an orthonormal frame at ζ, but may not be orthonormal at other points. If {η 1 , . . . , η d } is a normal coordinate system at ζ, then the geodesic γ = η(t) with the initial condition
η(0) = 0 andη(0) = c is expressed as η i = c i t, i ∈ {1, . . . , d} (see Proposition III.8.3 of [16]).
Our result is the following theorem. The proof is provided in Appendix E. The lemmas used in the proof appeared in Riesz's work [26] on a construction of the fundamental solution, or Green's functions, of a partial differential equation as a convergent series in the squared geodesic distance associated with the Laplace-Beltrami operator.
Theorem 2.11. Consider a parametric model M = {p(·; ξ) : ξ ∈ Ξ}. Fix a point ζ ∈ M and consider the geodesic joining ζ and point ξ ∈ U ζ with the the geodesic distance t(ξ) := dis(ζ, ξ). Suppose we have an estimand function of the squared geodesic distance f (ξ) = ϕ(t 2 (ξ 1 , . . . , ξ d )). Then, under the regularity conditions of Lemma 2.5, the plug-in estimator f (ξ) of the estimand f (ξ), whereξ maximizes the penalized likelihood with the penalty functionl(ξ) = χ(t 2 (ξ 1 , . . . , ξ d )) satisfying
$$\chi(t^2) = \frac{1}{4}\int^{\xi} S_i(\tilde{\xi})\,d\tilde{\xi}^i - \frac{1}{2}\log\left(|\varphi'(t^2)|\, t^{d}\sqrt{h}\right) \tag{26}$$
is unbiased up to O(n −1 ) and coincides with the UMVUE up to O(n −1 ) if a complete sufficient statistic for M exists. Here, h is the determinant of the Fisher metric tensor in the normal coordinate system at ζ, ϕ ′ (x) = dϕ/dx for x ∈ R >0 , andξ is the variable of integration.
The implication of this theorem becomes clear in the following example.
Example 4. If the estimand is the squared geodesic distance, that is, ϕ is the identity map, the right-hand side of (25) is represented by a function of the squared geodesic distance:
$$-\frac{d}{4t^2} - \frac{1}{4}\frac{d\log h}{dt^2} + \frac{S_i}{4}\frac{d\xi^i}{dt^2},$$
where the relation between α-Laplacians (16) and the results in Appendix E are used. Subsequently, the solution to (25) coincides with that of Theorem 2.11. The foliation of M by the level surfaces of f is reduced to leaves consisting of equidistant points from the point ζ ∈ M. For any point ξ in M, there exists a geodesic γ joining ζ and ξ, which is transverse to the foliation and diagonally intersects the leaves (see Figure 3).
Results on flat model manifolds
In this section, we discuss the special cases in which model manifolds are flat. In Section 3.1 we discuss one-dimensional models (a one-dimensional manifold is always flat). Section 3.2 describes α-flat manifolds defined in Section 2.1 and revisits the results of Firth [10]. In this section, we assume the assumptions of Lemma 2.5.
One-dimensional model manifolds
For one-dimensional models, the partial differential equation (20) is reduced to an ordinary differential equation and can be integrated as in the following corollary. The proof is provided in Appendix F.
Corollary 3.1. Consider a one-dimensional parametric model $M = \{p(\cdot;\xi) : \xi \in \Xi\}$. Under the regularity conditions of Lemma 2.5, the plug-in estimator $f(\hat{\xi})$ of an estimand $f(\xi)$, $\xi \in \Xi$, where $\hat{\xi}$ maximizes the penalized likelihood with a penalty function $\tilde{l}$ satisfying
$$e^{\tilde{l}(\xi)} \propto \frac{\{g(\xi)\}^{1/4}}{\sqrt{|f'(\xi)|}}\, e^{\frac{1}{4}\int^{\xi} S_1(\tilde{\xi})\,d\tilde{\xi}}, \tag{27}$$
is unbiased up to O(n −1 ), and coincides with the UMVUE up to O(n −1 ) if a complete sufficient statistic for M exists. Here, S 1 = g 11 S 111 , g 11 = g −1 11 , and g = g 11 . In particular, for an α-flat manifold of a non-zero α ∈ R, the condition becomes
$$e^{\tilde{l}(\xi)} \propto \frac{h((\alpha-1)/2;\,\alpha)}{\sqrt{|f'(\xi)|}}, \tag{28}$$
where h((α − 1)/2; α) is the density of the (α − 1)/2-parallel volume element with respect to an α-flat manifold defined in (13), and ξ is an α-affine coordinate system. Remark 3.2. A one-dimensional manifold is flat, and a curve in the manifold is always a geodesic. Therefore, any local coordinate is a normal coordinate, and the expression (26) with d = 1 should be consistent with the expression (27). In fact, (26) yields
ψ(t 2 ) = 1 4 ξ S 1 (ξ)dξ − 1 2 log |f ′ (ζ)|,
where we use the facts that h = h 11 is a constant since h 11 {ζ(0)} 2 = 1 by the definition of the canonical parameter (see Section 2.1) and f ′ (ζ) = dϕ(t 2 )/dt 2 · dt 2 /dζ = 2ϕ ′ (t 2 )t/ζ(0). This expression coincides with (27).
Flat model manifolds
For α-flat model manifolds, a sufficient condition for (20) in a spirit similar to Firth [10] mentioned in Remark 2.10 can be justified, thanks to the equiaffiness of the manifold established as Lemma 2.3. We demonstrate obtaining conditions under which an overdetermined system of partial differential equations is integrable. If a manifold M is α-flat, we have Γ (α) i jk = 0 for all i, j, and k in an αaffine coordinate system. Using the relationship between α-Laplacians (16), the partial differential equation (20) can be recast into
$$\langle \mathrm{grad}\, f, \mathrm{grad}\,\tilde{l}\rangle = -\frac{1}{2}\Delta^{(\alpha)} f + \frac{1+\alpha}{4} S^i (\mathrm{grad}\, f)_i, \tag{29}$$
where ∆ (α) f = g ij ∂ i ∂ j f in the α-affine coordinate system. A sufficient condition for (29) is that the penalty functionl satisfies the system of partial differential equations
$$\partial_i \tilde{l} = -\frac{1}{2}\frac{\Delta^{(\alpha)} f}{(\mathrm{grad}\, f)_i} + \frac{1+\alpha}{4} S_i \quad \text{for all } i \in \{1, \ldots, d\}, \tag{30}$$
which is overdetermined, except for d = 1; that is, we have d ≥ 2 partial differential equations for the single functionl. The integrability condition (11) is now given by
$$\frac{1+\alpha}{2}\left(\partial_i S_j - \partial_j S_i\right) = \partial_i \frac{\Delta^{(\alpha)} f}{(\mathrm{grad}\, f)_j} - \partial_j \frac{\Delta^{(\alpha)} f}{(\mathrm{grad}\, f)_i} \quad \text{for all } i \neq j. \tag{31}$$
If $\alpha \neq 0$, the expression (7) yields $S_{ikr} = \partial_i g_{kr}/\alpha$, and we have $S_i = g^{jk}S_{ijk} = \partial_i(\log g)/\alpha$. The integrability condition (31) is reduced to a simpler form:
$$\partial_i \frac{\Delta^{(\alpha)} f}{(\mathrm{grad}\, f)_j} = \partial_j \frac{\Delta^{(\alpha)} f}{(\mathrm{grad}\, f)_i} \quad \text{for all } i \neq j.$$
An obvious class of estimand functions satisfying this integrability condition is the α-harmonic functions defined in Section 2.2, that is, ∆ (α) f = 0. The system of partial differential equations (30) becomes
∂ il = 1 + α 4α ∂ i log g for all i ∈ {1, . . . , d}
and can be immediately integrated. The solution coincides with the (α − 1)/2-parallel volume element (13). The conclusions are summarized in the following corollary. Firth's bias-reduction procedure can be generalized as follows. Note that the bias of all the components of the parameter estimate can be reduced using a single penalty function (see Remark 2.10). Remark 3.5. Corollary 3.3 is a multidimensional version of (28) of Corollary 3.1 (f ′ (ξ) = const. if f (ξ) is α-harmonic). For an exponential family with canonical parameterization, α = 1 and the 0-parallel volume element is reduced to the Jeffreys prior, which is a reproduction of Firth's result in Section 3.1 of [10].
Note that the α-connection comes into play even for 0-flat model manifolds. For a 0-flat model manifold with a 0-harmonic estimand function, the integrability condition (31) implies that the system of partial differential equations (30) is integrable if and only if ∂ i S j = ∂ j S i for all i = j. The proof of assertion (iii) of Lemma 2.3 gives the following result.
Examples
In this section, several examples are presented with simulation results to illustrate how the developed bias-reduction procedure works for problems of statistical interest.
Generalized linear models
Generalized linear models constitute a class of exponential families that cover both discrete and continuous measurements. They share several common properties, and the bias reduction of the maximum likelihood estimator of a parameter has been actively studied (see Section 15.2 of [23] and [18] for a survey). Bias reduction of generic estimands seems to be more challenging. In this subsection, we apply our bias-reduction procedure to estimate the expected responses of the generalized linear models. We will see that the 'design dependent shrinkage' property of our bias reduction works effectively.
For simplicity, we concentrate on a canonical link and a known dispersion φ. A vector of observations y with n components is assumed to be the realization of a random vector whose components are independently distributed with mean vector µ. The log-likelihood of a generalized linear model is expressed as
$$l(\xi; y) = \frac{y_i\xi^i - \psi(\xi)}{\phi} + c(y, \phi) \tag{32}$$
for some specific functions c and ψ. The linear predictor ξ i is modeled by a linear combination of unknown parameters {β 1 , . . . , β d }:
$$\xi^i = x^i_r\beta^r, \qquad i \in \{1, \ldots, n\}, \tag{33}$$
where x i r is the design matrix. By substituting (33) into (32), it can be observed that the model is an exponential family with canonical parameter {β 1 , . . . , β d } and sufficient statistics t r = y i x i r , r ∈ {1, . . . , d}. The score functions are
$$u_r = \partial_r l = \frac{\partial l}{\partial \xi^i}\frac{\partial \xi^i}{\partial \beta^r} = \frac{1}{\phi}(y_i - \mu_i)x^i_r, \qquad \partial_r := \frac{\partial}{\partial \beta^r}, \quad r \in \{1, \ldots, d\},$$
with the means $\mu_i = E_\beta[Y_i] = \partial\psi/\partial\xi^i$. The link function of a generalized linear model relates the linear predictors $\xi^i$ to the means $\mu_i$. We assume that the link function is invertible, and let the inverse be denoted by $h$, that is, $\mu_i = h(\xi^i)$. The Fisher metric tensor is given by
$$g_{rs} = -\partial_r\partial_s l = \frac{1}{\phi}\partial_s\mu_i\, x^i_r = \frac{1}{\phi}\frac{d\mu_i}{d\xi^i}\partial_s\xi^i\, x^i_r = \frac{1}{\phi}h'(\xi^i)\, x^i_r x^i_s.$$
It is convenient to use the following representation in matrices:
$$(g_{rs}) = \frac{1}{\phi}\sum_{i=1}^{n}(x^i_r)(w_{ii})(x^i_s) = \frac{1}{\phi}X^{\top}WX, \tag{34}$$
where $W = \mathrm{diag}(w_{11}, \ldots, w_{nn})$ and $w_{ii} = h'(\xi^i)$. Since the model is an exponential family with canonical parameterization, the Christoffel symbols are $\Gamma^{(1)}_{rs,t} = 0$ for all $r$, $s$, and $t$ (cf. Example 1), and we have
$$\Gamma^{(-1)}_{rs,t} = S_{rst} = E_\beta[u_r u_s u_t] = \sum_{i=1}^{n}\kappa_{3i}\, x^i_r x^i_s x^i_t, \tag{35}$$
where $\kappa_{3i} = E_\beta(Y_i - \mu_i)^3$ is the third cumulant of the response vector.
To discuss the expected response, we choose the inverse link function h(ξ i 0 ) for the i 0 -th design as the estimand. The condition (20) for the designspecific penalty functionl (i 0 ) is
$$\langle \mathrm{grad}\, h(\xi^{i_0}), \mathrm{grad}\,\tilde{l}^{(i_0)}\rangle + \frac{1}{2}\Delta^{(-1)} h(\xi^{i_0}) = 0. \tag{36}$$
Using the matrix representation (34), we obtain
$$\Delta^{(1)} h(\xi^{i_0}) = g^{rs}\partial_r\partial_s h(\xi^{i_0}) = \phi\, x_{i_0}(X^{\top}WX)^{-1}x_{i_0}^{\top}\, h''(\xi^{i_0}),$$
where x i 0 is the i 0 -th row vector of X. It is evident that a linear link is 1-harmonic. For a linear link, according to Corollary 3.3, the Jeffreys prior is a penalty function that achieves an asymptotic unbiased estimation up to O(n −1 ). Incidentally, this conclusion is the same as that for the estimation of parameter β discussed by Firth [10]. Otherwise, by using the relation between α-Laplacians (16) and the expression (35), we have
$$\Delta^{(-1)} h(\xi^{i_0}) = \phi\, x_{i_0}(X^{\top}WX)^{-1}x_{i_0}^{\top}\, h''(\xi^{i_0}) - \frac{1}{\phi}\sum_{j=1}^{n}\kappa_{3j}\, x_j(X^{\top}WX)^{-1}x_{i_0}^{\top}\, x_j(X^{\top}WX)^{-1}x_j^{\top}\, h'(\xi^{i_0}).$$
If this expression is identically zero, or h(ξ i 0 ) is (−1)-harmonic, asymptotic unbiasedness can be achieved without a penalty (Corollary 2.9). If the inverse link function is neither linear nor (−1)-harmonic, we have to solve the partial differential equation (36). Noting the expression
$$|\mathrm{grad}\, h(\xi^{i_0})|^2 = \phi\, x_{i_0}(X^{\top}WX)^{-1}x_{i_0}^{\top}\,\{h'(\xi^{i_0})\}^2,$$
the differential equation (25) takes the form
$$\frac{d\chi}{dh} = -\frac{1}{2}\frac{h''}{h'^2} + \frac{1}{2\phi^2}\,\frac{\sum_{j=1}^{n}\kappa_{3j}\, x_j(X^{\top}WX)^{-1}x_{i_0}^{\top}\, x_j(X^{\top}WX)^{-1}x_j^{\top}}{x_{i_0}(X^{\top}WX)^{-1}x_{i_0}^{\top}\, h'}.$$
Multiplying the right-hand side by h ′ (ξ i 0 ) and integrating with respect to ξ i 0 yields
$$-\frac{1}{2}\log|h'(\xi^{i_0})| + \frac{1}{2\phi^2}\int^{\xi^{i_0}}\frac{\sum_{j=1}^{n}\kappa_{3j}\, x_j(X^{\top}WX)^{-1}x_{i_0}^{\top}\, x_j(X^{\top}WX)^{-1}x_j^{\top}}{x_{i_0}(X^{\top}WX)^{-1}x_{i_0}^{\top}}\,d\xi. \tag{37}$$
If (37) is reduced to an injective function $\chi$ of $h(\xi^{i_0})$, then it becomes the desired penalty function
$$\tilde{l}^{(i_0)}(\beta^1, \ldots, \beta^d) = \chi(h(\xi^{i_0})) = \chi(h(x^{i_0}_r\beta^r)).$$
The penalized likelihood can be maximized with Fisher's scoring. By modifying the metric tensor as $\tilde{g}_{rs} := g_{rs} - \partial_r\partial_s\tilde{l}^{(i_0)}$, the update rule for the parameter $\beta$ is represented as
$$\beta^r \leftarrow \tilde{g}^{rs}\left(g_{st}\beta^t + u_s + \partial_s\tilde{l}^{(i_0)}\right).$$
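The following sketch (not from the paper) implements the scoring update above for logistic regression with the canonical link and dispersion φ = 1. The design matrix X, response y, and penalty function are hypothetical inputs; for simplicity, the gradient and Hessian of the penalty are approximated by central finite differences rather than derived analytically.

```python
# Penalized Fisher scoring for a logistic regression with a design-specific penalty ltilde(beta).
import numpy as np

def num_grad_hess(fun, beta, eps=1e-5):
    # central finite-difference gradient and Hessian of a scalar function of beta
    d = beta.size
    grad, hess = np.zeros(d), np.zeros((d, d))
    for i in range(d):
        e_i = np.eye(d)[i] * eps
        grad[i] = (fun(beta + e_i) - fun(beta - e_i)) / (2 * eps)
        for j in range(d):
            e_j = np.eye(d)[j] * eps
            hess[i, j] = (fun(beta + e_i + e_j) - fun(beta + e_i - e_j)
                          - fun(beta - e_i + e_j) + fun(beta - e_i - e_j)) / (4 * eps**2)
    return grad, hess

def penalized_scoring(X, y, ltilde, n_iter=100, tol=1e-8):
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        pi = 1.0 / (1.0 + np.exp(-(X @ beta)))
        u = X.T @ (y - pi)                               # score (phi = 1)
        G = X.T @ (X * (pi * (1.0 - pi))[:, None])       # Fisher metric X^T W X
        grad_pen, hess_pen = num_grad_hess(ltilde, beta)
        G_tilde = G - hess_pen                           # modified metric tensor
        beta_new = np.linalg.solve(G_tilde, G @ beta + u + grad_pen)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta
```

Passing a zero penalty, `ltilde = lambda b: 0.0`, reduces the iteration to ordinary Fisher scoring, which is a convenient check of the implementation.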
Bias reduction of maximum likelihood estimates of parameters in logistic regressions has been actively studied, including [3] and [10]. In logistic regression, the expected response is the success probability of each design, which is given by the logistic function of the linear predictor π i = e ξ i /(1+e ξ i ) for the i-th design. It is known that the maximum likelihood estimate of parameter β is found to be biased away from the origin β = 0. The 'separation' phenomenon, in which a parameter estimate diverges to infinity, is a problem in applications. This occurs when the success or failure is completely determined by whether the linear predictor is positive or negative in a sample. In fact, Firth's bias-reduction procedure has been used to avoid this separation phenomenon [18]. Therefore, a bias reduction requires some degree of 'shrinkage' of an estimate toward the origin.
As an illustration, we discuss a variant of the univariate logistic regression used by Copas [3] with the severe setting considered by Firth [10]; that is, one observation is made at each of the five design points with ξ i = x i 1 β 1 = iβ for i = 0, ±1, ±2 (the numbering is different from (33)). The Fisher metric tensor (34) is
$$g_{11} = 4\pi_{-2}(1-\pi_{-2}) + \pi_{-1}(1-\pi_{-1}) + \pi_1(1-\pi_1) + 4\pi_2(1-\pi_2).$$
The penalty function can be obtained by calculating (37); however, this model is one-dimensional, and the application of Corollary 3.1 is easier. Given that the 0-parallel volume element is the Jeffreys prior, we immediately obtain the penalty function
$$\tilde{l}^{(i_0)}(\beta) = \frac{1-i_0}{2}\beta + \log(1+e^{i_0\beta}) - \log(1+e^{\beta}) - \log(1+e^{2\beta}) + \frac{1}{2}\log\{(1+e^{\beta})^4 + 4e^{2\beta}\}.$$
[Table 1: for each value of $t_1$, the estimates of $\beta$ by MLE, Firth, and the AUE for $\pi_{\pm 2}$, $\pi_{\pm 1}$, $\pi_0$, together with the sampling probabilities at $\beta = 0.5, 1, 1.5$.]
The sufficient statistic t 1 = y i x i 1 = 2 i=−2 iy i has seven possible values. Table 1 is an extension of Table 1 in Firth [10]. It displays the distribution of the estimates of parameter β for each value by the three methods: the maximum likelihood estimation (MLE), the maximizer of the penalized likelihood proposed by Firth (Firth), and ours (asymptotic unbiased estimator, abbreviated as AUE). The AUE of β is design-specific since it is chosen to reduce the bias of the estimate of the success probability of each design. The limiting behaviors with β → ±∞ show that the maximum likelihood estimatê β MLE does not exist if t 1 = ±3 and the AUE of β for success probabilities π ±1 or π ±2 does not exist if t 1 = ±3.
The results were obtained with Fisher's scoring, and the iterations were continued until the absolute value of the update of the estimate became smaller than 10 −5 . The average number of required iterations was less than 50. Comparing the results of Firth with AUE, it can be noted that in AUE, shrinkage is weak, or even 'anti-shrinkage' occurs for large absolute values of the linear predictor. This is reasonable because estimating responses becomes easier with an increase in the absolute value of the predictor. Table 2 summarizes the performance of the estimators of the success probabilities. The maximum likelihood estimates are in the rows of MLE. The plug-in estimates by the maximizer of Firth's penalized likelihood are The results showed that the AUE had a significantly smaller bias than the MLE. The mean squared errors were similar in magnitude. This is because both estimators are second-order efficient (Remark 2.6). It can be observed that the the bias of Firth becomes larger with the increase in β, which is caused by the increase in the non-linearity of the logistic function. Of course, this is not a fair comparison since Firth's bias reduction was designed to estimate β.
Linear mixed-effects models
Linear mixed-effects models are among the most widely used classes of statistical models (see Section 3.4 of [20]). In this subsection, we explain how our bias reduction works for the estimation of the 'shrinkage factor' in linear mixed-effects models. Efron and Morris [8] discussed an estimation of the mean vector z in R n , n ≥ 3 of a linear mixed-effects model with unknown variance σ 2 > 0:
$$x_i \mid z_i \sim N(z_i, 1), \qquad z_i \overset{\mathrm{iid}}{\sim} N(0, \sigma^2), \qquad i \in \{1, \ldots, n\}. \tag{38}$$
If we regard the normal distribution N(0, σ 2 ) as the prior distribution for z i , i ∈ {1, . . . , n}, the marginal log-likelihood is
$$l(\sigma^2; x) = -\frac{|x|^2}{2(1+\sigma^2)} - \frac{n}{2}\log 2\pi(1+\sigma^2), \tag{39}$$
which is obtained by integrating out z from (38). The estimator of σ 2 which maximizes (39) isσ 2 MLE = |x| 2 /n − 1. The best predictor of the mean vector z under the squared error loss is (see Section 3.5 of [20])
z(σ 2 ) = {1 − s(σ 2 )}x, s(σ 2 ) := 1 1 + σ 2 ,
where s(σ 2 ) is called the shrinkage factor, because the predictorẑ is shrunken toward the ground mean, 0, as the variance σ 2 decreases. In applications of the model we are interested in the shrinkage factor. The plug-in estimator s(σ 2 MLE ) has bias, 2/{(1 + σ 2 )n} + o(n −1 ), and our bias reduction can be applied to remove this bias as follows.
The log-likelihood (39) is a one-dimensional exponential family with the canonical parameter ξ = −1/{2(1 + σ 2 )}. The manifold is 1-flat, and the canonical parameter is a 1-affine coordinate. The Fisher metric tensor is g 11 = n/(2ξ 2 ). As |x| 2 is a complete sufficient statistic, Corollary 3.1 immediately provides the UMVUE of the shrinkage factor up to O(n −1 ). By substituting f (ξ) = s(σ 2 ) = −2ξ into (28), we obtaiñ l(ξ) = log h(0; 1) + const. = log(1 + σ 2 ), which coincides with the Jeffreys prior.
A two-parameter family of linear mixed-effects models that generalizes the model expressed by (38) is
x ij |z i iid ∼ N(z i , δ), j ∈ {1, ..., m i }, z i iid ∼ N(0, α), i ∈ {1, ..., n},
where the variances α > 0 and δ > 0 are unknown. The parameter is denoted by ξ = (α, δ). This model is a simplified version of the linear regression model discussed in [2]. Let us consider the asymptotics n → ∞ under the assumption that sup i m i < ∞. We exclude the case of m 1 = · · · = m n = 1, which reduces to (38). The best predictor of each mean z i iŝ
z i = {1 − s (i) (ξ)}x i , s (i) (ξ) := δ δ + m i α ,x i = m i j=1 x ij m i for i ∈ {1, . . . , n}. The log-likelihood is l(ξ; x) = 1 2δ n i=1 (m ixi ) 2 α δ + m i α − n i=1 m i j=1 x 2 ij − 1 2 n i=1 log (δ + m i α)δ m i −1 + const.,
and the Fisher metric tensor is [15] (
g ij ) = 1 2 n i=1 m 2 i (δ + m i α) −2 n i=1 m i (δ + m i α) −2 n i=1 m i (δ + m i α) −2 n i=1 (δ + m i α) −2 + (m − n)δ −2 with the determinant g = 1 4 i<j (m i − m j ) 2 (δ + m i α) 2 (δ + m j α) 2 + m − n δ 2 n i=1 m 2 i (δ + m i α) 2 > 0,
where m := n i=1 m i . After some calculations, we obtain the skewness tensor:
S ααα = n i=1 m 3 i (δ + m i α) 3 , S ααδ = n i=1 m 2 i (δ + m i α) 3 , S αδδ = n i=1 m i (δ + m i α) 3 , S δδδ = n i=1 1 (δ + m i α) 3 + m − n δ 3 .
The model manifold is the orthant {ξ = (α, δ) : ξ ∈ R 2 >0 }. We can confirm that Γ (−1) ij,k = 0 for all i, j, and k. Therefore, the model manifold is (−1)flat, and ξ is a (−1)-affine coordinate system. Corollary 3.4 implies that the maximum likelihood estimateξ MLE of parameter ξ is second-order unbiased without penalty.
The asymptotic bias reduction of the shrinkage factor s (i) up to O(n −1 ) is as follows. The condition (20) for the penalty functionl (i) becomes As s (i) is not (−1)-harmonic, the penalty function is required. By using
grads (i) , gradl (i) + 1 2 ∆ (−1) s (i) = 0, where ∆ (−1) s (i) = 1 g j =i m i (m i − m j ) 2 (δ + m j α)(d + m i α) 3 + m 2 i (m − n) (δ + m i α) 3 > 0.− 1 2 ∆ (−1) s (i) |grads (i) | = 1 s (i) n m − 1 ,
the differential equation (25) can be integrated immediately and we havẽ Table 3 summarizes the performance of the estimators of the shrinkage factor s (1) . The results for the plug-in estimator with the maximum likelihood estimate of the parameter ξ = (α, δ) are shown in the row of MLE, wheres those with the maximizer of the penalized likelihood are shown in the row of AUE. The mean squared errors are in the columns of MSE. We set n = 50 and m 1 = · · · = m 50 = 10. The results were obtained from 10,000 experiments. The estimates of parameter ξ were computed with Fisher's scoring, as described in Section 4.1. The average number of iterations until the absolute value of the update of the estimate of the shrinkage factor became smaller than 10 −5 was equal to or less than seven. The results showed that the AUE had a significantly smaller bias than the MLE. The MSEs were in similar magnitude.
l (i) (ξ) = χ(s (i) ) + const. = 1 − n m log 1 + m i α δ .
Remark 4.1. Hirose and Lahiri [14] obtained a penalty function for a known δ, where s (i) (α, δ) is an unbiased estimator of s (i) (α, δ) up to O(n −1 ). The case of an unknown δ has been discussed in [15], but a penalty function was not obtained. The argument above provides a penalty function.
Location-scale family and hyperbolic space
The location-scale family is a representative group family whose group of transformations is x → σx + µ ∈ PSL(2, R) for x ∈ R, where µ ∈ R and σ > 0 are the location and scale parameter, respectively, and PSL(2, R) denotes the projective special linear group in R 2 (see Sections 3.3 and 4.4 of [20]). From a geometrical perspective, the family is represented as a twodimensional hyperbolic space and is not α-flat for any α. The purpose of this subsection is to illustrate how our bias-reduction procedure works for non-flat manifolds. The Neyman-Scott model, in which the number of parameters increases with the number of observations and the maximum likelihood estimator is inconsistent, can also be discussed in a similar manner, because the model manifold is also a hyperbolic space.
First, we explain why the location-scale family is not flat. Let p 0 (z), z ∈ R is a probability density symmetric about the origin. The location-scale family of the standard density p 0 is given by the density
p(x)dx = 1 σ p 0 {(x − µ)/σ}dx,(40)
where the parameter is denoted by ξ = (µ, σ). The one-sample log-likelihood is l(ξ; x) = l 0 {(x − µ)/σ} − log σ, where l 0 (z) = log p 0 (z), and we have
∂l ∂µ = − l ′ 0 σ , ∂l ∂σ = − zl ′ 0 σ − 1 σ , ∂ 2 l ∂µ 2 = l ′′ 0 σ 2 , ∂ 2 l ∂σ 2 = 2zl ′ 0 σ 2 + z 2 l ′′ 0 σ 2 + 1 σ 2 , ∂ 2 l ∂σ∂µ = l ′ 0 σ 2 + zl ′′ 0 σ 2 .
It can be seen that
E ξ [∂ µ l] = E ξ [∂ σ l] = E ξ [∂ µ ∂ σ l] = 0 and E ξ − ∂ 2 l ∂µ 2 = − 1 σ 2 E(l ′′ 0 ), E ξ − ∂ 2 l ∂σ 2 = {1 − E(z 2 l ′′ 0 )} 1 σ 2 ,
where expectations E ξ are taken with respect to the density of (40). For simplicity, we assume the standard density p 0 (z) satisfies the condition E(l ′′ 0 ) = E(z 2 l ′′ 0 )−1 =: −R 2 < 0, which is a choice of the scale. Then, the Fisher metric tensor becomes g ij = (R/σ) 2 δ ij . This metric is known as the Poincaré metric of the upper half-plane model of the two-dimensional hyperbolic space of the sectional curvature (−R −2 ), which is denoted by H(−R −2 ). The skewness tensor has components S µµµ = S µσσ = 0, S µµσ = c 1 σ −3 , and S σσσ = c 2 σ −3 , where
c 1 := −E(zl ′3 0 + l ′ 2 0 ), c 2 := −E(zl ′ 0 + 1) 3 .
The vector derived by the contraction of the skewness tensor has components S µ = 0, S σ = cσR −4 , c := c 1 + c 2 . The Christoffel symbols are
Γ (α) σσ,µ = Γ (α) µσ,σ = Γ (α) µµ,µ = 0, Γ (α) σσ,σ = − R 2 σ 3 − α 2 c 2 σ 3 , Γ (α) µσ,µ = − R 2 σ 3 − α 2 c 1 σ 3 , Γ (α) µµ,σ = R 2 σ 3 − α 2 c 1 σ 3 ,
and the non-zero components of the Riemann curvature tensor are
R (α) µσµσ = − R 2 σ 4 1 + α 2 c 1 R 2 1 + α 2 c 1 − c 2 R 2 , R (α) σµµσ = R 2 σ 4 1 − α 2 c 1 R 2 1 − α 2 c 1 − c 2 R 2 ,
which shows that the location-scale model is not α-flat for any α. Firth ([10], Section 4.2) discussed the bias reduction of the maximum likelihood estimate of the variance of the normal distribution with parameterization (µ, σ 2 ) and found that the bias-reduced estimator is the unbiased sample variance. His result can be reproduced as follows.
The condition of (20) for penalty functionl is the partial differential equation
∂ σl = − 1 − c 2R 2 1 2σ ,(41)
and a solution isl(ξ) = −{1 − c/(2R 2 )} log σ/2. The system of estimating equations for an n-sample is
∂ σ l * (ξ; x) = 2 σ 3 n i=1 (x i − µ) 2 − n σ − 1 − c 2R 2 1 2σ = 0 and ∂ µ l * (ξ; x) = 2 σ 2 n i=1 (x i − µ) 2 = 0,
where l * (ξ; x) := l(ξ; x 1 , . . . , x n ) +l(ξ). The solution isμ =x and
σ 2 2 = n n + (1 − c/(2R 2 )) s 2 , s 2 := 1 n n i=1 (x i −x) 2 ,(42)
wherex and s 2 are the sample mean and variance, respectively. If we choose p 0 (z) = e −z 2 / √ π, where c 1 = 4, c 2 = 8, and R 2 = 2, (42) becomes the unbiased sample variance of the normal distribution. Lauritzen [19] proposed hypothesis testing for the coefficient of variation γ := σ/µ for the normal distribution. He called s/x the geometric ancillary test statistic for the hypothesis that γ = γ 0 for certain γ 0 values. Here,x and s are the maximum likelihood estimates of µ and σ, respectively, and the test statistic has a bias of O(n −1 ). Our bias-reduction procedure can be applied to remove this bias as follows.
The condition of (20) for the penalty functionl becomes
gradl, gradγ + 1 2 ∆ (−1) γ = 0, ∆ (−1) γ = γ 2 + γ 4 R 2 . Since − 1 2 ∆ (−1) γ |gradγ| = − γ 2 − c/(4R 2 ) γ + γ 3 ,
the differential equation (25) can be integrated immediately. If we choose p 0 (z) = e −z 2 / √ π, a solution is
l(ξ) = χ(γ) + const. = c 4R 2 log σ µ − 1 2 1 + c 4R 2 log 1 + σ µ 2 ,
and the system of estimating equations forξ = (μ,σ) that maximizes the penalized likelihood is
µ =x + 5 4n σ 4 µ(µ 2 + σ 2 ) − 3 4n
σ 2 µ and Table 4 summarizes the performance of plug-in estimators of the coefficient of variation γ: the statistic s/x and the plug-in estimatorξ =σ/μ. We set n = 100. The results were obtained from 10,000 experiments. The results for the estimator s/x are in the row of MLE, whereas those for the estimatorσ/μ are in the row of AUE. The system (43) was numerically solved by iteratively updating the estimates of µ and σ through the substitution of the current estimates into the right-hand side. This is repeated until the ratio of the estimates converged, which is equivalent to the gradient descent. The average number of iterations until the absolute value of the update became smaller than 10 −5 was less than five. The results did not change when µ and σ were multiplied by a common positive number. The results indicate that ξ =σ/μ has a significantly smaller bias than s/x. The mean squared errors (MSEs) were of similar magnitudes for γ = 0.2 and 1, but the MSE of the MLE was larger than the AUE for γ = 2. Let us discuss the estimation of of squared geodesic distance t 2 from the standard density, where t = dis((0, 1), (µ, σ)). For each point (µ, σ) ∈ H(−R 2 ), there exists the unique geodesic joining (0, 1) and (µ, σ), where the geodesic distance t is the length of the geodesic (see Section 2.2). From a statistical perspective, the squared geodesic distance is a natural estimand of the deviation from a hypothesized density.
σ 2 = 2(x − µ) 2 + 2s 2 − 5 2n σ 4 µ 2 + σ 2 + 3 2n σ 2 .(43)
The geodesic equations (17) arë
µ − 2 σμσ = 0,σ +μ 2 −σ 2 σ = 0.
The solution satisfying the initial condition ξ(0) = (µ(0), σ(0)) = (0, 1) anḋ ξ(0) = (μ(0),σ(0)) becomes
µ(t) = Rμ(0) tanh(t/R) 1 − Rσ(0) tanh(t/R) , σ(t) = 1 cosh(t/R) − Rσ(0) sinh(t/R) .(44)
As expected, the geodesic distance is dis((0, 1), (µ(t 0 ), σ(t 0 ))) =
t 0 0 R σ μ 2 +σ 2 dt = t 0 .
The geodesic is a portion of the semicircle on the upper half-plane (see Figure 4):
µ −σ (0) µ(0) 2 + σ 2 = 1 (Rμ(0)) 2 , σ > 0.
The normal coordinate system η at (0, 1) is defined as η = (η 1 , η 2 ) = (μ(0)t,σ(0)t), where (η 1 ) 2 + (η 2 ) 2 = (t/R) 2 . The relationship (44) can be regarded as a change of the basis between η and ξ, and the inverse is given by Figure 4: The geodesic distance between a point ξ = (µ, σ) and the point (0, 1), which corresponds to the standard density, is the length of the curve shown in the Poincaré metric. Our task is to find the estimateξ such that t 2 (ξ) is an asymptotically unbiased estimate of squared geodesic distance t 2 (ξ).
η 1 = ρ √ ν 2 − 1 µ σ , η 2 = ρ √ ν 2 − 1 ν − 1 σ , 1 µ σ ξ
where ν = (µ 2 + σ 2 + 1)/(2σ) ≥ 1 and ρ = log(ν + √ ν 2 − 1). The squared geodesic distance is expressed as a function of ξ:
t 2 (ξ) = R 2 ρ 2 = R 2 log µ 2 + σ 2 + 1 2σ + µ 2 + σ 2 + 1 2σ 2 − 1 2 .(45)
Theorem 2.11 provides us an asymptotic unbiased estimator of the squared geodesic distance up to O(n −1 ). The expression of (26) contains the determinant of the Fisher metric tensor in the normal coordinate system η. Therefore, we have to calculate it and then express it in the original coordinate system ξ. By using the transformation rule of components of a tensor under the change of basis (see Section I.2 of [16]):
h ij (η) = ∂ξ k ∂η i ∂ξ l ∂η j g kl (ξ), we obtain h(η(ξ)) = Rµ(νσ − 1) σ 2 (ν 2 − 1) 4 1 + µ 2 (νσ − 1) 2 + σ 2 (ν 2 − 1) 2 ρ 2 µ 2 × 1 + (νσ − 1) 2 µ 2 + σ 2 (ν 2 − 1) 2 ρ 2 (νσ − 1) 2 .
The penalty function is It is straightforward to obtain the system of estimating equations forξ that maximizes the penalized likelihood. The explicit expressions are provided in Appendix G. Table 5 summarizes the performance of the estimators of the squared geodesic distance: the one is by substituting (x, s) for (µ, σ) (MLE), and the other is by substituting (μ,σ) (AUE), where (μ,σ) is the maximizer of the penalized likelihood. The density was specified to be p(z) = e −z 2 √ π. We set n = 100, and the results were based on 10,000 experiments. The average number of iterations until the absolute value of the update of the estimate of the geodesic distance became smaller than 10 −5 was less than of equal to four. The results showed that the AUE had a significantly smaller bias than the MLE. The MSEs were similar in magnitude. It might be surprising that we can reduce the bias of the maximum likelihood estimate for such a complicated estimand function as in (45).
l(ξ) = c 4R 2 log σ − log ρ − 1 4 log h(η(ξ)).(46)
Discussion
We have discussed the asymptotic bias reduction of maximum likelihood estimates of generic estimands with parameter estimates obtained by maximizing suitable penalized likelihoods. We have answered the two questions raised in Introduction.
The first question concerns bias reduction of generic estimands. We demonstrated that the problem of finding a penalty function that achieves asymptotic unbiasedness up to O(n −1 ) can be boiled down to the integration of a quasi-linear partial differential equation (20). This integration is equivalent to that of a system of ordinary differential equations. Since the latter system has a solution, we can obtain the desired penalty function for generic model manifolds and estimands.
The second question concerns the system of partial differential equations (3) obtained by Firth [10]. We pointed out that the system is overdetermined, except for one-dimensional models. The integration of an overdetermined system requires an integrability condition, but we do not have to solve the system for bias reduction. In fact, we successfully identified a milder condition, that is, (20). Flat model manifolds are exceptions; thanks to the equiaffiness of manifolds, we could assess the integrability of the overdetermined system, and we obtained some explicit results.
Another natural question is asking what estimand is asymptotically unbiased for a given penalty, because a penalty function can be given a priori, for example, as a regularizer or a prior. This question would be studied in the context of integral geometry, because the partial differential equation for an estimand function is a variant of the Laplace equation and the integration relies on the group isometries of manifolds (see Chapter II of [13]). We aim to address this issue in future research.
Finally, we comment on the concept of α-Laplacians introduced in (14). We introduced this concept because it simplifies various arguments in this study. This simplification is not restricted to bias reduction. For a curved exponential family, Komaki [17] constructed an optimal predictive distribution in terms of the Kullback-Leibler loss by shifting the distribution in a direction orthogonal to the model manifold with the amount ∆ (−1) q(·;ξ MLE )/2 (the last equation on page 307 of [17]). This expression implies that if the distribution function is (−1)-harmonic, optimality has already been achieved. Further investigation of α-Laplacians might be interesting.
A Proof of Lemma 2.3
Proof. With a local coordinate system {ξ 1 , . . . , ξ d }, a d-dimensional differential form ω is written as ω(ξ) = ω i 1 ...i d (ξ)dξ i 1 ∧ · · · ∧ dξ i d , where the density ω i 1 ...i d (ξ) is antisymmetric in indices, so ω i 1 ...i d (ξ) = 0 if and only if all indices are distinct (see Section I.1 of [16] for differential forms). In the local coordinate system, the components of the condition ∇ (α) ω = 0 gives the following system of partial differential equations:
∂ i ω 1...d − d j=1 Γ (α) k ji ω 1...j−1kj+1...d = ∂ i ω 1...d − Γ (α) j ji ω 1...d = 0 for all i ∈ {1, . . . , d}.(47)
Let h = ω 1...d . If α = 0, the differential from with h = √ g is called the volume element and satisfies the system (47) identically, because we have Γ (0) j ij = ∂ i log √ g from the expression of (9). Therefore 0-connection is locally equiaffine and (i) holds. For (ii), if M is α 0 -flat for a non-zero α 0 , since the α 0 -flatness implies that we can take a local coordinate system satisfying Γ (α 0 ) j ij = 0 around each point of M for all i, by using (6) and (7), we can deduce that
Γ (α) j ji = α 0 − α 2α 0 g jk ∂ i g jk = α 0 − α 2α 0 ∂ i log g
for all α ∈ R. Then, the system (47) becomes
∂ i h = α 0 − α 2α 0 ∂ i log g for all i ∈ {1, . . . , d}.
This system is integrable, because it satisfies the integrability conditions (11), i.e., ∂ i ∂ j h = ∂ j ∂ i h for all i = j. We immediately obtain log h = (1/2 − α/(2α 0 )) log g + const. For (iii), since the integrability condition yields
∂ j (Γ (α) i ik h) = ∂ k (Γ (α) i ij h), ∀α, j = k, we have ∂ j Γ (α) i ik = ∂ k Γ (α) i ij , ∀α, j = k.
On the other hand, by the definition of α-connections (6), we have
Γ (α) j ji = Γ (0) j ji − α 2 S j = 1 2 ∂ i log g − α 2 S j .
Therefore, we have
∂ i S j = ∂ j S i for all i = j.(48)
But we can construct a counter example, namely, there exists a manifold satisfying (48) but is not α-flat for any α (see Section 4.2).
Remark A.1. Hartigan (Section 6 of [11]) defined the asymptotically locally invariant prior by h which solves the partial differential equation (47) for α = 1, and satisfies h(f (ξ))f ′ (ξ) ∝ h(ξ) asymptotically and locally for any estimand function f , with assuming the existence. He proposed a oneparameter family of invariant priors, which is a solution of (47) [11]. An example of α-parallel volume elements with respect to an α 0 -flat manifold is h(α 0 ; α 0 ) = 1 for any non-zero α 0 , where the case of the exponential family (α 0 = 1) was discussed in [12]. Another example is h(α; ±1), which appeared in [27]. Takeuchi and Amari [27] obtained assertion (ii) with using Corollary 3.12 of [19], but the proof above seems more straightforward.
B Proof of Lemma 2.4
Consider a log-likelihood function l(ξ, x 1 , ..., x n ) := n i=1 log q(x i ; ξ) of a sample (x 1 , ..., x n ) ∈ X n with a parametric model q(x; ξ), ξ ∈ Ξ. The parameter space Ξ is an open subset of R d for a fixed d ∈ Z >0 for large n. The true value of the parameter, ξ 0 , is assumed to be in the interior of Ξ. Expectations are taken with respect to the product probability measure P ξ 0 (dx) = e l(ξ 0 ;x) n i=1 dx i and a derivative is denoted by ∂ i := ∂/∂ξ i . We prepare regularity conditions: A1: The map X ∋ x → l(ξ; x) is measurable for each ξ ∈ Ξ; A2: The map Ξ ∋ ξ → l(ξ; x) is three times differentiable for each ξ ∈ Ξ; A3: ∂ i l(ξ 0 ; x), i ∈ {1, . . . , d} are square integrable with respect to P ξ 0 (dx); A4: The largest eigenvalue of −C −1 GC −1 , λ max , for a matrix C = (c ij ) = diag(c 1 , ..., c d ) with c i > 0 such that c * := min i c i → ∞ as n → ∞ satisfies lim sup n→∞ λ max ∈ (−∞, 0); A5: For an r ∈ Z >0 , the r-th moments of the following are bounded:
1 c i |∂ i l(ξ 0 ; x)|, 1 √ c i c j |∂ i ∂ j l(ξ 0 ; x) + g ij (ξ 0 )|, c * c i c j c k M ijk (ξ 0 ), where M ijk (ξ 0 ) := supξ ∈B δ (ξ 0 ) |∂ i ∂ j ∂ k l(ξ; x)| with a ball B δ (ξ 0 ) := {ξ : |ξ − ξ 0 | ≤ δc * /c i , i ∈ {1, ..., d}}.
Under the regularity conditions A1-A5, Das et al. [7] proved, as their Theorem 2.1, the following theorem for an asymptotic representation ofξ − ξ 0 to study the mean squared error of the empirical predictor in linear mixed-effects models, whereξ is the solution of the system of score equations u i (ξ; x) = 0, i ∈ {1, . . . , d}.
Theorem B.1 ([7]). Under the regularity conditions A1-A5, (i) Aξ ∈ Ξ exists such that for any ρ ∈ (0, 1) there is a set of events E satisfying for large n and on E, ∂ i l(ξ; x) = 0, |c ij (ξ − ξ 0 ) j | < c 1−ρ * , and
ξ i = ξ i 0 + g ij (ξ 0 )∂ j l(ξ 0 ; x) + R, |R| ≤ c −2ρ * u * , i ∈ {1, . . . , d} with Eu r * bounded, where g ij (ξ 0 ) := E[ n k=1 ∂ i log q(x k ; ξ 0 )∂ j log q(x k ; ξ 0 )]; (ii) P(E c ) ≤ c 0 c −τ r
This theorem states that the solution of the system of score equations u i (ξ; x) = 0 exists, and lies in the parameter space Ξ with probability tending to one.
Consider the penalized log-likelihood of a sample (x 1 , ..., x n ) ∈ X n with a penalty functionl(ξ) = O(1): l * (ξ; x 1 , ..., x n ) := l(ξ; x 1 , ..., x n ) +l(ξ), ξ ∈ Ξ.
The maximizer of l * (ξ; x 1 , ..., x n ) is denoted byξ n . Expectations are still taken with respect to the product probability measure, P ξ 0 (dx) = e l(ξ 0 ;x) n i=1 dx i , and we have E[ n k=1 ∂ i l * (ξ 0 ; x k )∂ j l * (ξ 0 ; x k )] = g ij (ξ 0 ) + O(1). Theorem B.1 is used in the following proof of Lemma 2.4. For simplicity of expressions, we set c 1 = · · · = c n = c * = √ n, because a single asymptotics is sufficient for our purpose.
We prepare the regularity conditions with some modifications:
B1: The map X ∋ x → l * (ξ; x) is measurable for each ξ ∈ Ξ;
B2: The map Ξ ∋ ξ → l * (ξ; x) is four times differentiable for each ξ ∈ Ξ;
B3: ∂ i l(ξ 0 ; x), i ∈ {1, . . . , d} are square integrable with respect to P ξ 0 (dx); B4: G = (g ij ), g ij (ξ 0 ) := E[ n k=1 ∂ i log q(x k ; ξ 0 )∂ j log q(x k ; ξ 0 )] is a regular matrix;
B5: For an r ∈ Z ≥9 , the r-th moments of the following are bounded: 1 √ n |∂ i l * (ξ 0 ; x)|, 1 n |∂ i ∂ j ∂ k ∂ r l * (ξ 0 ; x)|, 1 √ n |∂ i ∂ j l * (ξ 0 ; x) + g ij (ξ 0 )|, 1 n |∂ i ∂ j ∂ k l * (ξ 0 )| 1 n |∂ i ∂ j ∂ k l * (ξ 0 ; x) − E{∂ i ∂ j ∂ k l * (ξ 0 ; x)}|.
B6: max i∈{1,...,d} |ξ i n | < d 0 n s for some constant d 0 and 0 < s < r/16 − 1/2.
Proof of Lemma 2.4. With setting ρ ∈ (2/3, 3/4), Theorem B.1 concludes that (a) Aξ n ∈ Ξ exists such that there is a set of events E satisfying for large n and on E, ∂ i l * (ξ; x) = 0, |(ξ n − ξ 0 ) j | < n −ρ/2 and ξ n = ξ 0 + g ij (ξ 0 )∂ j l * (ξ 0 ; x) + R, |R| ≤ n −ρ u *
with E(u r * ) bounded;
(b) P(E c ) ≤ c 0 n −r/8 for some constant c 0 .
For simplicity of expressions, in the following expressions, g ij (ξ 0 ) will be denoted by g ij . In addition, E E (·) and E E C (·) will denote E(·1 E ) and E(·1 E C ), respectively. By Taylor's theorem,
∂ i l * (ξ n ; x) − ∂ i l * (ξ 0 ; x) =(ξ n − ξ 0 ) j ∂ i ∂ j l * (ξ 0 ; x) + 1 2 (ξ n − ξ 0 ) j (ξ n − ξ 0 ) k ∂ i ∂ j ∂ k l * (ξ 0 ; x) + R 1 = − (ξ n − ξ 0 ) j g ij (ξ 0 ) + (ξ n − ξ 0 ) j {∂ i ∂ j l * (ξ 0 ; x) + g ij (ξ 0 )} + 1 2 (ξ n − ξ 0 ) j (ξ n − ξ 0 ) k ∂ i ∂ j ∂ k l * (ξ 0 ; x) + R 1 ,(50)
where
R 1 = 1 3! (ξ n − ξ 0 ) j (ξ n − ξ 0 ) k (ξ n − ξ 0 ) r ∂ i ∂ j ∂ k ∂ r l * (ξ; x)
for a pointξ betweenξ n and ξ 0 . Since ∂ j l * (ξ n ; x) = 0, we have (ξ n − ξ 0 ) i =g ij {∂ j l * (ξ 0 ; x) + (ξ n − ξ 0 ) k (∂ j ∂ k l * (ξ 0 ; x) + g jk )
+ 1 2 (ξ n − ξ 0 ) k (ξ n − ξ 0 ) r ∂ j ∂ k ∂ r l * (ξ 0 ; x) + R 1 } = g ij ∂ j l * (ξ 0 ; x) + R 2 .(51)
Here, g ij = O(n −1 ) and (49) gives |R 2 | ≤ n −ρ u * with E E (u * ) bounded. Then, substituting (51) into (50), we have E E (ξ n − ξ 0 ) i =g ij E E [∂ j l * (ξ 0 ; x)] + g ij E E (R 1 ) + g ij E E [(g kr ∂ r l * (ξ 0 ; x) + R 2 )(∂ j ∂ k l * (ξ 0 ; x) + g jk )] + g ij 1 2 E E [(g ks ∂ s l * (ξ 0 ; x) + R 2 )(g rt ∂ t l * (ξ 0 ; x) + R 2 )∂ j ∂ k ∂ r l * (ξ 0 ; x)]
The Cauchy-Schwarz inequality, the condition B5, and |R 2 | ≤ n −ρ u * lead to
g ij |E E [R 2 (∂ j ∂ k l * (ξ 0 ; x) + g jk )]| = g ij (E E R 2 2 ) 1/2 {E E (∂ j ∂ k l * (ξ 0 ; x) + g jk ) 2 } 1/2 ≤ {E E (u 2 * )} 1/2 o(n −1 ),
In the same way, we observe that
g ij g ks E E [∂ s l * (ξ 0 ; x)R 2 ∂ j ∂ k ∂ r l * (ξ 0 ; x)], g ij E E [R 2 2 ∂ j ∂ k ∂ r l * (ξ 0 ; x)]
are o(n −1 ). The condition B5 and |(ξ − ξ 0 ) i | < n −ρ/2 lead to g ij |E E (R 1 )| < g ij 1 3! n −3ρ/2 |E E [∂ i ∂ j ∂ k ∂ r l * (ξ; x)]| = o(n −1 ),
jk,r and E[∂ j ∂ k ∂ r l(ξ 0 ; x)] = −∂ r g jk − Γ (1) jk,r . In addition, by using (6) and (7), we have g kr (Γ (1) jk,r − ∂ r g jk ) = −g kr Γ (−1) jk,r . Hence, assertion (i) is established. For assertion (ii), by using (49), we have E[(ξ n − ξ 0 ) i (ξ n − ξ 0 ) j ] = g ik g js E[∂ k l * (ξ; x)∂ s l * (ξ; x)] + o(n −1 ) = g ij + o(n −1 ) in a similar way as in the proof of assertion (i).
C Proof of Lemma 2.5
Proof. With Lemma 2.4, the bias is evaluated as
E ξ [f (ξ) − f (ξ)] =E ξ [(ξ − ξ) i ]∂ i f (ξ) + E ξ [(ξ − ξ) i (ξ − ξ) j ] 1 2 ∂ i ∂ j f (ξ) + E ξ [(ξ − ξ) i (ξ − ξ) j (ξ − ξ) k ] 1 3! ∂ i ∂ j ∂ k f (ξ) =g ij ∂ jl − 1 2 g kr Γ (−1) kr,j ∂ i f + 1 2 ∂ i ∂ j f + o(n −1 ),
whereξ is a point betweenξ and ξ. The assertion follows from the definitions of the (−1)-Laplacian (14). The evaluation of the last term of the middle expression is similar to that of R 1 in the proof of Lemma 2.4.
D Proof of Theorem 2.8
Proof. By the factorization criterion (Theorem 1.6.5 of [20]), if we have a statistic T to be sufficient for a family of probability measures M of a sample x, there exist non-negative functions h ξ and k such that the density of q(·; ξ) satisfy q(x; ξ) = h ξ (T (x))k(x). Since k does not depend on ξ, the maximum likelihood estimator is a function of T . On the other hand, since the penaltyl does not depend on x, the maximizer of the penalized likelihood el (ξ) h ξ (T (x)) modulo constant, is also a function of T . Let the maximizer be denoted bŷ ξ(T ). According to Theorem 2.7, the UMVUE of f (ξ) exists uniquely and is a function of T . Let the UMVUE be denoted by δ(T ). Now, f (ξ(T )) and δ(T ) coincide precisely up to O(n −1 ), otherwise f (ξ(T )) should have bias of O(n −1 ), which contradicts to Lemma 2.5.
E Proof of Theorem 2.11
Formulas (i) and (ii) in the following lemma appear as equations (55) and (57), respectively, in Chapitre VII of [26]. Proofs are given for readers' convenience, because the proofs were not explicitly given in [26]. Let us consider a geodesic γ = ξ(t). The length minimizing property of geodesics gives the following useful relation
∂t ∂ξ i = g ijξ j ,(52)
which is derived as equation (52) in Chapitre VII of [26].
G Estimating equations for the squared geodesic distance in the hyperbolic space
The system of estimating equations forξ = (μ,σ) which maximizes the penalized likelihood (46) is
µ =x + σ 2 2n 2ν ν 2 − 1 µ σ − 1 4f ∂f ∂µ − 1 4g
∂g ∂µ ,
σ 2 = 2(s 2 + (x − µ) 2 ) + σ 3 n c 4R 2 + 2 1 σ + 2(σ − ν)ν (ν 2 − 1)σ − 1 4f ∂f ∂σ − 1 4g ∂g ∂σ ,
where f =µ 4 ρ 2 + {µ 2 ρ 2 + σ 2 (ν 2 − 1) 2 }(σν − 1) 2 , g =(σν − 1) 4 ρ 2 + {(σν − 1) 2 ρ 2 + σ 2 (ν 2 − 1) 2 }µ 2 , ∂f ∂µ = 2µ 3 ρ σ √ ν 2 − 1 {µ 2 + (σν − 1) 2 } + 2µ{µ 2 (σν + 1) + (σν − 1) 2 }ρ 2 + 2µσ(σν − 1){σ(3ν 2 − 1) − 2ν}(ν 2 − 1), ∂g ∂µ = 2µ(σν − 1) 2 ρ σ √ ν 2 − 1 {µ 2 + (σν − 1) 2 } + 2µσ(ν 2 − 1){2µ 2 ν + σ(ν 2 − 1)} + 2µ(σν − 1){µ 2 + (2σν − 1)(σν − 1)}ρ 2 , ∂f ∂σ = 2µ 2 (σ − ν)ρ σ √ ν 2 − 1 {µ 2 + (σν − 1) 2 } + 2µ 2 σ(σν − 1)ρ 2 + 2σ(σν − 1){(σν − 1)(2σν − ν 2 − 1) + σ 2 (ν 2 − 1)}(ν 2 − 1), ∂g ∂σ = 2(σν − 1) 2 (σ − ν)ρ σ √ ν 2 − 1 {µ 2 + (σν − 1) 2 } + 2σ(σν − 1){µ 2 + 2(σν − 1) 2 }ρ 2 + 2µ 2 (2σν − ν 2 − 1)σ(ν 2 − 1).
Lemma 2 . 3 .
23Consider a C ∞ -manifold M.
( i )
iThe 0-connection on M is locally equiaffine.
Lemma 2 . 4 .
24Under the regularity conditions B1-B6 in Appendix B, for a penalty functionl(ξ) = O(1) ∈ C 4 of parameters ξ for large n, where C 4 is the set of four-times differentiable functions, we have
Lemma 2. 5 .
5Consider a model manifold M and an estimand function f ∈ C 3 . Under the regularity conditions of Lemma 2.4, for a U-estimable estimand f (ξ), ξ ∈ Ξ, the bias of the plug-in estimator f (ξ) of f (ξ), whereξ maximizes the penalized likelihood, is o(n −1 ) if and only if the penalty functionl satisfies
Theorem 2 . 8 .
28Let a sample be distributed according to a parametric model M = {p(·; ξ) : ξ ∈ Ξ}, and suppose there exists a complete sufficient statistic for M. Then, under the assumptions of Lemma 2.5, the plug-in estimator f (ξ) of an estimand f (ξ), ξ ∈ Ξ, whereξ maximizes the penalized likelihood with the penalty functionl satisfying the condition (19), coincides with the UMVUE of f (ξ) up to O(n −1 ).
Figure 1 :
1An integral manifold φ = φ 0 , which encodes the penalty functionl by implicitization, and a characteristic curve on it. The normal vector field to the integral manifold, grad * φ, and the Monge axis (gradf, −∆ (−1) f /2) at a point on the characteristic curve are orthogonal.
Figure 2 :
2The foliation of the integral manifold φ = φ 0 by the level surfaces ofl projected onto the model manifold M constitute the foliation of M by the level surfaces of f . The contours are the level surfaces.
Figure 3 :
3The foliation of a model manifold M by the equidistant points from the point ζ ∈ M. A geodesic γ joining ζ and a point ξ ∈ M is also shown.
Corollary 3. 1 .
1Consider a one-dimensional parametric model M = {p(·; ξ) : ξ ∈ Ξ ⊂ R}. The plug-in estimator f (ξ) of an estimand f (ξ), ξ ∈ Ξ satisfying f ′ (ξ) = df /dξ = 0, whereξ maximizes the penalized likelihood with the penalty functionl(ξ) satisfying
Corollary 3. 3 .
3Consider a parametric model M = {p(·; ξ) : ξ ∈ Ξ}, and suppose that the model manifold M is α-flat for a non-zero α ∈ R. If an estimand function f is α-harmonic, then the plug-in estimator f (ξ) of an estimand f (ξ), ξ ∈ Ξ, whereξ maximizes the penalized likelihood with the penalty function log h((α − 1)/2; α), is unbiased up to O(n −1 ) and coincides with the UMVUE up to O(n −1 ) if a complete sufficient statistic for M exists.Here, h((α − 1)/2; α) is the density of the (α − 1)/2-parallel volume element with respect to the α-flat model manifold defined in(13), and ξ comprises an α-affine coordinate system.
Corollary 3. 4 .
4For an α-flat model manifold of a non-zero α ∈ R, the estimatorξ of the α-affine coordinate ξ that maximizes the penalized likelihood with the penalty function log h((α − 1)/2; α) is unbiased up to O(n −1 ).
Corollary 3 . 6 .
36Consider a 0-flat model manifold M and a 0-harmonic estimand function f . If α-connections on M are locally equiaffine for every α ∈ R, there exists a penalty function with which the plug-in estimator f (ξ) of an estimand f (ξ), ξ ∈ Ξ, whereξ maximizes the penalized likelihood, is unbiased up to O(n −1 ).
2 :
2Biases and mean squared errors of the estimators of the success probabilities in a logistic regression model. the rows of Firth, and those of our penalized likelihood are in the rows of AUE. The mean squared errors are shown in the columns of MSE.
Remark 4. 2 .
2The location-scale family is a counter example mentioned in the proof of Assertion (iii) of Lemma 2.3. The α-connection on the location-scale family is locally equiaffine for every α, or satisfies (48), but is not α-flat for any α.
Remark 4. 3 .
3Firth [10] (page 33) obtained a system of partial differential equations comprising (41) and ∂ µl = 0. For example, −{1−c/(2R 2 )} log σ/2+ µ does not satisfy Firth's condition, but still achieves asymptotic unbiasedness up to O(n −1 ) because it satisfies the condition (41). In this sense, ∂ µl = 0 was unnecessary.
4 :
4Biases and mean squared errors of the estimators of the coefficients of variation of the location-scale family.
Table 1 :
1Distribution of the estimates of the parameter in a logistic regression model.
Table
Table 3 :
3Biases and mean squared errors of the estimators of the shrinkage factor in a linear mixed-effects model.
Table
Table 5 :
5Biases and mean squared errors of the estimators of the geodesic distance.
Institute of Mathematics for Industry, Kyushu University, Fukuoka 819-0395, Japan; E-mail: [email protected] 2 The Institute of Statistical Mathematics, Tokyo 190-8562, Japan; E-mail: [email protected]
AcknowledgementsThe authors thank Professors Shiro Ikeda and Tao Zhou for drawing their attention to the work[27]and[10], respectively. The first author was supported in part by JSPS KAKENHI Grant 18K12758. The second author was supported in part by JSPS KAKENHI Grants 18H00835 and 20K03742.Lemma E.1. For a function ϕ(t 2 ), where t = dis(ζ, ξ) is the geodesic distance, we have (i) ∆ (0) ϕ(t 2 ) = ϕ ′ (t 2 )∆ (0) (t 2 ) + 4t 2 ϕ ′′ (t 2 ), and (ii) ∆ (0) (t 2 ) = 2d + t d dt {log h(η)}, where η =ξ(0)t is the normal coordinate system at the origin ζ, and h(η) is the determinant of the Fisher metric tensor.Proof. (i):where (52) in the normal coordinate η was used to prove the second last equality.(ii):where (52) in the normal coordinate η was used to prove the last equality.the assertion holds.Proof of Theorem 2.11. Let us work with the normal coordinate system at ζ, denoted by η =ξ(0)t. By the relation between α-Laplacians (16), we haveIn the normal coordinate system at ζ, the partial differential equation(20)is written as42 where ∂ i := ∂/∂η i andIf we set the penalty functionl(η) as a function of the squared geodesic distance, we may write l * (η) = χ * (t 2 ) and (53) becomesBy using (52), the canonical parameterization of the geodesic h ijη iηj = 1 , and (i) of Lemma E.1, (54) is recast into the ordinary differential equation for ϕ:where (ii) of Lemma E.1 yieldsIntegrating (55), we obtain χ * (t 2 ) = − log |ϕ ′ (t 2 )|t d √ h /2 + const., and (26) follows.F Proof of Corollary 3.1Proof. By using the relation between α-Laplacians (16) and the expression of 0-Laplacian (15), the partial differential equation(20)forl becomes gradl, gradf = − 1 2When d = 1, this reduces to the ordinary differential equation:and the integration gives(27). For the second assertion, (29) reduces to the ordinary differential equation:Noting the definition of the α-parallel volume element (13), the integration gives (28).
Methods of Information Geometry. S Amari, H Nagaoka, Amer. Math. SocProvidence, RIAmari, S., Nagaoka, H. (2000). Methods of Information Geometry. Amer. Math. Soc., Providence, RI.
An errorcomponents model for prediction of county crop areas using survey and satellite data. G E Battese, R M Harter, W A Fuller, J. Amer. Statist. Assoc. 83Battese, G.E., Harter, R.M., Fuller, W.A. (1988). An error- components model for prediction of county crop areas using survey and satellite data. J. Amer. Statist. Assoc. 83 28-36.
Binary regression models for contaminated data. J B Copas, J. R. Statist. Soc. 50Copas, J.B. (1988). Binary regression models for contaminated data. J. R. Statist. Soc. B50 225-65.
. R Courant, D Hilbert, Methods of Mathematical Physics. IIWiley-InterscienceCourant, R., Hilbert, D. (1962). Methods of Mathematical Physics. Vol II. Wiley-Interscience, New York.
Ideals, Varieties, Algorithms. D Cox, J Little, D O'shea, SpringerNew York3rd EdCox, D., Little, J., O'Shea, D. (2007). Ideals, Varieties, Algorithms. 3rd Ed. Springer, New York.
A general definition of residuals. D R Cox, E J Snell, J. R. Statist. Soc. 30Cox, D.R., Snell, E.J. (1968). A general definition of residuals. J. R. Statist. Soc. B30 248-275.
Mean squared error of empirical predictor. K Das, J Jiang, J N K Rao, Ann. Statist. 32Das, K., Jiang, J., Rao, J. N. K. (2004). Mean squared error of empirical predictor. Ann. Statist. 32 818-840.
Empirical Bayes on vector observations: an extension of Stein's method. B Efron, C Morris, Biometrika. 59Efron, B., Morris, C. (1972). Empirical Bayes on vector observations: an extension of Stein's method. Biometrika 59 335-347.
Asymptotical improvement of maximum likelihood estimators on Kullback-Leibler loss. S Eguchi, T Yanagimoto, J. Statist. Plann. Inference. 138Eguchi, S. Yanagimoto, T. (2008). Asymptotical improvement of maximum likelihood estimators on Kullback-Leibler loss. J. Statist. Plann. Inference 138 3502-3511.
Bias reduction of maximum likelihood estimates. D Firth, Biometrika. 80Firth, D. (1993). Bias reduction of maximum likelihood estimates. Biometrika 80 27-38.
Invariant prior distributions. J A Hartigan, Ann. Statist. 35Hartigan, J.A. (1964). Invariant prior distributions. Ann. Statist. 35 836-845.
The maximum likelihood prior. J A Hartigan, Ann. Statist. 26Hartigan, J.A. (1998). The maximum likelihood prior. Ann. Statist. 26 2083-2103.
Groups and Geometric Analysis: Integral Geometry, Invariant Differential Operators, and Spherical Functions. S Helgason, Academic PressOrlandoHelgason, S. (1984). Groups and Geometric Analysis: Integral Geom- etry, Invariant Differential Operators, and Spherical Functions. Academic Press, Orlando.
Estimating variance of random effects to solve multiple problems simultaneously. Y M Hirose, P Lahiri, Ann. Statist. 46Hirose, Y.M., Lahiri, P. (2018). Estimating variance of random ef- fects to solve multiple problems simultaneously, Ann. Statist. 46 1721- 1741.
Multi-goal prior selection: a way to reconcile Bayesian and classical approaches for random effects models. Y M Hirose, P Lahiri, J. Amer. Statist. Assoc. 116Hirose, Y.M., Lahiri, P. (2021). Multi-goal prior selection: a way to reconcile Bayesian and classical approaches for random effects models, J. Amer. Statist. Assoc. 116 1487-1497.
S Kobayashi, K Nomizu, Foundations of Differential Geometry. New YorkWiley and SonsKobayashi, S., Nomizu, K. (1963). Foundations of Differential Ge- ometry, Vol. I. Wiley and Sons, New York.
On asymptotic properties of predictive distributions. F Komaki, Biometrika. 83Komaki, F. (1996). On asymptotic properties of predictive distribu- tions. Biometrika 83 299-313.
Bias in parametric estimation: reduction and useful side-effects. I Kosmidis, WIREs Comput Stat. 6Kosmidis, I, (2014). Bias in parametric estimation: reduction and use- ful side-effects. WIREs Comput Stat 6 185-196.
Statistical manifolds. S L Lauritzen, Differential Geometry in Statistical Inference. Hayward, CA10IMSLauritzen, S.L. (1987). Statistical manifolds. In Differential Geometry in Statistical Inference, IMS Lecture Notes-Monograph Series 10 163-216. IMS, Hayward, CA.
Theory of Point Estimation, Second Edition. E L Lehmann, G Casella, SpringerNew YorkLehmann, E.L., Casella, G. (1998). Theory of Point Estimation, Second Edition. Springer, New York.
Introduction to Smooth Manifolds. J M Lee, SpringerNew YorkLee, J.M. (2002). Introduction to Smooth Manifolds. Springer, New York.
Tensor Methods in Statistics. P Mccullagh, Chapman and HallNew YorkMcCullagh, P. (1987). Tensor Methods in Statistics. Chapman and Hall, New York.
Generalized Linear Models. 2nd. P Mccullagh, J A Nelder, Chapman and HallNew YorkMcCullagh, P., Nelder, J.A. (1987). Generalized Linear Models. 2nd. ed. Chapman and Hall, New York.
Geometry of Differential Forms. S Morita, Amer. Math. SocProvidence, RIMorita, S. (2001). Geometry of Differential Forms. Amer. Math. Soc., Providence, RI.
K Nomizu, T Sasaki, Affine Differential Geometry. CambridgeCambridge Univ. PressNomizu, K., Sasaki, T. (1994). Affine Differential Geometry. Cam- bridge Univ. Press, Cambridge.
L'integrale de Riemann-Liouville et le problème de Cauchy. M Riesz, Acta Math. 81Riesz, M. (1949). L'integrale de Riemann-Liouville et le problème de Cauchy, Acta Math. 81 1-223.
α-parallel prior and its properties. J Takeuchi, S Amari, IEEE Trans. Inform. Theory. 51Takeuchi, J., Amari, S. (2005). α-parallel prior and its properties. IEEE Trans. Inform. Theory. 51 1011-1023.
| []
|
[
"A FLOER HOMOLOGY INVARIANT FOR 3-ORBIFOLDS VIA BORDERED FLOER THEORY",
"A FLOER HOMOLOGY INVARIANT FOR 3-ORBIFOLDS VIA BORDERED FLOER THEORY"
]
| [
"Biji Wong "
]
| []
| []
| Using bordered Floer theory, we construct an invariant HFO(Y orb ) for 3orbifolds Y orb with singular set a knot that generalizes the hat flavor HF (Y ) of Heegaard Floer homology for closed 3-manifolds Y . We show that for a large class of 3-orbifolds HFO behaves like HF in that HFO, together with a relative Z 2 -grading, categorifies the order of H orb 1 . When Y orb arises as Dehn surgery on an integer-framed knot in S 3 , we use the {−1, 0, 1}-valued knot invariant ε to determine the relationship between HFO(Y orb ) and HF (Y ) of the 3-manifold Y underlying Y orb . arXiv:1808.09026v1 [math.GT] | null | [
"https://arxiv.org/pdf/1808.09026v1.pdf"
]
| 119,154,832 | 1808.09026 | a665d95e268f6b619aed48afea883fedaecef061 |
A FLOER HOMOLOGY INVARIANT FOR 3-ORBIFOLDS VIA BORDERED FLOER THEORY
Biji Wong
A FLOER HOMOLOGY INVARIANT FOR 3-ORBIFOLDS VIA BORDERED FLOER THEORY
Using bordered Floer theory, we construct an invariant HFO(Y orb ) for 3orbifolds Y orb with singular set a knot that generalizes the hat flavor HF (Y ) of Heegaard Floer homology for closed 3-manifolds Y . We show that for a large class of 3-orbifolds HFO behaves like HF in that HFO, together with a relative Z 2 -grading, categorifies the order of H orb 1 . When Y orb arises as Dehn surgery on an integer-framed knot in S 3 , we use the {−1, 0, 1}-valued knot invariant ε to determine the relationship between HFO(Y orb ) and HF (Y ) of the 3-manifold Y underlying Y orb . arXiv:1808.09026v1 [math.GT]
Introduction
Heegaard Floer homology, introduced by Ozsváth and Szabó in [23], is a package of invariants for closed 3-manifolds that has produced a wealth of results in a variety of areas such as contact topology [24,13], Dehn surgery [4], and knot theory [20,19,25,17,7,18]. The purpose of this paper is to extend the hat version HF of Heegaard Floer homology (with Z 2 coefficients) to (orientable) 3-orbifolds Y orb with singular locus a knot K.
Three-orbifolds are spaces that locally look like quotients of R 3 by finite subgroups of SO (3). Over the past twenty years, much work has been done to construct homology invariants for 3-orbifolds using gauge-theoretic ideas from Floer's original instanton homology theory [3], first by Collin and Steer in [2], then by Kronheimer and Mrowka in [11,10,12]. In this paper we offer up another homological invariant using the more combinatorial tool of bordered Heegaard Floer homology developed by Lipshitz, Ozsváth, and D. Thurston in [16,15] for 3-manifolds with boundary. Specifically, we fix an equivariant neighborhood N of the singular curve K (together with some additional data for the equivariant torus boundary ∂N ) and decompose the 3-orbifold Y orb along ∂N . To N we associate a (bounded) Type D structure that is sensitive to the equivariance around K. To the complement of N (with induced data for its boundary) we associate the Type A structure given to us by bordered Floer theory. Motivated by the pairing theorem in bordered Floer theory, we define HFO(Y orb ) to be the homology of the box tensor product of the Type A structure with the Type D structure.
Theorem 1.1. HFO(Y orb ) is a well-defined invariant of Y orb . Furthermore, when Y orb is a 3-manifold, HFO(Y orb ) agrees with HF (Y orb ).
The underlying space Y orb of any 3-orbifold Y orb is a 3-manifold in a natural way, so one might wonder how HFO(Y orb ) compares to HF ( Y orb ). When the 3-orbifold comes from Dehn surgery on an integrally framed knot K ⊂ S 3 , we prove that the difference between HFO(Y orb ) and HF ( Y orb ) depends on 3 integers: the framing on K, the singular order around K, and the {−1, 0, 1}-valued knot invariant ε(K) introduced by Hom in [7]. Theorem 1.2. Let Y be r-surgery on a knot K ⊂ S 3 where r is any integer. Let Y orb be the 3-orbifold with underlying space Y and singular curve K of order n. If ε(K) = 0 and r = 0, then rank HFO(Y orb ) = n · rank HF (Y ) − 2n + 2. Otherwise, rank HFO(Y orb ) = n · rank HF (Y ) .
As an example, take r = 0 and K the unknot. Then Y = S 2 × S 1 and ε(K) = 0. Theorem 1.2 tells us that for every n, rank HFO(Y orb ) = 2.
For 3-manifolds Y , it's well-known that HF (Y ) categorifies the order of H 1 (Y ), see [22]. We have an analogous result for a large class of 3-orbifolds Y orb : Theorem 1.3. There exists a relative Z 2 -grading on HFO(Y orb ) so that if Y orb has nullhomologous singular curve or comes from Dehn surgery on a framed knot in S 3 , then up to sign χ HFO(Y orb ) = H orb 1 (Y orb ) . Closely related to HF is the plus version HF + of Heegaard Floer homology, and for 3manifolds Y with b 1 (Y ) > 0 it's known that HF + (Y ) categorifies the Turaev torsion invariant of Y [22]. Recently, the author extended the Turaev torsion invariant to 3-orbifolds (with singular set a link) [30], so it is natural to ask if there is a homology theory for 3-orbifolds generalizing HF + that categorifies this orbifold torsion invariant. The present paper can be thought of as a first step towards this goal.
Due to recent work of Hanselman, Rasmussen, and Watson [5], the bordered Floer invariants for 3-manifolds with torus boundary can be thought of geometrically as decorated immersed curves on the punctured torus. Using this we get a geometric formulation of the orbifold homology invariant, the details of which will appear in a subsequent paper.
At the Perspectives in Bordered Floer Conference in May 2018, a connection between the orbifold invariant and Heegaard Floer with twisted coefficients was pointed out to the author by Matt Hedden and Adam Levine. This too will be written up in a later paper. This paper is structured as follows. Section 2 collects the background on 3-orbifolds, bordered Floer homology, and knot Floer homology that we will need, adapting some of it a bit to our situation. In Section 3 we define the orbifold invariant, prove Theorem 1.1, and compute the invariant for several examples. In Section 4 we prove Theorem 1.2 and give more examples. In Section 5 we prove Theorem 1.3.
Acknowledgements. The author is grateful to Robert Lipshitz, Liam Watson, and Adam Levine for helpful conversations, and to Ina Petkova and Steve Boyer for encouragement and support.
Background
3-orbifolds.
Here we give a brief overiew of 3-orbifolds. For a more in-depth discussion, we refer the reader to [29,28,1,9]. A 3-orbifold Y orb is a Hausdorff, second-countable space Y orb with an atlas {(U i , U i , G i , φ i )} consisting of an open cover {U i } of Y orb , connected and open sets U i ⊂ R 3 , continuous and effective actions of finite subgroups G i of O(3) on U i , and homeomorphisms φ i : U i /G i → U i . If U i ⊂ U j , then there is an injective homomorphism f ji : G i → G j and a topological embedding φ ji : U i → U j , equivariant with respect to f ji , that makes the following diagram commute:
U i φ ji − −− → U j q q U i /G i φ ji − −− → U j /G j φ i φ j U i incl − −− → U j
Here q is the quotient map and φ ji is the map induced by φ ji . Note the top square always commutes, so the overlapping condition is really about the bottom square. We call Y orb the underlying space of Y orb . We say a 3-orbifold is oriented when we have the following: in each chart, U i oriented, G i lies in SO (3), and the action of G i on U i preserves orientation, and on overlaps U i ⊂ U j the embedding φ ji preserves orientation. The 3-orbifolds in this paper will be oriented. Y orb is connected (respectively compact) when Y orb is connected (respectively compact).
Given a point p ∈ Y orb , let (U, U , G, φ) be a chart containing p and let p be a lift of p to U . Then the local group G p is the isotropy group {g ∈ G : g · p = p}. Note the isomorphism class of G p does not depend on the choice of chart or lift, so is well-defined. In particular, if we fix a chart but vary the lifts, then the local groups we get are all conjugate. The singular locus ΣY orb of Y orb is the set of all points p in |Y orb | with nontrivial local group G p . Note that if the singular locus is empty, then we recover the definition of a 3-manifold. In this paper we will focus on 3-orbifolds with singular locus a knot. By general theory every point on the knot has local group equal to Z n for the same n. Furthermore, we can identify a neighborhood of the knot with (D 2 × S 1 )/Z n where Z n acts by rotations about the core circle 0 × S 1 . Now let E denote the complement of the interior of the neighborhood. Then H orb 1 (Y orb ) is defined to be H 1 (E)/ µ n , where µ is a meridian of the singular knot. Note that when n = 1, Y orb is just a 3-manifold and H orb 1 (Y orb ) is just H 1 (Y orb ). As an example, consider the n-fold cyclic branched cover Σ n (K) of K ⊂ S 3 . There is a natural action of Z n on Σ n (K), and the quotient space Σ n (K)/Z n can be thought of as the 3-orbifold (S 3 , K, n), where the underlying space is S 3 , the singular locus is K, and every point y on K has isotropy group G y equal to Z n . Furthermore, it's not hard to see that H orb 1 (S 3 , K, n) ∼ = Z n . Finally, an (orientation-preserving) homeomorphism f : (Y 1 , K 1 , n) → (Y 2 , K 2 , n) between oriented 3-orbifolds (Y 1 , K 1 , n) and (Y 2 , K 2 , n) is an (orientation-preserving) homeomorphism |f | : Y 1 → Y 2 between the underlying oriented 3-manifolds Y 1 and Y 2 that takes the singular curve K 1 to the singular curve K 2 .
2.2. Bordered Heegaard Floer homology. In this section we give an overview of the bordered Floer invariants. We focus on the torus boundary case because for the most part this is the setting we'll be working in. The details are covered in [16,15,5].
2.2.1. Algebraic preliminaries. We start by recalling the two algebraic structures (Type D and Type A) that give rise to CFD and CFA, the two bordered Floer invariants for the torus boundary case. Let A be the unital path algebra over Z 2 associated to the quiver in Figure 1 modulo the relations ρ 2 ρ 1 , ρ 3 ρ 2 , in other words we only compose paths when the indices increase. As a Z 2 -vector space, A is generated by eight elements: the two idempotents ι 1 and ι 2 , and the six "Reeb" elements ρ 1 , ρ 2 , ρ 3 , ρ 12 := ρ 1 ρ 2 , ρ 23 := ρ 2 ρ 3 , and ρ 123 := ρ 1 ρ 2 ρ 3 . The multiplicative identity 1 in A is given by ι 1 + ι 2 . We will also need to work with the subalgebra I generated by ι 1 and ι 2 , this is a commutative ring with multiplicative identity
1 = ι 1 + ι 2 . ι 1 ι 2 ρ 1 ρ 3 ρ 2
Figure 1. Quiver for torus algebra A
A (left) type D structure over A is a pair N, δ 1 consisting of a finite-dimensional Z 2 -vector space N that's equipped with a (left) action by I so that
N = ι 1 N ⊕ ι 2 N
as a vector space, together with a map δ 1 : N → A ⊗ I N that satisfies the following relation
(µ ⊗ id N ) • (id A ⊗ δ 1 ) • δ 1 = 0,
where µ : A ⊗ A → A denotes the multiplication in A. Given a type D structure N, δ 1 and k ∈ N ∪ {0}, we have maps
δ k : N → A ⊗ I . . . ⊗ I A k times ⊗ I N
defined inductively as follows: δ 0 = id N and δ k = (id A ⊗k−1 ⊗ δ 1 ) • δ k−1 . We say that N, δ 1 is bounded if δ k ≡ 0 for all k sufficiently large. Note that the above relation on δ 1 can be thought of as (µ ⊗ id N ) • δ 2 = 0. Type D structures (N, δ 1 ) can be represented by decorated directed graphs. First choose a basis for N by choosing a basis for each subspace ι * N . Then for each basis element take a vertex. If the basis element lies in ι 1 N , decorate the vertex with •, otherwise decorate the vertex with •. Whenever basis elements x and y are related in the following way: ρ I ⊗ y is a summand of δ 1 (x) with ρ I ∈ {ρ ∅ := 1, ρ 1 , ρ 2 , ρ 3 , ρ 12 , ρ 23 , ρ 123 }, put a directed edge from vertex x to vertex y, and decorate the edge with ρ I . The relation on δ 1 then translates into the following condition on the graph: for any directed path of length 2, the product of the labels equals 0 in A. The higher maps δ k can be recovered by following directed paths of length k.
We call a type D structure reduced if the associated graph has no edges labelled 1. Because of how the idempotents ι 1 and ι 2 interact with the Reeb elements ρ 1 , ρ 2 , and ρ 3 in A, the graph of any reduced type D structure can only contain edges that look like
• ρ 1 − → •, • ρ 2 − → •, • ρ 3 − → •, • ρ 12 − − → •, • ρ 23 − − → •, or • ρ 123 − − → •.
Conversely, to every directed graph with vertices decorated by {•, •} and edges of the above form so that for any directed path of length 2 the product of the labels equals 0 in A, we can associate a (reduced) type D structure (N, δ 1 ) as follows. Take N to be the Z 2 -vector space generated by the vertices. If we identify • with ι 1 and • with ι 2 , then we get the following action of I on N : for every vertex x labelled by •, set ι 1 · x = x and ι 2 · x = 0, and for every vertex x labelled by •, set ι 1 · x = 0 and ι 2 · x = x. The edges encode the map δ 1 , and it's clear that (N, δ 1 ) forms a reduced type D structure.
A (right) type A structure over A is a pair M, {m k } ∞ k=1 consisting of a finite-dimensional Z 2 -vector space M that's equipped with a (right) action by I so that
M = M ι 1 ⊕ M ι 2
as a vector space, together with multiplication maps
m k : M ⊗ I A ⊗ I . . . ⊗ I A k -1 times → M
that satisfy the following relation for any x ∈ M , k ∈ N, and a 1 , . . . , a k−1 ∈ A:
0 = k j=1 m k−j+1 m j (x ⊗ a 1 . . . ⊗ a j−1 ) ⊗ a j ⊗ . . . ⊗ a k−1 + k−2 j=1 m k−1 (x ⊗ a 1 . . . ⊗ a j−1 ⊗ a j a j+1 ⊗ a j+2 ⊗ . . . ⊗ a k−1 ). A type A structure M, {m k } ∞ k=1 is said to be (1) unital if
• m 2 (x, 1) = x, and • m k (x, a 1 , . . . , a k−1 ) = 0, for k ≥ 3 and at least one a i = 1, and (2) bounded if m k ≡ 0 for all k sufficiently large. Using an algorithm by Hedden and Levine [6, Theorem 2.2], one can construct a (nonunital) type A structure M, {m k } ∞ k=1 from a (reduced) type D structure (N, δ 1 ). We keep M the same as N , both in terms of underlying vector space and idempotent action, and dualize the map δ 1 to maps m k by doing the following. First relabel the edges of the graph that's associated to (N, δ 1 ) by swapping indices 1 and 3, keeping index 2 the same. Next represent every directed path in the new graph by a string of numbers, by concatenating the indices. For example, the directed path •
ρ 1 − → • ρ 21
− − → • gives the string 121. Then rewrite every string of numbers as a string of increasing sequences I = I 1 , . . . , I k−1 so that the last element of I j is bigger than the first element of I j+1 . For example, the string 121 gets rewritten as 12, 1. For every directed path with source vertex x, target vertex y, and associated string I = I 1 , . . . , I k−1 , we define m k (x ⊗ ρ I 1 ⊗ . . . ⊗ ρ I k−1 ) = y. For everything else, we define the multiplication to be zero. As an example, consider the type D directed path
x • ρ 3 − → • ρ 23 − − → y •.
It gives rise to the multiplication m 3 (x, ρ 12 , ρ 1 ) = y.
If M, {m k } ∞ k=1 is a type A structure over A, N, δ 1 is a type D structure over A, and at least one of them is bounded, then we can form the box tensor product M N , a Z 2 -chain
complex (M ⊗ I N, δ ) with differential δ : M ⊗ I N → M ⊗ I N given by δ (x ⊗ y) = ∞ k=0 m k+1 ⊗ id N x ⊗ δ k (y) .
In addition to type D and type A structures over A, we will also need to work with with type DA structures over (A, A). This is a Z 2 -vector space N with the structure of an (I, I)-bimodule, together with maps
δ k 1 : N ⊗ I A ⊗ I . . . ⊗ I A k -1 times → A ⊗ I N that(1) δ 2 1 (x, 1) = 1 ⊗ x, and (2) δ k 1 (x, a 1 , .
. . , a k−1 ) = 0, when k ≥ 3 and at least one a i = 1. All of our type DA structures will be unital. Like with type D and type A structures, we can take the box tensor product of a type DA structure with a type D structure, or the box tensor product of a type A structure with a type DA structure, when at least one of the factors is bounded. For details, see [15,Definition 2.3.9].
Invariants for bordered 3-manifolds.
A bordered 3-manifold is a pair (Y, φ) consisting of a connected, compact, oriented 3-manifold Y with connected boundary, together with a homeomorphism φ from a fixed model surface F to the boundary of Y . Two bordered 3-manifolds (Y 1 , φ 1 ) and (Y 2 , φ 2 ) are called equivalent if there is an orientation-preserving homeomorphism ψ :
Y 1 → Y 2 so that φ 2 = ψ| ∂ • φ 1 .
As noted earlier, we will restrict to the case of torus boundary. Then F is the oriented torus associated to the pointed matched circle Z in Figure 2, with 1-handles represented by α a 1 and α a 2 , and orientation given by
α a 1 , α a 2 . If φ is orientation-preserving, (Y, φ) is said to be type A, otherwise (Y, φ) is said to be type D.(Σ; {α c 1 , . . . , α c g−1 } α c ; {α a 1 , α a 2 } α a α ; {β 1 , . . . , β g } β ; z)
consisting of • a connected, compact, oriented surface Σ of genus g with connected boundary,
• two sets α c and β of pairwise disjoint circles in the interior of Σ,
• pairwise disjoint properly embedded arcs α a 1 and α a 2 in Σ, and • a point z on ∂Σ missing the endpoints of α a 1 and α a 2 so that α c and α a are disjoint, and Σ − α and Σ − β are connected. To recover Y , we attach 2-handles to Σ × I along the α c circles in Σ × {0} and the β circles in Σ × {1}. The parameterization φ of ∂Y is specified by the pointed matched circle (∂Σ, α a 1 , α a 2 , z) coming from H, where ∂Σ is given the induced boundary orientation. If (∂Σ, α a 1 , α a 2 , z) is identified with Z, then φ is orientation-preserving, and H describes a type A bordered 3-manifold (Y, φ), otherwise we're identifying (∂Σ, α a 1 , α a 2 , z) with −Z, and we get a type D bordered 3-manifold (Y, φ). See Figure 3 for an example of a type D bordered 3-manifold. Bordered Floer theory, as defined by Lipshitz, Ozsváth, and D. Thurston in [16,15], associates to a bordered Heegaard diagram H representing a bordered 3-
manifold (Y, φ) a type A structure ( CFA(H), {m k } ∞ k=1 ) if (Y, φ) is type A, and a type D structure ( CFD(H), δ 1 ) if (Y, φ) is type D.
As Z 2 -vector spaces, CFA(H) and CFD(H) are generated by g-tuples x of points in α ∩ β with one point on each α c circle, one point on each β circle, and one point on one of the α a arcs. The right I-action on CFA(H) is given by
x · ι 1 = x, if x occupies the α a 1 arc 0, otherwise x · ι 2 =
x, if x occupies the α a 2 arc 0, otherwise while the left I-action on CFD(H) is given by
ι 1 · x =
x, if x occupies the α a 2 arc 0, otherwise
ι 2 · x = x, if x occupies the α a 1 arc 0, otherwise.
The type A and type D structure maps As an example, consider D 2 × S 1 with boundary parameterization ψ :
F → ∂(D 2 × S 1 ) defined by α a 1 → {1} × S 1 and α a 2 → ∂D 2 × {1}.
Using the bordered Heegaard diagram in Figure 3 for (D 2 × S 1 , ψ), we get that CFD(D 2 × S 1 , ψ) is given by the decorated, directed graph in Figure 4. When we vary the parameterization of the boundary of (Y, φ), the bordered invariants CFA(Y, φ) and CFD(Y, φ) change by a type DA structure over (A, A). Specifically, given an orientation-preserving homeomorphism ψ of the model torus F , there exists a type DA structure CFDA(ψ) so that
CFDA(ψ) CFD(Y, φ) CFD(Y, φ • ψ)
as type D structures over A, and
CFA(Y, φ) CFDA(ψ) CFA(Y, φ • ψ −1 ) (2.1)
as type A structures over A. For details, see [15,Theorem 2]. Given a type A bordered 3-manifold (Y 1 , φ 1 ) and a type D bordered 3-manifold (Y 2 , φ 2 ), we can build a closed, oriented, smooth 3-manifold Y by gluing Y 1 and Y 2 together along their boundaries via the homeomorphism
φ 2 • φ −1 1 : ∂Y 1 → ∂Y 2 .
To the bordered pieces (Y 1 , φ 1 ) and (Y 2 , φ 2 ) we associate the bordered invariants CFA(Y 1 , φ 1 ) and CFD(Y 2 , φ 2 ), and to Y we associate the hat flavor of Heegaard Floer homology HF (Y ). The pairing theorem tells us that if at least one of the bordered invariants is bounded, then HF (Y ) is determined by CFA(Y 1 , φ 1 ) and CFD(Y 2 , φ 2 ):
HF (Y ) ∼ = H * CFA(Y 1 ) CFD(Y 2 ) . (2.2)
This will motivate our definition of the orbifold Heegaard Floer invariant.
2.3. CFA of bordered knot exteriors. Let Y be the exterior of a knot K ⊂ S 3 . Given r ∈ Z, let φ r : F → ∂Y to be an orientation-preserving parameterization that sends α a 1 to a meridian m of K and α a 2 to an r-framed longitude γ of K. In this section we recall the algorithm for computing CFA(Y, φ r ) from the knot Floer chain complex CFK − (K). This is due to Lipshitz, Ozsváth, and D. Thurston in [16, Theorems 11.26 and A.11] technically their algorithm computes CFD(Y, α a 1 → γ, α a 2 → m), but by [6, Theorem 2.2] we can pass from CFD(Y, α a 1 → γ, α a 2 → m) to CFA(Y, φ r ) . We start by recalling the definition of CFK − (K). The details can be found in [21,27]. First take a doubly-pointed Heegaard diagram (Σ, α, β, w, z) of genus g for K ⊂ S 3 . If we ignore the base point z, then we get a pointed Heegaard diagram (Σ, α, β, w) of genus g for S 3 . To this we can associate the
Z 2 [U ]-chain complex (CF − (S 3 ), ∂ − ), where • CF − (S 3 ) is the finite-dimensional Z 2 [U ]
-vector space generated by g-tuples x of points in α ∩ β with one point on each α circle and one point on each β circle, and
• the diffferential ∂ − : CF − (S 3 ) → CF − (S 3 )
is given by counting certain pseudoholomorphic curves in Sym g (Σ). When we bring back the z base point, which we should think of as representing the knot K, we get a Z-grading on CF − (S 3 ), called the Alexander grading. This is a function A :
CF − (S 3 ) → Z that satisfies the property A(U i x) = A(x) − i. Using A, we can define a Z- filtration {F i } on the Z 2 [U ]-chain complex (CF − (S 3 ), ∂ − ), where each F i is a Z 2 [U ]-module and ∂ − (F i ) ⊆ F i . Then CFK − (K) is defined to be the Z 2 [U ]-chain complex (CF − (S 3 ), ∂ − ) with this Z-filtration {F i }.
By negating the powers of U , we get a second Z-filtration I on CFK − (K). We can visualize CFK − (K), together with the I filtration, as a directed graph in Z × Z ⊂ R × R as follows. First pick a basis {x 0 , . . . , x 2n } for CFK − (K) over Z 2 [U ] as above. Then {U i x k | i ∈ Z and k = 0, . . . , 2n} is a basis for CFK − (K) over Z 2 , and it's these elements that form the vertices of our graph, with U i x k at point
I(U i x k ), A(U i x k ) = −i, A(x k )−i in Z × Z ⊂ R × R.
The edges of the graph are given by the differential ∂ − , namely we draw
a directed edge from U i x k to U j x l if ∂ − (U i x k ) contains U j x l as a summand.
Note that the graph of CFK − (K) lies in the part of the (I, A)-plane with I ≤ 0.
Let C vert be the Z 2 -chain complex CFK − (K)/ U ·CFK − (K) . We'll denote the differential by ∂ vert , and call C vert as the vertical complex associated to CFK − (K). If we think of CFK − (K) as a directed graph in Z × Z ⊂ R × R, then the graph of C vert is the part of CFK − (K) that lies on the vertical A-axis (with directed edges pointing down).
To CFK − (K) with the Alexander filtration {F i }, we can associate the finitely generated,
free Z 2 [U ]-module gr(CFK − (K)) := i∈Z F i /F i−1 .
Given any
x ∈ CFK − (K), denote by [x] the image of x in F A(x) /F A(x)−1 . We call a basis {x 0 , . . . , x 2n } for CFK − (K) over Z 2 [U ] filtered if {[x 0 ], . . . , [x 2n
]} is a basis for gr(CFK − (K)). We will be interested in filtered bases for CFK − (K) that take a particularly simple form, which we now describe.
Let
CFK ∞ (K) denote the Z 2 [U, U −1 ]-chain complex CFK − (K)⊗ Z 2 [U ] Z 2 [U, U −1 ].
There is a natural way to extend the Alexander and I filtrations on CFK − (K) to CFK ∞ (K). Then we can view CFK ∞ (K) as a directed graph in Z×Z ⊂ R×R, with CFK − (K) as a subgraph. To CFK ∞ (K) we can associate the Z 2 -chain complex
C horz := F 0 CFK ∞ (K) /F −1 CFK ∞ (K)
with differential denoted by ∂ horz . We'll refer to this as the horizontal complex associated to CFK ∞ (K). If we view CFK ∞ (K) as a directed graph in Z × Z ⊂ R × R, then C horz can be thought of as the part of CFK ∞ (K) lying on the horizontal I-axis (with directed edges pointing to the left).
We're now ready to define those nice filtered bases for CFK − (K). Let {x 0 , . . . , x 2n } be a filtered basis for CFK − (K), and let {x 0 , . . . , x 2n } denote the induced basis for the vertical complex C vert . We define {x 0 , . . . , x 2n } to be vertically simplified if each basis element x i satisfies one of the following: There is a horizontal analogue of the above definition. Given a filtered basis {y 0 , . . . , y 2n } for CFK − (K), we can define a basis {U A(y 0 ) y 0 , . . . , U A(y 2n ) y 2n } for C horz . Then {y 0 , . . . , y 2n } is called horizontally simplified if each basis element U A(y i ) y i satisfies one of the following:
• x i ∈ im(∂ vert ) ⊆ ker(∂ vert ) and ∂ vert (x i−1 ) = x i , • x i ∈ ker(∂ vert ), but x i / ∈ im(∂ vert ), or • x i / ∈ ker(∂ vert ) and ∂ vert (x i ) = x i+1 . When ∂ vert (x i ) = x i+1 , we say that there is a vertical arrow from x i to x i+1 of length A(x i )−A(x i+1 ). Because H * (C vert ) ∼ = Z 2• U A(y i ) y i ∈ im(∂ horz ) ⊆ ker(∂ horz ) and ∂ horz U A(y i−1 ) y i−1 = U A(y i ) y i , • U A(y i ) y i ∈ ker(∂ horz ), but U A(y i ) y i / ∈ im(∂ horz ), or • U A(y i ) y i / ∈ ker(∂ horz ) and ∂ horz U A(y i ) y i = U A(y i+1 ) y i+1 .
When ∂ horz U A(y i ) y i = U A(y i+1 ) y i+1 , we say that there is a horizontal arrow from U A(y i ) y i to U A(y i+1 ) y i+1 of length A(y i+1 ) − A(y i ). Like in the vertical case, H * (C horz ) ∼ = Z 2 and ∂ horz pairs up basis elements in {U A(y 0 ) y 0 , . . . , U A(y 2n ) y 2n }, so there is a distinguished basis element in {U A(y 0 ) y 0 , . . . , U A(y 2n ) y 2n } with no incoming and outgoing horizontal arrows. Without loss of generality, we assume it's U A(y 0 ) y 0 , and we call y 0 the generator of the horizontal complex C horz .
We can now explain how to go from CFK − (K) to a decorated, directed graph that describes CFA(Y, φ r ). First, take a vertically simplified basis {w i } for CFK − (K). Since we can identify the vertical complex C vert with CFA(Y, φ r ) · ι 1 , {w i } (or really {w i }) induces a basis for CFA(Y, φ r ) · ι 1 . We represent each of these basis elements in CFA(Y, φ r ) · ι 1 by a •-labelled vertex. Next, for each vertical arrow from w i to w i+1 of length i , we introduce i basis elements κ i 1 , . . . , κ i i for CFA(Y, φ r ) · ι 2 (thought of as vertices labelled by •) and differentials
w i • ρ 3 − → κ i 1 • ρ 21 ← − − . . . ρ 21 ← − − κ i i • ρ 321 ← − − w i+1 • .
Now take a horizontally simplified basis {w i } for CFK − (K). In a similar way, we can identify the horizontal complex C horz with CFA(Y, φ r ) · ι 1 , and so {w i } induces a basis for CFA(Y, φ r ) · ι 1 . We'll think of each of these basis element in CFA(Y, φ r ) · ι 1 as a vertex labelled by •. For each horizontal arrow from w i to w i+1 of length i , we introduce i basis elements λ i 1 , . . . , λ i i for CFA(Y, φ r ) · ι 2 (thought of as vertices labelled by •) and differentials
w i • ρ 1 − → λ i 1 • ρ 21 − − → . . . ρ 21 − − → λ i i • ρ 2 − → w i+1
• .
The graph of CFA(Y, φ r ) contains one more component called the unstable chain running from the generator w 0 of the vertical complex to the generator w 0 of the horizontal complex. What this looks like depends on the integer 2τ (K) − r, where τ (K) is an integer-valued invariant of K due to Ozsváth and Szabó in [19] (for a quick explanation see Section 2.4).
• Suppose r < 2τ (K). Let d = 2τ (K) − r > 0. Then we introduce d basis elements γ 1 , . . . , γ d for CFA(Y, φ r ) · ι 2 (thought of as vertices labelled by •) and differentials
w 0 • ρ 3 − → γ 1 • ρ 21 ← − − . . . ρ 21 ← − − γ d • ρ 1 ← − w 0
• .
• Suppose r > 2τ (K). Let d = r − 2τ (K) > 0. Then we introduce d basis elements γ 1 , . . . , γ d for CFA(Y, φ r ) · ι 2 (thought of as vertices labelled by •) and differentials
w 0 • ρ 321 − − → γ 1 • ρ 21 − − → . . . ρ 21 − − → γ d • ρ 2 − → w 0 • .
• Finally suppose r = 2τ (K). Then the unstable chain from w 0 to w 0 takes the form
w 0 • ρ 32 − − → w 0 • .
Note that CFA(Y, φ r ) · ι 2 has Z 2 -dimension ( i i + i ) + |2τ (K) − r| and that the elements κ i e , λ i f , and γ g introduced above form a basis for CFA(Y, φ r ) · ι 2 .
2.4. The knot invariant ε. In [7, Section 3], Hom defined a {−1, 0, 1}-valued invariant ε(K) for knots K ⊂ S 3 in terms of τ (K) and two other knot invariants ν(K) [26] and ν (K) [7] coming from the knot Floer complex CFK ∞ (K) for K. In this subsection we recall the definition of ε(K). Throughout, we'll think of CFK ∞ (K), with its two Z-filtrations I and A, as a directed graph in Z × Z, with I represented by the first component and A by the second. Given S ⊆ Z × Z, one can consider the free Z 2 -vector space C{S} generated by S ∩ CFK ∞ (K). Suppose S has the property that every point in Z × Z that's either to the left or below some point in S is already an element of S, in other words S is closed under the operations of looking down and to the left. Then C{S}, together with the differential induced by ∂ ∞ , gives us a Z 2 -chain complex. When S 1 and S 2 are two subsets of Z × Z with the above property, and S 1 ⊇ S 2 , we can form the quotient chain complex C{S 1 }/C{S 2 }.
We define τ (K) to be the minimum Alexander filtration level s so that the inclusion map
incl : C{I ≤ 0, A ≤ s}/C{I < 0, A ≤ s} → C{I ≤ 0}/C{I < 0} CF (S 3 )
of Z 2 -chain complexes induces a non-trivial map on homology. The invariants ν(K) and ν (K) come from studying more complicated regions of the CFK ∞ (K) graph. ∀s ∈ Z, let A s be the Z 2 -vector space
A s quot − − → C{I ≤ 0, A ≤ s}/C{I < 0, A ≤ s} incl −→ C{I ≤ 0}/C{I < 0} CF (S 3 )
and ν s is the composition
CF (S 3 ) C{I ≤ 0}/C{I < 0} quot − − → C{I ≤ 0}/C{(I < 0) ∪ (I = 0, A < s)} incl −→ A s .
We define the invariant ν(K) to be the minimum Alexander filtration level s so that the chain map ν s induces a nontrivial map on homology, and the invariant ν (K) to be the maximum Alexander filtration level s so that the chain map ν s induces a nontrivial map on homology. Then the invariant ε(K) is the integer 2τ (K) − ν(K) − ν (K). That ε(K) ∈ {−1, 0, 1} is due to Hom in [7, Lemmas 3.2 and 3.3].
HFO(Y orb ): Definition, Theorem 1.1, and Examples
3.1. Definition of HFO(Y orb ). Let Y orb be a compact, connected, oriented 3-orbifold with singular set a knot K of order n. Fix a neighborhood N of K modeled on (D 2 × S 1 )/Z n and an orientation-preserving homeomorphism φ N : (D 2 × S 1 )/Z n → N . What will be important for us is the induced orientation-preserving parameterization of the boundary:
φ ∂N : ∂ (D 2 × S 1 )/Z n → ∂N.
There's a natural orientation-reversing identification of the oriented torus F associated to the pointed matched circle Z from Figure 2 with ∂ (D 2 ×S 1 )/Z n , taking α a 1 to the longitude {1}×S 1 and α a 2 to the meridian ∂D 2 /Z n ×{1}. This allows us to view φ ∂N as an orientationreversing parameterization of ∂N by F .
If we remove (the interior of) the singular neighborhood N , we're left with an honest 3-manifold E with torus boundary. Using the orientation-reversing parameterization φ ∂N of ∂N , we can define the following orientation-preserving parameterization φ ∂E of ∂E:
φ ∂E := id • φ ∂N : F → ∂E.
Then E, together with φ ∂E , forms a type A bordered 3-manifold. To (E, φ ∂E ) we associate the type A structure CFA(E, φ ∂E ) coming from bordered Floer theory.
Generalizing the type D structure CFD(D 2 × S 1 , ψ) in Figure 4, we associate to the singular piece N the type D structure D N in Figure 5. Figure 6. Here we're starting with a Z n -equivariant torus that has been punctured once, together with two properly embedded arcs α a 1 and α a 2 . When we fill in the puncture, we recover ∂N . Like before, β represents a meridian of an honest handlebody, but unlike before, β sits immersed in the punctured Z n -equivariant torus, wrapping n times around α a 2 because β represents one full meridian, while α a 2 represents a meridian of the Z n -equivariant solid torus N , i.e. an nth of a full meridian. The generators x i of the type D structure D N correspond to where the β curve intersects the α a 1 arc. The differential corresponds to counting domains with corners only at the generators. For an example of a domain that doesn't contribute to the differential, see Figure 7. Remark 3.1. The type D structure D N isn't bounded, but by performing a "finger move" on one of the edges we can pass to a homotopy equivalent type D structure that is bounded. See Figure 8 for an example. Figure 8. A bounded type D structure that is homotopy equivalent to D N and an orbifold bordered Heegaard diagram for (N, φ ∂N ) that gives rise to it in the case when n = 3 Definition 3.2. Let CFO(Y orb ) be the box tensor product CFA(E, φ ∂E ) D N . We define HFO(Y orb ) to be the homology of CFO(Y orb ).
Remark 3.3. CFA(E, φ ∂E ) D N only makes sense for bounded CFA(E, φ ∂E ). When CFA(E, φ ∂E ) isn't bounded, we consider CFA(E, φ ∂E ) D N instead, where D N is any type D structure obtained from D N by a finger move as described above. Note that CFA(E, φ ∂E ) D N and CFA(E, φ ∂E ) D N are homotopy equivalent, so we haven't lost anything by passing to CFA(E, φ ∂E ) D N .
3.2.
Proof of Theorem 1.1. Here we prove that HFO(Y orb ) is a well-defined invariant of Y orb that generalizes HF for 3-manifolds.
Proof that HFO(Y orb ) is well-defined. We need to show that HFO(Y orb ) is independent of the equivariant neighborhood N and the orientation-preserving parameterization φ N : (D 2 × S 1 )/Z n → N . First we argue that for a fixed neighborhood N , HFO(Y orb ) is independent of the parameterization φ N . Let φ 1 N and φ 2 N be two orientation-preserving parameterizations of N by (D 2 × S 1 )/Z n . From φ 1 N and φ 2 N , we get two type A bordered 3-manifolds (E, φ 1 ∂E ) and (E, φ 2 ∂E ). It suffices to show that the resulting Z 2 -chain complexes CFA(E, φ 1 ∂E ) D N and CFA(E, φ 2 ∂E ) D N are chain homotopy equivalent. Because CFA(E, φ 1 ∂E ) and CFA(E, φ 2 ∂E ) may not be bounded, we'll need to replace D N with a bounded type D structure D N that's homotopy equivalent to D N ; we'll use the one in Figure 9.
(φ 1 ∂E ) −1 • φ 2 ∂E −1 . Note that φ 2 ∂E = φ 1 ∂E • ψ −1 . By Equation 2.1, CFA(E, φ 2 ∂E ) CFA(E, φ 1 ∂E ) CFDA(ψ). Then CFA(E, φ 2 ∂E ) D N CFA(E, φ 1 ∂E ) CFDA(ψ) D N CFA(E, φ 1 ∂E ) CFDA(ψ) D N ,
so if we can show CFDA(ψ) D N D N , then we have the claim.
Lemma 3.4. Let τ α a 2 : F → F be the Dehn twist about the curve α a 2 . Then ψ is isotopic to a power of τ α a 2 . Proof. It's enough to show that the composition (φ 1 ∂N ) −1 •φ 2 ∂N : ∂ (D 2 ×S 1 )/Z n → ∂ (D 2 × S 1 )/Z n is isotopic to the Dehn twist about the meridian ∂D 2 /Z n × {1}, which we denote by m for convenience. By construction, (φ 1 ∂N ) −1 • φ 2 ∂N extends to a homeomorphism of (D 2 × S 1 )/Z n . Then (φ 1 ∂N ) −1 • φ 2 ∂N has to send m to a meridian of (D 2 × S 1 )/Z n , which means that (φ 1 ∂N ) −1 • φ 2 ∂N is isotopic to a power of D m . We can assume ψ (τ α a 2 ) n for some n ∈ N because there's a similar argument for ψ (τ −1 α a 2 ) n . From [15,Theorem 5],
CFDA(ψ) CFDA(τ α a 2 ) · · · CFDA(τ α a 2 ) n times , so it suffices to show CFDA(τ α a 2 ) D N D N . (3.1)
From [15,Proposition 10.6], CFDA(τ α a 2 ) which Lipshitz, Ozsváth, and D. Thurston call CFDA(τ m , 0) has generators p, q, and r, with the non-zero (I, I)-actions given by ι 1 · p · ι 1 = p ι 2 · q · ι 2 = q ι 2 · r · ι 1 = r, and the non-trivial differentials given by
δ 2 1 (p, ρ 1 ) = ρ 1 ⊗ q δ 2 1 (p, ρ 12 ) = ρ 123 ⊗ r δ 2 1 (p, ρ 123 ) = ρ 123 ⊗ q δ 3 1 (p, ρ 3 , ρ 2 ) = ρ 3 ⊗ r δ 3 1 (p, ρ 3 , ρ 23 ) = ρ 3 ⊗ q δ 2 1 (q, ρ 2 ) = ρ 23 ⊗ r δ 2 1 (q, ρ 23 ) = ρ 23 ⊗ q δ 2 1 (r) = ρ 2 ⊗ p δ 2 1 (r, ρ 3 ) = 1 ⊗ q.
By direct computation, we get that the type D structure CFDA(τ α a 2 ) D N is given by the decorated, directed graph in Figure 10. If we cancel the edges
p⊗a • 1 − → p⊗b • and r⊗a • 1 − → r⊗b
• as prescribed by the well-known "edge reduction" algorithm [14], Figure 10 reduces to Figure 5. This shows that CFDA(τ α a 2 ) D N and D N are homotopy equivalent. Now we check that HFO(Y orb ) is independent of the singular tubular neighborhood N . Let N 1 and N 2 be two singular tubular neighborhoods of K. Since D N 1 = D N 2 , it's enough to show that the type A structures CFA(E 1 , (φ 1 ) ∂E 1 ) and CFA(E 2 , (φ 2 ) ∂E 2 ) are homotopy equivalent, for some choice of φ 1 : (D 2 × S 1 )/Z n → N 1 and φ 2 : (D 2 × S 1 )/Z n → N 2 . Let φ 1 : (D 2 × S 1 )/Z n → N 1 be any orientation-preserving parameterization of N 1 . Since N 1 and N 2 are tubular neighborhoods of the same K, N 1 and N 2 are ambiently isotopic, so pick an ambient isotopy H t of |Y orb | that takes N 1 to N 2 . Then we can define φ 2 to be the
(τ α a 2 ) D N composition H 1 | N 1 • φ 1 .
By construction, we have the commutative diagram in Figure 11. This implies that the bordered 3-manifolds (E 1 , (φ 1 ) ∂E 1 ) and (E 2 , (φ 2 ) ∂E 2 ) are equivalent, which in turn implies that CFA(E 1 , (φ 1 ) ∂E 1 ) and CFA(E 2 , (φ 2 ) ∂E 2 ) are homotopy equivalent. This concludes the proof that HFO(Y orb ) is well-defined. Figure 11. (E 1 , φ 1 ∂E 1 ) and (E 2 , φ 1 ∂E 2 ) are equivalent
F ∂ (D 2 × S 1 )/Z n ∂ (D 2 × S 1 )/Z n ∂N 1 ∂N 2 ∂E 1 ∂E 2 ∼ = ∼ = φ 1 | ∂ φ 2 | ∂ id id H 1 | ∂E 1
Proof that HFO(Y orb ) is an invariant of 3-orbifolds. Let (Y 1 , K 1 , n) and (Y 2 , K 2 , n) be homeomorphic (oriented) 3-orbifolds. We need to show HFO(Y 1 , K 1 , n) ∼ = HFO(Y 2 , K 2 , n). We have an orientation-preserving homeomorphism |f | : Y 1 → Y 2 between the underlying oriented 3-manifolds Y 1 and Y 2 taking a singular neighborhood N 1 of K 1 to a singular neighborhood
N 2 of K 2 . Let E i = Y i − int(N i ).
Then |f |(E 1 ) = E 2 . Now pick any orientation-preserving parameterization φ 1 : (D 2 × S 1 )/Z n → N 1 of N 1 . As described above, we get an orientation-preserving parameterization (φ 1 ) ∂E 1 : F → ∂E 1 of ∂E 1 . Define (φ 2 ) ∂E 2 : F → ∂E 2 to be the composition |f | • (φ 1 ) ∂E 1 . Similar to the argument above, the bordered 3-manifolds (E 1 , (φ 1 ) ∂E 1 ) and (E 2 , (φ 2 ) ∂E 2 ) are equivalent, which means that the associated type A structures CFA(E 1 , (φ 1 ) ∂E 1 ) and CFA(E 2 , (φ 2 ) ∂E 2 ) are homotopy equivalent. This implies HFO(Y 1 , K 1 , n) ∼ = HFO(Y 2 , K 2 , n).
Proof that HFO(Y orb ) generalizes HF (Y ). When n = 1, N is modeled on D 2 × S 1 and D N is the type D structure for D 2 × S 1 with boundary parameterization given by α a
1 → {1} × S 1 and α a 2 → ∂D 2 × {1}. By 2.2, CF (Y orb ) CFA(E, φ ∂E ) D N = CFO(Y orb ), which implies HF (Y orb ) ∼ = HFO(Y orb ).
3.3.
Examples. In this subsection we calculate HFO(Y orb ) for some 3-orbifolds. For more examples see Section 4.1.
3.3.1. (S 3 , K, n), where K is any knot in S 3 . Given any choice of N and φ : (D 2 ×S 1 )/Z n → N , we can represent (E, φ ∂E ) by the bordered Heegaard diagram in Figure 12a. Then the associated type A structure CFA(E, φ ∂E ) is given by the graph in Figure 12b. It's not hard to check that CFA(E, φ ∂E ) D N has trivial differential, which means HFO(S 3 , K, n) ∼ = Z 2 y ⊗ x 1 , . . . , y ⊗ x n ∼ = Z 2 n . Note that the rank of HFO(S 3 , K, n) is n times the rank of HF (S 3 ).
(a) (b) Figure 12. On the left a bordered Heegaard diagram for (E, φ ∂E ) when the 3-orbifold is (S 3 , K, n). On the right the corresponding type A structure CFA(E, φ ∂E ).
(S
2 × S 1 , {1} × S 1 , n).
In this example we take our bordered Heegaard diagram for (E, φ ∂E ) to be Figure 13a. The corresponding type A structure CFA(E, φ ∂E ) is pictured in Figure 13b. The differential in CFA(E, φ ∂E ) D N is again trivial, so HFO(
S 2 × S 1 , {1} × S 1 , n) ∼ = Z 2 y ⊗ a, y ⊗ b ∼ = Z 2 2 .
Unlike the first example, the rank of HFO(S 2 × S 1 , {1} × S 1 , n) equals the rank of HF (S 2 × S 1 ) for every n.
(a) (b) Figure 13. On the left a bordered Heegaard diagram for (E, φ ∂E ) when the 3-orbifold is (S 2 × S 1 , {1} × S 1 , n). On the right the corresponding type A structure CFA(E, φ ∂E ).
3.3.3. L(p, −q), K, n . Think of L(p, −q) as two copies of D 2 ×S 1 glued together. Singularize one of them. This will be N and K will be the core of N . Take p ≥ 2, 1 ≤ q ≤ p − 1, and gcd(p, q) = 1. Then Figure 14a gives a bordered Heegaard diagram for (E, φ ∂E ). The induced type A structure CFA(E, φ ∂E ) is shown in Figure 14b. Note that CFA(E, φ ∂E ) is bounded, unlike the previous examples. The differential in CFO L(p, −q), K, n = CFA(E, φ ∂E ) D N is trivial, so HFO L(p, −q), K, n = Z 2 y 1 ⊗x 1 , . . . , y 1 ⊗x n , . . . , y p ⊗x 1 , . . . , y p ⊗x n ∼ = Z 2 np .
The rank of HFO L(p, −q), K, n is n times the rank of HF L(p, −q) .
Proof of Theorem 1.2
We now restrict our attention to 3-orbifolds coming from integral surgeries on knots in S 3 , and prove Theorem 1.2. Let Y be r-surgery on a knot K ⊂ S 3 . Think of Y as (D 2 × S 1 ) ∪ E, where E is the exterior of K in S 3 and we're identifying the meridian ∂D 2 × {1} ⊂ ∂D 2 × S 1 with the curve γ = rm + ⊂ ∂E. If we replace (D 2 × S 1 ) with (D 2 × S 1 )/Z n , then we get the 3-orbifold Y orb = (Y, K, n). As in Section 2.3, let φ r : F → ∂E be an orientation-preserving parameterization that sends α a 1 to m and α a 2 to γ. We first consider the case ε(K) = 1. By Part 1 of [7, Lemma 3.2], we can find vertically and horizontally simplified bases {w 0 , . . . , w 2s } and {w 0 , . . . , w 2s } for CFK − (K) over Z 2 [U ], with the following properties (possibly after reordering):
• w 0 is the generator of the vertical complex C vert , (a) (b) Figure 14. On the left a bordered Heegaard diagram for (E, φ ∂E ) when the 3-orbifold is L(p, −q), K, n . On the right the corresponding type A structure CFA(E, φ ∂E ).
• w 0 is the generator of the horizontal complex C horz , • ∂ horz U A(w 1 ) w 1 = U A(w 2 ) w 2 , and • w 2 = w 0 . Fix such bases {w 0 , . . . , w 2s } and {w 0 , . . . , w 2s } for CFK − (K). As discussed in Section 2.3, any pair of horizontally and vertically simplified bases for CFK − (K) gives rise to a decorated, directed graph that represents CFA(E, φ r ). Let Γ r be the graph for CFA(E, φ r ) coming from {w 0 , . . . , w 2s } and {w 0 , . . . , w 2s }. We know that Γ r can't contain any coherently oriented cycles because w 0 = w 2 = w 0 (and because there's no other way to get coherently oriented cycles in Γ r ). This implies that CFA(E, φ r ) is bounded. Now consider the Z 2 -chain complexes
CF (Y ) = CFA(E, φ r ) D D 2 ×S 1 , ∂ and CFO(Y orb ) = CFA(E, φ r ) D (D 2 ×S 1 )/Zn , ∂ orb .
We want to show n · rank HF (Y ) = rank HFO(Y orb ) . We do this by comparing ker(∂ ) to ker(∂ orb ), and im(∂ ) to im(∂ orb ). Recall from Section 2.3 that κ i e , λ i f , and γ g form a basis for CFA(E, φ r ) · ι 2 . Here i ∈ {0, . . . , 2s}, e ∈ {1, . . . , i }, f ∈ {1, . . . , i }, and g ∈ {1, . . . , d = |2τ (K) − r|}. For convenience, let α be any one of these basis elements. Then CF (Y ) is generated by elements of the form α ⊗ x and CFO(Y orb ) is generated by elements of the form α ⊗ x j , where j ∈ {1, . . . , n}.
Claim 4.1. n · rank ker(∂ ) = rank ker(∂ orb ) .
Proof. Because the type D structure map in D (D 2 ×S 1 )/Zn is essentially n copies of the type D structure map in D D 2 ×S 1 , ∂ (α ⊗ x) = 0 implies ∂ orb (α ⊗ x j ) = 0 for every j. Since there is no other way for ∂ orb to be trivial on a basis element α ⊗ x j of CFO(Y orb ), we have that n · rank ker(∂ ) = rank ker(∂ orb ) .
Claim 4.2. Suppose ∂ (α ⊗ x) = 0. Then for every j, ∂ orb (α ⊗ x j ) = 0. Furthermore, there exists β ∈ {κ i 1 , γ 1 } so that ∂ (α ⊗ x) = β ⊗ x and for every j, ∂ orb (α ⊗ x j ) = β ⊗ x j+1 , with j + 1 considered mod n.
Proof. The first statement is clear. As for the second one, if ∂ (α ⊗ x) is nontrivial, then β is the target of a directed edge labeled ρ 3 in Γ r , and this happens exactly when β ∈ {κ i 1 , γ 1 }. For example, when r < 2τ (K), Γ r contains a piece that looks like
λ 1 1 • ρ 2 − → w 2 =w 0 • ρ 3 − → γ 1 • .
This gives us the nontrivial multiplication m 2 (λ 1
1 , ρ 23 ) = γ 1 in CFA(E, φ r ), which implies that ∂ (λ 1 1 ⊗ x) = γ 1 ⊗ x and ∂ orb (λ 1 1 ⊗ x j ) = γ 1 ⊗ x j+1 .
It follows from Claims 4.1 and 4.2 that n · rank HF (Y ) = rank HFO(Y orb ) .
A similar argument shows that when ε(K) = −1, n · rank HF (Y ) = rank HFO(Y orb ) . This is because Part 2 of [7, Lemma 3.2] gives us vertically and horizontally simplified bases {w 0 , . . . , w 2s } and {w 0 , . . . , w 2s } for CFK − (K) over Z 2 [U ], with the following properties (possibly after reordering):
• w 0 is the generator of the vertical complex C vert , • w 0 is the generator of the horizontal complex C horz , • ∂ horz U A(w 1 ) w 1 = U A(w 2 ) w 2 , and • w 1 = w 0 . Now suppose ε(K) = 0. By [7,Lemma 3.3], we can find vertically and horizontally simplified bases {w 0 , . . . , w 2s } and {w 0 , . . . , w 2s } for CFK − (K) over Z 2 [U ] so that the generator w 0 of the vertical complex C vert equals the generator w 0 of the horizontal complex C horz . Fix such bases {w 0 , . . . , w 2s } and {w 0 , . . . , w 2s }. Let Γ r be the graph for CFA(E, φ r ) coming from {w 0 , . . . , w 2s } and {w 0 , . . . , w 2s }. Note that τ (K) = 0 because ε(K) = 0. We have two cases: either r = 0 or r = 0. If r = 0, then r = 2τ (K). This means that Γ r doesn't contain any coherently oriented cycles, and we can use the argument in the ε(K) = 1 case above to show that n · rank HF (Y ) = rank HFO(Y orb ) .
Assume r = 0. Then r = 2τ (K) and the unstable chain in Γ r is a coherently oriented cycle, which implies that CFA(E, φ r ) is unbounded. Since the unstable chain doesn't interact with the rest of the type A structure, we can express CFA(E, φ r ) as CFA(E, φ r ) 1 ⊕ CFA(E, φ r ) 2 , where CFA(E, φ r ) 1 is the unbounded type A structure corresponding to the unstable chain and CFA(E, φ r ) 2 is the bounded type A structure corresponding to the complement of the unstable chain. Then we have
CF (Y ) CFA(E, φ r ) 1 D D 2 ×S 1 ⊕ CFA(E, φ r ) 2 D D 2 ×S 1 and CFO(Y orb ) CFA(E, φ r ) 1 D (D 2 ×S 1 )/Zn ⊕ CFA(E, φ r ) 2 D (D 2 ×S 1 )/Zn ,
where D D 2 ×S 1 and D (D 2 ×S 1 )/Zn are the bounded type D structures in Figure 9.
This means that HF (Y ) and HFO(Y orb ) admit the following decompositions:
HF (Y ) ∼ = H 1 ⊕ H 2 and HFO(Y orb ) ∼ = H orb 1 ⊕ H orb 2 ,
where H i denotes the homology of the ith piece in CF (Y ) and H orb i denotes the homology of the ith piece in CFO(Y orb ). From Example 3.3.2, we have that rank(H orb 1 ) = 2 = rank(H 1 ). By the argument in the ε(K) = 1 case, rank(H orb 2 ) = n · rank(H 2 ). Consequently, we get that rank HFO(Y orb ) = 2 + n · rank(H 2 ), which implies that rank HFO(Y orb ) = n · rank HF (Y ) − 2n + 2, as needed. Figure 16. Let C 1 denote the unbounded type A structure represented by the unstable loop, and let C 2 be the bounded type A structure represented by everything else. Then CFO(Y orb )
Y orb ) = CFA(E, φ 0 ) D (D 2 ×S 1 )/Zn is generated by κ 1 1 ⊗ x j , λ 1 1 ⊗ x j , γ 1 ⊗ x j , and γ 2 ⊗ x j . The only nontrivial differential is ∂ (γ 2 ⊗ x j ) = κ 1 1 ⊗ x j+1 . This implies that HFO(Y orb ) = Z 2 λ 1 1 ⊗ x j , γ 1 ⊗ x j ,C 1 D (D 2 ×S 1 )/Zn ⊕ C 2 D (D 2 ×S 1 )/Zn , which implies that HFO(Y orb ) ∼ = H orb 1 ⊕ H orb 2 ,
where H orb 1 denotes the homology of C 1 D (D 2 ×S 1 )/Zn and H orb 2 denotes the homology of C 2 D (D 2 ×S 1 )/Zn . As noted above, Example 3.3.2 tells us that H orb
1 ∼ = ω 0 ⊗ a, ω 0 ⊗ b . Now C 2 D (D 2 ×S 1 )/Zn is generated by λ 1 1 ⊗ x j , λ 3 1 ⊗ x j , κ 1 1 ⊗ x j , and κ 3 1 ⊗ x j . Let ∂ 2 denote the differential in C 2 D (D 2 ×S 1 )/Zn . Then ∂ 2 (λ 1 1 ⊗ x j ) = κ 3 1 ⊗ x j+1
, and on all other generators ∂ 2 is trivial. This implies that H orb 2 = Z 2 λ 3 1 ⊗ x j , κ 1 1 ⊗ x j , which has rank 2n. Altogether, HFO(Y orb ) has rank 2n + 2, which agrees with Theorem 1.2, since ε(K) = 0 and rank HF (Y ) = 4. Figure 16. CFA(E, φ 0 ) 5. Categorifying |H orb 1 | to HFO(Y orb ) 5.1. Background. We start by reviewing the relative Z 2 -grading gr on CF . The details are in [22,8]. Let H = (Σ g , α, β, z) be a Heegaard diagram for a closed 3-manifold Y . Order and orient the α and β circles. Then given any generator x of the Z 2 -chain complex CF (H), we have two integers inv(σ x ) and o(x) defined as follows. σ x is the permutation in S g that allows us to express x as {x 1 , . . . , x g } where x i ∈ β i ∩ α σx(i) , and inv(σ x ) counts the number of inversions in σ x , i.e. the number of pairs (i, j) where i < j, but σ x (i) > σ x (j). At every intersection point x i we can assign an orientation: positive if α σx(i) followed by β i gives the orientation on Σ g , and negative otherwise. Up to a possible overall shift, gr is well-defined, i.e. does not depend on how we order and orient the α and β circles. So we'll think of gr as a relative Z 2 -grading on CF (H). gr induces a relative Z 2 -grading on HF (H), which we also call gr. With respect to both relative Z 2 -gradings, we have |χ CF (H) | = |χ HF (H) | = |H 1 (Y )|, for details see [22].
There's an analogous story for the bordered invariants CFA and CFD, due to Hom, Lidman, and Watson in [8]. To explain this, we'll need the notion of a bordered partial permutation. Recall [n] denotes the set {1, . . . , n}.
Definition 5.1. Let g ∈ N. Fix B ⊆ [g + 1] with |B| = 2. Suppose σ : [g] → [g + 1]
is a function that satisfies the following:
(1) σ is injective and (2) the complement of B in [g + 1] lies in Im(σ).
Then we call σ a bordered partial permutation. Furthermore, we say that σ is type A if
B = {g, g + 1}, and type D if B = {1, 2}.
Given a bordered partial permutation σ, we can consider its sign sgn(σ). For type A bordered partial permutations σ, we define sgn A (σ) = inv(σ) (mod 2), and for type D bordered partial permutations σ, we define sgn D (σ) = inv(σ) + i∈Im(σ) #{j | j > i, j / ∈ Im(σ)} (mod 2). Now let H = (Σ g ; α; β; z) be a bordered Heegaard diagram for a bordered 3-manifold (Y, φ). There's a canonical way to order and orient the two α arcs α 1 and α 2 . The ordering is given by the indices. The orientations are defined as follows. If (Y, φ) is type A, we orient α a 1 and α a 2 so that when we follow ∂Σ g in the direction of its orientation, we hit the initial point of α a 1 , then the initial point of α a 2 , followed by the terminal point of α a 1 and then the terminal point of α a 2 . If (Y, φ) is type D, we orient α a 1 and α a 2 so that when we follow ∂Σ g in the direction of its orientation, we hit the initial point of α a 2 , then the initial point of α a 1 , followed by the terminal point of α a 2 and then the terminal point of α a 1 . Doing this ensures that when we glue the type A and type D α a i arcs together along their boundaries, we get a coherently oriented α a i circle. Now fix an ordering of the α and β circles. If (Y, φ) is type A, we assume the circles in α c are ordered before the arcs in α a . If (Y, φ) is type D, we choose the opposite ordering: α a before α c . This, coupled with the above ordering on the α arcs, is an ordering on all of α and β. Note that if we fix orientations on the α and β circles, then we've oriented all of α and β.
Let x be a g-tuple of points in β ∩ α, with one point on each β circle, one point on each of the g − 1 α c circles, and one point on one of the two α a arcs. Express x as {x 1 , . . . ,
x g }, where x i ∈ β i ∩ α σx(i) for some injection σ x : [g] → [g + 1] satisfying α c ⊂ {α σx(i) | i ∈ [g]}.
When (Y, φ) is type A, σ x is a type A bordered partial permutation, and when (Y, φ) is type D, σ x is a type D bordered partial permutation. We can now define the relative Z 2 -gradings gr A and gr D on CFA and CFD: Up to a possible overall shift, gr A and gr D do not depend on how we order and orient the α and β circles. So we'll think of gr A and gr D as relative Z 2 -gradings on CFA and CFD.
Definition 5.2. The type A grading of a generator x of CFA(H) is gr A (x) = sgn A (σ x ) + o(x) (mod 2).
Example 5.4. Consider D 2 × S 1 with the type D parameterization ψ : F → ∂(D 2 × S 1 ) defined by α a 1 → {1} × S 1 and α a 2 → ∂D 2 × {1}. Let H D 2 ×S 1 be the bordered Heegaard diagram for (D 2 × S 1 , ψ) in Figure 3. Then the type D grading gr D on CFD(H D 2 ×S 1 ) is given by x → 1. Note that if we change the orientation on β, we get gr D (x) = 0 instead. We next explain how to recover the relative Z 2 -grading gr on CF from the relative type A and type D gradings gr A and gr D on CFA and CFD. This is due to Hom, Lidman, and Watson in [8,Proposition 3.17]. Let (H 1 , Z) and (H 2 , −Z) be bordered Heegaard diagrams for (Y 1 , F ) and (Y 2 , −F ). If we glue (H 1 , Z) and (H 2 , −Z) together along Z, we get a Heegaard diagram H = H 1 ∪ Z H 2 that describes the closed 3-manifold Y 1 ∪ F Y 2 . In particular, the α arcs in (H 1 , Z) and (H 2 , −Z) give rise to two α circles in H, and the preferred orientations on the α arcs induce coherent orientations on the resulting α circles. Furthermore, if we orient the α and β circles in (H 1 , Z) and (H 2 , −Z), we get induced orientations for the remaining α and β circles in H. In a similar way, given any ordering on the α and β circles in (H 1 , Z) and (H 2 , −Z), there is an induced ordering on the α and β circles in H. To get the ordering on the α circles in H, we take the α circles in (H 1 , Z) first, followed by the glued up α arcs in H, and then the α circles in (H 2 , −Z). The ordering on the β circles in H is similar. Now let y and x be generators of CFA(H 1 , Z) and CFD(H 2 , −Z), respectively. Suppose y ⊗ x = 0. Then y ⊗ x is a generator of CF (H), and [8,Proposition 3.17] states that up to a possible overall shift independent of both y and x gr(y ⊗ x) = gr A (y) + gr D (x) (mod 2).
(5.1)
5.2.
Proof of Theorem 1.3. Recall we have the following set-up: Y orb is a 3-orbifold with singular set a knot K of multiplicity n, N is a Z n -equivariant tubular neighborhood of K parameterized by φ N : (D 2 × S 1 )/Z n → N , and E is the complement of int(N ) with (orientation-preserving) boundary parameterization φ ∂E : F → ∂E induced by φ N . Choose a bordered Heegaard diagram (H E , Z) for the type A bordered 3-manifold (E, φ ∂E ). Without loss of generality, we'll assume the associated type A structure CFA(H E , Z) is bounded. Let gr A be the relative Z 2 -grading on CFA(H E , Z) coming from bordered Floer theory. Figure 6 gives an orbifold bordered Heegaard diagram for N ; call this (H N , −Z). We can define a relative Z 2 -grading gr orb D on the type D structure D N by setting gr orb D (x i ) = 1 for every i. If we pick a different orientation on β, we'll need to take gr orb D (x i ) = 0 instead. We define the relative Z 2 -grading gr orb on the Z 2 -chain complex CFO(Y orb ) = CFA(H E , Z) D N to be gr A + gr orb D (mod 2). Note that gr orb generalizes Equation 5.1 because gr orb D generalizes the relative Z 2 -grading gr D on CFD(H D 2 ×S 1 ) from Example 5.4. With respect to the induced relative Z 2 -grading gr orb on HFO(Y orb ), we have the following: Proof. For the 3-manifold |Y orb |, we have |χ CF (|Y orb |) | = |H 1 (|Y orb |)|, if |H 1 (|Y orb |)| finite 0, otherwise.
Since χ HFO(Y orb ) = χ CFO(Y orb ) , it suffices to show that |χ CFO(Y orb ) | = n · |χ CF (|Y orb |) | when |H 1 (|Y orb |)| finite. Note that for every i, y ⊗ x i = 0 exactly when y ⊗ x = 0 and gr orb D (x i ) = gr D (x). Then up to sign χ CFO(Y orb ) =#{y ⊗ x i = 0 | gr A (y) + gr orb D (x i ) = 0}− #{y ⊗ x i = 0 | gr A (y) + gr orb D (x i ) = 1} =n #{y ⊗ x = 0 | gr A (y) + gr D (x) = 0|}− #{y ⊗ x = 0 | gr A (y) + gr D (x) = 1|} =n · χ CF (|Y orb |) .
Suppose K is nullhomologous in |Y orb |. We want to show |χ HFO(Y orb ) | = |H orb 1 (Y orb )|, if |H orb 1 (Y orb )| finite 0, otherwise.
By [30,Lemma 6.4], H orb 1 (Y orb ) ∼ = H 1 (|Y orb |) × Z n µ , where µ is a meridian of K. This, combined with Lemma 5.5, tells us that if |H orb 1 (Y orb )| is finite, then |χ HFO(Y orb ) | = n · |H 1 (|Y orb |)| = |H orb 1 (Y orb )|, as needed. Now suppose |H orb 1 (Y orb )| is infinite. Then |H 1 (|Y orb |)| is infinite, and by Lemma 5.5, |χ HFO(Y orb ) | = 0. This concludes the proof of Theorem 1.3 for K nullhomologous in |Y orb |. Now let Y be p q -surgery on a knot K ⊂ S 3 with gcd(p, q) = 1 and p ≥ 0. Let Y orb be the 3-orbifold with underlying space Y and singular curve K of multiplicity n. Again we want to show |χ HFO(Y orb ) | = |H orb 1 (Y orb )|, if |H orb 1 (Y orb )| finite 0, otherwise.
It's not hard to see that H 1 (Y ) ∼ = Z p µ and H orb 1 (Y orb ) ∼ = Z np µ , where µ is a meridian of K. We again have two cases. First suppose |H orb 1 (Y orb )| is finite. Then p = 0 and |H orb 1 (Y orb )| = n · |H 1 (Y )|. From Lemma 5.5, |χ HFO(Y orb ) | = n · |H 1 (Y )| = |H orb 1 (Y orb )|, as desired. Now suppose |H orb 1 (Y orb )| is infinite. Then p = 0, which means |H 1 (Y )| is infinite. Again by Lemma 5.5, |χ HFO(Y orb ) | = 0. This concludes the proof of Theorem 1.3 for Y orb = (Y, K, n), where Y is p q -surgery on a knot K ⊂ S 3 .
Figure 2 .
2Pointed matched circle Z for torus F Any bordered 3-manifold (Y, φ) can be represented by a (sufficiently admissible) bordered Heegaard diagram H. This is a tuple
Figure 3 .
3On the left, a genus 1 bordered Heegaard diagram for D 2 ×S 1 with standard product orientation and boundary parameterized by α a 1 → {1} × S 1 and α a 2 → ∂D 2 × {1}. On the right, the same bordered Heegaard diagram thought of as a decorated square missing four corners with opposite sides identified.
by counting certain J-holomorphic curves in Σ × [0, 1] × R, for a sufficiently nice almost complex structure J on Σ × [0, 1] × R, with Σ the interior of Σ. Details can be found in [16, Chapters 6 and 7]. Up to homotopy equivalence, the type A and type D structures ( CFA(H), {m k } ∞ k=1 ) and ( CFD(H), δ 1 ) don't depend on the choice of J, and so we get invariants of H. Because different bordered Heegaard diagrams for equivalent bordered 3-manifolds produce homotopy equivalent bordered invariants, this process gives us an invariant of any bordered 3-manifold (Y, φ) considered up to equivalence. If (Y, φ) is of type A, we denote the invariant by CFA(Y, φ), and if (Y, φ) is of type D, we denote the invariant by CFD(Y, φ).
Figure 4 .
4The type D structure for (D 2 × S 1 , ψ)
and ∂ vert pairs up basis elements in {x 0 , . . . , x 2n }, there is a distinguished basis element in {x 0 , . . . , x 2n } with no incoming and outgoing vertical arrows. Without loss of generality, we assume it's x 0 , and we call x 0 the generator of the vertical complex C vert .
C{max(I, A − s) ≤ 0}/C{max(I, A − s) < 0}and let A s be the Z 2 -vector spaceC{min(I, A − s) ≤ 0}/C{min(I, A − s) < 0}.By equipping A s and A s with the differentials induced by ∂ ∞ , we can think of A s and A s as Z 2 -chain complexes. Like we did for τ (K), we have chain maps ν s : A s → CF (S 3 ) and ν s : CF (S 3 ) → A s given as follows: ν s is the composition
Figure 5 .
5The type D structure D N Similar to CFD(D 2 × S 1 , ψ), D N arises naturally from an "orbifold bordered Heegaard diagram" for (N, φ ∂N ); see
Figure 6 .
6Two ways to describe a genus 1 orbifold bordered Heegaard diagram for (N, φ ∂N ) that yields D N in the case when n = 3. The other values of n are similar. Compare with Figure 3.
Figure 7 .
7A domain that doesn't get counted towards the type D structure D N in the case when n = 3
Figure 9 .
9A bounded type D structure D N homotopy equivalent to D N We claim that the Z 2 -chain complexes CFA(E, φ 1 ∂E ) D N and CFA(E, φ 2 ∂E ) D N are chain homotopy equivalent. We prove this as follows. Let ψ : F → F be the composition
Figure 10 .
10The type D structure CFDA
4. 1 .
1Examples. We conclude with a couple of examples. 4.1.1. Let K be the left-handed trefoil T (2, −3). Fix n ∈ Z. Take r = 0. Then CFA(E, φ 0 ) is given by the graph in Figure 15, and CFO(
which has rank 2n. Note that this agrees with Theorem 1.2, since ε(K) = −1 and rank HF (Y ) = 2.
Figure 15 .
15CFA(E, φ 0 ) 4.1.2. Let K be the figure-eight knot. Again fix n ∈ Z and assume r = 0. CFA(E, φ 0 ) is given by the graph in
Write o(x i ) = 0 if x i is positively oriented and o(x i ) = 1 if x i is negative oriented. Then o(x) is the sum o(x 1 ) + . . . + o(x g ), and we define gr(x) = inv(σ x ) + o(x) (mod 2).
Definition 5 . 3 .
53The type D grading of a generator x of CFD(H) is gr D (x) = sgn D (σ x ) + o(x) (mod 2).
HFO(Y orb ) | = n · |H 1 (|Y orb |)|, if |H 1 (|Y orb |)| finite 0,otherwise.(5.2)
Three-dimensional orbifolds and their geometric structures. Michel Boileau, Sylvain Maillot, Joan Porti, Panoramas et Synthèses. Société Mathématique de France15ParisMichel Boileau, Sylvain Maillot, and Joan Porti. Three-dimensional orbifolds and their geometric structures, volume 15 of Panoramas et Synthèses. Société Mathématique de France, Paris, 2003.
Instanton Floer homology for knots via 3-orbifolds. O Collin, B Steer, J. Differential Geom. 511O. Collin and B. Steer. Instanton Floer homology for knots via 3-orbifolds. J. Differential Geom., 51(1):149-202, 1999.
An instanton-invariant for 3-manifolds. A Floer, Comm. Math. Phys. 1182A. Floer. An instanton-invariant for 3-manifolds. Comm. Math. Phys., 118(2):215-240, 1988.
The lens space realization problem. J Greene, Ann. of Math. 1772J. Greene. The lens space realization problem. Ann. of Math. (2), 177(2):449-511, 2013.
Bordered Floer homology for manifolds with torus boundary via immersed curves. J Hanselman, R Rasmussen, L Watson, arXiv:1604.03466v2J. Hanselman, R. Rasmussen, and L. Watson. Bordered Floer homology for manifolds with torus boundary via immersed curves. arXiv:1604.03466v2, 2017.
Splicing knot complements and bordered Floer homology. M Hedden, A Levine, J. Reine Angew. Math. 720M. Hedden and A. Levine. Splicing knot complements and bordered Floer homology. J. Reine Angew. Math., 720:129-154, 2016.
Bordered Heegaard Floer homology and the tau-invariant of cable knots. J Hom, J. Topol. 72J. Hom. Bordered Heegaard Floer homology and the tau-invariant of cable knots. J. Topol., 7(2):287-326, 2014.
The Alexander module, Seifert forms, and categorification. Jennifer Hom, Tye Lidman, Liam Watson, J. Topol. 101Jennifer Hom, Tye Lidman, and Liam Watson. The Alexander module, Seifert forms, and categorification. J. Topol., 10(1):22-100, 2017.
Geometrization of three-dimensional orbifolds via Ricci flow. B Kleiner, J Lott, Astérisque. 365B. Kleiner and J. Lott. Geometrization of three-dimensional orbifolds via Ricci flow. Astérisque, (365):101-177, 2014.
Khovanov homology is an unknot-detector. P Kronheimer, T Mrowka, Publ. Math. Inst. HautesÉtudes Sci. 113P. Kronheimer and T. Mrowka. Khovanov homology is an unknot-detector. Publ. Math. Inst. HautesÉtudes Sci., (113):97-208, 2011.
Knot homology groups from instantons. P Kronheimer, T Mrowka, J. Topol. 44P. Kronheimer and T. Mrowka. Knot homology groups from instantons. J. Topol., 4(4):835-918, 2011.
P Kronheimer, T Mrowka, arXiv:1508.07205v1Tait colorings, and an instanton homology for webs and foams. P. Kronheimer and T. Mrowka. Tait colorings, and an instanton homology for webs and foams. arXiv:1508.07205v1, 2015.
C Kutluhan, G Matić, J Van Horn-Morris, A Wand, arXiv:1603.02673v4Filtering the Heegaard Floer contact invariant. C. Kutluhan, G. Matić, J. Van Horn-Morris, and A. Wand. Filtering the Heegaard Floer contact invariant. arXiv:1603.02673v4, 2018.
Knot doubling operators and bordered Heegaard Floer homology. Adam Simon Levine, J. Topol. 53Adam Simon Levine. Knot doubling operators and bordered Heegaard Floer homology. J. Topol., 5(3):651-712, 2012.
Bimodules in bordered Heegaard Floer homology. R Lipshitz, P Ozsváth, D Thurston, Geom. Topol. 192R. Lipshitz, P. Ozsváth, and D. Thurston. Bimodules in bordered Heegaard Floer homology. Geom. Topol., 19(2):525-724, 2015.
Bordered Heegaard Floer homology: Invariance and pairing. Memoirs of the. R Lipshitz, P Ozsváth, D Thurston, American Mathematical Societyto appearR. Lipshitz, P. Ozsváth, and D. Thurston. Bordered Heegaard Floer homology: Invari- ance and pairing. Memoirs of the American Mathematical Society, to appear.
Knot Floer homology detects fibred knots. Y Ni, Invent. Math. 1703Y. Ni. Knot Floer homology detects fibred knots. Invent. Math., 170(3):577-608, 2007.
Concordance homomorphisms from knot Floer homology. P Ozsváth, A Stipsicz, Z Szabó, Adv. Math. 315P. Ozsváth, A. Stipsicz, and Z. Szabó. Concordance homomorphisms from knot Floer homology. Adv. Math., 315:366-426, 2017.
Knot Floer homology and the four-ball genus. P Ozsváth, Z Szabó, Geom. Topol. 7P. Ozsváth and Z. Szabó. Knot Floer homology and the four-ball genus. Geom. Topol., 7:615-639, 2003.
Holomorphic disks and genus bounds. P Ozsváth, Z Szabó, Geom. Topol. 8P. Ozsváth and Z. Szabó. Holomorphic disks and genus bounds. Geom. Topol., 8:311- 334, 2004.
Holomorphic disks and knot invariants. P Ozsváth, Z Szabó, Adv. Math. 1861P. Ozsváth and Z. Szabó. Holomorphic disks and knot invariants. Adv. Math., 186(1):58- 116, 2004.
Holomorphic disks and three-manifold invariants: properties and applications. P Ozsváth, Z Szabó, Ann. of Math. 1592P. Ozsváth and Z. Szabó. Holomorphic disks and three-manifold invariants: properties and applications. Ann. of Math. (2), 159(3):1159-1245, 2004.
Holomorphic disks and topological invariants for closed threemanifolds. P Ozsváth, Z Szabó, Ann. of Math. 1592P. Ozsváth and Z. Szabó. Holomorphic disks and topological invariants for closed three- manifolds. Ann. of Math. (2), 159(3):1027-1158, 2004.
Heegaard Floer homology and contact structures. P Ozsváth, Z Szabó, Duke Math. J. 1291P. Ozsváth and Z. Szabó. Heegaard Floer homology and contact structures. Duke Math. J., 129(1):39-61, 2005.
Knots with unknotting number one and Heegaard Floer homology. P Ozsváth, Z Szabó, Topology. 444P. Ozsváth and Z. Szabó. Knots with unknotting number one and Heegaard Floer homology. Topology, 44(4):705-745, 2005.
Knot Floer homology and rational surgeries. S Peter, Zoltán Ozsváth, Szabó, Algebr. Geom. Topol. 111Peter S. Ozsváth and Zoltán Szabó. Knot Floer homology and rational surgeries. Algebr. Geom. Topol., 11(1):1-68, 2011.
Floer homology and knot complements. J Rasmussen, Harvard UniversityPhD thesisJ. Rasmussen. Floer homology and knot complements. PhD thesis, Harvard University, 2003.
The geometries of 3-manifolds. P Scott, Bull. London Math. Soc. 155P. Scott. The geometries of 3-manifolds. Bull. London Math. Soc., 15(5):401-487, 1983.
W Thurston, Three-dimensional geometry and topology. Princeton, NJPrinceton University Press1W. Thurston. Three-dimensional geometry and topology. Vol. 1, volume 35 of Princeton Mathematical Series. Princeton University Press, Princeton, NJ, 1997.
Turaev torsion invariants of 3-orbifolds. B Wong, H3C 3P8 E-mail address: [email protected] URL. Station Centre-ville, Montréal, Québec187Université du Québecà MontréalCIRGETPO BoxB. Wong. Turaev torsion invariants of 3-orbifolds. Geom. Dedicata, 187:179-197, 2017. CIRGET, Université du Québecà Montréal, PO Box 8888, Station Centre-ville, Montréal, Québec H3C 3P8 E-mail address: [email protected] URL: https://sites.google.com/view/cirget-bijiwong
| []
|
[
"Twist-3 effects for polarized virtual photon structure function g",
"Twist-3 effects for polarized virtual photon structure function g",
"Twist-3 effects for polarized virtual photon structure function g",
"Twist-3 effects for polarized virtual photon structure function g"
]
| [
"K Sasaki \nDepartment of Physics\nFaculty of Engineering\nYokohama National University\n240-8501YokohamaJapan\n",
"K Sasaki \nDepartment of Physics\nFaculty of Engineering\nYokohama National University\n240-8501YokohamaJapan\n"
]
| [
"Department of Physics\nFaculty of Engineering\nYokohama National University\n240-8501YokohamaJapan",
"Department of Physics\nFaculty of Engineering\nYokohama National University\n240-8501YokohamaJapan"
]
| []
| We investigate twist-3 effects in the polarized virtual photon. The structure function g γ 2 , which exists only for the virtual photon target and can be measured in future polarized e + e − collider experiments, receives both twist-2 and twist-3 contributions. The twist-3 part is analyzed in pure QED interaction as well as in LO QCD. We find the twist-3 contribution is appreciable for the photon in contrast to the nucleon case. | 10.1016/s0920-5632(03)80155-x | [
"https://export.arxiv.org/pdf/hep-ph/0211130v1.pdf"
]
| 16,932,133 | hep-ph/0211130 | 4b1809e929c84bb7f8ad6509d75de6d810d07d0f |
Twist-3 effects for polarized virtual photon structure function g
Nov 2002
K Sasaki
Department of Physics
Faculty of Engineering
Yokohama National University
240-8501YokohamaJapan
Twist-3 effects for polarized virtual photon structure function g
Nov 2002arXiv:hep-ph/0211130v1 9 1
We investigate twist-3 effects in the polarized virtual photon. The structure function g γ 2 , which exists only for the virtual photon target and can be measured in future polarized e + e − collider experiments, receives both twist-2 and twist-3 contributions. The twist-3 part is analyzed in pure QED interaction as well as in LO QCD. We find the twist-3 contribution is appreciable for the photon in contrast to the nucleon case.
Introduction
In experiments of polarized deep inelastic lepton-nucleon scattering, we can obtain information on the two spin-dependent structure functions g nucl 1 and g nucl 2 of the nucleon. In the language of operator product expansion (OPE), the twist-2 operators contribute to g nucl 1 in the leading order of 1/Q 2 . On the other hand, g nucl 2 receives contributions from both twist-2 and twist-3 operators in the leading order. The twist-2 part of g nucl 2 can be extracted, once g nucl 1 is measured, by so-called Wandzura-Wilczek (WW) relation [1]: , contains the twist-3 contribution. The experimental data so far obtained show that the twist-3 contribution to g nucl 2 appears to be negligibly small [2,3]. In recent years, there has been growing interest in the study of spin structures of photon. The polarized photon structure functions can be measured by the polarized e + e − collision experiments in the future linear colliders (Fig.1), where −Q 2 (−P 2 ) is the mass squared of the probe (target) photon. For the virtual photon target, there appears two structure functions g γ 1 (x, Q 2 , P 2 ) and g γ 2 (x, Q 2 , P 2 ), which are the analogues to the * Talk presented at the International Symposium Radcor 2002 and Loops and Legs 2002, September 8-13, Kloster Banz, Germany.
g(target) photon is −Q 2 (−P 2 ) (P 2 ≪ Q 2 ).
spin-dependent nucleon structure functions g nucl 1 and g nucl 2 , respectively. Now we may ask about the photon structure function g γ 2 : (i) Does g γ 2 also receive twist-3 contribution? (ii) If so, is it small like the nucleon case, or, appreciable? (iii) Does the WW relation also hold for g γ 2 , in other words, is the twist-2 part of g γ 2 expressible in terms of g γ 1 ? (iv) Does any complication occur in the QCD analysis for g γ 2 ? These issues will be discussed [4] in the following. in the simple parton model, in the kinematical region P 2 ≪ Q 2 . Evaluating the box diagrams (massless quark-loops) depicted in Fig.2 with the power corrections of P 2 /Q 2 being neglected, we obtain
g γ(Box) 1 (x, Q 2 , P 2 ) = 3α π N f e 4 (2x − 1) ln Q 2 P 2 −2(2x − 1)(ln x + 1) , (2) g γ(Box) 2 (x, Q 2 , P 2 ) = 3α π N f e 4 −(2x − 1) ln Q 2 P 2 +2(2x − 1) ln x + 6x − 4 ,(3)where x = Q 2 /(2p · q), e 4 = N f i=1 e 4
i /N f with N f being the number of active quark flavours and α = e 2 /4π.
First, note that g γ(Box) 2 satisfies the Burkhardt-Cottingham (BC) sum rule [5],
1 0 dxg γ(Box) 2 (x, Q 2 , P 2 ) = 0 .(4)
In fact, we will see from the OPE analysis in Sec.3 that the BC sum rule for g γ 2 generally holds in the deep-inelastic region Q 2 ≫ P 2 . Now we apply the the WW relation to the above results for g γ(Box) 1 and g γ(Box) 2 , and define
g γWW(Box) 2 (x, Q 2 , P 2 ) ≡ −g γ(Box) 1 (x, Q 2 , P 2 ) + 1 x dy y g γ(Box) 1 (y, Q 2 , P 2 ) .(5)
The difference, g
γ(Box) 2 = g γ(Box) 2 − g γWW(Box) 2
, is then given by
g γ(Box) 2 = 3α π N f e 4 (2x − 2 − ln x) ln Q 2 P 2 −2(2x − 1) ln x + 2(x − 1) + ln 2 x ,(6)
and its n-th moment is In Fig.3, we plot the parton model results, g
g γ(Box) 2, n = 3α π N f e 4 n − 1 n × − 1 n(n + 1) ln Q 2 P 2 + 2 (n + 1) 2 − 2 n 2 . (7)γ(Box) 1 , g γ(Box) 2 and g γ(Box) 2
given in Eqs. (2,3,6), as functions of x for Q 2 = 30 GeV 2 and P 2 = 1 GeV 2 . We can see that g
γ(Box) 2 is comparable in magnitude with g γ(Box) 2
for large region of x. It is now expected by analogy with the nucleon case that g γ(Box) 2 arises from the twist-3 effects. In fact, we will be convinced in Sec. 3 that g γ(Box) 2 is the twist-3 contribution.
3. OPE and Pure QED Effects on g γ 2 Applying OPE for the product of two electromagnetic currents, we get for the µ-ν antisymmetric part,
i d 4 xe iq·x T (J µ (x)J ν (0)) A = −iǫ µνλσ q λ n=1,3,··· 2 Q 2 n q µ1 · · · q µn−1 (8) × i E n (2)i R σµ1···µn−1 (2)i + i E n (3)i R σµ1···µn−1 (3)i ,
where R n (2)i and R n (3)i are the twist-2 and twist-3 operators, respectively, and E n (2)i and E n (3)i are corresponding coefficient functions. The twist-2 operators R n (2)i have totally symmetric Lorentz indices σµ 1 · · · µ n−1 , while the indices of twist-3 operators R n (3)i are totally symmetric among µ 1 · · · µ n−1 but antisymmetric under σ ↔ µ i . Thus the "matrix elements" of operators R n and R n (3)i sandwiched by two photon states with momentum p have the following forms:
0|T (A ρ (−p)R σµ1···µn−1 (2)i A τ (p))|0 Amp = −ia n (2)i ǫ ρτ α {σ p µ1 · · · p µn−1} p α − (traces) , 0|T (A ρ (−p)R σµ1···µn−1 (3)i A τ (p))|0 Amp = −ia n (3)i ǫ ρτ α [σ, p {µ1 ] · · · p µn−1} p α − (traces),
where the suffix 'Amp' stands for the amputation of the external photon lines. Then the moment sum rules for g γ 1 and g γ 2 are written as follows:
1 0 dxx n−1 g γ 1 (x, Q 2 , P 2 ) = i a n (2)i E n (2)i (Q 2 ), 1 0 dxx n−1 g γ 2 (x, Q 2 , P 2 ) = n − 1 n × − i a n (2)i E n (2)i (Q 2 ) + i a n (3)i E n (3)i (Q 2 ) .
From this general OPE analysis we conclude:
(i) The BC sum rule holds for g γ 2 ,
1 0 dxg γ 2 (x, Q 2 , P 2 ) = 0 .(9)
(ii) The twist-2 contribution to g γ 2 is expressed by the WW relation
− n − 1 n i a n (2)i E n (2)i (Q 2 ) = 1 0 dxx n−1 g γWW 2 (x, Q 2 , P 2 ) , (10) with g γWW 2 (x, Q 2 , P 2 ) ≡ −g γ 1 (x, Q 2 , P 2 ) + 1 x dy y g γ 1 (y, Q 2 , P 2 ) . (11) (iii) The difference, g γ 2 = g γ 2 − g γWW 2 ,
contains only the twist-3 contribution,
1 0 dxx n−1 g γ 2 (x, Q 2 , P 2 ) = n − 1 n i a n (3)i E n (3)i (Q 2 ) . (12)
Let us now analyze the twist-3 part of g γ 2 in pure QED, i.e., switching off the quark-gluon coupling, in the framework of OPE and the renormalization group (RG) method. In this case the relevant twist-3 operators are the quark and photon operators, which are given, respectively, by
R^{\sigma\mu_1\cdots\mu_{n-1}}_{(3)q} = i^{n-1} e_q^2 \, \bar{\psi} \gamma_5 \gamma^{[\sigma,} D^{\{\mu_1]} \cdots D^{\mu_{n-1}\}} \psi , \qquad (13)

R^{\sigma\mu_1\cdots\mu_{n-1}}_{(3)\gamma} = \frac{1}{4} i^{n-1} \epsilon^{[\sigma}{}_{\alpha\beta\gamma} F^{\alpha\{\mu_1]} \partial^{\mu_2} \cdots \partial^{\mu_{n-1}\}} F^{\beta\gamma} , \qquad (14)
where e q is the quark charge, D µ = ∂ µ + ieA µ is the covariant derivative, F αβ is the photon field strength, { } means complete symmetrization over the indices, while [σ, µ j ] denotes antisymmetrization on σµ j , and trace terms are omitted. With the choice of the above photon operator R n (3)γ , we have a n (3)γ = 1 . Solving the RG equation for the coefficient functions corresponding to operators R n (3)q and R n (3)γ , we obtain, to lowest order in α,
E^n_{(3)q}\!\left( \frac{Q^2}{\mu^2}, \alpha \right) = 1 + O(\alpha) , \qquad (15)

E^n_{(3)\gamma}\!\left( \frac{Q^2}{\mu^2}, \alpha \right) = \frac{\alpha}{8\pi} K^n_{(3)q} \ln\frac{Q^2}{\mu^2} + \frac{\alpha}{4\pi} 3 e_q^4 B^n_{(3)\gamma} ,
where K n (3)q is the mixing anomalous dimension between the twist-3 photon operator R n (3)γ and quark operator R n (3)q and is given by
K^n_{(3)q} = -24 e_q^4 \, \frac{1}{n(n+1)} . \qquad (16)
The "matrix element" a n (3)q of the quark operator R n (3)q between the photon states is calculated to be a n
(3)q = α 4π − 1 2 K n (3)q ln P 2 µ 2 + 3e 4 q A n (3)q .(17)
Inserting Eqs. (15)-(17) into (12) and remembering a^n_{(3)\gamma} = 1, we obtain for the n-th moment of \bar{g}_2^{\gamma} in pure QED,

\bar{g}_{2,n}^{\gamma} \Big|_{\mathrm{QED}} = \frac{n-1}{n} \frac{\alpha}{4\pi} 3 e_q^4 \left[ -\frac{4}{n(n+1)} \ln\frac{Q^2}{P^2} + A^n_{(3)q} + B^n_{(3)\gamma} \right] . \qquad (18)
The dependence on the renormalization point µ disappears. And we note that although A n (3)q and B n (3)γ are individually renormalization-scheme dependent, the sum A n (3)q + B n (3)γ is not. The calculation of box diagrams in Fig.2 gives
A^n_{(3)q} + B^n_{(3)\gamma} = 8 \left[ \frac{1}{(n+1)^2} - \frac{1}{n^2} \right] . \qquad (19)
Now adding all the quark contributions of active flavours and replacing 3e_q^4 in (18) with 3N_f⟨e^4⟩, we find that the result is nothing but ḡ^{γ(Box)}_{2,n} given in Eq. (7), which was derived from the box-diagram calculation. Thus it is now clear that ḡ^{γ(Box)}_2 is indeed the twist-3 contribution.

4. QCD Effects on ḡ^γ_2

We now switch on the quark-gluon coupling and consider the QCD effects on ḡ^γ_2, the twist-3 part of g^γ_2. In the nucleon case, the analysis of ḡ^{nucl}_2, the twist-3 part of the structure function g^{nucl}_2, turns out to be very complicated [6]. This is due to the fact that the number of participating twist-3 operators grows with spin (the moment of g^{nucl}_2) and that these operators mix among themselves through renormalization. Therefore, the Q^2 evolution equation for the moments of ḡ^{nucl}_2 cannot be written in a simple form, but as a sum of terms whose number increases with spin. The same is true for ḡ^γ_2. However, in certain limits the analysis of the moments of ḡ^γ_2 becomes tractable. One is when n is a small number, and the other is the large-N_C (number of colours) limit for the analysis of ḡ^{γ(NS)}_2, the flavour non-singlet part of ḡ^γ_2. Indeed, for n = 3 (the non-trivial lowest moment), we can get all the information on the necessary anomalous dimensions of the participating operators, and thus we obtain the LO QCD prediction for the third moment of ḡ^γ_2 [4]. On the other hand, for large N_C, we can evade the problem of operator mixing for ḡ^{γ(NS)}_2 and obtain its moments in a compact form for all n. In the case of ḡ^{nucl(NS)}_2, the twist-3 and flavour non-singlet part of the nucleon structure function g^{nucl}_2, it has been observed [7,8] that at large N_C the operators involving the gluon field strength G_{µν} decouple from the evolution equation of ḡ^{nucl(NS)}_2, and the whole contribution in LO is represented by one type of operators. In the photon case, the relevant twist-3 operators for ḡ^{γ(NS)}_2 are
R^{\sigma\mu_1\cdots\mu_{n-1}}_{(3)F} = i^{n-1} \, \bar{\psi} \gamma_5 \gamma^{[\sigma,} D^{\{\mu_1]} \cdots D^{\mu_{n-1}\}} Q_{ch} \, \psi , \qquad (20)
and the photon operators R^n_{(3)\gamma} given in Eq. (14). Here D_\mu = \partial_\mu - i g A^a_\mu T^a + i e A_\mu is the covariant derivative, and Q_{ch} is the quark-charge factor, defined by Q_{ch} = Q^2 - \langle e^2 \rangle \mathbf{1}, where Q is the N_f \times N_f quark-charge matrix, \langle e^2 \rangle = \sum_{i=1}^{N_f} e_i^2 / N_f, and \mathbf{1} is the N_f \times N_f unit matrix. In the approximation of neglecting terms of order O(1/N_C^2), and thus putting 2C_F = C_G, the mixing anomalous dimensions between R^n_{(3)F} and the other hadronic (quark and gluon) operators turn out to vanish. Those which remain non-zero are only the (F, F) element and the mixing anomalous dimension between R^n_{(3)F} and the photon operator R^n_{(3)\gamma}:
\gamma^{(0)}_{n,FF} = 8 C_F \left( S_n - \frac{1}{4} - \frac{1}{2n} \right) , \qquad (21)

with S_n = \sum_{j=1}^{n} \frac{1}{j}, and

K^{(0)}_{n,F} = -24 N_f \left( \langle e^4 \rangle - \langle e^2 \rangle^2 \right) \frac{1}{n(n+1)} . \qquad (22)
The corrections are of O(1/N_C^2), about 10% for QCD (N_C = 3). Using the above results, we find that, for large N_C, the n-th moment of \bar{g}_2^{\gamma(NS)} in LO QCD is given by

\int_0^1 dx \, x^{n-1} \bar{g}_2^{\gamma(NS)}(x, Q^2, P^2) = \frac{n-1}{n} \frac{\alpha}{4\pi} \frac{2\pi}{\beta_0 \alpha_s(Q^2)} K^{(0)}_{n,F} \frac{1}{1 + \gamma^{(0)}_{n,FF}/2\beta_0} \left[ 1 - \left( \frac{\alpha_s(Q^2)}{\alpha_s(P^2)} \right)^{\gamma^{(0)}_{n,FF}/2\beta_0 + 1} \right] , \qquad (23)
where α s (Q 2 ) is the QCD running coupling constant and β 0 = (11N C − 2N f )/3 is the one-loop coefficient of the β function. We perform the Mellin transform of Eq.(23) to get g γ(N S) 2 (x, Q 2 , P 2 ) as a function of x. The result is plotted in Fig.4. Comparing with the pure QED box-graph contribution, we find that the LO QCD effects are sizable and tend to suppress the structure function g γ(N S) 2 both in the large x and small x regions, so that the vanishing n = 1 moment of g γ(N S) 2 , i.e. the BC sum rule, is preserved.
Conclusion
We have analyzed the twist-3 effects in g^γ_2 for the virtual photon target, in pure QED interaction as well as in LO QCD. We have found that the twist-3 contribution is appreciable for the photon, in contrast to the nucleon case. In this sense, the virtual photon structure function g^γ_2 provides us with a good testing ground for studying the twist-3 effects. We expect that future polarized versions of e+e− colliders may bring us important information on the spin structure of the photon.
Figure 1: Deep inelastic scattering on a polarized virtual photon in polarized e+e− collision, e+e− → hadrons (quarks and gluons). The mass squared of the probe

2. g^γ_2 in Parton Model

Let us begin with the analysis of g^γ_1 and g^γ_2

Figure 2: The box diagrams contributing to g^γ_1 and g^γ_2 in pure QED interaction.

Figure 3: The box-diagram contributions to g^γ_1 (dashed line), g^γ_2 (solid line) and ḡ^γ_2 (dash-dotted line) for Q^2 = 30 GeV^2 and P^2 = 1 GeV^2. The 2x − 1 line shows the leading logarithmic term of g^γ_1.
S. Wandzura and F. Wilczek, Phys. Lett. B172, 195 (1977).
E143 Collaboration, K. Abe et al., Phys. Rev. Lett. 76, 587 (1996).
E155 Collaboration, P. L. Anthony et al., Phys. Lett. B458, 529 (1999).
H. Baba, K. Sasaki and T. Uematsu, Phys. Rev. D65, 114018 (2002).
H. Burkhardt and W. N. Cottingham, Ann. Phys. 56, 453 (1970).
E. V. Shuryak and A. I. Vainshtein, Nucl. Phys. B199, 451 (1982); A. Ali, V. M. Braun and G. Hiller, Phys. Lett. B266, 117 (1991).
K. Sasaki, Phys. Rev. D58, 094007 (1998).
| []
|
[
"Efficient Joint Learning for Clinical Named Entity Recognition and Relation Extraction Using Fourier Networks: A Use Case in Adverse Drug Events",
"Efficient Joint Learning for Clinical Named Entity Recognition and Relation Extraction Using Fourier Networks: A Use Case in Adverse Drug Events"
]
| [
"Anthony Yazdani \nFaculty of medicine\nDepartment of radiology and medical informatics\nUniversity of Geneva\nData science for digital health\n\n",
"Dimitrios Proios \nFaculty of medicine\nDepartment of radiology and medical informatics\nUniversity of Geneva\nData science for digital health\n\n",
"Hossein Rouhizadeh \nFaculty of medicine\nDepartment of radiology and medical informatics\nUniversity of Geneva\nData science for digital health\n\n",
"Douglas Teodoro \nFaculty of medicine\nDepartment of radiology and medical informatics\nUniversity of Geneva\nData science for digital health\n\n"
]
| [
"Faculty of medicine\nDepartment of radiology and medical informatics\nUniversity of Geneva\nData science for digital health\n",
"Faculty of medicine\nDepartment of radiology and medical informatics\nUniversity of Geneva\nData science for digital health\n",
"Faculty of medicine\nDepartment of radiology and medical informatics\nUniversity of Geneva\nData science for digital health\n",
"Faculty of medicine\nDepartment of radiology and medical informatics\nUniversity of Geneva\nData science for digital health\n"
]
| []
| Current approaches for clinical information extraction are inefficient in terms of computational costs and memory consumption, hindering their application to process large-scale electronic health records (EHRs). We propose an efficient end-to-end model, the Joint-NER-RE-Fourier (JNRF), to jointly learn the tasks of named entity recognition and relation extraction for documents of variable length. The architecture uses positional encoding and unitary batch sizes to process variable length documents and uses a weight-shared Fourier network layer for low-complexity token mixing. Finally, we reach the theoretical computational complexity lower bound for relation extraction using a selective pooling strategy and distance-aware attention weights with trainable polynomial distance functions. We evaluated the JNRF architecture using the 2018 N2C2 ADE benchmark to jointly extract medication-related entities and relations in variable-length EHR summaries. JNRF outperforms rolling window BERT with selective pooling by 0.42%, while being twice as fast to train. Compared to state-of-the-art BiLSTM-CRF architectures on the N2C2 ADE benchmark, results show that the proposed approach trains 22 times faster and reduces GPU memory consumption by 1.75 folds, with a reasonable performance tradeoff of 90%, without the use of external tools, hand-crafted rules or post-processing. Given the significant carbon footprint of deep learning models and the current energy crises, these methods could support efficient and cleaner information extraction in EHRs and other types of large-scale document databases. | 10.48550/arxiv.2302.04185 | [
"https://export.arxiv.org/pdf/2302.04185v1.pdf"
]
| 256,662,515 | 2302.04185 | 1886f92825dc8327ee0c4ee00888941dc89b7164 |
Efficient Joint Learning for Clinical Named Entity Recognition and Relation Extraction Using Fourier Networks: A Use Case in Adverse Drug Events
Anthony Yazdani
Faculty of medicine
Department of radiology and medical informatics
University of Geneva
Data science for digital health
Dimitrios Proios
Faculty of medicine
Department of radiology and medical informatics
University of Geneva
Data science for digital health
Hossein Rouhizadeh
Faculty of medicine
Department of radiology and medical informatics
University of Geneva
Data science for digital health
Douglas Teodoro
Faculty of medicine
Department of radiology and medical informatics
University of Geneva
Data science for digital health
Efficient Joint Learning for Clinical Named Entity Recognition and Relation Extraction Using Fourier Networks: A Use Case in Adverse Drug Events
Current approaches for clinical information extraction are inefficient in terms of computational costs and memory consumption, hindering their application to process large-scale electronic health records (EHRs). We propose an efficient end-to-end model, the Joint-NER-RE-Fourier (JNRF), to jointly learn the tasks of named entity recognition and relation extraction for documents of variable length. The architecture uses positional encoding and unitary batch sizes to process variable length documents and uses a weight-shared Fourier network layer for low-complexity token mixing. Finally, we reach the theoretical computational complexity lower bound for relation extraction using a selective pooling strategy and distance-aware attention weights with trainable polynomial distance functions. We evaluated the JNRF architecture using the 2018 N2C2 ADE benchmark to jointly extract medication-related entities and relations in variable-length EHR summaries. JNRF outperforms rolling window BERT with selective pooling by 0.42%, while being twice as fast to train. Compared to state-of-the-art BiLSTM-CRF architectures on the N2C2 ADE benchmark, results show that the proposed approach trains 22 times faster and reduces GPU memory consumption by 1.75 folds, with a reasonable performance tradeoff of 90%, without the use of external tools, hand-crafted rules or post-processing. Given the significant carbon footprint of deep learning models and the current energy crises, these methods could support efficient and cleaner information extraction in EHRs and other types of large-scale document databases.
Introduction
Adverse drug events (ADEs) are defined as any injury resulting from medication use and comprise the largest category of adverse events (Leape et al., 1991;Bates et al., 1995). Serious ADEs have been estimated to cost from $30 to $137 billion in ambulatory settings in the US (Johnson and Booman, 1996), and their costs have been doubling since then (Ernst and Grizzle, 2001). Due to safety concerns, between 21% to 27% of marketed drugs in the US have received black-box warnings or have been withdrawn by the Food and Drug Administration (FDA) within the first 16 years of marketing (Frank et al., 2014).
Clinical notes stored in electronic health record (EHRs) systems are a valuable source of information for pharmacovigilance (Boland and Tatonetti, 2015). However, only 1% of ADEs recorded in EHRs are reported to ADE registries, such as the FDA Adverse Event Reporting System (FAERS), while coded diagnoses have low sensitivity for ADEs (Nadkarni, 2010;Classen et al., 2011). Recognizing medication-related entities in clinical notes, extracting relations among them, and structuring this information can help identify ADEs in early stages of the drug marketing process, thus improving patient safety (Luo et al., 2017).
The state-of-the-art for biomedical named entity recognition (NER) and relation extraction (RE) is dominated by bidirectional LSTM (Hochreiter and Schmidhuber, 1997) or BERT (Devlin et al., 2018) architectures, combined with a CRF (Lafferty et al., 2001) layer and often hand-crafted rules (Xu et al., 2017;Christopoulou et al., 2020;Wei et al., 2020;Henry et al., 2020;Fang et al., 2021). Despite the high performance of end-to-end (E2E) NER+RE models, they have some important limitations imposed by the model complexity, e.g., quadratic in terms of entity types in the CRF layer or in terms of tokens in the dot-product attention mechanisms (Sutton et al., 2012;Shen et al., 2021), which hinders their effective application in the biomedical domain due to its large number of entities and large size of free text databases.
A particularity of NER and RE for pharmacovigilance is that efficient recall of entities and relations is of utmost importance, as we would like to avoid missing a serious ADE. Nevertheless, current approaches tend to automatically discard long-distance (or inter-passage) relations (Yao et al., 2019; Christopoulou et al., 2020). Moreover, EHR documents vary significantly in length, containing from a few hundred tokens for simpler patient records up to several thousand tokens for more complex patients (e.g., chronic diseases) (Henry et al., 2020). Due to their computational complexity, these methods cannot process EHRs in their entirety without resorting to impractical and/or inefficient techniques such as windowing strategies (Ding et al., 2020; Pappagari et al., 2019; Yang et al., 2016).
Ongoing research is predominantly performance-driven, leading to a resurgence of resource-intensive models that neglect the carbon footprint of deep learning in favor of often marginal improvements in effectiveness (Wei et al., 2020; Fang et al., 2021; Naderi et al., 2021). As a consequence of the technical constraints induced by highly complex models, these methods are currently associated with a significant excess of carbon emissions (Gibney, 2022). The most direct impact of training and deploying a machine learning model is the emission of greenhouse gases due to the increased hardware energy consumption (Ligozat and Luccioni, 2021). Therefore, a direct way to reduce the ecological impact of training and deploying machine learning models is to reduce the training and inference time, i.e., to provide the community with low-memory and low-computational-cost models.
To tackle these limitations and issues, we propose the Joint-NER-RE-Fourier (JNRF) model with a reduced algorithmic complexity for information extraction. We combine positional encoding with unitary batch size training so that the model processes automatically variable size EHRs with consistent performance. We use a Fourier network to contextualize tokens with fair time and space complexity, allowing to process long documents with low-resource hardware and avoid rolling window strategies. Finally, we reach the theoretical computational complexity lower bound for relation extraction using a selective pooling strategy and distance-aware attention weights with trainable polynomial distance functions. The main contributions of this paper are as follows:
• We propose a general, lightweight, and efficient model to jointly detect clinical entities and multiple relations, while requiring low computational power and memory, without the use of external tools or handcrafted rules. The code is available at https://github.com/ds4dh/JNRF.
• We show that this model can be applied to variable length documents, without any architectural changes. More importantly, it has robust performance independent of the document size.
• To the best of our knowledge, this is the first effort to model ADE and medication extraction at the document level. Unlike existing models in the literature, we demonstrate that our approach is able to identify inter-passage relations without the need of window/input size tuning, post-processing or any further engineering.
Related work
The main methods to produce E2E information extraction systems are the so-called pipeline (Sorokin and Gurevych, 2017; Chapman et al., 2018; Christopoulou et al., 2020) and joint modeling (Xu et al., 2017; Wei et al., 2020; Bekoulis et al., 2018; Nguyen and Verspoor, 2019). The pipeline method consists of training two independent modules, one for NER and one for RE. These models naturally suffer from cascading errors, as the error signal from one module is not back-propagated to the other. Joint modeling aims to overcome this shortcoming by learning a unique model on a combination of NER and RE losses. Joint modeling tends to outperform pipeline methods, consistently achieving state-of-the-art performance (Wei et al., 2020; Fang et al., 2021; Bekoulis et al., 2018; Nguyen and Verspoor, 2019). In addition, joint modeling techniques have some major advantages, as they allow training the two models at the same time, saving time and computation, and minimizing engineering efforts. In both cases, the E2E approach has been dominated by LSTM-CRF architectures (Xu et al., 2017; Christopoulou et al., 2020; Wei et al., 2020; Henry et al., 2020). However, they suffer from two main limitations: i) the computational complexity of the CRF layer (Jeong et al., 2009); and ii) the auto-regressive nature of the LSTM model, which prevents full parallel training (Xu et al., 2021).
Joint learning in the general domain
Bekoulis et al. (2018) proposed a joint neural model using CRFs and a multi-headed selection module allowing for multiple relation detection. The model requires the computation of scores on every pair of input tokens, which consumes O(n^2) time and space. To improve generalisation, their approach does not rely on external NLP tools, such as a part-of-speech (POS) tagger or dependency parser. More recently, Nguyen and Verspoor (2019) proposed a joint BiLSTM-CRF architecture combined with a biaffine attention mechanism (Dozat and Manning, 2016), improving upon Bekoulis et al. (2018) in terms of time complexity. Luan et al. (2019) utilize dynamic span graphs to learn useful information from a broader context. The graph is built by picking the most confident entity spans and linking them with confidence-weighted relation types and correlations. The model does not require pre-processing syntactic tools and significantly outperforms the previous approaches across several entity-related tasks. Lastly, DYGIE++ enumerates candidate text spans and encodes them using BERT and task-specific message updates passed over a text span graph to achieve state-of-the-art performance across entity, relation, and event extraction tasks.
Joint learning for medication-related entity and relation extraction
Most of the medication-related NER and RE studies are performed using the N2C2 ADE benchmark (Henry et al., 2020). Wei et al. (2020) proposed a system consisting of an LSTM-CRF layer for NER jointly learned with a CNN-RNN layer for RE. They utilized CLAMP (Soysal et al., 2018) for the text pre-processing pipeline, including sentence boundary detection and POS labeling, and to extract a set of hand-crafted features to feed the NER module. Similarly to approaches for general corpora, Fang et al. (2021) replaced the LSTM layer by a BERT model for feature extraction, achieving a 1.5 percentage point improvement in the strict F1-score metric. In their approach, a CRF layer is still used on top of a BERT model for the NER part, while a multi-head selection module (Bekoulis et al., 2018) combines the output of the BERT and CRF layers to predict relations among the detected entities.
Fourier networks
To overcome algorithmic complexity limitations in the Transformer architecture (Vaswani et al., 2017), Fourier networks (FNet) have been proposed (Lee-Thorp et al., 2021). The main innovation of FNets is that the classic Transformer attention mechanism can be mimicked using simple, non-trainable token mixing strategies. One can obtain O(n × log(n)) complexity using the Cooley-Tukey Fast Fourier Transform algorithm (Cooley and Tukey, 1965) instead of the attention mechanism, which consumes O(n^2) with respect to the input sequence length (n). FNets achieve 92 and 97% of BERT-Base and BERT-Large (Devlin et al., 2018) accuracy on the GLUE benchmark, but train 70-80% faster on GPUs/TPUs. In addition to matching the accuracy of competing linear-complexity transformers (Jaegle et al., 2021; Wu et al., 2021; Lee-Thorp et al., 2021), the FNet is faster and more memory-efficient due to the unparameterized contextualization layer, i.e., it has no parameters to train for token mixing and thus requires virtually no additional memory.
Approach
In this section, we provide a step-by-step formal description of the proposed architecture using the forward pass representations and operations, as illustrated in Figure 1. First, we describe i) the vectorial token representation strategy, then ii) the language/contextualization layer, next iii) how the NER and RE task is jointly modelled, and finally iv) the cost functions used. Lastly, we conduct a computational complexity analysis of the proposed model.
Model formalisation
Token representation layer: We use static embeddings (BioClinicalBERT-base (Alsentzer et al., 2019) in our experiments) and freeze these parameters during training for better generalization.
We also decided to use positional encoding as in Vaswani et al. (2017) so as not to fix a predefined input length.
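The encoding formula itself is not restated in the paper; a minimal sketch of the sinusoidal positional encoding of Vaswani et al. (2017), which can be generated for any sequence length and therefore imposes no maximum document size (the function name and usage line are ours):

```python
import torch

def sinusoidal_positional_encoding(n: int, d: int) -> torch.Tensor:
    """Return the (n, d) sinusoidal positional encoding of Vaswani et al. (2017).
    Assumes an even embedding dimension d."""
    position = torch.arange(n, dtype=torch.float32).unsqueeze(1)            # (n, 1)
    div_term = torch.exp(torch.arange(0, d, 2, dtype=torch.float32)
                         * (-torch.log(torch.tensor(10000.0)) / d))         # (d/2,)
    pe = torch.zeros(n, d)
    pe[:, 0::2] = torch.sin(position * div_term)
    pe[:, 1::2] = torch.cos(position * div_term)
    return pe

# Usage: added to the frozen static token embeddings E of shape (n, d)
# E = E + sinusoidal_positional_encoding(E.size(0), E.size(1))
```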
Language model: We use FNets to perform token contextualization with fair time and space complexity. We integrate a FNet layer in our architecture as follows:
E (1) = MLP(E), E (2a) = EN LM (E (1) ), E (2b) = RE LM (E (1) ),
where E ∈ R n×d is the embedding matrix, in which each row represents a token, following their order in the input sequence (i.e., the document), n the input sequence length, d the token embedding dimension, MLP is a token-wise multilayer perceptron, EN LM and RE LM are NER and RE FNets respectively. In fact, we fully share the weights between EN LM and RE LM to further reduce the number of trainable parameters. We use superscripts ( (1) , (2a) , ...) to denote the transformed versions of the original embedding matrix.
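As an illustration of what such a weight-shared Fourier layer could look like in PyTorch (the paper's released code is not reproduced here; the class name, depth and hidden sizes below are our assumptions), token mixing follows Lee-Thorp et al. (2021): the real part of a 2D FFT over the sequence and feature dimensions, followed by a position-wise feed-forward sub-layer, each wrapped in a residual connection and layer normalisation:

```python
import torch
import torch.nn as nn

class FNetBlock(nn.Module):
    """Parameter-free Fourier token mixing followed by a position-wise feed-forward layer."""

    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (n, d_model), batch size 1
        # Token mixing: real part of the 2D discrete Fourier transform, O(n log n), no weights
        mixed = torch.fft.fft2(x).real
        x = self.norm1(x + mixed)
        x = self.norm2(x + self.ff(x))
        return x

# A single stack reused by the NER and RE branches (weight sharing as described above)
shared_fnet = nn.Sequential(*[FNetBlock(d_model=768, d_ff=3072) for _ in range(2)])
```

Because the mixing step has no parameters, sharing the stack between the two branches only requires reusing the same module instance, as sketched in the last line.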
NER and RE layers: We thus have E^(2) = E^(2a) = E^(2b), and subsequently compute:

l = EN_MLP(E^(2)), E^(3) = RE_MLP(E^(2)),
where EN_MLP and RE_MLP are two independent token-wise MLPs. EN_MLP maps the contextualized embeddings E^(2) to logits l ∈ R^(n×c) for classification, where c is the number of entity classes, and RE_MLP maps E^(2) to a third version of the embedding matrix, E^(3). We then compute a priori token classes

a_i = argmax(l_i), for i = 1, ..., n,

and apply a selective pooling strategy, i.e., we pool candidate entities for relation extraction from E^(3) using a_i. Some relations may never exist for a particular relation extraction task.
We use L to denote the set of entities that can only be linked to those of a set H. To avoid generating impossible candidate pairs, we perform two selective poolings for these two different sets: the key K ∈ R^(|L|×d) and the query Q ∈ R^(|H|×d). We then produce t heads,

K_(j) = K^(j)_MLP(K), Q_(j) = Q^(j)_MLP(Q),

where K^(j)_MLP and Q^(j)_MLP are token-wise MLPs and t represents the number of relation types. We then compute the scores between the query and the key entities,

A_(j) = Q_(j) K_(j)^T.
As the RE module is distance agnostic, we incorporate a trainable polynomial distance function to modify the logits as a function of distance between tokens:
Ψ_(j) = A_(j) + α_(j1) × D^2 + α_(j2) × D + α_(j3) × I,
where D_(φψ) represents the number of tokens separating the φ-th and ψ-th pooled entities in the original input embedding matrix. The α's are learned through the minimization of the loss function and thus require no predefined hand-crafted rules regarding short/long-distance relations.
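A compact sketch of how the selective pooling, the per-relation-type heads and the distance-corrected scores Ψ_(j) could be put together in PyTorch (the class name, the mapping of entity classes to the H and L sets, and the reading of I as a constant offset are our assumptions):

```python
import torch
import torch.nn as nn

class RelationScorer(nn.Module):
    """Multi-head scoring over pooled candidate entities with a learned quadratic
    distance correction, one head per relation type."""

    def __init__(self, d_model: int, n_rel_types: int):
        super().__init__()
        self.q_heads = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(n_rel_types))
        self.k_heads = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(n_rel_types))
        self.alpha = nn.Parameter(torch.zeros(n_rel_types, 3))  # per-head (a1, a2, a3)

    def forward(self, e3, pred_classes, head_class_ids, link_class_ids, positions):
        # Selective pooling: keep only tokens whose predicted class belongs to H or L
        h_idx = torch.isin(pred_classes, head_class_ids).nonzero(as_tuple=True)[0]
        l_idx = torch.isin(pred_classes, link_class_ids).nonzero(as_tuple=True)[0]
        Q, K = e3[h_idx], e3[l_idx]                                 # (|H|, d), (|L|, d)
        # Token distance between every candidate pair
        D = (positions[h_idx][:, None] - positions[l_idx][None, :]).abs().float()
        scores = []
        for j, (wq, wk) in enumerate(zip(self.q_heads, self.k_heads)):
            A = wq(Q) @ wk(K).T                                     # (|H|, |L|)
            a1, a2, a3 = self.alpha[j]
            scores.append(A + a1 * D**2 + a2 * D + a3)              # distance-corrected Psi_(j)
        return torch.stack(scores, dim=-1)                          # (|H|, |L|, t)
```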
Loss function: We use a cross-entropy loss for both NER and RE:
L_NER = -(1/n) Σ_{i=1..n} Σ_{k=1..c} s(l_{i,k}) · e_{i,k},

L_RE = -(1/(|H||L|)) Σ_{h=1..|H|} Σ_{p=1..|L|} Σ_{j=1..t} s(Ψ^{(j)}_{h,p}) · r_{h,p,j},

where s(x_{q,z}) = log(exp(x_{q,z}) / Σ_b exp(x_{q,b})), and e and r are the target entities and relations, respectively. Finally, we use the sum of L_NER and L_RE as the final loss function to minimize:

L = L_NER + L_RE.
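Since s(·) is a log-softmax, both terms reduce to standard cross-entropies in PyTorch. A minimal sketch follows; the encoding of relation targets as a single class index per candidate pair (including a "no relation" class) is our assumption:

```python
import torch.nn.functional as F

def joint_loss(token_logits, entity_targets, rel_logits, rel_targets):
    """L = L_NER + L_RE.

    token_logits:   (n, c) NER logits l
    entity_targets: (n,)   gold entity class indices
    rel_logits:     (|H|, |L|, t) distance-corrected scores Psi
    rel_targets:    (|H|, |L|)    gold relation type index per candidate pair
    """
    loss_ner = F.cross_entropy(token_logits, entity_targets)
    loss_re = F.cross_entropy(rel_logits.flatten(0, 1), rel_targets.flatten())
    return loss_ner + loss_re
```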
Computational complexity
The complexity of the RE model depends on the number of neighbors considered for candidate pairs of entities, independently of the method. If one wants to detect relations between two entities regardless of the distance, then the lower bound is O(t×|H|×|L|), or min(O(t×|L|), O(t×|H|)) if one fixes the number of candidate neighbors. We decided not to set a maximum number of neighbors for candidate pair generation. Thus, the RE model uses O(t × |H| × |L|) through selective pooling. For a fixed RE method, the complexity of the whole model is driven by the NER component. We achieved fair complexity by using an FNet (O(n×log(n))). Additionally, we used a softmax layer in place of a CRF, which uses O(n × c) instead of the CRF's O(n × c^2). This method also takes advantage of parallelization, making it time-complexity optimised.
Benchmark dataset
We used the 2018 N2C2 ADE dataset 1 to evaluate our model. The data consists of 505 annotated discharge summaries from MIMIC-III (Johnson et al., 2016). The passages contain annotations for strength, form, dosage, frequency, route, duration, reason, and ADE entities, each associated with a drug entity. We used the official splits to train and evaluate our model, with 303 records for training and 202 for testing. Data summary statistics are presented in the Appendix A.1. Duration and ADE entities and their respective relations are not as well represented in the dataset (see Table 6). The document lengths vary widely depending on the patient's clinical history (see Table 7). There is a gap of more than 10k tokens between the smallest and largest documents (224 and 13,990, respectively), which is too large to use padding efficiently. Moreover, the average document size is almost 8x larger than the typical input size of standard BERT-like implementations (4045 vs. 512, respectively).

1 Dataset available at https://portal.dbmi.hms.harvard.edu/.
We trained our models in three different data representation scenarios, where we use whole documents, sentences only, and a mixed configuration where we use both documents and sentences as training instances. Performance was then evaluated at both document and sentence levels for these different training scenarios. Our models were compared to baseline models based on MLP with selective pooling and a sliding window BioClinicalBERT-base model (WBERT) (Alsentzer et al., 2019) with selective pooling, both trained and evaluated using the whole documents.
We implemented our models using PyTorch and a single Tesla V100 GPU. We used Adam (Kingma and Ba, 2014), mini-batches of size 1 and 64 for documents and sentences, respectively. Models were trained using gradient accumulation to avoid using padding tokens. The final model was selected based on the best dev F1-score obtained during training. In the following, we present the results of our experiments using micro-lenient precision, recall, and F1-score using the challenge's official evaluation tool.
Data pre-processing
We split the provided training data into train and dev sets composed of 242 and 61 documents, respectively. We tokenize documents using the BioClinicalBERT-base wordpiece tokenizer from HuggingFace (Wu et al., 2016; Wolf et al., 2019). For sentence-level modeling, we first tokenize sentences using Spacy (Honnibal and Montani, 2017) and then apply the aforementioned wordpiece algorithm. We encode the gold entity boundaries in the BIO scheme. The embedding matrix is initialized from BioClinicalBERT-base static embeddings. No other form of data pre-processing or external feature injection has been implemented.
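A sketch of this tokenisation and BIO-encoding step using the HuggingFace tokenizer; the label-alignment convention for word pieces (the first piece keeps the B-/I- tag, the remaining pieces inherit the corresponding I- tag) is our assumption:

```python
from transformers import AutoTokenizer

# emilyalsentzer/Bio_ClinicalBERT hosts the BioClinicalBERT-base vocabulary used for
# both the wordpiece tokenisation and the frozen static embeddings.
tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")

def bio_encode(words, spans):
    """words: list of whitespace tokens; spans: list of (start_word, end_word, label).
    Returns word pieces and one BIO tag per word piece."""
    tags = ["O"] * len(words)
    for start, end, label in spans:
        tags[start] = f"B-{label}"
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"
    pieces, piece_tags = [], []
    for word, tag in zip(words, tags):
        word_pieces = tokenizer.tokenize(word)
        if not word_pieces:
            continue
        inside = tag.replace("B-", "I-")
        pieces.extend(word_pieces)
        piece_tags.extend([tag] + [inside] * (len(word_pieces) - 1))
    return pieces, piece_tags
```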
End-to-end effectiveness

Table 1 shows the performance of the JNRF model in multiple settings. The best performance was obtained in the document-document setting, reaching an end-to-end F1-score of 80.49%, a precision of 91.65% and a recall of 71.76%. The JNRF outperformed WBERT with selective pooling by 0.42% in F1-score (0.09% in precision and 0.06% in recall), while reducing algorithmic complexity by one order of magnitude (O(n × (log(n) + c)) vs. O(n × (n + c))). We hypothesize that WBERT does not improve the performance due to the lack of long-range token mixing and/or an inappropriate windowing strategy. We believe that further investigation of an optimal windowing strategy could improve its performance. Moreover, we observed a significant drop in performance (37% in F1-score) when the FNet is replaced by an MLP, demonstrating the capacity of the FNet to better attend to the correct token representations.

The JNRF model shows good performance when it is trained and evaluated with the same document representation (i.e., document-document or sentence-sentence), with similar precision in both cases and a reduction in recall for the sentence-sentence setup, due to the model's limitation to detect inter-sentence relations. It is unclear, though, whether further data engineering could still result in equivalent performance. For the mixed training setup, the model shows stronger power to infer at the sentence level. We believe this is due to the much higher number of examples at the sentence level, which biases the model towards such representation.
[Table 1 — only the column headers were recovered: Train, Language, Test, Precision]
End-to-end efficiency
To compare the efficiency of our approach against architectures used in state-of-the-art approaches, we measured the time and memory used during training over 10 epochs (for the same training set) for a rolling window BERT (WBERT), a rolling window BERT-CRF (WBERT-CRF), and a BiLSTM-CRF. All window-based models used non-overlapping windows of size 512. We deliberately chose to use the minimum number of windows for these models to make them as fast as possible. Figure 2 shows the time and VRAM used by our model and state-of-the-art models. Results show that our model substantially improves upon the state-of-the-art in terms of time complexity. Forward and backward passes over the training dataset take an average of 30 seconds with our proposed architecture, while the average time for the above mentioned models is 54, 168 and 685 seconds, respectively. This increases the learning speed by a factor of 2, 6 and 22, respectively ( Figure 2a). In addition, we measured an average VRAM usage of 8 GB for the JNRF architecture while the average memory usage for the above mentioned models is 4, 5 and 14 GBs, respectively. This represents a 43% GPU memory saving compared to BiLSTM-CRF (Figure 2b). WBERT and WBERT-CRF uses around 2x less memory due to the windowing strategy. This increase in efficiency is due to the fact that, differently from the quadratic complexity in terms of the number of entities c, which is generally large in the biomedical field, our model complexity has a linear dependency in terms of the number of entities, and a log-linear dependency in terms of the number of tokens (overall O(n × (log(n) + c))).
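The instrumentation behind these measurements is not described in the paper; a minimal way to collect comparable numbers (wall-clock time per epoch and peak GPU memory) with standard PyTorch utilities is sketched below:

```python
import time
import torch

def measure_epoch(train_one_epoch):
    """Run one training epoch and report wall-clock time and peak GPU memory (GB)."""
    torch.cuda.reset_peak_memory_stats()
    torch.cuda.synchronize()
    start = time.perf_counter()
    train_one_epoch()
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start
    peak_gb = torch.cuda.max_memory_allocated() / 1024**3
    return elapsed, peak_gb
```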
Time inefficiency of windowing strategies
To demonstrate that windowing strategies are time inefficient, we measured the average forward-backward time of a rolling window JNRF (WJNRF) and its average VRAM usage (Figure 2). JNRF is 20% faster than WJNRF, but WJNRF uses 26% less memory (Figure 2). While windowing strategies save VRAM, they are an inefficient solution in terms of computation time. The average document size is 4045 tokens (see Table 7), corresponding to an average of 8 forward passes per document using standard BERT-like implementations (512-token maximum input size), or 28 for the longest document. For all tokens to attend to each other, we would need overlapping windows. The worst-case scenario is to drag the window token by token, leading to 3534 (n − WindowSize + 1) windows on average per document.

Performance across entities, relations and document sizes

Table 2 shows the performance of our model per entity and relation type. Our model suffers from poor performance in extracting Reason and ADE entities, with F1-scores of 50.26% and 16.40%, respectively. This lower performance is also seen in other competing solutions (Henry et al., 2020). In turn, the detection of their respective relations is also negatively impacted, with final E2E F1-scores of only 29.92% and 7.21%, respectively. We believe this lower performance is a result of the confusion between these entities (as they are semantically similar) and of the small number of instances in the training set. Nevertheless, further investigation is needed to better understand the issue.

Table 3 shows the performance as a function of the number of input tokens (document length). We followed the Freedman-Diaconis method (Freedman and Diaconis, 1981) to group documents into clusters of different lengths. These results highlight the ability of our architecture to perform consistently across clinical notes of varying sizes. Without any data pre-processing (e.g., sliding window or sentence tokenization), the model can elegantly generalise to documents of different sizes.

Performance on long range relations

Figure 3 shows the distribution of relation types according to their sentence distance. We define the sentence distance between two related entities E1 and E2 as the number of sentences separating E1 from E2. A negative distance implies that the drug entity is mentioned before the related entity. Results show that although most related entities are in the same sentence, there is a non-negligible number of relations with a sentence distance different from zero. As we can see from Table 4, the JNRF model is able to automatically detect distant relations. It has superior performance detecting intra-sentence relations, i.e., a better F1-score for sentence distance 0, with yet robust performance for inter-sentence relations with negative sentence distances (between 65% and 68% F1-score). The performance decreases substantially for inter-sentence relations with positive sentence distances. This is due to the fact that Reason and ADE entities and relations are actually harder to detect (see Table 2), and they represent the vast majority of relations with a positive sentence distance, as shown in Figure 3. It is important to note that fixed-input-size models would only detect intra-sentence relations, or inter-sentence relations only through significant engineering, which may not necessarily generalise to other corpora and domains.

Comparison with SOTA in the N2C2 ADE challenge

In this section, for reference, we show our results against the state-of-the-art E2E NER+RE models described in the N2C2 ADE challenge (Henry et al., 2020). Nevertheless, due to their different modelling strategies (e.g., multiple models, external tools, post-processing techniques and hand-crafted rules specifically designed for this dataset), they are not directly comparable.
UTH (Wei et al., 2020) used a joint learning model consisting of a LSTM-CRF layer for NER and a CNN-RNN layer for RE. CLAMP (Soysal et al., 2018) was employed for text pre-processing, including sentence boundary detection and POS labeling, and to create a set of hand-crafted features that fed the CRF layer. Entities without a relation were associated to the closest drug in the post-processing step.
NaCT (Christopoulou et al., 2020) used a majority voting ensemble of feature-based CRF, including ADE dictionary, and stacked BiLSTM-CRF for NER. For RE, they used an ensemble of LSTM for intra-sentence relations and a transformer network for inter-sentence relations.
BCH (Miller et al., 2019) used SVM to detect entities, and pair these detected entities for a second SVM relation classifier. They used cTAKES (Savova et al., 2010) to pre-process data and ClearTK (Bethard et al., 2014) API to extract features.
RA (Henry et al., 2020) used dictionary-based features, CRFs and logistic regression for NER. For RE, they used a tree-based boosting classifier (Chen and Guestrin, 2016).

Table 5 shows the performance of our best model as well as the results of the previously described systems. As we can see, our E2E model (80.49% F1-score) achieves 90% of the F1-score of the best performing system (99% of its precision and 84% of its recall), while significantly reducing algorithmic complexity. Moreover, it compares favorably to strong baseline methods (Chen and Guestrin, 2016) (80.49% vs. 80.37%), again with an order of magnitude in complexity reduction.
Conclusion
In this paper, we proposed an end-to-end, generalizable, lightweight, and efficient model to jointly detect entities and multiple relations at the intra- and inter-passage levels. We combined a Fourier network with a pooled attention layer to significantly reduce time and space complexity, thus providing the community with a low-carbon-footprint solution for end-to-end relation extraction. We demonstrated that our model outperformed the sliding window BERT with selective pooling by 0.42% in F1-score, while being 2 times faster to train. Furthermore, we showed that our model trains 22 times faster and consumes 1.75 times less GPU memory than state-of-the-art BiLSTM-CRF architectures, with a reasonable performance tradeoff of 90% on the N2C2 ADE benchmark, without using external tools or hand-crafted rules. Finally, we showed that this approach achieves consistent performance regardless of the length of the input sequence, eliminating the need for sliding window techniques and easing the overall data processing pipeline and engineering effort.
Figure 1: Computational graph for the proposed JNRF network.

Figure 2: (a) Cumulative training time of JNRF vs. WJNRF vs. WBERT vs. WBERT-CRF vs. BiLSTM-CRF. (b) GPU memory usage of JNRF vs. WJNRF vs. WBERT vs. WBERT-CRF vs. BiLSTM-CRF. For fair comparison, all systems use the selective pooling RE module.

Figure 3: Probability density estimation of relation types as a function of the number of sentences separating two related entities (sentence distance).
Table 2: NER and E2E (NER+RE) performance of our JNRF model.
Table 3: Performance of our JNRF model across different document sizes.
Sentence distance    -2       -1       0        1       2
Precision (%)        75.14    83.06    92.69    22.99   0.36
Recall (%)           56.90    57.88    76.08    5.82    0.49
F1-score (%)         64.76    68.22    83.57    9.29    0.41

Table 4: Performance of our JNRF model as a function of sentence distance.
Name    NER complexity    Precision (%)    Recall (%)    F1 (%)
UTH     nc^2              92.92            85.49         89.05
NaCT    nc^2              92.64            83.18         87.66
BCH     n^3               89.63            76.40         82.49
JNRF    n(log(n) + c)     91.65 (a)        71.76 (b)     80.49 (c)
RA      nc^2              86.89            74.75         80.37

Table 5: E2E scores of the top performing systems submitted in the N2C2 ADE track, along with our JNRF model. Standard deviations: a = 0.47, b = 0.53, c = 0.33.
A Appendix
Emily Alsentzer, John R Murphy, Willie Boag, Wei-Hung Weng, Di Jin, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clinical BERT embeddings. arXiv preprint arXiv:1904.03323.
David W Bates, David J Cullen, Nan Laird, Laura A Petersen, Stephen D Small, Deborah Servi, Glenn Laffel, Bobbie J Sweitzer, Brian F Shea, Robert Hallisey, et al. 1995. Incidence of adverse drug events and potential adverse drug events: implications for prevention. JAMA, 274(1):29-34.
Giannis Bekoulis, Johannes Deleu, Thomas Demeester, and Chris Develder. 2018. Joint entity recognition and relation extraction as a multi-head selection problem. Expert Systems with Applications, 114:34-45.
Steven Bethard, Philip Ogren, and Lee Becker. 2014. ClearTK 2.0: Design patterns for machine learning in UIMA. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 3289-3293, Reykjavik, Iceland. European Language Resources Association (ELRA).
Mary Regina Boland and Nicholas P Tatonetti. 2015. Are all vaccines created equal? Using electronic health records to discover vaccines associated with clinician-coded adverse events. AMIA Summits on Translational Science Proceedings, 2015:196.
Alec B Chapman, Kelly S Peterson, Patrick R Alba, Scott L DuVall, and Olga V Patterson. 2018. Hybrid system for adverse drug event detection. In International Workshop on Medication and Adverse Drug Event Detection, pages 16-24. PMLR.
Tianqi Chen and Carlos Guestrin. 2016. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 785-794.
Fenia Christopoulou, Thy Thy Tran, Sunil Kumar Sahu, Makoto Miwa, and Sophia Ananiadou. 2020. Adverse drug events and medication relation extraction in electronic health records with ensemble deep learning methods. Journal of the American Medical Informatics Association, 27(1):39-46.
David C Classen, Roger Resar, Frances Griffin, Frank Federico, Terri Frankel, Nancy Kimmel, John C Whittington, Allan Frankel, Andrew Seger, and Brent C James. 2011. 'Global trigger tool' shows that adverse events in hospitals may be ten times greater than previously measured. Health Affairs, 30(4):581-589.
James W. Cooley and John W. Tukey. 1965. An algorithm for the machine calculation of complex Fourier series. Mathematics of Computation, 19(90):297-301.
Jenny Copara, Julien Knafou, Nona Naderi, Claudia Moro, Patrick Ruch, and Douglas Teodoro. 2020. Contextualized French language models for biomedical named entity recognition. In 6e conférence conjointe Journées d'Études sur la Parole (JEP, 33e édition), Traitement Automatique des Langues Naturelles (TALN, 27e édition), Rencontre des Étudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (RÉCITAL, 22e édition). Atelier DÉfi Fouille de Textes, pages 36-48. ATALA; AFCP.
Jenny Linet Copara Zea, Nona Naderi, Julien David Marc Knafou, Patrick Ruch, and Douglas Teodoro. 2020. Named entity recognition in chemical patents using ensemble of contextual language models. In Proceedings of CLEF (Conference and Labs of the Evaluation Forum) 2020 Working Notes.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Ming Ding, Chang Zhou, Hongxia Yang, and Jie Tang. 2020. CogLTX: Applying BERT to long texts. In Advances in Neural Information Processing Systems, volume 33, pages 12792-12804. Curran Associates, Inc.
Timothy Dozat and Christopher D Manning. 2016. Deep biaffine attention for neural dependency parsing. arXiv preprint arXiv:1611.01734.
Frank R Ernst and Amy J Grizzle. 2001. Drug-related morbidity and mortality: updating the cost-of-illness model. Journal of the American Pharmaceutical Association (1996), 41(2):192-199.
Xintao Fang, Yuting Song, and Akira Maeda. 2021. Joint extraction of clinical entities and relations using multi-head selection method. In 2021 International Conference on Asian Language Processing (IALP), pages 99-104. IEEE.
Cassie Frank, David U Himmelstein, Steffie Woolhandler, David H Bor, Sidney M Wolfe, Orlaith Heymann, Leah Zallman, and Karen E Lasser. 2014. Era of faster FDA drug approval has also seen increased black-box warnings and market withdrawals. Health Affairs, 33(8):1453-1459.
David Freedman and Persi Diaconis. 1981. On the histogram as a density estimator: L2 theory. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 57(4):453-476.
Elizabeth Gibney. 2022. How to shrink AI's ballooning carbon footprint. Nature, 607(7920):648-648.
Sam Henry, Kevin Buchan, Michele Filannino, Amber Stubbs, and Ozlem Uzuner. 2020. 2018 n2c2 shared task on adverse drug events and medication extraction in electronic health records. Journal of the American Medical Informatics Association, 27(1):3-12.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.
Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. To appear.
Andrew Jaegle, Felix Gimeno, Andrew Brock, Andrew Zisserman, Oriol Vinyals, and Joao Carreira. 2021. Perceiver: General perception with iterative attention. arXiv preprint arXiv:2103.03206.
Minwoo Jeong, Chin-Yew Lin, and Gary Geunbae Lee. 2009. Efficient inference of CRFs for large-scale natural language data. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pages 281-284, Suntec, Singapore. Association for Computational Linguistics.
Alistair EW Johnson, Tom J Pollard, Lu Shen, Li-wei H Lehman, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. MIMIC-III, a freely accessible critical care database. Scientific Data, 3(1):1-9.
Jeffery Johnson and Lyle Booman. 1996. Drug-related morbidity and mortality. Journal of Managed Care Pharmacy, 2(1):39-47.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv:1412.6980.
Julien David Marc Knafou, Nona Naderi, Jenny Linet Copara Zea, Douglas Teodoro, and Patrick Ruch. 2020. BiTeM at WNUT 2020 shared task-1: Named entity recognition over wet lab protocols using an ensemble of contextual language models. In Proceedings of the 2020 EMNLP Workshop W-NUT: The Sixth Workshop on Noisy User-generated Text, pages 305-313. Association for Computational Linguistics.
John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data.
Lucian L Leape, Troyen A Brennan, Nan Laird, Ann G Lawthers, A Russell Localio, Benjamin A Barnes, Liesi Hebert, Joseph P Newhouse, Paul C Weiler, and Howard Hiatt. 1991. The nature of adverse events in hospitalized patients: results of the Harvard Medical Practice Study II. New England Journal of Medicine, 324(6):377-384.
James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, and Santiago Ontanon. 2021. FNet: Mixing tokens with Fourier transforms. arXiv preprint arXiv:2105.03824.
Anne-Laure Ligozat and Sasha Luccioni. 2021. A Practical Guide to Quantifying Carbon Emissions for Machine Learning Researchers and Practitioners. Ph.D. thesis, MILA; LISN.
Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, and Hannaneh Hajishirzi. 2019. A general framework for information extraction using dynamic span graphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3036-3046, Minneapolis, Minnesota. Association for Computational Linguistics.
Yuan Luo, William K Thompson, Timothy M Herr, Zexian Zeng, Mark A Berendsen, Siddhartha R Jonnalagadda, Matthew B Carson, and Justin Starren. 2017. Natural language processing for EHR-based pharmacovigilance: a structured review. Drug Safety, 40(11):1075-1089.
Timothy Miller, Alon Geva, and Dmitriy Dligach. 2019. Extracting adverse drug event information with minimal engineering. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 22-27, Minneapolis, Minnesota, USA. Association for Computational Linguistics.
Nona Naderi, Julien Knafou, Jenny Copara, Patrick Ruch, and Douglas Teodoro. 2021. Ensemble of deep masked language models for effective named entity recognition in health and life science corpora. Frontiers in Research Metrics and Analytics, 6.
Prakash M Nadkarni. 2010. Drug safety surveillance using de-identified EMR and claims data: issues and challenges. Journal of the American Medical Informatics Association, 17(6):671-674.
Dat Quoc Nguyen and Karin Verspoor. 2019. End-to-end neural relation extraction using deep biaffine attention. In European Conference on Information Retrieval, pages 729-738. Springer.
Raghavendra Pappagari, Piotr Zelasko, Jesús Villalba, Yishay Carmiel, and Najim Dehak. 2019. Hierarchical transformers for long document classification. In 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 838-844. IEEE.
Guergana K Savova, James J Masanz, Philip V Ogren, Jiaping Zheng, Sunghwan Sohn, Karin C Kipper-Schuler, and Christopher G Chute. 2010. Mayo clinical Text Analysis and Knowledge Extraction System (cTAKES): architecture, component evaluation and applications. Journal of the American Medical Informatics Association, 17(5):507-513.
Zhuoran Shen, Mingyuan Zhang, Haiyu Zhao, Shuai Yi, and Hongsheng Li. 2021. Efficient attention: Attention with linear complexities. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 3531-3539.
Daniil Sorokin and Iryna Gurevych. 2017. Context-aware representations for knowledge base relation extraction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1784-1789, Copenhagen, Denmark. Association for Computational Linguistics.
Ergin Soysal, Jingqi Wang, Min Jiang, Yonghui Wu, Serguei Pakhomov, Hongfang Liu, and Hua Xu. 2018. CLAMP - a toolkit for efficiently building customized clinical natural language processing pipelines. Journal of the American Medical Informatics Association, 25(3):331-336.
Charles Sutton, Andrew Mccallum, An introduction to conditional random fields. Foundations and Trends® in Machine Learning. 4Charles Sutton, Andrew McCallum, et al. 2012. An introduction to conditional random fields. Founda- tions and Trends® in Machine Learning, 4(4):267- 373.
Attention is all you need. Advances in neural information processing systems. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, Illia Polosukhin, 30Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information process- ing systems, 30.
Entity, relation, and event extraction with contextualized span representations. David Wadden, Ulme Wennberg, Yi Luan, Hannaneh Hajishirzi, 10.18653/v1/D19-1585Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsDavid Wadden, Ulme Wennberg, Yi Luan, and Han- naneh Hajishirzi. 2019. Entity, relation, and event extraction with contextualized span representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 5784- 5789, Hong Kong, China. Association for Computa- tional Linguistics.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R Bowman, arXiv:1804.07461Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprintAlex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461.
Linformer: Selfattention with linear complexity. Sinong Wang, Z Belinda, Madian Li, Han Khabsa, Hao Fang, Ma, arXiv:2006.04768arXiv preprintSinong Wang, Belinda Z Li, Madian Khabsa, Han Fang, and Hao Ma. 2020. Linformer: Self- attention with linear complexity. arXiv preprint arXiv:2006.04768.
A study of deep learning approaches for medication and adverse drug event extraction from clinical text. Qiang Wei, Zongcheng Ji, Zhiheng Li, Jingcheng Du, Jingqi Wang, Jun Xu, Yang Xiang, Firat Tiryaki, Stephen Wu, Yaoyun Zhang, Journal of the American Medical Informatics Association. 271Qiang Wei, Zongcheng Ji, Zhiheng Li, Jingcheng Du, Jingqi Wang, Jun Xu, Yang Xiang, Firat Tiryaki, Stephen Wu, Yaoyun Zhang, et al. 2020. A study of deep learning approaches for medication and ad- verse drug event extraction from clinical text. Jour- nal of the American Medical Informatics Associa- tion, 27(1):13-21.
Huggingface's transformers: State-of-the-art natural language processing. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, arXiv:1910.03771arXiv preprintThomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Rémi Louf, Morgan Fun- towicz, et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.
Fastformer: Additive attention can be all you need. Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang, Xing Xie, arXiv:2108.09084arXiv preprintChuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang, and Xing Xie. 2021. Fastformer: Additive at- tention can be all you need. arXiv preprint arXiv:2108.09084.
Google's neural machine translation system. Yonghui Wu, Mike Schuster, Zhifeng Chen, V Quoc, Mohammad Le, Wolfgang Norouzi, Maxim Macherey, Yuan Krikun, Qin Cao, Klaus Gao, Macherey, arXiv:1609.08144Bridging the gap between human and machine translation. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv:1609.08144.
Multi-head highly parallelized lstm decoder for neural machine translation. Hongfei Xu, Qiuhui Liu, Josef Van Genabith, Deyi Xiong, Meng Zhang, Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language ProcessingLong Papers1Hongfei Xu, Qiuhui Liu, Josef van Genabith, Deyi Xiong, and Meng Zhang. 2021. Multi-head highly parallelized lstm decoder for neural machine trans- lation. In Proceedings of the 59th Annual Meet- ing of the Association for Computational Linguistics and the 11th International Joint Conference on Nat- ural Language Processing (Volume 1: Long Papers), pages 273-282.
Uth_ccb system for adverse drug reaction extraction from drug labels at tac-adr 2017. Jun Xu, Hee-Jin Lee, Zongcheng Ji, Jingqi Wang, Qiang Wei, Hua Xu, TAC. Jun Xu, Hee-Jin Lee, Zongcheng Ji, Jingqi Wang, Qiang Wei, and Hua Xu. 2017. Uth_ccb system for adverse drug reaction extraction from drug labels at tac-adr 2017. In TAC.
Hierarchical attention networks for document classification. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, Eduard Hovy, 10.18653/v1/N16-1174Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesSan Diego, CaliforniaAssociation for Computational LinguisticsZichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 1480-1489, San Diego, California. Associa- tion for Computational Linguistics.
Docred: A large-scale document-level relation extraction dataset. Yuan Yao, Deming Ye, Peng Li, Xu Han, Yankai Lin, Zhenghao Liu, Zhiyuan Liu, Lixin Huang, Jie Zhou, Maosong Sun, arXiv:1906.06127arXiv preprintYuan Yao, Deming Ye, Peng Li, Xu Han, Yankai Lin, Zhenghao Liu, Zhiyuan Liu, Lixin Huang, Jie Zhou, and Maosong Sun. 2019. Docred: A large-scale document-level relation extraction dataset. arXiv preprint arXiv:1906.06127.
| [
"https://github.com/ds4dh/JNRF."
]
|
[
"Effect of Nanoparticle Size on the Morphology of Adsorbed Surfactant Layers",
"Effect of Nanoparticle Size on the Morphology of Adsorbed Surfactant Layers"
]
| [
"Dersy Lugo \nStranski Laboratorium für Physikalische und Theoretische Chemie\nTechnische Universität Berlin\nStrasse des 17. Juni 124D-10623BerlinGermany\n",
"Julian Oberdisse \nLaboratoire des Colloïdes, Verres et Nanomatériaux\nUMR 5587\nCNRS\nUniversité Montpellier II\n34095MontpellierFrance\n",
"Alain Lapp \nLaboratoire Léon Brillouin\nCEA Saclay\n91191Gif-sur-Yvette CEDEXFrance\n",
"Gerhard H Findenegg \nStranski Laboratorium für Physikalische und Theoretische Chemie\nTechnische Universität Berlin\nStrasse des 17. Juni 124D-10623BerlinGermany\n"
]
| [
"Stranski Laboratorium für Physikalische und Theoretische Chemie\nTechnische Universität Berlin\nStrasse des 17. Juni 124D-10623BerlinGermany",
"Laboratoire des Colloïdes, Verres et Nanomatériaux\nUMR 5587\nCNRS\nUniversité Montpellier II\n34095MontpellierFrance",
"Laboratoire Léon Brillouin\nCEA Saclay\n91191Gif-sur-Yvette CEDEXFrance",
"Stranski Laboratorium für Physikalische und Theoretische Chemie\nTechnische Universität Berlin\nStrasse des 17. Juni 124D-10623BerlinGermany"
]
| []
| The surface aggregates structure of dimethyldodecylamine-N-oxide (C 12 DAO) in three silica dispersions of different particle sizes (16 -42 nm) was studied by small-angle neutron scattering (SANS) in a H 2 O/D 2 O solvent mixture matching the silica. At the experimental conditions (pH 9) the surfactant exists in its nonionic form and the structure of the adsorbed layer is not affected by added electrolyte. It is found that C 12 DAO forms spherical surface micelles of 2 nm diameter on the 16 nm silica particles, but oblate ellipsoidal surface micelles are formed on the 27 and 42 nm particles. The dimensions of these oblate surface aggregates (minor and major semi-axes R n and R lat ) are similar to those of C 12 DAO micelles in the aqueous solutions. It is concluded that the morphological transition from spherical to ellipsoidal surface aggregates is induced by the surface curvature of the silica particles. A comparison of the shape and dimensions of the surface aggregates formed by C 12 DAO and C 12 E 5 on the 16 nm silica particles demonstrates that the nature of the surfactant head group does not determine the morphology of the surface aggregates, but has a strong influence on the number of surface aggregates per particle, due to the different interactions of the head groups with the silica surface. | 10.1021/jp911400j | [
"https://export.arxiv.org/pdf/1008.5028v1.pdf"
]
| 24,528,783 | 1008.5028 | 0c769fbec123d4122e704bb0ebb6b292bf6062f7 |
Effect of Nanoparticle Size on the Morphology of Adsorbed Surfactant Layers
Dersy Lugo
Stranski Laboratorium für Physikalische und Theoretische Chemie
Technische Universität Berlin
Strasse des 17. Juni 124D-10623BerlinGermany
Julian Oberdisse
Laboratoire des Colloïdes, Verres et Nanomatériaux
UMR 5587
CNRS
Université Montpellier II
34095MontpellierFrance
Alain Lapp
Laboratoire Léon Brillouin
CEA Saclay
91191Gif-sur-Yvette CEDEXFrance
Gerhard H Findenegg
Stranski Laboratorium für Physikalische und Theoretische Chemie
Technische Universität Berlin
Strasse des 17. Juni 124D-10623BerlinGermany
Effect of Nanoparticle Size on the Morphology of Adsorbed Surfactant Layers
The surface aggregates structure of dimethyldodecylamine-N-oxide (C 12 DAO) in three silica dispersions of different particle sizes (16 -42 nm) was studied by small-angle neutron scattering (SANS) in a H 2 O/D 2 O solvent mixture matching the silica. At the experimental conditions (pH 9) the surfactant exists in its nonionic form and the structure of the adsorbed layer is not affected by added electrolyte. It is found that C 12 DAO forms spherical surface micelles of 2 nm diameter on the 16 nm silica particles, but oblate ellipsoidal surface micelles are formed on the 27 and 42 nm particles. The dimensions of these oblate surface aggregates (minor and major semi-axes R n and R lat ) are similar to those of C 12 DAO micelles in the aqueous solutions. It is concluded that the morphological transition from spherical to ellipsoidal surface aggregates is induced by the surface curvature of the silica particles. A comparison of the shape and dimensions of the surface aggregates formed by C 12 DAO and C 12 E 5 on the 16 nm silica particles demonstrates that the nature of the surfactant head group does not determine the morphology of the surface aggregates, but has a strong influence on the number of surface aggregates per particle, due to the different interactions of the head groups with the silica surface.
Introduction
Surfactants play an important role in many industrial processes involving colloidal dispersions, as their adsorption onto the particles often leads to enhanced colloid stability. A structural characterization of this adsorbed layer is a prerequisite for gaining a better understanding of its mode of operation in stabilizing or flocculating a dispersion. Adsorption isotherms of nonionic surfactants on hydrophilic (oxide) surfaces commonly exhibit a pronounced sigmoidal shape, i.e., a low-affinity initial region followed by a region in which the adsorption increases steeply and reaches a plateau near the critical micelle concentration (CMC). 1 This behavior suggests that adsorption represents a surface aggregation similar to micelle formation in solution. Scanning probe microscopy (AFM) studies at planar surfaces indicated that either laterally uniform surfactant bilayers or small surface micelles may be formed, depending on the nature of the surfactant head group and the degree of hydrophilicity of the solid surface. 2 The nature of the surfactant layers adsorbed on colloidal particles in aqueous dispersions was studied by small-angle neutron scattering (SANS), which makes it possible to highlight the adsorbed layer against a uniform scattering background by matching the colloidal particles with a partially deuterated aqueous solvent. 3,4,5,6,7 In the earlier of these studies the adsorbed surfactant was modeled as a laterally uniform layer, 3,4 but the existence of discrete micellar aggregates at the surface of the silica particles ('micelle-decorated silica beads') has been reported more recently. 5,6,7 Recently we reported that the surfactant penta(ethyleneglycol) monododecyl ether (C 12 E 5 ) is adsorbed in the form of individual spherical surface aggregates on silica nanoparticles of 16 nm diameter, 7 in agreement with earlier findings for the surfactant Triton X-100 on Bindzil-type silica particles of similar size. 5,6 This finding is remarkable in view of the fact that C 12 E 5 prefers aggregates of lower mean curvature, viz., worm-like micelles in aqueous solutions, 8,9 and a laterally homogeneous bilayer on planar hydrophilic silica surfaces. 2a We conjectured that the preference for small surface micelles is a consequence of the high surface curvature of the silica nanoparticles, which prevents an effective packing of the hydrophobic tails in an adsorbed bilayer, whereas a favourable packing of the tails is possible in a spherical micelle.
In order to test this concept and to find out to what extent the structure of the adsorbed layer at the surface of the silica nanoparticles depends on the size and chemical nature of the surfactant head group it was of interest to extend this study to a different class of nonionic surfactants. On the other hand, it was of interest to study the influence of size of the silica nanoparticles on the surface aggregate structure of the surfactant.
The present study was performed with dimethyldodecylamine-N-oxide (C 12 DAO), an amphoteric surfactant that exists in a zwitterionic (net non-ionic) form at pH above 7, but in a cationic form at low pH due to a protonation of the head group. C 12 DAO has a much smaller head group of less hydrophilic character than C 12 E 5 . 10 Phase diagrams, thermodynamics and self-assembly structures of aqueous systems of alkyl dimethylamine oxides have been extensively studied, 11 and the interaction of alkyl DAO systems with hydrophilic and hydrophobic surfaces was investigated by adsorption calorimetry 12,13a and streaming potential measurements. 13b Based on the adsorption enthalpy results, Pettersson and Rosenholm 13a concluded that the adsorption mechanism at the solution/silica interface of C 12 DAO in its nonionic form is different from that in the protonated form, and they speculated that in the nonionic form C 12 DAO forms ellipsoidal aggregates; while in the protonated form C 10 DAO and C 12 DAO are likely to form spherical surface micelles. The conclusion about the formation of spherical surface micelles by C 10 DAO on silica was consistent with the sorption enthalpy results of Király and Findenegg. 12 However, in neither of these studies direct information about the surface aggregate structures was obtained. In the present work we use SANS to clarify the structure of the adsorbed layer of C 12 DAO on silica nanoparticles of three different sizes (16 to 42 nm diameter), with a focus on the effect of particle size on the type of surface aggregate formed.
Experimental Section
Materials
N,N-Dimethyldodecylamine-N-oxide, C 12 DAO (Fluka, purity ≥ 98%), tetraethyl orthosilicate, TEOS (Fluka, purity ≥ 99.0%), ammonia (Sigma-Aldrich, A.C.S. reagent, 30-33% in water), Ethanol, C 2 H 5 OH (Berkel AHK, purity ≥ 99.9%), and D 2 O (Euriso-top, 99.9% isotope purity), were used without further purification. Reagent-grade water was produced by a Milli-Q 50 filtration system (Millipore, Billerica, USA) and additionally passed through a 0.22 µm membrane to remove micrometer-sized particles. The colloidal silica suspensions, Ludox SM-30 (30 wt-% in water) and Ludox HS-40 (40 wt-% in water) were supplied by Sigma-Aldrich.
They were dialyzed with reagent-grade water (2 weeks) and filtered with a 0.8 µm Millipore Steril Filter. The silica concentration in the purified suspensions was about one half of the original concentration. Their pH was adjusted to 9 to preserve colloidal stability.
Sample Preparation
Preparation and Characterization of silica nanoparticles. Three samples of monodisperse silica nanoparticles were prepared by two variants of the Stöber synthesis. 14 Silica I (diameter 16 nm) and silica II (27 nm) were made by particle growth from Ludox SM-30 and Ludox HS-40 dispersions, respectively, using the procedure described earlier. 7 Silica III (diameter 42 nm) was prepared by condensation of TEOS starting from a mixture of 100 mL ethanol and 7.5 mL ammonia at 60 °C, and 3.0 mL TEOS was added dropwise in a 250 mL three-neck round flask equipped with a magnetic stirrer and reflux-condenser. Particle growth was allowed to continue for 24 h at 60 °C. The excess of ethanol and ammonia was then removed from the resulting suspension in a rotary evaporator (40°C, 160 mbar) by reducing the volume to 20% of the initial value. The suspension was dialyzed with reagent-grade water for 1 week, filtered and stored at pH 9 in a refrigerator at 280 K.
Sample preparation for SANS. Dilute silica dispersions (0.4 to 1.5 vol.-%) in a contrast-matching H 2 O/D 2 O mixture of scattering length density ρ = 3.54·10¹⁰ cm⁻² were prepared for the SANS measurements, as determined experimentally in the earlier work. 7 The total surface area of silica in the samples was obtained from the mass fraction of silica in the dispersions (determined gravimetrically) and the specific surface area a s of the silica. Samples with different adsorbed amounts of C 12 DAO were prepared by adding appropriate amounts of the surfactant directly to the aqueous dispersion. The adsorption isotherm of C 12 DAO on Davisil silica gel reported by Pettersson and Rosenholm 13a was used to estimate the amounts of surfactant needed for a given surface concentration on the silica particles. According to that work, a plateau value Γ mx = 7.5 µmol m⁻² is reached at a solution concentration somewhat above the CMC (~2·10⁻³ M), and the surface concentration at the CMC is about ⅞Γ mx . This value was chosen as the highest surface concentration of the surfactant on the silica particles, to avoid free micelles in the solution. All samples were kept at pH 9 to minimize particle aggregation.
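To make the dosing step concrete, the short calculation below illustrates how the amount of surfactant needed for a target surface concentration follows from the total silica surface area. This is a minimal sketch: the silica mass, the specific surface area and the molar mass are illustrative assumptions, while Γ mx = 7.5 µmol m⁻² and the ⅞Γ mx target are the values quoted above.

```python
# Minimal sketch: how much C12DAO to add to a silica dispersion to reach a
# target surface concentration of 7/8 of the plateau value Gamma_mx.
# Sample mass and specific surface area below are illustrative only.

M_C12DAO = 229.4          # g/mol, approximate molar mass of dimethyldodecylamine-N-oxide
gamma_mx = 7.5e-6         # mol/m^2, plateau surface concentration from the adsorption isotherm
target_fraction = 7 / 8   # highest surface concentration used, to avoid free micelles

m_silica = 0.30           # g of SiO2 in the dispersion (hypothetical)
a_s = 180.0               # m^2/g, specific surface area of the silica (hypothetical)

total_area = m_silica * a_s                              # m^2 of silica surface in the sample
n_surfactant = target_fraction * gamma_mx * total_area   # mol of C12DAO to be adsorbed
m_surfactant = n_surfactant * M_C12DAO                   # g of C12DAO to weigh in

print(f"total silica area : {total_area:.1f} m^2")
print(f"C12DAO needed     : {n_surfactant*1e6:.0f} umol = {m_surfactant*1e3:.1f} mg")
```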
Methods
Small-angle neutron scattering. Experiments were carried out on the spectrometer PAXY (Laboratoire Léon Brillouin, Saclay, France). Scattering profiles were taken in a range of the scattering vector q from 0.03 to 3 nm -1 using three wave-length/sample-to-detector distance configurations: wave length λ = 6Ǻ with sample-to-detector distances 1 m and 5 m (collimation distances 2.5 and 5 m); and λ = 12 Ǻ, sample-to-detector 5 m (collimation distance 5 m). Samples were contained in standard 2-mm path length quartz cells (QS, Hellma), thermostated at 25.3 ± 0.1 K. Intensities were divided by transmission and sample thickness (1 and 2 mm), empty cell scattering was subtracted and detector calibration achieved by dividing by 1 mm H 2 O scattering. Absolute units (cm -1 ) were obtained by measuring the incident flux and using a standard procedure. 15 The incoherent scattering Zeta Potential. Zeta potential measurements were carried out with a Malvern Zeta-Sizer 2000 using the diluted silica particle dispersions at pH 9 and at 298 K.
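The data-reduction chain described above (normalization by transmission and sample thickness, empty-cell subtraction, detector calibration against 1 mm H 2 O) can be summarized in a few lines. The sketch below is a generic outline under the assumption of azimuthally averaged 1D data; it is not the facility software actually used, and all variable names and numbers are placeholders.

```python
import numpy as np

def reduce_sans(sample, t_sample, d_sample, empty, t_empty,
                water, t_water, d_water=0.1, water_abs=1.0):
    """Rough reduction of 1D SANS data to an absolute-like scale (cm^-1).

    sample, empty, water: monitor-normalized counts on a common q grid;
    t_*: transmissions; d_*: thicknesses in cm;
    water_abs: assumed absolute level of the 1 mm H2O standard.
    """
    sample_norm = sample / (t_sample * d_sample) - empty / (t_empty * d_sample)
    water_norm = water / (t_water * d_water)
    return water_abs * sample_norm / water_norm

# toy example on a synthetic q grid
q = np.linspace(0.03, 3.0, 50)
I_abs = reduce_sans(sample=np.full_like(q, 120.0), t_sample=0.85, d_sample=0.2,
                    empty=np.full_like(q, 10.0), t_empty=0.95,
                    water=np.full_like(q, 50.0), t_water=0.5)
```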
Results and Discussion
Characterization of the silica sols
The silica samples were characterized by transmission electron microscopy (TEM), small-angle neutron scattering (SANS), zeta potential and nitrogen adsorption measurements. TEM images indicate that silica I and II, which were obtained by silica deposition on Ludox, have a wider size distribution than silica III, which was prepared by direct Stöber synthesis (Figure 1). The average particle radius, R TEM , and its standard deviation, SD TEM , were determined by Gaussian fits to the histograms in Fig. 1 (see Table 1). The silica particles prepared by overgrowth of Ludox (silica I and II) have a higher zeta potential than silica III at pH 9 (see Table 1), indicating a somewhat different surface-chemical behavior of the two types of silica.
The zeta potentials suggest that all silica dispersions are electrostatically stabilized. Quantitative information about the particle size and size distribution of the silica sols was obtained by SANS. The scattering profile of a dilute (1.5 vol.-%) dispersion of silica II in D 2 O-rich water is shown in Figure 2. In this case I(q) reflects the form factor of the particles.
It is characterized by a Guinier regime at low values of the scattering vector q, a characteristic oscillation at q ≈ 0.4 nm -1 which relates to the radius of the silica particles, and a Porod-law behaviour, I(q) ~ P⋅q -4 at high q. The entire scattering profile can be represented by the form factor of spheres with log-normal size distribution, characterized by a radius R S and polydispersity σ. 16 Values of R S and σ, average radius <R S >, average surface area <A S > and average volume <V S > of the particles are given in Table 2. The scattering profiles of all three silica sols, fits with the log-normal size distribution of spheres, and details of the data analysis are given in the Supporting Information (SI).
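For readers who want to reproduce the form-factor analysis of the bare silica, a minimal sketch of the polydisperse-sphere model with a log-normal radius distribution is given below. The radius and polydispersity are representative of Table 2; the contrast and volume fraction are illustrative assumptions rather than fitted values.

```python
import numpy as np

def sphere_amplitude(q, R):
    """Normalized amplitude of a homogeneous sphere, F(q->0) = 1."""
    x = np.outer(q, R)
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def lognormal_pdf(R, R0, sigma):
    return np.exp(-np.log(R / R0)**2 / (2 * sigma**2)) / (R * sigma * np.sqrt(2 * np.pi))

def I_polydisperse_spheres(q, R0=13.5, sigma=0.10, phi=0.015, drho=3.5e-4):
    """Dilute-sphere intensity in nm^-1 (multiply by 1e7 for cm^-1).

    q in nm^-1, R0 in nm, drho in nm^-2, phi = volume fraction (assumed values).
    """
    R = np.linspace(0.5 * R0, 2.0 * R0, 400)
    dR = R[1] - R[0]
    w = lognormal_pdf(R, R0, sigma)
    V = 4.0 / 3.0 * np.pi * R**3
    P = sphere_amplitude(q, R)**2
    meanV = np.sum(w * V) * dR
    return phi / meanV * drho**2 * np.sum(w * V**2 * P, axis=1) * dR

q = np.linspace(0.03, 3.0, 200)
Iq = I_polydisperse_spheres(q)
```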
The specific surface area a s of the silicas was determined from the nitrogen adsorption isotherms by the BET method. 17 Linear BET plots (correlation coefficient ≥ 0.9996) were found for relative pressures p/p 0 from 0.05 to 0.3 for the three samples. (Adsorption isotherms and BET plots are shown in SI). Values of a s and the geometric surface area a geom = 3/ρ S R S derived from the particle radius R S (from SANS) and the mass density of silica ρ S (2.20 g cm -3 ) are given in Table 3. Values of a s /a geom between 1 and 2 are obtained, increasing with the particle radius. This trend indicates that the surface roughness of the particles increases with size. Silica I (a S /a geom = 1.14) has a very low surface roughness, while the value a S /a geom = 1.85 for silica III indicates significant roughness (surface corrugations at a periodicity l ≈ 1 nm and profile depth d ≈ l). Alternatively, the result for silica III may be explained by a moderate degree of microporosity.
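The comparison of BET and geometric areas reduces to a one-line formula; a small sketch follows, in which the radii are taken from Table 2 while the BET areas are illustrative placeholders (the measured a s values are given in Table 3 of the original work and are not reproduced in this extraction).

```python
# Geometric specific surface area of smooth spheres, a_geom = 3 / (rho_S * R_S),
# and the roughness ratio a_s / a_geom discussed above.

rho_S = 2.20e6                                              # g/m^3 (2.20 g/cm^3)
radii_nm = {"silica I": 8.20, "silica II": 13.50, "silica III": 21.00}   # R_S from Table 2
a_s_bet = {"silica I": 189.0, "silica II": 130.0, "silica III": 120.0}   # m^2/g, hypothetical

for name, R_nm in radii_nm.items():
    a_geom = 3.0 / (rho_S * R_nm * 1e-9)                    # m^2/g
    print(f"{name}: a_geom = {a_geom:.0f} m^2/g, a_s/a_geom = {a_s_bet[name] / a_geom:.2f}")
```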
SANS study of adsorbed C 12 DAO layer
Scattering profiles I(q) for C 12 DAO in the absence and presence of silica particles are shown in Figure 3. These SANS measurements (and those presented in the later figures) were made in contrast-matching H 2 O/D 2 O, so that the scattering contrast is solely due to the surfactant. Accordingly, the difference in the scattering profiles obtained in the absence and presence of silica in Fig. 3 must be due to a different organization of the surfactant. As discussed later, the scattering profile in the absence of silica is indicative of surfactant micelles of ellipsoidal shape, while the scattering profile in the presence of silica indicates that the surfactant forms an adsorbed layer on the silica particles. Figure 3 also shows that addition of an electrolyte (0.1 M NaBr) to the silica-containing system is causing no significant changes of the scattering profile, indicating that the electrolyte does not affect the adsorption at the silica surface. The profiles in Fig. 3 were obtained with silica II at a surfactant concentration corresponding to a nearly complete adsorbed layer (⅞Γ mx ), but analogous results were found with silica I in the absence and presence of 0.1 M NaBr. The absence of a salt effect on the adsorption is expected because at the given pH C 12 DAO behaves as a nonionic surfactant with negligible degree of protonation. 13 Scattering profiles for different surface concentrations of the surfactant (¼Γ mx , ½Γ mx , ¾Γ mx and ⅞Γ mx ) on silica I and II, and for ⅞Γ mx on silica III are presented in SI. Qualitatively, all scattering profiles are similar to those for C 12 DAO on silica II shown in Fig. 3, but significant differences in detail can be found, as will be shown below. The local maximum in I(q) appears at q max ≈ 0.42 nm -1 (silica I), 0.30 nm -1 (silica II), and 0.20 nm -1 (silica III). The overall scattering intensity as well as the relative height of the maximum at q max both increase with the surface concentration of C 12 DAO. The analysis of the SANS profiles was made in two steps: Initially, simple geometrical modelling was used to estimate the volume, effective layer thickness and volume-based surface area of the adsorbed C 12 DAO. In the second step, nonlinear least-squares fitting of the scattering data to appropriate structure factor models was employed in order to extract information about the size and shape of the surface aggregates.
Geometric Modelling.
A model-free analysis of the Guinier and Porod regimes of I(q) in terms of the dry volume, layer thickness, and volume-based surface area of the adsorbed surfactant was performed as a basis for simple geometrical models of the surface aggregates.
The Guinier expression I(q) = I 0 exp(-R g 2 q 2 /3) can be used to fit the data in the low-q region. For example, for silica II (R G = 15.8 nm) we find a radius of gyration R g = 17.3 nm at the highest surface concentration (⅞Γ mx ). In the contrast-match scenario of our experiment, R g must have a value between the silica radius R G and R G + δ, depending on the surfactant density profile. Assuming for simplicity that R g is half-way between these two values, a For non-interacting particles, the dry volume of adsorbed C 12 DAO per silica bead, V dry , can be derived from the scattering cross section at zero angle by the relation
I_0 = φ · ∆ρ² · V_dry ,
where φ= 0.00891 is the volume fraction of C 12 DAO in the dispersion and ∆ρ= 3.72 X 10 -4 nm -2 is the scattering contrast between surfactant and background. From V dry and the mean particle radius <R S > (Table 2) one can determine the effective layer thickness δ of dry surfactant. Values of V dry and δ derived from the SANS data in this way are given in Table 4.
They are compared with values estimated from the adsorption isotherm of Pettersson, 13 using the mass densities 0.88 g cm -3 (pure C 12 DAO) and 2.20 g cm -3 (silica), and the values of the mean surface area per silica particles <A S > (Table 2). Reasonable agreement between the two sets of values is found for most samples, but large deviations appear for the sample with silica III. The results in Table 4 indicate that at surface concentrations up to ½Γ mx , the effective layer thickness δ is significantly smaller than the extended tail length of C 12 DAO (l c = 1.67 nm) 18 while at surface concentrations above ½Γ mx , the layer thickness approaches l c . This suggests the existence of either a monolayer (which is physically implausible at hydrophilic surfaces) or patches of bilayer. 19 Simple geometric modeling based on surface area and volume of adsorbed surfactant (from the Porod constant and I 0 , respectively) indicates that these patches must have dimensions close to micelles. The possibility of discrete surface micelles as reported recently 5,6,7 thus appears plausible. On the assumption that the volume of such surface micelles is similar to that of micelles in solution, the number of surface micelles N mic can be estimated by dividing the dry volume of adsorbed surfactant by the volume of a free micelles. Values of N mic obtained in this way are given in Table 4.
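A worked numerical version of this geometric analysis is sketched below. φ and ∆ρ are the values quoted above; the forward-scattering intensity I_0 and the free-micelle volume are placeholders chosen only to illustrate how V_dry, the effective layer thickness δ and the micelle number N_mic follow from them.

```python
import numpy as np

phi = 0.00891            # volume fraction of C12DAO in the dispersion
drho = 3.72e-4           # nm^-2, contrast between surfactant and solvent
I0 = 40.0 * 1e-7         # forward scattering: 40 cm^-1 (placeholder), converted to nm^-1
R_s = 13.57              # nm, mean silica radius <R_S> for silica II (Table 2)

# dry volume of adsorbed surfactant per silica bead:  I0 = phi * drho^2 * V_dry
V_dry = I0 / (phi * drho**2)                                        # nm^3

# effective thickness of a dense layer: (4*pi/3) * [(R+delta)^3 - R^3] = V_dry
delta = (3.0 * V_dry / (4.0 * np.pi) + R_s**3) ** (1.0 / 3.0) - R_s  # nm

# number of surface micelles, assuming each has the volume of a free micelle
a, b = 1.47, 2.47                                                   # nm, oblate semi-axes in solution
V_micelle = 4.0 / 3.0 * np.pi * a * b**2                            # nm^3
N_mic = V_dry / V_micelle

print(f"V_dry = {V_dry:.0f} nm^3, delta = {delta:.2f} nm, N_mic = {N_mic:.0f}")
```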
Core-shell model
The spherical core-shell model 3,7,20 was adopted to see if the scattering profiles are consistent with a laterally homogeneous surfactant layer. Three different values of the layer thickness (1.6, 3.2, and 4.0 nm) were tested in the modeling of the data for high surface concentrations of C 12 DAO: The first value corresponds to the effective thickness, δ eff , as obtained for high surface concentrations from the simple geometric analysis (Table 4); the second value is the expected bilayer thickness, i.e. twice the monolayer thickness, and the third value represents the mean thickness of a bilayer of C 12 E 5 at the surface of silica I. 7 A fit of the data for the surface concentration ⅞Γ mx on silica II with the three values of layer thickness is shown in The shortcomings of the core-shell model suggest that the adsorbed surfactant does not form a laterally uniform layer but smaller surface aggregates, such as spherical or ellipsoidal surface micelles, which have a higher surface area at a given total adsorbed volume.
Accordingly, models of silica particles decorated with such small surface micelles were applied to the present data, as described below.
Micelle-decorated silica model
In previous publications 5,6,7 we have developed and applied a form factor model for objects made of small spherical micelles adsorbed on an indexed-matched silica bead. Parameters of the model are the radius of the silica bead R S , the radius of spherical surface micelles R mic , and number of surface micelles N mic per silica particle, as well as a polydispersity parameter.
Polydispersity of the silica bead is accounted for by performing the calculation for silica radii drawn from a distribution function, and averaging. For a given silica bead of radius R S , the micelle centers are supposed to sit on a spherical shell of radius R S + R mic . The excluded volume of the spherical micelles is determined by their radius R mic , which also acts as a lateral interaction parameter for spheres. It is assumed that the number density of spheres in the layer is independent of bead radius and that N mic corresponds to the number of micelles on a bead of average radius. The algorithm consists of the following steps: (i) positioning the micelles in a random manner on the shell, possibly allowing for lateral reorganization following a Monte Carlo motion; (ii) calculation of the micelle-micelle structure factor using the Debye formula;
(iii) calculation of the scattered intensity in absolute units; and (iv) convolution with the resolution function of the spectrometer. The dimensional parameters of surface aggregates of given shapes depend strongly on the layer thickness δ used to calculate V tot . At first we assumed the formation of isolated spherical aggregates of radius R mic and number of micelles N mic . In most cases a fits with a layer thickness δ = 4.0 nm (i.e., a value similar to that found by the Guinier approximation, Section 3.3) gave a somewhat better fit than with δ =3.2 nm, although the difference was within a 5% in most cases. Results for fixed δ = 4.0 nm are summarized in Table 5. A noteworthy finding is that the number of surface aggregates estimated by this model (N mic ) is similar to that obtained by the simple geometrical analysis (N mic , cf. Section 3.3). This indicates that the simple geometric analysis gives reliable information about the morphologies of the surface aggregates.
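A compact numerical sketch of steps (i) to (iii) is given below, under simplifying assumptions: micelle centers are placed on the shell by random sequential addition with excluded volume, the micelle-micelle structure factor is computed with the Debye formula, and the decorated-bead intensity (up to prefactors) is taken as the product of a spherical micelle form factor and this structure factor. Silica polydispersity, the Monte Carlo relaxation and the resolution convolution of step (iv) are omitted, and the input values are representative rather than fitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def place_micelles_on_shell(R_shell, R_mic, N_mic, max_tries=20000):
    """Random sequential placement of micelle centers on a sphere of radius
    R_shell, rejecting center-to-center distances below 2*R_mic."""
    centers, tries = [], 0
    while len(centers) < N_mic and tries < max_tries:
        tries += 1
        v = rng.normal(size=3)
        p = R_shell * v / np.linalg.norm(v)
        if all(np.linalg.norm(p - c) >= 2.0 * R_mic for c in centers):
            centers.append(p)
    return np.array(centers)

def debye_structure_factor(q, centers):
    """Micelle-micelle structure factor from the Debye formula."""
    N = len(centers)
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    r = d[np.triu_indices(N, k=1)]
    qr = np.outer(q, r)
    return 1.0 + 2.0 / N * np.sum(np.sinc(qr / np.pi), axis=1)   # sinc(x/pi) = sin(x)/x

def sphere_form_factor(q, R):
    x = q * R
    return (3.0 * (np.sin(x) - x * np.cos(x)) / x**3) ** 2

R_S, R_mic, N_mic = 13.6, 2.0, 90                 # nm; representative values from the text
q = np.linspace(0.03, 3.0, 300)                   # nm^-1
centers = place_micelles_on_shell(R_S + R_mic, R_mic, N_mic)
Iq = len(centers) * sphere_form_factor(q, R_mic) * debye_structure_factor(q, centers)
```

The Debye sum over all pairs of micelle centers is what produces the interference maximum at intermediate q in the decorated-bead profiles, while the high-q part is governed by the single-micelle form factor.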
C 12 DAO on silica I. Figure 5 shows the scattering data for C 12 DAO on silica I at the surface concentrations ¾Γ mx and ⅞Γ mx , and fits with the micelle-decorated silica model. In this case, a good fit of the scattering profile was obtained by assuming that C 12 DAO is adsorbed in form of isolated spherical surface micelles of radius R mic = 2.0 nm. The good representation of the data in the high-q region supports the conjectured uniform size of the surface aggregates, and the fit of the data at intermediate q indicates that the increasing amplitude of the maximum at q max can be explained by an increasing number of surface aggregates as the surface concentration is increased (cf. Table 5). Some deviations between the experimental and predicted I(q) appear in the q regime just below q max , where the model somewhat overestimates the total volume of the adsorbed surfactant aggregates (see Fig. 5).
The strong increase of I(q) at the lower end of the experimental q range is a hint of a maximum in I(q) at lower q, which was not captured because measurements at smaller angles were not performed for this silica. Such a maximum indicates repulsive interaction between the silica beads coated with small surface aggregates of C 12 DAO. The quality of the fit of the low-q region could not be improved by decreasing N mic , the number of surface micelles, at fixed radius R mic = 2.0 nm (cf. Table 5), since this implies a decrease in the surfactant volume fraction in the layer and thus lowers the amplitude of the maximum at q max , which is inconsistent with the experimental I(q). Similarly, no better fit was found when R mic was increased at a fixed value of N mic . C 12 DAO on silica II and III. The scattering data for C 12 DAO on silica II and III were also analysed in terms of the model of spherical surface aggregates. However, for these silicas reasonable fits with spherical surface micelles could be obtained only by adopting unrealistic values of the micelle radius R mic . Specifically, with silica II a surface micelle radius R mic = 0.85 nm was obtained for low surface concentrations (¼Γ mx , and ¼Γ mx ) and an even lower value (R mic = 0.66 nm) for higher surface concentrations (¾Γ mx , and ⅞Γ mx ). These values of R mic are physically unrealistic as they are less than half the length of an extended surfactant molecule. For this reason, model calculations similar to those described above were also made for surface aggregates of different geometries, viz., patch-like, ellipsoidal and wormlike micelles. The most acceptable morphology was oblate ellipsoids, with now two structural parameters, R n and R lat , which also define the orientation of the micelle on the surface: the minor semi-axis R n is the axis in the direction perpendicular to the surface and defines the height of the micelle center above the silica surface. The major semi-axis R lat characterizes the lateral extension of the oblate surface micelles on the surface. It is assumed that the surface aggregates interact only through excluded volume interactions. The positioning of the micelles is then performed as with the spherical micelles, and again the Debye formula is employed to determine the micelle-micelle structure factor.
A subtlety of this procedure is that at first sight it seems to be incorrect, as it uses the formalism of separation into form factor and structure factor which is valid only for monodisperse objects of spherical symmetry. We show in the Appendix that it can be used in our case to calculate the scattered intensity, as before with resolution function.
The evidence for non-spherical surface micelles of C 12 DAO on silica nanoparticles suggested a comparison with the micelle shape in solution. A comparison of the scattering curves of C 12 DAO in H 2 O/D 2 O in the absence and presence of silica II is shown in Figure 3.
The data for the aqueous solution of C 12 DAO can be represented by a model of oblate ellipsoids, with a =1.47 nm and b = 2.47 nm (where a is the rotational semi-axis and b the equatorial semi-axis of the ellipsoid core). The model of oblate micelles fits the data at high q somewhat better than the prolate ellipsoidal model (fit not shown), as indicated by the lower residual, which was ~1.2 for the oblate ellipsoid, but ~1.6 for the prolate ellipsoid. Figure 3 also shows that the scattering curves of C 12 DAO in the absence and presence of the silica overlap in the high-q region, indicating that the shape of the micelles in solution and at the surface is similar. Accordingly, the parameters R n and R lat of the surface micelle model were set to 1.5 nm and 2.2 nm, respectively, for all surface concentrations and the number of micelles, N mic was taken as the only adjustable parameter. Fits for C 12 DAO on silica II and silica III are shown in Figure 6. The good fit of the data in the entire q range supports the chosen model of isolated oblate surface micelles. Furthermore, the values of N mic derived from the fits (Table 5) are similar to those found in the simple geometric analysis (Table 4).
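The orientation-averaged form factor of such an ellipsoid of revolution can be computed directly. The sketch below uses the solution-micelle semi-axes quoted above (a = 1.47 nm, b = 2.47 nm) and a simple numerical average over orientations; it is a generic textbook-style expression, not the fitting code used in this work.

```python
import numpy as np

def ellipsoid_form_factor(q, a, b, n_angles=200):
    """Orientation-averaged form factor of an ellipsoid of revolution with
    rotational semi-axis a and equatorial semi-axis b (oblate if a < b)."""
    mu = np.linspace(0.0, 1.0, n_angles)                      # cosine of the angle between q and the symmetry axis
    r_eff = np.sqrt(a**2 * mu**2 + b**2 * (1.0 - mu**2))      # effective sphere radius per orientation
    x = np.outer(q, r_eff)
    F = 3.0 * (np.sin(x) - x * np.cos(x)) / x**3
    return np.mean(F**2, axis=1)

q = np.linspace(0.05, 3.0, 200)                               # nm^-1
P_oblate = ellipsoid_form_factor(q, a=1.47, b=2.47)           # solution-micelle dimensions from the fit
```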
To gain a better understanding of the way in which the calculated scattering function is influenced by the structural parameters R n and R lat we have varied them in a systematic manner, keeping one of them (and N mic ) fixed and varying the value of the other in a range from 0.5 to 3.5 nm, as shown in Figure 7 for the scattering profile of ⅞Γ mx of C 12 DAO on silica II. Figure 7a shows the effect of a variation of R n , at fixed R lat = 2.2 nm and N mic = 94.
As can be seen, R n is directly related to the size of the ellipsoidal aggregates because decreasing its value to 0.5 nm or increasing its value to 3.5 nm causes a shift of q max to higher or lower values. R lat is related to the ordering of the micelles on the surface. This is indicated in Fig. 7b by the fact that a decrease of R lat from 2.2 to 0.5 nm causes a deformation of the size of the surface aggregates as their shell is no longer well defined. By increasing R lat to 3.5 nm a strong oscillation appears at q ≈ 1.3 nm -1 , indicating intermicellar repulsion between the adsorbed surface aggregates (see also Figs. 6 and 7 in ref. 6).
Discussion
Influence of surfactant head group
Aggregate structures of surfactants in solution can often be predicted on the basis of the critical packing parameter V/(l_c a_0), which expresses the preferred interface curvature of the aggregate in terms of the molecular parameters alkyl chain volume V, alkyl chain length l_c and head group area a_0. 21 C 12 DAO in its cationic form at low pH is expected to have a higher effective head group area than in its nonionic form at high pH where the absence of electrostatic head group repulsion allows a closer packing of the head groups. Hence it should be possible to study the effect of head group size on the shape of surface aggregates simply by changing pH. However, this is not possible in the present case because pH has to be fixed near pH 9 to prevent flocculation of the silica dispersion. We have checked that added electrolyte has no effect on the morphology of the adsorbed layer of C 12 DAO (Figure 3) under the experimental conditions, as expected for nonionic surfactants. It would be of interest to study the effect of electrolyte on the surfactant aggregates in the cationic form of the surfactant, but again this is not possible at the given pH of the system.
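As a rough illustration of how the packing parameter responds to the head group area, the sketch below evaluates p = V/(l_c a_0) for a C12 tail using Tanford-type estimates of the chain volume and maximum chain length (the latter reproduces the l_c = 1.67 nm quoted above); the head-group areas are assumed values chosen for illustration, not results of this study.

```python
# Critical packing parameter p = V / (l_c * a_0) for a C12 alkyl tail.

n_c = 12
V_tail = 0.0274 + 0.0269 * n_c      # nm^3, Tanford estimate of the chain volume
l_c = 0.154 + 0.1265 * n_c          # nm, Tanford estimate of the extended chain length (~1.67 nm)

for a0 in (0.40, 0.55, 0.70):       # nm^2, assumed effective head-group areas
    p = V_tail / (l_c * a0)
    print(f"a0 = {a0:.2f} nm^2  ->  p = {p:.2f}")
```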
In our earlier study, 7 spherical surface aggregates were observed for the surfactant C 12 E 5 on silica I, in line with the large head-group size of this molecule. Since C 12 DAO has a much smaller head group than C 12 E 5 , one expects that surface aggregates of smaller curvature are preferred for this surfactant. This conjectured behavior is indeed found for C 12 DAO on silica II and III, where we find oblate-shaped surface aggregates. On the other hand, spherical surface aggregates of C 12 DAO are found on silica I, and they have similar dimensions as those of C 12 E 5 on silica I. These findings suggest that the head group size (or packing parameter) of the surfactant is not the dominating factor for the shape of surface aggregates on the silica particles. However, the nature of the surfactant head group may have a pronounced influence on the number of surface aggregates per particle (N mic ). This is suggested by a comparison of the results for C 12 DAO on silica I with the earlier results for C 12 E 5 on the same silica. 7 For instance, at a surface concentrations ¾Γ mx we find N mic = 36 for C 12 DAO (Table 4), but N mic = 72 for C 12 E 5 . 7 The larger number for C 12 E 5 may be attributed to its ability to form more than one strong hydrogen bond to surface silanol groups. However, other factors may also affect the number of surface aggregates, as is suggested by the fact that rather small numbers of surface aggregates (N mic up to 15) were found in the study of Triton X-100 (a technical-grade alkylphenyl polyoxyethylene surfactant) on a commercial silica sol (Bindzil B30). Since Triton X-100 is similar to C 12 E 5 and the mean particle size of Bindzil B30 (R S = 7.7 nm) is similar to that of silica I, the large difference in N mic between these two systems is not clear.
Influence of nanoparticle size
An interesting finding of this study is the morphological transition from spherical to ellipsoidal shape of the surface aggregates. This transition must be caused by some property of the silica particles, either their surface chemistry or surface roughness, or the particle size.
Since silica I and silica II were prepared by the same method, their surface properties are similar, as indicated by the similar zeta potentials (Table 1) and surface roughness (a s /a geom in Table 3) of these two samples, when compared to silica III. Because the transition in surface aggregate shape occurs from silica I to silica II, we may conclude that it is not induced by changes in surface properties but by the increase in particle size. In the preceding study 7 we conjectured that the formation of spherical surface aggregates of C 12 E 5 on silica I was caused by the high surface curvature of the silica nanoparticles, which prevents an effective packing of the hydrophobic tails of the molecules in a bilayer configuration. This argument may be generalized by noting that the formation of surface aggregates at concentrations below the CMC depends on favorable interactions of the surfactant heads with the solid surface, and that the morphology of the surface aggregates will be determined by a balance of amphiphile-amphiphile (A-A) and amphiphile-surface (A-S) interactions. At weakly convex surfaces (large silica particles), micellar aggregates of relatively low mean curvature can have favourable A-S interactions without significant changes in aggregate structure, i.e. without sacrificing A-A interaction energy. This appears to be the situation for the oblate-shaped surface micelles of C 12 DAO on silica II and silica III. On the other hand, at highly convex surfaces (small silica particles), optimization of the A-A and A-S interactions may favor smaller surface aggregates, if that leads to a larger number of surface contacts per unit area of the solid particle, even at the cost of a higher A-A bending energy. This seems to be the situation for C 12 DAO on silica I. The transition from spherical to oblate shape of the surface aggregates can then be seen as a relaxation from a strained to an unstrained curvature of the surface aggregates, since oblate micelles represent the favored aggregate shape of C 12 DAO in the bulk solution. The present work suggests that this morphological transition occurs at a particle radius R S ≈ 10-12 nm, i.e. R mic /R S ≈ 0.2. To our knowledge, no theoretical model for this morphological transition exists in the literature, but such a model would be most valuable for gaining a better understanding of this phenomenon.
Conclusion
SANS has been used to study the shape of surface aggregates of C 12
I(q) = (N/V) S(q) P(q)    (A1)
where P(q) is the form factor related to the scattering amplitude F(q), which is the Fourier transform of the scattering length density distribution, by
P(q) = ⟨|F(q)|²⟩ = |⟨F(q)⟩|²    (A2)
I(q) = (N/V) [ S(q) |⟨F(q)⟩|² + ⟨|F(q)|²⟩ − |⟨F(q)⟩|² ]    (A3)
In order to see if the simpler factorization can be used, it is thus necessary to calculate |⟨F(q)⟩|² and ⟨|F(q)|²⟩. As can be seen in Figure A.1, |⟨F(q)⟩|² and ⟨|F(q)|²⟩
coincide for small angles (q < 1.5 nm -1 ), but differences appear at q > 1.5 nm -1 due to stronger oscillations of the square of the average of the amplitude F(q). One thus has to discuss the low-q and high-q regions separately: -At low q, the two functions are the same, and equation (A3) reduces to (A1).
-At high q, the structure factor tends to one, and equation (A3) reduces to the measured form factor scattering P(q) = ⟨|F(q)|²⟩.
Given that the maxima of S(q) in our experimental case are located well below 1 nm -1 , and
Porod surface (i.e. form factor) scattering is observed above 1.5 nm⁻¹, it is clear that S(q) = 1 in this high-q range. Thus, equation (A1) can be safely used with the experimentally measured form factor for ellipsoidal micelles of typical size 1.5 to 2.5 nm, adsorbed and interacting on the surface of an index-matched silica sphere of much larger radius. Limits of this approach may be a weaker separation in length scales, e.g. caused by a very high surface density of ellipsoidal micelles, which is not the case here. Note that we have applied such calculations to the similar case of interacting cylindrical (albeit non-adsorbed) micelles. 22
background of the samples was subtracted by enforcing a high-q Porod (I(q) = Pq -4 ) behaviour, with background intensity values typical for H 2 O/D 2 O mixtures. Transmission Electron Microscopy. TEM images were taken using a Tecnai G 2 20 S-Twin electronic microscope operating at an accelerating voltage of 200 kV and an electron source of LaB 6 . Samples for TEM were first diluted to 0.8 wt-% in ethanol and then prepared by drying a droplet of the dilute dispersion on a copper grid (coated with a carbon film with a thickness of 20 nm). TEM images were taken at a minimum of 5 different locations on the grid, and a total of 220 particles were measured per sample to ensure good statistics in the determination of the particles size. Nitrogen Adsorption. The specific surface area of the silica samples was determined by nitrogen adsorption at 77 K using a Gemini III 2375 Volumetric Surface Analyzer (Micromeritics). For this purpose the silica dispersion was dried at 218 K for two days using a Freeze Dryer Alpha 2-4 LD/plus (Martin Christ), and then outgassed at 393 K for 1 h under vacuum.
typical layer thickness of the adsorbed surfactant is δ = 2·(17.3-15.8) = 3 nm. From the high-q region (Porod regime) we obtain the volume-based surface area S/V of the surface aggregates, since the concentration of free micelles in solution is negligible at the chosen surfactant concentrations. The respective value for free micelles can be derived from the scattering profile of the surfactant in the absence of silica (Fig. 3). One finds that S/V for the free micelles is about 10% lower than for the surface aggregates. The similar magnitude of the two values implies similar morphologies of the surfactant aggregates in solution and on the surface. This excludes adsorbed half-micelles, 13 which would require considerably more surface area.
Figure 4 .
4In these fits the polydispersity in radius of the silica particles is taken into The model gives a good fit to the scattering data in the low-q regime including the maximum at intermediate q, if the theoretical intensities are multiplied by a scale factor f = 0.22 (δ = 3.2 nm) or f = 0.12 (δ = 4.0 nm) (cf. insetFig. 4). As the scattered intensity is proportional to the surfactant volume on each bead, this implies that the core-shell model with such a film thickness considerably overestimates the adsorbed surfactant volume. On the other hand, a fit with the layer thickness δ eff = 1.6 nm reproduces all features of I(q) without any scale factor f, except for the shoulder at q ≈ 1.0 nm -1 , causing some underestimate of the specific surface area of adsorbed surfactant. In addition, the maximum in I(q) appears at somewhat too high q, indicating that the real value of the average layer thickness should be greater than 1.6 nm. The agreement with the experimental data for the film thickness 1.6 nm is of course related to the fact that in this case the volume of adsorbed surfactant is conserved.However, although the core-shell model with a layer thickness 1.6 nm gives a good representation of the scattered intensities, the result is unrealistic because this value of δ corresponds to only about half the thickness expected for a bilayer of C 12 DAO at the solid/solution interface. Results similar to those shown inFig. 4were also obtained for lower surface concentrations of C 12 DAO on silica II and for the adsorption of C 12 DAO on silica I.
In order to check if a model of spherical micelles is consistent with the data, their radius and number was estimated directly from the scattering curves. The micellar radius must be approximately half the thickness of the layer, and N mic and R mic are related by volume conservation to the amount of adsorbed surfactant as determined either by the adsorption isotherm or by the low-q fit of the core-shell model with factor f. Similarly, the amount of surface produced by N mic spheres of radius R mic must match the volume-related surface area determined from Porod's law. This constrains the model considerably, and parameters can only be varied in a narrow range.Fitting of the scattered intensities with this model and the estimation of the real adsorbed surfactant volume, V tot , was based on the results of the core-shell model for the layer thicknesses δ = 3.2 and 4.0 nm. The real adsorbed surfactant volume was determined by introducing the effective volume fraction of surfactant in the shell, X (i.e., fraction of the layer volume occupied by the surfactant), which is related to the scale factor f by f X = . For example, for silica II at the surface concentration ⅞Γ mx , the scale factor f = 0.22 introduced for the layer thickness 3.2 nm implies that only 47% of the layer volume is occupied by C 12 DAO. The total surface area of the adsorbed surfactant, A tot , was calculated from the volume-related surface area of adsorbed surfactant as determined by Porod's law, (S/V) surf = P/2π∆ρ 2 , and the number density of silica beads, the relation A tot = (S/V) surf /(N/V) S .The number and dimensional parameters of surface aggregates of different morphologies were estimated from the real volume, V tot , and total surface area, A tot , of adsorbed surfactant.
DAO formed at the surface of spherical silica nanoparticles with diameters from 16 to 42 nm. In agreement with results for other nonionic surfactants studied earlier (Triton X-100 5 and C 12 E 5 7 ) it is found that C 12 DAO does not form a laterally uniform adsorbed layer on the surface of the silica nanoparticles, but rather they form small surface aggregates. The present work presents evidence for a morphological transition of the surface micelles as a function of the particle size of the silica nanoparticles. Spherical surface aggregates are formed on particles of 16 nm, but oblate ellipsoidal surface micelles on silica particles of 27 and 42 nm diameter. The formation of spherical surface micelles of C 12 DAO on the surface of the smallest silica particles is favored because this kind of morphology optimizes the surface free energy by increasing the contact area between the surface micelles and the silica surface. This energy decreases as the surface curvature of the silica particles decreases with increasing particle size, and thus aggregates of lower curvature, such as oblate ellipsoids are favored on larger silica particles. These ellipsoidal surface aggregates are characterized by a minor semi-axis R n and a major semi-axis R lat , which also defines the effective surface area of the surface aggregates. The dimensions of the ellipsoidal surface aggregates are similar to those of C 12 DAO micelles in the aqueous solutions. From a comparison of the present results with those of the preceding studies it is concluded that the shape of the surface aggregates (spherical or ellipsoidal) is not determined by the size of surfactants head group. However, for spherical surface micelles it appears that the nature of the head group can have a strong influence on the maximum number of surface aggregates per particle. Specifically, the maximal number of surface micelles of C 12 DAO is much smaller than for C 12 E 5 at the same silica, presumably due to its less hydrophilic character. However, further systematic work is needed to clarify the role of surfactant head group -surface interactions on the type of surface aggregates and the maximum number density of aggregates on the silica particles. We are planning such studies in our laboratories. The present work shows that the micelle-decorated silica model provides a reliable and versatile basis for determining size and shape and the number of surface aggregates of amphiphiles on spherical nanoparticles from SANS scattering data. The results obtained on the basis of this form-factor model are consistent with those derived by a simple geometric analysis of the Guinier and Porod regimes of the SANS data. Acknowledgments The authors wish to thank D. Berger for help with the Transmission Electron Microscopy. D.L. is grateful to Deutscher Akademischer Austauschdienst (DAAD) and the Fundación Gran Mariscal de Ayacucho (Fundayacucho) for receiving a doctoral scholarship, and to TU Berlin for a Promotionsabschluß-Stipendium. Financial support by DFG through project FI 235/16-2 and the cooperation initiated in the framework of the French-German Network "Complex Fluids: From 3 to 2 Dimensions" (Project FI 235/14-3) is also gratefully acknowledged. Appendix Scattering from index-matched silica particles decorated with ellipsoidal micelles Micelles adsorbed on the surface of a sphere are confined to a spherical shell. Correlations between their centers of mass are generated by excluded volume and possibly other interactions between the micelles. 
In our experiments, silica nanoparticles are index-matched, i.e., they do not contribute to the signal. Scattering from interacting spherical micelles can be factorized into a product of form and structure factor:
In Fig. A.1, the result for an isolated oblate ellipsoid at 1 vol.-%, with a contrast of Δρ = 5.1 × 10¹⁰ cm⁻², is shown.
Fig. A.1. Comparison of |⟨F(q)⟩|² and ⟨|F(q)|²⟩ (plotted as I in cm⁻¹ versus q in nm⁻¹) for an isolated oblate ellipsoid with R n = 1.47 nm and R lat = 2.47 nm.
Figures
Figure 1. TEM images and particle size distribution histograms for silica I, silica II and silica III. The histograms are based on the diameters of at least 200 different particles from different TEM images.
Figure 2. Scattering profile I(q) of a dilute dispersion of silica II in nearly pure D 2 O at pH 9 (298 K). The solid line represents a fit with the log-normal size distribution function. The inset shows I(q) for this silica in contrast-matching H 2 O/D 2 O to indicate the quality of the contrast match. The constant background arises mostly from incoherent scattering.
Figure 3. SANS profiles I(q) for C 12 DAO in the contrast-matching H 2 O/D 2 O in the absence and presence of silica II. Also shown is the scattering profile for the silica-containing system with added 0.1 M NaBr. The scattered intensities were normalized with the volume fraction φ of the surfactant in the system.
Figures 4 and 5. Experimental SANS profiles I(q) and intensities predicted by the spherical core-shell model.
Figure 7. Scattering profile for C 12 DAO (surface concentration ⅞Γ mx ) on silica II and results predicted by the micelle-decorated silica model for ellipsoidal surface micelles: (a) varying the normal semi-axis R n , at fixed R lat = 2.2 nm and N mic = 94; (b) varying the lateral semi-axis R lat , at fixed R n = 1.5 nm and N mic = 94.
(A2)
The last equality is a direct consequence of spherical symmetry, the average being (also) a rotational one. The case of interacting ellipsoidal micelles can be deduced in the same way as equation (A1) by simply keeping |⟨F(q)⟩|² and ⟨|F(q)|²⟩ separate:
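Since the explicit expression is not reproduced here, the following minimal numerical sketch illustrates the two orientationally averaged quantities entering this decoupling, |⟨F(q)⟩|² and ⟨|F(q)|²⟩, for an oblate ellipsoid with the semi-axes quoted in Fig. A.1. Absolute intensities would additionally require the contrast and volume-fraction prefactor; the grid sizes are arbitrary choices.

```python
import numpy as np

def ellipsoid_amplitude(q, alpha, R_n, R_lat):
    """Normalized amplitude F(q, alpha) of an ellipsoid of revolution with
    semi-axes R_n (symmetry axis) and R_lat (equatorial); alpha is the angle
    between q and the symmetry axis, with F(q -> 0) = 1."""
    r = np.sqrt((R_n * np.cos(alpha))**2 + (R_lat * np.sin(alpha))**2)
    x = q * r
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def orientational_averages(q, R_n, R_lat, n_alpha=400):
    """Return <|F(q)|^2> and |<F(q)>|^2 averaged over particle orientations."""
    alpha = np.linspace(1e-4, np.pi / 2, n_alpha)
    w = np.sin(alpha)                                  # solid-angle weight
    F = ellipsoid_amplitude(q[:, None], alpha[None, :], R_n, R_lat)
    norm = np.trapz(w, alpha)
    mean_F2 = np.trapz(F**2 * w, alpha, axis=1) / norm
    mean_F_sq = (np.trapz(F * w, alpha, axis=1) / norm)**2
    return mean_F2, mean_F_sq

q = np.logspace(-2, 1, 200)                            # q in nm^-1, as in Fig. A.1
mean_F2, mean_F_sq = orientational_averages(q, R_n=1.47, R_lat=2.47)
# Absolute intensities follow by multiplying with phi * V_mic * delta_rho**2
# for the 1 vol.-% and contrast quoted above.
```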
Table 2. Characterization of the silica dispersions by SANS.*

Silica Sol | R S (nm) | σ    | <R S > (nm) | <A S > (nm²) | <V S > (nm³)
I          | 8.20     | 0.10 | 8.24        | 8.62·10²     | 2.42·10³
II         | 13.50    | 0.10 | 13.57       | 2.34·10³     | 1.08·10⁴
III        | 21.00    | 0.10 | 21.11       | 5.65·10³     | 4.06·10⁴

*Parameters of the log-normal size distribution, R S and σ; average radius <R S >, average surface area <A S > and average volume <V S > of the silica beads.
Table 3. Surface characterization of the silicas.*

Silica Sol | R TEM (nm) | SD TEM | ζ (mV) | κ (µS cm⁻¹)
I          | 8.3        | 0.126  | -45.6  | 121
II         | 13.7       | 0.104  | -43.2  | 66
III        | 21.3       | 0.027  | -32.7  | 69

*Specific surface area a s from the BET analysis; geometrical surface area a geom from the particle radius and mass density of silica.

Table 4. Characteristics of the surfactant layer adsorbed at the silica nanoparticles derived from the SANS data and from the adsorption isotherm.* *Dry volume V dry and effective thickness δ of the adsorbed C 12 DAO layer; N mic is the number of surface micelles.
Table 5. Parameters of the micelle-decorated silica model for C 12 DAO on silica particles.*

Sample                 | R mic (nm)  | N mic
Silica I (spherical)   |             |
  ½Γ mx                | 1.63        | 38
  ¾Γ mx                | 1.97        | 36
  ⅞Γ mx                | 1.97        | 42
Silica II (oblate)     |             |
  ¼Γ mx                | 1.5 / 2.2   | 26
  ½Γ mx                | 1.5 / 2.2   | 55
  ¾Γ mx                | 1.5 / 2.2   | 87
  ⅞Γ mx                | 1.5 / 2.2   | 94
Silica III (oblate)    |             |

*Silica I: best-fit values of R mic and N mic for spherical surface micelles. Silica II and silica III: best-fit values of N mic for oblate ellipsoidal surface micelles at fixed values of R n and R lat (quoted as R n / R lat ).
Figure 6. SANS profiles I(q) for silica II and silica III with adsorbed C 12 DAO in contrast-matching H 2 O/D 2 O and fits by the micelle-decorated silica model for ellipsoidal micelles (solid curves): (a) silica II at surface concentrations ¼Γ mx , ½Γ mx , ¾Γ mx and ⅞Γ mx ; (b) silica III at the surface concentration ⅞Γ mx of C 12 DAO (parameters see Table 5).
Dietsch, O.; Eltekov, A.; Bock, H.; Gubbins, K.E.; Findenegg, G.H. J. Phys. Chem. C 2007, 111, 16045-16054.
(a) Grant, L.M.; Tiberg, F.; Ducker, W.A. J. Phys. Chem. B 1998, 102, 4288-4294. (b) Grant, L.M.; Ederth, T.; Tiberg, F. Langmuir 2000, 16, 2285-2291. (c) Blom, A.; Duval, F.P.; Kovács, L.; Warr, G.G.; Almgren, M.; Kadi, M.; Zana, R. Langmuir 2004, 20, 1291-1297.
(a) Cummins, P.G.; Staples, E.; Penfold, J. J. Phys. Chem. 1990, 94, 3740-3745. (b) Cummins, P.G.; Staples, E.; Penfold, J. J. Phys. Chem. 1991, 95, 5902-5905. (c) Cummins, P.G.; Penfold, J.; Staples, E. J. Phys. Chem. 1992, 96, 8092-8094.
Penfold, J.; Staples, E.; Tucker, I.; Cummins, P.G. J. Phys. Chem. 1996, 100, 18133-18137.
Despert, G.; Oberdisse, J. Langmuir 2003, 19, 7604-7610.
Oberdisse, J. Phys. Chem. Chem. Phys. 2004, 6, 1557-1561.
Lugo, D.; Oberdisse, J.; Karg, M.; Schweins, R.; Findenegg, G.H. Soft Matter 2009, 5, 2928-2936.
Li, X.; Lin, Z.; Cai, J.; Scriven, L.E.; Davis, H.T. J. Phys. Chem. 1995, 99, 10865-10878.
Menge, U.; Lang, P.; Findenegg, G.H.; Strunz, P. J. Phys. Chem. B 2003, 107, 1316-1320.
Sterpone, F.; Marchetti, G.; Pierleoni, C.; Marchi, M. J. Phys. Chem. B 2006, 110, 11504-11510.
(a) Chernik, G.G.; Sokolova, E.P. J. Colloid Interface Sci. 1991, 141, 409-414. (b) Benjamin, L. J. Phys. Chem. 1964, 68, 3575-3581. (c) Kresheck, G.C. J. Am. Chem. Soc. 1998, 120, 10964-10969. (d) Timmins, P.A.; Hauk, J.; Wacker, T.; Welte, W. FEBS Letters 1991, 280, 115-120. (e) Timmins, P.A.; Leonhard, M.; Weltzien, H.U.; Wacker, T.; Welte, W. FEBS Letters 1988, 238, 361-368. (f) Barlow, D.J.; Lawrence, M.J.; Zuberi, T.; Zuberi, S. Langmuir 2000, 16, 10398-10403.
Király, Z.; Findenegg, G.H. Langmuir 2000, 16, 8842-8849.
(a) Pettersson, A.; Rosenholm, J.B. Langmuir 2002, 18, 8436-8446. (b) Pettersson, A.; Rosenholm, J.B. Langmuir 2002, 18, 8447-8454.
Stöber, W.; Fink, A.; Bohn, E. J. Colloid Interface Sci. 1968, 26, 62-69.
Brûlet, A.; Lairez, D.; Lapp, A.; Cotton, J.-P. J. Appl. Cryst. 2007, 40, 165-177.
Lindner, P. In Neutrons, X-rays and Light: Scattering Methods Applied to Soft Condensed Matter; Lindner, P., Zemb, Th., Eds.; Boston, 2002; 1st edn., ch. 2, pp 23-48.
Oberdisse, J.; Deme, B. Macromolecules 2002, 35, 4397-4405.
Lowell, S.; Shields, J.E.; Thomas, M.A.; Thommes, M. Characterization of Porous Solids and Powders: Surface Area, Pore Size and Density; Kluwer Academic Publishers: Dordrecht, 2004.
Tanford, C. J. Phys. Chem. 1972, 76, 3020-3024.
Holmberg, K.; Jönsson, B.; Kronberg, B.; Lindman, B. Surfactants and Polymers in Aqueous Solution, 2nd ed.; John Wiley & Sons, 2003.
Pusey, P.N. In Neutrons, X-rays and Light: Scattering Methods Applied to Soft Condensed Matter; Gabrys, B.J., Ed.; Gordon and Breach Science Publishers: Netherlands, 2000; 1st ed., chap. 4, pp 77-102.
Israelachvili, J.N.; Mitchell, D.J.; Ninham, B.W. J. Chem. Soc. Faraday Trans. 2 1976, 72, 1525-1568.
Oberdisse, J.; Regev, O.; Porte, G. J. Phys. Chem. B 1998, 102, 1102-1108.
| []
|
[
"Smart at what cost? Characterising Mobile Deep Neural Networks in the wild",
"Smart at what cost? Characterising Mobile Deep Neural Networks in the wild"
]
| [
"Mario Almeida [email protected] ",
"Stefanos Laskaridis [email protected] ",
"Abhinav Mehrotra [email protected] ",
"Lukasz Dudziak [email protected] ",
"Ilias Leontiadis [email protected] ",
"Nicholas D Lane [email protected] ",
"\nSamsung AI Center\nCambridge\n",
"\nUniversity of Cambridge\n\n"
]
| [
"Samsung AI Center\nCambridge",
"University of Cambridge\n"
]
| []
| With smartphones' omnipresence in people's pockets, Machine Learning (ML) on mobile is gaining traction as devices become more powerful. With applications ranging from visual filters to voice assistants, intelligence on mobile comes in many forms and facets. However, Deep Neural Network (DNN) inference remains a compute intensive workload, with devices struggling to support intelligence at the cost of responsiveness. On the one hand, there is significant research on reducing model runtime requirements and supporting deployment on embedded devices. On the other hand, the strive to maximise the accuracy of a task is supported by deeper and wider neural networks, making mobile deployment of state-of-the-art DNNs a moving target.In this paper, we perform the first holistic study of DNN usage in the wild in an attempt to track deployed models and match how these run on widely deployed devices. To this end, we analyse over 16k of the most popular apps in the Google Play Store to characterise their DNN usage and performance across devices of different capabilities, both across tiers and generations. Simultaneously, we measure the models' energy footprint, as a core cost dimension of any mobile deployment. To streamline the process, we have developed gaugeNN, a tool that automates the deployment, measurement and analysis of DNNs on devices, with support for different frameworks and platforms. Results from our experience study paint the landscape of deep learning deployments on smartphones and indicate their popularity across app developers. Furthermore, our study shows the gap between bespoke techniques and real-world deployments and the need for optimised deployment of deep learning models in a highly dynamic and heterogeneous ecosystem. | 10.1145/3487552.3487863 | [
"https://arxiv.org/pdf/2109.13963v1.pdf"
]
| 238,215,587 | 2109.13963 | c92123e7cd34fb764bdcfe45986386a9b3636333 |
Smart at what cost? Characterising Mobile Deep Neural Networks in the wild
Mario Almeida [email protected]
Stefanos Laskaridis [email protected]
Abhinav Mehrotra [email protected]
Lukasz Dudziak [email protected]
Ilias Leontiadis [email protected]
Nicholas D Lane [email protected]
Samsung AI Center
Cambridge
University of Cambridge
Smart at what cost? Characterising Mobile Deep Neural Networks in the wild
PREPRINT: Accepted at the ACM Internet Measurement Conference (IMC), 2021. * Indicates equal contribution.
CCS CONCEPTS: • Computing methodologies → Machine learning; • Computer systems organization → Embedded and cyber-physical systems; • Information systems → Computing platforms.
KEYWORDS: Deep Neural Networks, Mobile Systems, Benchmarking
With smartphones' omnipresence in people's pockets, Machine Learning (ML) on mobile is gaining traction as devices become more powerful. With applications ranging from visual filters to voice assistants, intelligence on mobile comes in many forms and facets. However, Deep Neural Network (DNN) inference remains a compute intensive workload, with devices struggling to support intelligence at the cost of responsiveness. On the one hand, there is significant research on reducing model runtime requirements and supporting deployment on embedded devices. On the other hand, the strive to maximise the accuracy of a task is supported by deeper and wider neural networks, making mobile deployment of state-of-the-art DNNs a moving target.In this paper, we perform the first holistic study of DNN usage in the wild in an attempt to track deployed models and match how these run on widely deployed devices. To this end, we analyse over 16k of the most popular apps in the Google Play Store to characterise their DNN usage and performance across devices of different capabilities, both across tiers and generations. Simultaneously, we measure the models' energy footprint, as a core cost dimension of any mobile deployment. To streamline the process, we have developed gaugeNN, a tool that automates the deployment, measurement and analysis of DNNs on devices, with support for different frameworks and platforms. Results from our experience study paint the landscape of deep learning deployments on smartphones and indicate their popularity across app developers. Furthermore, our study shows the gap between bespoke techniques and real-world deployments and the need for optimised deployment of deep learning models in a highly dynamic and heterogeneous ecosystem.
INTRODUCTION
The recent popularity of Deep Neural Networks (DNNs) has seen them applied to a myriad of areas, from computer vision [29] to speech recognition [10] and machine translation [58]. DNNs are no longer deployed only in datacenters [28], as they have found their way into mobile devices, ranging from IoT devices to flagship smartphones and self-driving cars. In fact, a large part of what makes smartphones smart can be attributed to the ever-increasing support for machine learning, be it in the form of camera optimisations, intelligent assistants or text predictions.
While DNNs have become more and more accurate, this was frequently at the expense of an increased number of parameters, energy consumption and computational load [3,29,32,55], often resulting in poor performance on resource-restricted mobile and embedded devices [3,40,72].
To address these challenges, there has been significant research towards mobile-specific DNN optimisations. Firstly, researchers have designed various mobile-specific architectures either manually [31,39] or automatically, through Network Architecture Search (NAS) [59]. Secondly, numerous works have looked into reducing computation through weight sparsification and pruning [41] and quantisation [26]. Thirdly, kernel optimisations have been proposed for mobile SoCs [13]. Last but not least, inference offloading is an alternative approach where computation is partly or wholly outsourced to a remote endpoint for faster results [35,38].
At the same time, recent developments on mobile SoCs enable smartphones to support higher DNN computational throughput at a lower energy budgets [33,65], either through heterogeneous multicore processors (e.g. ARM big.LITTLE and DynamIQ) or through specialised hardware (e.g. DSPs and NPUs). However, the device ecosystem remains very heterogeneous, ranging from cheaper devices with older processors to flagship devices with dedicated processing units. As a result, it is extremely hard for developers to assess the performance and optimise their DNN models for each possible device tier [68].
In this work, we attempt to measure what the actual mobile ML landscape looks like in the wild by studying real-world DNNs, as deployed with the most popular applications of the Google Play Store. Our goal is to examine whether real-life deployments follow the state-of-the-art of ML research and to identify performance bottlenecks over devices of different tiers and generations. The gained experience will provide insights on the system and model-level optimisations required to push the current frontier of mobile intelligence. In particular, we make the following contributions:
• We design a system, named gaugeNN, that automates the extraction, analysis and benchmarking of DNN models found in the most popular apps in the wild.
• Using gaugeNN, we analyse over 16k (33k across two snapshots) Google Play Store apps with respect to their DNN models. We characterise these models in terms of their usage, architecture, layer operations and optimisations, as well as external cloud-based DNN API calls.
• We compare our latest snapshot with a previous version of the Google Play most popular apps from 12 months earlier and comment on the trajectory of DNN mobile penetration in the past year.
• We perform a runtime measurement of hundreds of these DNN models across heterogeneous devices of different capabilities to further characterise these models in terms of their achieved latency and energy consumption.
• We analyse model and system-level optimisations supported by publicly available toolsets and provide an overview of the current DNN optimisation landscape available to developers, along with practical guidelines for improving the development and deployment of future DNNs.
RESEARCH QUESTIONS & RESULTS
With our study, we aim to answer the following Research Questions (RQ) that arise:
RQ#1: Given the forefront of ML research and the multitude of tools and devices in the wild, what kind of models are being deployed in mobile apps and utilised by developers, and for which tasks?
RQ#2: In a highly heterogeneous ecosystem of smartphones, how are these models deployed and are they able to perform efficiently across different targets and tasks?
RQ#3: What are common model and system-level optimisations being used to make inference in the wild faster on smartphones? Can they be improved?
Results: Our results indicate that mobile developers choose to deploy simple off-the-shelf models on-device, potentially pretrained or fine-tuned for targeting different tasks, and often rely on cloud offloading to support larger tasks. This minimises the burden to the app developer and capitalises on existing, widely available models. Furthermore, we witness that devices of different tiers and generations have widely varying performance over the benchmarked models, with the low-tier devices being significantly slower in DNN-based tasks. When it comes to performance per watt, we notice a general trajectory of devices getting incrementally more efficient from generation to generation, with SoCs integrating more and more specialised hardware in the die. However, the same trajectory cannot be traced in battery technology, which remains largely the same and mainly varies depending on the device's form factor. Last, we have observed that off-the-shelf model-level optimisations deployed with major frameworks more often than not do not result in latency or memory benefits during inference, but are focused on compressibility of the model. Simultaneously, SoC vendor-specific tools offer a significant benefit in runtime, at the expense of generality of the deployed models. Still, we found no significant evidence of target-specific model deployment in the wild.
METHODOLOGY
To fulfil these diverse characterisation goals, we employ the three step methodology depicted in Fig. 1. First, we crawl the Google Play Store to find the DNN models from within the most popular apps among mobile users and extract their associated ML models, validating them against certain rules (grey boxes). Second, we perform a device-agnostic app and model analysis (purple boxes). Specifically, we look at the app's store metadata, where the DNN is used, as well as the model's layers and operations. Finally, we benchmark the models on different devices to analyse their performance upon deployment (blue box). To automate this process and analyse ML models at scale we designed gaugeNN. We describe below each component in greater detail.
DNNs retrieval
The first step in our methodology is to find, extract and validate the DNNs from the Google Play Store's most popular apps. App crawling. First, gaugeNN mimics the web API calls made from the Google Play store of a typical mobile device to crawl the Google Play Store. In these requests, both the user-agent and locale headers are defined, which determine the variant of the store and apps retrieved. To perform the crawling, we fetch the list of the top free apps per category, which returns a maximum of 500 apps. Additionally, gaugeNN stores the store metadata for each app, including popularity, category, reviews, etc., in an ElasticSearch instance for quick ETL 1 analytics and cross-snapshot investigations (Sec. 4). Model extraction. Given the downloaded apps, gaugeNN proceeds to extract the DNN models from each application's package. Traditionally, Android applications are packaged in a zip file, i.e. apk, which comes with the Java/Kotlin "bytecode" along with resources used by the app (e.g. textures, images, fonts). Apks have a size limit of 100MB and files, such as DNN weights, can have a larger storage footprint. As a result, Google Play allows additional content to be shared either with expansion files [21] (OBBs) or through Android App Bundles via Play Asset Delivery [20]. The former supplement the main apk file and are hosted and served by Google Play, whereas the latter offers the possibility of downloading assets on demand, as needed for a given device. gaugeNN supports file extraction from i) the base apk, ii) expansion files (OBBs) and iii) Android App Bundles, but does not track asset delivery outside of Google Play. Extracted files are matched against a compiled list of 69 known DNN framework formats (listed in the Appendix) to identify potential DNN models. Model validation. Many models use generic file formats (e.g., protobuffer). Therefore, the number of candidate model files and extensions is quite large and benchmarking all prospective ones quickly becomes computationally prohibitive at scale. Therefore, inspired by the open-source Netron [53] tool, gaugeNN employs a lightweight, framework- and format-specific validation process to remove files that are not DNN models. This validation consists of checking the binary signature of the file for the presence of specific identifiers that a framework uses. For example, for TFLite, we know that the FlatBuffer files representing models include specific headers at certain positions of the binary file, thus we check for the existence of e.g. the string "TFL3" there. On the downside, encrypted and obfuscated models do not match such validation rules and are not extracted in our analysis. Moreover, models downloaded on demand by the application outside of the official Google Play distribution mechanisms are omitted from our benchmarks. However, we do track applications using such models indirectly by means of library inclusion in the application code and native libraries, even without explicitly analysing the models. The native code detection follows the methodology of Xu et al. [70].
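A minimal sketch of such a signature check is shown below. The "TFL3" FlatBuffer identifier at byte offset 4 is the real TFLite rule; the structure of the rule table and the directory layout are illustrative assumptions rather than gaugeNN's exact implementation.

```python
import pathlib

# (extension -> (byte offset, magic bytes)) validation rules. The TFLite rule
# is real; other formats would need their own, format-specific entries.
RULES = {
    ".tflite": (4, b"TFL3"),
}

def looks_like_model(path: pathlib.Path) -> bool:
    """Cheap binary-signature check to filter candidate DNN model files."""
    rule = RULES.get(path.suffix.lower())
    if rule is None:
        return False
    offset, magic = rule
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(len(magic)) == magic

# Example: scan all files extracted from an apk for TFLite models.
candidates = [p for p in pathlib.Path("extracted_apk/").rglob("*")
              if p.is_file() and looks_like_model(p)]
```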
Offline DNN analysis
After collecting the top apps from each category, we analyse the usage of Deep Neural Networks in the wild. Apps can use DNN models in different ways: i) they can execute the models on-device or ii) offload the computation to external resources (e.g. cloud providers). In-app DNN models. After identifying the model files within an application, gaugeNN extracts their DNN architecture either by directly parsing the file or by using the associated framework's interpreter. A DNN model is typically represented as a DAG 2 , where layers are represented by vertices and data flows by edges. By going through each model's graph, gaugeNN registers the type of layer, its parameters (weights) and operations in a trace-based manner and uses this information to estimate the total operations 3 (#FLOPs) and model size (#parameters). Furthermore, we can later individually run these models and measure their inference latency, energy and memory footprint. DNN Cloud APIs. Alternatively, applications might integrate ML functionality through cloud-backed APIs, by means of offloading inference to a remote endpoint. To detect the usage of cloud-based DNN models, gaugeNN inspects the app code to search for common DNN framework API calls. Android apps are typically developed in Kotlin or Java and then compiled into dex format [16] and packaged within the app binary. It is possible to extract this dex binary from the app package and decompile it into a human-readable (smali [14]) format using the apktool [63] to inspect the original code API calls. gaugeNN automates the process of decompiling these binaries and performs string matching on the smali files to detect such API calls.
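The sketch below illustrates this decompile-and-grep approach. The apktool invocation is standard, but the specific API patterns listed are illustrative placeholders and not necessarily the exact signatures gaugeNN matches against.

```python
import pathlib
import subprocess

# Illustrative smali-level patterns for cloud-ML SDK calls; treat these as
# placeholders, not the definitive list used in this study.
CLOUD_API_PATTERNS = [
    "Lcom/google/firebase/ml",              # Firebase ML Kit classes
    "Lcom/amazonaws/services/rekognition",  # AWS Rekognition classes
]

def find_cloud_ml_calls(apk_path: str, out_dir: str):
    """Decompile an apk with apktool and grep the smali files for known
    cloud-ML API call signatures."""
    subprocess.run(["apktool", "d", "-f", apk_path, "-o", out_dir], check=True)
    hits = []
    for smali in pathlib.Path(out_dir).rglob("*.smali"):
        text = smali.read_text(errors="ignore")
        hits += [(str(smali), p) for p in CLOUD_API_PATTERNS if p in text]
    return hits
```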
Model benchmarking
Next, we describe how gaugeNN assesses the on-device run time and energy consumption of DNNs.
Devices. To assess the performance of the deployed DNN models at runtime (i.e. latency, energy, memory and CPU utilisation), we deploy these models on the devices of Table 1. The devices of the first group represent three distinct tiers of smartphones (low to high-end) and showcase the performance across heterogeneous clients, while the development boards of the second group represent high-tier SoCs from different generations, whose open design allows us to measure energy consumption through cable probes connected to a Monsoon power monitor (Fig. 2). Benchmark workflow. All benchmarks are written in native code and compiled for aarch64 with Android NDK. gaugeNN adopts a master-slave architecture depicted in Fig. 2. The server, where the models initially reside, is responsible for orchestrating the deployment and benchmarking of the models across client devices (phones), connected over USB. To control the power passthrough of mobile devices, we use a USB controller board [71] that can programmatically disable data and power channels during measurements. This component was necessary, as connecting the device over USB charges it, interfering with the energy measurements. The benchmarking workflow is depicted in Fig. 3. Initially, the master (left side) pushes all the necessary dependencies to the device (right side) through adb and asserts the initial device state (WiFi and sensors are off, maximum screen timeout, etc.). The benchmark consists of an unattended, headless script that runs on the device upon disconnection of the USB power, controlled through the USB board. This script is launched as a daemon process and performs the following tasks: 1) It waits until the USB power is off; 2) it runs a configurable number of warm-up inferences to remove cold cache outliers; 3) it runs the actual benchmark inferences with a configurable inter-experiment sleep period; 4) it turns on WiFi upon completion and communicates a TCP message through netcat to the server that the experiment is over. Subsequently, the server re-enables the USB power, connects over adb and gathers the job results before cleaning up and launching the next job. In the following sections, we present the findings of our experiments run with gaugeNN. First, we present an offline analysis of the apps and models found from crawling the Google Play Store (Sec. 4) and then we move to runtime analysis of these models on devices (Sec. 5) and specific optimisations (Sec. 6).
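For illustration, the sketch below outlines the master-side orchestration loop implied by the benchmark workflow above. The usb_hub object stands in for the (here hypothetical) API of the USB controller board, and the on-device paths, port number and result file name are assumptions rather than gaugeNN's actual layout.

```python
import socket
import subprocess

def adb(serial, *args):
    """Thin wrapper around an adb invocation for a specific device."""
    return subprocess.run(["adb", "-s", serial, *args], check=True)

def wait_for_done(port=9000, timeout=3600):
    """Block until the device reports completion over TCP (sent via netcat)."""
    with socket.socket() as srv:
        srv.bind(("0.0.0.0", port))
        srv.settimeout(timeout)
        srv.listen(1)
        conn, _ = srv.accept()
        conn.close()

def run_benchmark(serial, usb_hub, model, binary="/data/local/tmp/bench"):
    adb(serial, "push", model, "/data/local/tmp/model.bin")
    adb(serial, "shell", f"nohup {binary} /data/local/tmp/model.bin &")
    usb_hub.power_off(serial)      # hypothetical USB controller-board API
    wait_for_done()                # device re-enables WiFi and pings us back
    usb_hub.power_on(serial)
    adb(serial, "pull", "/data/local/tmp/results.json", f"{serial}_results.json")
```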
DATASET COLLECTION & ANALYSIS
In this section, we attempt to find an answer to RQ#1 with regards to DNN deployment in the wild. To this direction we first analyse our collected data with respect to the existence of DNN models in the top Google Play Store apps and their distribution to user devices. Then we move to more specific model and app categorisation and characterisation and finally draw conclusions about the trajectory of ML mobile deployment from our temporal analysis results.
Datasets
As shown in Table 2, we collected two snapshots of the top free Google Play apps, on the 14th of February 2020 and on the 4th of April 2021. At these points in time, Android devices represented 73.3% and 72.19% of the mobile OS market share [15,56], respectively. Data was collected from a UK-based account associated with a Samsung S10 (SM-G977B), downloading the most popular apps across all categories of the Google Play Store (up to 500 apps per category). This accounts for the top 0.6% of total applications available in the store (the Google Play Store is estimated to host 2.9M apps at the time of the latest snapshot [6]). In general, app downloads tend to follow a power law distribution [64]. Therefore, the most popular apps are installed on most users' phones while the rest follow a long tail. While we could not scale a study of paid apps for monetary reasons, these account for a very small percentage of downloaded apps [64]. For the rest of the paper, we report on the latest Play Store snapshot, unless explicitly stated otherwise.
Model distribution to devices
As described in Sec. 3.1, models in Android applications can be distributed post-installation (e.g. through OBBs or Asset Delivery). This allows developers to bypass the 100MB apk limit and to provide customised models for devices with different capabilities (e.g. devices with a specified NPU). To identify any models that are distributed post-installation, we downloaded all companion files and Google Play assets. We found no models being distributed outside of the main apk. Furthermore, we downloaded an extra snapshot using a device profile three Android generations older 5 , and found no evidence of device-specific model customisation.
ML frameworks
Next, we look into the models found per ML framework. Specifically, Fig. 4 depicts the number of models successfully extracted, validated and benchmarked, per category and ML framework. These models represent 90.72% of the total apps including ML libraries in their codebase (Table 2), with the rest accounting for obfuscated, encrypted or lazily downloaded models. In total these account for 1,666 models -1436 (86.19%) TFLite, 176 (10.56%) caffe, 46 (2.76%) ncnn, 5 (0.3%) TensorFlow and 3 (0.18%) SNPE. TFLite is expectedly first in popularity, as the recommended solution from the OS provider for mobile ML inference. However, it is surprising to see caffe so widely used, since it has been long deprecated and replaced by caffe2 in 2017 and now PyTorch Mobile. Observations: These results illustrate a long latency between the state-of-the-art frontier of ML frameworks and their adoption for in-the-wild deployment.
Model categorisation
Here, we perform a quantitative analysis of DNN models and their respective apps and correlate them with metadata from the Google Play Store. Our aim is to categorise the most popular DNN-powered apps and characterise their usage. Fig. 4 shows the number of ML models per framework and Google Play category. We observe that the top DNN-powered apps belong to "communication" and "finance" tools with several DNNs for face and object detection (e.g. for detecting a card or ID to make transactions in the latter case). These are followed by more traditionally DNN-backed categories, such as "photography" and "beauty", which typically contain DNN-based filters to enhance photos. Potentially less expected categories include "food and drink", "dating" and "parenting". By manually examining these models, we found
To dig deeper into the purpose of each AI model, we manually looked into the naming, input/output dimensions and layer types of the encountered DNN models in order to characterise their usage. This labour intensive job was done across three ML researchers with a majority vote on the results. We were able to identify the usage of 1, 531 models, accounting for 91.9% of all models, with around 67% having names which hint either the model, task at hand or both (e.g. "hair_segmentation_mobilenet.tflite"). Our characterisation shows that the most popular task for deploying Deep Learning is computer vision (> 89% of all models), followed by NLP (17 models) and audio (15 models). Last, we found traces of DNN models (4 models) utilising sensor data, such as accelerometer, gyroscope, etc. Two anecdotal use-cases for sensor ML are horse movement tracking and car crash detection in insurance apps. Task-specific results are shown in Table 3, where it can be seen that most vision models were targeted at object, face and contour detection, most audio tasks at ambient sound recognition, most NLP tasks at textcompletion and sensor tasks at movement tracking. Observations: Vision models seem to be the most prevalent, with a focus on object and face detection and text recognition and used mostly across communication, photography and beauty apps.
Model uniqueness characterisation.
Diving deeper into the models distributed amongst the most popular applications, we found that not all models are bespoke or unique. Overall, we witness DNN models spread across different application categories, with a significant portion of these being off-the-shelf models without customisation. In fact, after checking for unique checksums on these models and their respective weights 6 , we find that only 318 models (19.1% of the models, as shown in Table 3) are unique. For the most prevalent vision task, i.e., object detection, FSSD [43] seems to be the most popular model. We found such occurrences even within popular Google apps (e.g. "Gallery Go" and "Arts & Culture"). For face detection, BlazeFace [8] is another very popular model. Spanning across tasks, MobileNet [31] seems to be the most popular architecture, with variants (e.g. FSSD) being used for other vision tasks including semantic segmentation, pose estimation or classification. Last, we encounter multiple occurrences of models tackling a common task, e.g. recognising information from credit cards [60], such as names and dates.
Model fine-tuning. Taking this analysis one step further, we perform a checksum-based analysis at a finer granularity (layer level) to see to what degree developers train their own models from scratch or fine-tune the last layers through transfer learning [49]. The intuition is that the first layers of the network typically extract low-level features (e.g. edges, shapes, etc. for vision tasks) that are shared between similar tasks, and only deeper in the DNN do the task-specific and semantically relevant features get extracted. Results from our analysis show that, excluding duplicate models, 9.02% of the remaining models share at least 20% of the weights with at least one other model. In fact, 4.2% of the models only differ in up to three layers, indicating that some developers only fine-tune small portions of the network, resulting in a significantly smaller training footprint and exploiting transfer learning from other (typically off-the-shelf) networks. Moreover, we checked for traces of online fine-tuning done on device (e.g. through TFLiteTransferConverter [61]) and found none, indicating that on-device fine-tuning is not yet widely exploited in the wild due to the significant computation requirements and the limited availability of labelled high-quality on-device datasets.
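As a rough illustration of this kind of layer-level analysis, the sketch below hashes the constant tensors of TFLite models and compares two models by the fraction of shared weight hashes. It uses the public tf.lite.Interpreter API rather than gaugeNN's internal tooling and is only an approximation of the checksum analysis described above.

```python
import hashlib
import numpy as np
import tensorflow as tf

def layer_weight_hashes(tflite_path):
    """Hash every accessible constant tensor (weights) of a TFLite model.
    Whole-file hashes catch exact duplicates; per-tensor hashes reveal models
    that share most layers and differ only in a few fine-tuned ones."""
    interp = tf.lite.Interpreter(model_path=tflite_path)
    interp.allocate_tensors()
    hashes = {}
    for d in interp.get_tensor_details():
        try:
            t = interp.get_tensor(d["index"])
        except ValueError:
            continue                       # intermediate tensors hold no data
        if isinstance(t, np.ndarray) and t.size > 0:
            hashes[d["name"]] = hashlib.md5(t.tobytes()).hexdigest()
    return hashes

def shared_fraction(model_a, model_b):
    """Fraction of model_a's weight tensors that also appear in model_b."""
    a, b = layer_weight_hashes(model_a), set(layer_weight_hashes(model_b).values())
    shared = sum(1 for h in a.values() if h in b)
    return shared / max(len(a), 1)
```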
Observations. Based on this type of evidence, we deduce that it is common for developers to leverage a pre-trained model that is widely available and pay the significantly smaller cost of training offline only a subset of the last DNN layers. While online on-device training is a prominent future avenue, be it through fine-tuning or federated learning, current support in mobile frameworks is limited and so are such deployments.
Temporal analysis across snapshots
As aforementioned, we took two distinct snapshots of the most popular apps in the Google Play Store, 12 months apart from each other. In this part of our analysis, we compare and contrast these two snapshots in terms of app popularity and in-the-wild DNN deployment and draw conclusions about the trajectory of ML penetration in smartphones nowadays. What is unique about our dataset is that we happened to measure DNN deployment across the COVID-19 pandemic, which had a crucial impact on human activity during the course of 2020/2021. For this reason, we also compare our temporal analysis with similar analyses done in the past [70], both i) to account for potential biases of our dataset during these exceptional circumstances and ii) to see how app popularity and, as an extension, DNN adoption, has been affected by these circumstances.

Figure 5: Individual models removed/added between two snapshots taken one year apart.

Results from our temporal analysis indicate a surging number of DNN models being deployed on the Android platform, essentially doubling in the course of 12 months. Specifically, the number of traced models went from 821 to 1.6k in our latest snapshot (Table 2), with most additions belonging to vision tasks. TFLite remains the dominant mobile inference framework, going from 81.6% to 86.1% of the total models found (2.15×). The increase in models was less pronounced for ncnn (1.18×) and caffe (1.69×). The latter is surprising given the fact that it has been deprecated and newer frameworks have taken its place (caffe2 and PyTorch Mobile). Finally, we observe a drop in TF (0.56×) adoption, which is expected given the increasing popularity of its mobile counterpart.
Next, we analyse the DNN models across snapshots per category of application to which they belong. Fig. 5 depicts the number of individual models that were removed/added across our snapshots, sorted by the difference between the two. Interestingly, most additions of ML models happened for communication tools, taking the lead from "photography" applications, which was the top ML-powered category of 2020. This can potentially indicate that communication apps became more important due to the pandemic, and developer focus was diverted to this category. A similar trend could be witnessed for "finance" applications, where we observed many models aimed at the automated identification of people and their ID cards. Whilst this traditionally constituted a manual process done in person in financial institution (e.g. banks), the pandemic might have created a new need for ML models to fill. Last, apps related to "health" and "medical" purposes seem to have a surging deployment of DNN models. On the other side of the spectrum, "lifestyle", "food & drinks" and "Android Wear" applications seem to be falling in terms of popularity, something that could be potentially attributed to the fact that people stay more at home.
Next, we integrate the results of previous analyses [57,70] to shape a more general trend for DNN adoption in the Android ecosystem. In [70], the authors report the total number of ML-backed apps going from 166 in June 2018 to 211 in September 2018. In [57], the authors traced 178 ML-powered apps; the snapshot date is not reported, thus we consider it to lie between that of [70], with which it compares, and the work's venue submission date.
Mobile DNNs layers and operations
After having coarsely characterised the models based on their input modality, target task and app category, we take a finer-grained look into the models and analyse their structure in terms of the layers and operations they contain. DNN layers and operation types. First, we go through the graph representing each DNN and trace the layer types they contain, grouping results per input modality. Results are shown in Fig. 6 for TFLite, NCNN and Caffe. We see convolution layers being amongst the most popular layer types across modalities (34%, 10%, 20% for image, text and audio, respectively). Originally applied in visual tasks, their usage nowadays spreads across recommender systems, natural language processing and time-series analysis. Variants such as depthwise-separable convolutions (depth_conv) [31] are computationally less heavy and are aimed at mobile deployments. Dense (or linear) layers are fully-connected layers that are typically found in the output of classification tasks, or in the implementation of RNNs. The majority of these layers are found in audio (19%) and text (9%) models. Activations essentially impose non-linearity in DNNs, and can be fused with the previous layer in terms of implementation. Thus, the existence of such operations as distinct layers is framework dependent. Last, "helper" layers such as math, quant, resize and slice perform arithmetic or matrix-manipulation operations and can be found across modalities. DNN #operations and #parameters. Next, we estimate the number of operations (in FLOPs) and parameters that each model contains by going through the graph in a trace-based manner. Concretely, we generate a random input with the DNN-specified input
Observations: We find that convolutions dominate the mobile DNN landscape due to their wide use in vision models, as well as the fact that they can map well on mobile hardware for efficient execution, compared to e.g. recurrent layers [72]. While depth-wise convolutions can significantly improve performance, their deployments are scarcer as they can impact the quality of the model. Furthermore, we find that there is huge variance in terms of FLOPs and parameters (four orders of magnitude) in the traced models. This might be attributed to the granularity of the task corresponding to a single inference. For example, in image recognition the input is typically an RGB image while in next-word prediction the input can be a couple of words.
RUNTIME ANALYSIS OF MOBILE DNNS
Up until now, we have focused our efforts on analysing the DNN models in an offline manner. In this section, we turn to on-device benchmarking and report on performance and energy when running the encountered models across the devices presented in Table 1. This analysis provides important insights about how real-world AI applications are performing on a heterogeneous set of devices, thus answering RQ#2.
On-device DNN latency
Prior work [3,33] has shown that FLOPs is not necessarily a good proxy for estimating a model's on-device performance. Reasons for such discrepancies include the underutilisation of hardware due to e.g. memory-bound operations, thermal throttling due to continuous inference or even due to scheduling on cores of different dynamics due to energy-saving scheduler policies on Heterogeneous Multi-Processors [36]. To further corroborate this fact, in Fig. 8 we depict the FLOPs and actual measured inference latency across devices for different models. Our analysis on real-world models on different devices reinforces this non-linear (line-fit) relationship as it not only varies for different model architectures, but also differs from one device to another.
To investigate this further, in Fig. 9 we show the ECDF of model runtime across all available devices. From the graph it is evident that the computing gap between a low-end device (A20) and a mid-tier device (A70) is considerably larger than the difference of mid-tier to high-end (S21). Specifically, low-end and mid-tier devices (A20 and A70) are 3.4× and 1.51× slower compared to S21. Across generations of high-end SoCs of the same manufacturer (Q845, Q855, Q888), we see incremental performance gains (i.e., average latency of 76, 58 and 35 ms), but noticeable, to the point that a next-gen mid-tier phone may perform better than the high-end SoC of a prior generation, despite claims about significant boosts in AI acceleration between generations. Last, we want to mention that for the two devices that integrate the same SoC (Q888 and S21), the open-deck design of the development board along with the vanilla variant of the OS leads to incrementally better results and faster inference overall. Heat dissipation of the open design, cross-manufacturer configurations and low-level configuration of the Android Scheduler can all be contributing factors. Observations: We observe a wide variability of inference latency across devices even for models that have similar FLOP counts, which reaffirms the need for on-device benchmarking. Devices of different tiers and generations offer variable dynamics, with the lower-tier falling significantly behind in performance. Even devices integrating the same SoC can offer variable performance due to vendor-specific configurations, the installed apps and drivers or even due to different thermal characteristics. Therefore, given this heterogeneity, it is hard for developers to accurately predict the users' experience without testing their models on a large sample of devices.
Energy consumption
In mobile settings, one cannot simply optimise for performance without taking energy consumption into consideration. While smartphone capabilities are growing larger every year, the same developments have not been witnessed in battery technology. Therefore, quantifying the cost of being smart in terms of energy is an important component in the mobile world. In this section, we report on the energy, power and efficiency of doing inference on device, across frameworks for the three Snapdragon boards representing different generations of devices. Fig. 10a shows the distribution of models with respect to the energy required per inference across our three devices. Expectedly, we see from the kernel density function lines that all three devices follow a similar trajectory, indicating that a similar amount of energy is required for similar workloads regardless of the device. On the other hand, this is not the case in terms of power consumption (Fig. 10b), where we can see newer generations of devices consistently drawing more power to run models. This is a direct implication of the fact that newer generations of devices can execute models faster, as shown in Fig. 9, while energy required remains similar.
Energy and power consumption per device.
Following these observations, we decided to calculate inference efficiency per each model by calculating how many floating-point operations can be executed in one second per one Watt 8 . As can be seen in Fig. 10c, trends in efficiency stay mostly the same across different devices, following energy consumption, but unlike energy we can see a minor improvement of the newer devices over Q845 in the middle of the distribution, suggesting that relatively more models can run more efficiently (median efficiency of 730, 765 and 873 MFLOP/sW, after removing outliers) on the newer hardware. 8 Effectively the same as calculating FLOPs per Joule.
Table 4: Scenario-driven energy consumption for three devices and use-cases in audio, text and vision (use-case vs. battery discharge in mAh).
5.2.2 Use-case driven energy consumption. Up to here, we have seen performance and energy consumption for single inferences. However, the quanta of data associated with each inference may vary considerably between tasks or modalities, as noted before in Sec. 4.7. Thus, we dive deeper into three selected tasks representative of each modality, namely i) sound recognition for audio, ii) auto-completion for text and iii) semantic segmentation for vision.
We make certain realistic assumptions on the data sizes, input granularity and frequency of results and then assess all relevant models belonging to this category. Specifically, for speech recognition, we assumed each model is run in order to recognize 1 hour of audio input. To derive how long a model would need to be run, we manually investigated the models and assumed the most likely amount of audio input per inference considering the model's input dimension and common practices in speech ML [10,47,51]. For text auto-completion we assumed each model is run once per new word typed by a user, and further assumed a workload of 275 words, derived from WhatsApp's statistics about average daily number and length of messages [12,54,66]. Last, for semantic segmentation, we assumed each model is used to segment a human at 15 FPS during a 1-hour-long video call in order to apply background effects; we further assumed that the model processes one frame per inference, which is the usual approach [11,45,73].
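As a worked example of how such per-inference energies translate into battery discharge, the sketch below converts Joules into mAh for an assumed nominal battery voltage of 3.85 V. The 1 J/frame figure is a hypothetical value, not a measurement from Table 4.

```python
def battery_drain_mah(energy_per_inference_j, n_inferences, battery_voltage=3.85):
    """Convert per-inference energy (J) and an inference count into battery
    discharge in mAh: 1 mAh at V volts stores 3.6 * V joules."""
    total_j = energy_per_inference_j * n_inferences
    return total_j / (3.6 * battery_voltage)

# Hypothetical example: a 1 J/frame segmentation model at 15 FPS for 1 hour
# drains roughly 3900 mAh, i.e. close to a full 4000 mAh battery.
print(battery_drain_mah(1.0, 15 * 3600))
```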
Results across the development boards are depicted in Table 4 we see that one hour of segmentation can result in a significant average reduction of 26.6% to 30.54% of a common 4000mAh battery capacity (e.g. A20 and S21). Moreover, the most energy hungry segmentation models can almost deplete the full battery capacity within an hour, with a 80.9% to 95.9% reduction. On the other end, models like auto-completion are ubiquitous across messaging apps and deliver both in terms performance and efficiency, allowing their frequent use without a significant impact on battery. Observations. Energy consumption is a major component in mobile, and intelligence comes at a cost to battery life. Unlike latency, which is visibly improved with new generations of devices, energy consumption seems to be predominantly dependent on the model architecture. Even though newer hardware might improve in power-efficiency, differences are much less pronounced compared to performance improvements, which are even less observable across different model architectures. This suggests that it is the AI developers who can optimise battery life the most, unlike plain latency which can be improved at multiple levels, including manufacturers.
AVAILABLE OPTIMISATIONS
After examining how real-world DNNs run on a heterogeneous set of devices, we now look into RQ#3 by means of DNN-specific as well as system-level optimisations aiming to improve inference and deployment performance.
Model-level Optimisations
In this section, we focus on the adoption of three model-level optimisations, namely i) weight clustering, ii) pruning and iii) quantisation, for the identified TFLite models. Clustering: Clustering refers to the technique of reducing the number of distinct weight values by representing them through their clusters' centroids [26]. We identify clusters of shared weights by searching for layers with a "cluster_" prefix on TFLite models. Despite the advertised potential for significant model size reductions [22], we report that none of the models in-the-wild seem to use weight clustering. This may be a result of either accuracy drops or the fact that the current clustering implementation does not reduce runtime memory and targets model compression only [22]. Pruning: Pruning refers to the technique of zero-ing out specific weights/channels of the network that have minimal impact on the output, due to representational redundancy in DNNs. Weight pruning can be detected during training by searching for layers with a "prune_" prefix for TFLite models. Nonetheless, this prefix is often removed for inference [23]. We report that we did not find any occurrence of such layers either. While this approach has the potential to skip the zero weight computations during inference, the current implementation benefits only from increased sparsity [62] which, like clustering, results only in model compressibility. To assess the potential for adopting magnitude-based weight pruning, we measured the weight sparsity for the tracked TFLite models. We find that, overall, 3.15% of weights are near zero (within ±10⁻⁹), which might show limited prospects for weight magnitude-based pruning. Quantisation: Finally, quantisation constitutes a prominent method for minimizing the computational and memory demands of DNNs by means of reducing their representation precision [34,69]. To study its adoption, we analysed the layer types and their weight and input bitwidth representations. We report that 10.3% of the models make use of the dequantize layer, which indicates the deployment of lower-precision models as a way to perform model compression. Furthermore, by examining each model's weights, we found that 20.27% of the models use int8 for the weight tensors whereas 10.31% of the models work with int8 activations.
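A minimal sketch of the weight-sparsity measurement is given below. It relies on the public tf.lite.Interpreter API and approximates, rather than reproduces, the exact procedure used in this study; the epsilon value follows the ±10⁻⁹ threshold quoted above.

```python
import numpy as np
import tensorflow as tf

def near_zero_weight_fraction(tflite_path, eps=1e-9):
    """Fraction of floating-point weights whose magnitude is below eps, as a
    rough indicator of how much existing sparsity magnitude-based pruning
    could exploit."""
    interp = tf.lite.Interpreter(model_path=tflite_path)
    interp.allocate_tensors()
    near_zero, total = 0, 0
    for d in interp.get_tensor_details():
        try:
            t = interp.get_tensor(d["index"])
        except ValueError:
            continue                      # tensors without constant data
        if isinstance(t, np.ndarray) and np.issubdtype(t.dtype, np.floating):
            near_zero += int(np.sum(np.abs(t) < eps))
            total += t.size
    return near_zero / max(total, 1)
```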
Recent hardware advances have led to NPUs that support multiple arithmetic precisions [7,44,52]. Such examples are the Hexagon 698 processor on Qualcomm Snapdragon 865 (SDM865) [52] and the Arm Ethos processor [7], which support 16-bit for activations and 8-bit for weights (A16W8). These schemes enable a better compromise between faster low-precision compute and having enough representational power to achieve good accuracy. In spite of the new opportunities of these hardware architectures, not only do existing deployment methodologies fail to exploit them but we also found no evidence of their adoption. We revisit the issue of quantisation with hardware-specific optimisations in Sec. 6.3, where we use the Google's NNAPI and Qualcomm's SNPE to target specific processors in the SoC. Observations: While the research community has developed numerous ways to optimise DNNs for mobile execution, out-of-the-box support for such optimisations in modern frameworks' can be primitive and might not translate to run time gains at the expense of accuracy. Furthermore, most optimisations typically require model re-training and access to large-scale datasets. As such, we find that such optimisations are not widely adopted by the mobile AI developers. Quantisation, which can also be used to target different SoC accelerators, is the most widely-used optimisation. However, more advanced hybrid quantisation schemes remain unsupported.
System-level optimisations
Upon deploying a model, developers have different setup choices that can affect the model's performance. In this section, we discuss the impact of different tuneable model and system parameters on model performance. Impact of batch size. One common way of increasing a model's throughput is batching input samples together. By taking advantage of SIMD instructions of SoCs and accelerators, this technique increases the DNN's throughput by producing multiple inference results in one forward pass. In Fig. 11, we show the batch throughput across devices when processing 2, 5, 10, and 25 samples at a time with 4 threads. We only consider TFLite models that successfully ran all batch sizes across all devices (149 in total). As expected, we see that the throughput increases as the batch size does. In fact, throughput scales almost linearly, which indicates that no bottleneck is hit up to that point. Moving the comparison across devices, we see that S21 offers significantly faster inference, with throughput being 2.14× and 5.42× higher compared to A70 and A20 respectively on the highest batch size. This result goes in line with our conclusions from Sec. 5.1. We anticipate that when scaling to higher batch sizes, devices with lower core count and memory will hit memory bandwidth bottlenecks or out of memory errors, but we defer this for future work. Impact of thread count. Another tuneable parameter during mobile execution is the number of threads allocated for execution on CPU. By default, all cores of the device can be simultaneously used during execution (ARM DynamIQ). However, in Heterogeneous Multi-core Processors (HMP) there usually exist multiple islands of cores, offering different dynamics and computational power. In Fig. 12 we show how the models' throughput varies when executed with different thread counts (2,4,8) and affinities (2,4). For the latter, we use process pinning to select which cores to target from the heterogeneous core sets. We observe that the optimal thread count can vary across devices, with A20, A70 and S21 performing better with 4, 2 and 4 threads, respectively. We also see that the 8-threaded performance drops significantly across devices, indicating bottlenecked execution.
Digging deeper into thread performance, we further plot four additional setups where we set the CPU affinity to run over a varying number of the largest cores. For example, 4a2 means 4 threads with affinity 2, i.e. 4 threads running over the top 2 cores of the mobile's SoC. As expected, we observe that any setup that sets the number of threads higher than the number of CPU affinity cores (4a2 and 8a4) results in significant performance degradation. This happens due to time-sharing, with the extra thread pinned on the same core left waiting. Nonetheless, we also witness some less expected findings, such as the fact that setting the affinity to the same number of top cores does not yield any significant gain, despite our initial hypothesis that it would reduce process migration between cores. In fact, 4a4 performs worse than 4 threads for A70, and the same holds for 2a2 versus 2 threads for A20. Predicting the optimal number of threads for mobile inference can be challenging, as mobile devices have different CPU architectures with varying core frequencies as well as DVFS-enabled schedulers implementing energy-preserving policies [36]. Moreover, most mobile devices nowadays incorporate HMP SoCs (i.e. ARM big.LITTLE, DynamIQ) with a varying number of cores per island (e.g. Q888 has 1×X1, 3×A78, 4×A55 ARM Cortex cores, whereas Q675 has 2×A76 and 2×A55 cores). Therefore, scheduling across core islands can bring sub-optimal results to DNN execution. However, when selecting the optimal thread count and affinity for each device, we see up to 2× throughput gains overall. This suggests that tuning the scheduling and thread count of DNN execution on heterogeneous devices and processors can yield significant improvements.
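A minimal sketch of how such an "NaM" setup (N threads pinned over M cores) can be reproduced in a Linux/Android shell environment is shown below; the core indices are assumptions for a hypothetical big.LITTLE SoC where the largest cores have the highest IDs, and input handling is omitted for brevity.

```python
import os
import tensorflow as tf

# Assumed topology: cores 0-3 are LITTLE, cores 4-7 are big; pin to the top 2 cores.
TOP_CORES = {6, 7}
os.sched_setaffinity(0, TOP_CORES)  # restrict this process (and its threads) to the pinned cores

# A "4a2" setup: 4 interpreter threads over 2 pinned cores, expected to time-share.
interpreter = tf.lite.Interpreter(model_path="model.tflite", num_threads=4)
interpreter.allocate_tensors()
interpreter.invoke()  # runs on whatever data allocate_tensors() initialised; timing code omitted
```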
Observations: Results from model-level optimisation indicate that there are alternative parameters for boosting inference throughput, but they should be tweaked in tandem with system-level factors, including the SoC topology and memory hierarchy to make efficient use of the underlying hardware.
Target generality vs. hardware-specific optimisations
In the previous section, we visited certain setup "hyperparameters", namely batch size and process affinity, which depending on the use-case can enhance inference performance. In this section, we investigate framework-specific optimisations that can enhance performance, either by means of optimised operator kernel implementations or by moving computation to a different device altogether, i.e. targeting the GPU/NPU/DSP of the SoC. To this end, we run experiments measuring the performance and energy of framework-specific optimisations on TFLite and caffe models across three alternative backends, namely NNAPI, XNNPACK and SNPE, on the Q845 board. We refer the reader to the Appendix for more information on these frameworks.

Traces of hardware-specific acceleration. In our latest snapshot, we found some traces of hardware-specific acceleration. Specifically, we found 71 (23.8%) apps using NNAPI, a single application using XNNPACK and three using SNPE. It is interesting to note that in the last case these models get blindly distributed to all devices, irrespective of whether they have a Qualcomm-based SoC or not. In fact, they deploy both TFLite and dlc variants of the same model. Overall, we see that many app models are missing out on the efficiency promises of targeting specialized hardware or using target-optimized kernel operations.

Optimisation opportunities. As a way to measure the potential benefit of using each of the aforementioned framework optimisations on different processing elements, we run two experiments, one on TFLite models for NNAPI and XNNPACK (Fig. 13) and another on TFLite and caffe models for SNPE (Fig. 14). In each case, we compare the performance of framework-specific optimisations to the baseline CPU and GPU runs. The reason we do not compare across them is that the number of commonly compatible models is low. This highlights one distinct characteristic of such optimisations: the rudimentary support for operators across heterogeneous targets, which in turn can hinder their widespread adoption. Results from our evaluation indicate that for CPU execution (Fig. 13), one is better off using the XNNPACK delegate, executing DNN inference 1.03× faster and 1.13× more efficiently on average. NNAPI did not prove its potential in our experiments, with its performance lagging behind the default CPU execution (0.49× slower and 1.66× less efficient on average). This could potentially be attributed to unoptimised NN drivers from the vendor. On the other hand, when deploying with a vendor-specific platform, SNPE in our case, performance is better for the DSP and GPU (Fig. 14), compared to vanilla CPU and GPU runs. Specifically, these are 5.72× and 2.28× faster and 20.3× and 8.39× more efficient on average, compared to CPU runs. In comparison to GPU runs, these are 2.97× and 1.19× faster and 2.69× and 1.11× more efficient on average. In the case of the CPU, however, the story is similar to our last experiment, further corroborating the case for non-optimised CPU drivers from the vendor.
Note that CPU and GPU runs are executed at full precision (float32), while the DSP runs in int8. Depending on the task, this can result in accuracy variations, but we do not have access to model-specific data and labels to assess that. Observations: Results from our experiments tell a mixed story about hardware- and framework-specific optimisations. While they can yield noticeably better performance across models, this is not always the case, due to driver implementations or other low-level confounding factors. The dilemma of target generality vs. hardware-specific optimisations ultimately lies in the hands of the developer and the resources they have at their disposal to extract every bit of performance from the hardware.
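For reference, the sketch below shows one way a developer can switch a TFLite model between the default CPU path and an accelerator-backed delegate from Python; the delegate library name is an assumption and varies per vendor and platform (on Android the equivalent is usually configured through the Java/Kotlin Interpreter options or the TFLite benchmark tool).

```python
import tensorflow as tf

MODEL = "model.tflite"  # placeholder

# Baseline CPU interpreter (recent TFLite builds route eligible float ops through XNNPACK by default).
cpu_interpreter = tf.lite.Interpreter(model_path=MODEL, num_threads=4)
cpu_interpreter.allocate_tensors()

# Accelerator-backed run via an external delegate shared library.
# The library name below is an assumption; vendors ship their own GPU/DSP/NPU delegates.
delegate = tf.lite.experimental.load_delegate("libtensorflowlite_gpu_delegate.so")
accel_interpreter = tf.lite.Interpreter(model_path=MODEL, experimental_delegates=[delegate])
accel_interpreter.allocate_tensors()
```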
Cloud-based DNN models
Another approach to accelerating inference and bringing intelligence to mobile apps, without the need to specialise per target device, is offloading to the cloud. We envision this approach being popular amongst developers who do not implement or train their own models, as well as for models that are too computationally intensive to run locally on a mobile device or too expensive to optimise for each available target while offering a similar QoE.
As mentioned in Sec. 3.2, gaugeNN tracks app invocations of known cloud-based machine learning APIs in their code. This includes calls to Google (Google Cloud and Firebase ML) and Amazon services. Fig. 15 shows the number of applications invoking each of the cloud-based ML APIs across our dataset. Overall, we find 524 distinct applications that use cloud AI APIs, a considerable increase of 2.33× from our 2020 dataset. More specifically, 452 and 72 apps use Google AI services and Amazon, respectively. This increase is in line with the increase in models deployed within the apps (Sec. 4.6). Furthermore, we observe that developers primarily use cloud-based image and video analytics to perform face identification, bar/QR code recognition and video analytics, as well as to power chatbots. Observations: Our results indicate that cloud APIs from Google and Amazon are gaining in popularity, as they allow developers to quickly deploy AI capabilities without the need for specialised ML expertise and costly infrastructure for training. Moreover, developers do not need to maintain training data on-premise, and the resulting apps can be supported by heterogeneous devices with similar QoE.
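To illustrate the kind of integration involved, the snippet below sketches a label-detection request against the Google Cloud Vision REST endpoint; the API key and image file are placeholders, and the exact request/response fields should be checked against the current API documentation rather than taken from this sketch.

```python
import base64
import json
import requests

API_KEY = "YOUR_API_KEY"   # placeholder credential
IMAGE_PATH = "photo.jpg"   # placeholder image

with open(IMAGE_PATH, "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

body = {
    "requests": [{
        "image": {"content": image_b64},
        "features": [{"type": "LABEL_DETECTION", "maxResults": 5}],
    }]
}

resp = requests.post(
    f"https://vision.googleapis.com/v1/images:annotate?key={API_KEY}",
    json=body,
    timeout=30,
)
print(json.dumps(resp.json(), indent=2))
```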
RELATED WORK
In the past, there have been numerous studies performing large-scale analyses of the Google Play Store, but with different aims, such as characterising mobile apps [64] and their API usage [1,48]. Closer to the ML community, there has been an increasing effort to benchmark state-of-the-art models across different devices and frameworks [3,24,25,27,33,67]. Although these studies have done a great job at extensively benchmarking state-of-the-art models, we still lack knowledge as to whether these models are representative of the ones deployed today in mobile apps. Moreover, there is a lack of understanding of how the latest trends in DNN optimisation affect the latest DNN-based mobile apps.
To the best of our knowledge, there are largely two works that have investigated DNN usage in the wild. One is from Xu et al. [70], which focuses on investigating who the early adopters of DNNs are and what the use-cases for Deep Learning in mobile apps are. While they do conduct a lightweight analysis of DNN operations, they only measured model footprint and performance in an offline and device-agnostic manner, by means of measuring the FLOPs of DNN layers. However, it has been shown that FLOPs is not a good proxy of a model's run time [3,33], especially across different hardware configurations. Therefore, there is still limited understanding of the actual performance of DNN models in the wild, across a heterogeneous ecosystem of more and less capable devices. A more privacy-centric work has been presented in [57], which investigates DNN model protection on mobile devices and illustrates succinctly that many Android apps do not protect their DNN models, which means these can be easily leaked or extracted for analysis. Nevertheless, it does not perform any performance analysis.
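To make the FLOPs-as-proxy point concrete, the helper below shows the kind of static, layer-wise MAC count such analyses rely on (here for a standard convolution); it is device-agnostic by construction, which is precisely why it cannot capture the hardware-dependent latencies measured on real phones. The layer dimensions in the example are illustrative assumptions.

```python
def conv2d_macs(out_h, out_w, in_channels, out_channels, k_h, k_w):
    """Multiply-accumulate operations of one standard 2D convolution layer."""
    return out_h * out_w * out_channels * (k_h * k_w * in_channels)

# Example: a 3x3, stride-2 first convolution on a 224x224x3 input producing 32 channels.
macs = conv2d_macs(out_h=112, out_w=112, in_channels=3, out_channels=32, k_h=3, k_w=3)
flops = 2 * macs  # one multiply + one add per MAC
print(f"{macs / 1e6:.1f} MMACs, {flops / 1e6:.1f} MFLOPs")
```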
These two works serve as a starting point for our study, which aims to answer the question of how widely deployed DNNs found in the most popular Android apps actually perform on widely deployed devices, essentially capturing the state of Deep Learning mobile deployment in the wild. To this end, we conduct an in-depth benchmarking of models used in the latest, most trending mobile apps. This includes analyses of latency, energy, and system- and model-level parameters and optimisations, providing a better comprehension of the current limitations when deploying DNNs on mobile phones of different tiers and generations.
DISCUSSION & FUTURE WORK
8.1 Implications & Trends
Proliferation of mobile AI. Our results indicate that both on-device and cloud-supported DNN applications are increasing rapidly (doubled within a year). This is mostly driven by the availability of pre-trained models and easy-to-use cloud-based APIs, focusing mostly on vision tasks such as image detection and recognition.

Model reuse. While there is much research on bespoke model architectures, customisation and fine-tuning [37,49], we observe that most developers use off-the-shelf DNN architectures. In fact, 80.9% of the models are shared across two or more applications and a further 9.02% of the remaining models share some layers (i.e., derived from a common model after fine-tuning). Simultaneously, there is a parallel trend of resorting to cloud-powered inference, further demonstrating a preference of developers towards turnkey solutions instead of bespoke customised ones. With the current trajectory of AI, we expect more developers specialising in ML-based app development, at least until the middleware (e.g. NNAPI) which abstracts away ML-specific parameters becomes more prevalent.

DNNs and mobile hardware resources. We witness that most applications do not take advantage of SoC-specific accelerators to accelerate their inference runtime, but rather target generality of their solutions, either by shipping vanilla CPU-only execution or by integrating framework-specific middleware options (e.g. NNAPI). Last, offloading inference to the cloud offers a consistent QoE, which is not dependent on the target device, at the expense of privacy [4,38] and monetary cost. This behaviour comes as a consequence of the fragmentation in the Android ecosystem in terms of hardware capabilities and software support (e.g. vendor-specific NNAPI drivers). Consequently, we anticipate the need for automated solutions for the optimised development and deployment of ML solutions in mobile apps, which abstract away the complexity of efficiency and heterogeneity of the ecosystem.

Energy as a bottleneck. While Deep Learning adoption is undisputed, with an accelerating trajectory in the future, manufacturers turn to specialised hardware for faster and more efficient ML (e.g. NPUs). However, the same cannot be stated for battery technology and capacity, which remain relatively stagnant. Given what we observed for the segmentation scenario in Sec. 5.2.2, we anticipate energy sooner or later becoming a bottleneck in DNN deployment, requiring novel solutions to support mobile intelligence on the go.

DNN co-habitation. With more and more applications shipping DNN-powered solutions, we also anticipate the co-existence and parallel runtime of more than one DNN in the future. Thus, researchers will need to tackle this emerging problem to efficiently support such runtimes, by means of OS or hardware-level solutions.

On-device learning and personalisation. Last, so far in the paper we have only visited the task of mobile inference. In this setup, the weights of the model come pretrained on some centralised dataset and the device only performs forward propagation. However, with users becoming more and more privacy aware and with legislation discouraging the storage of user data without legitimate interest, on-device training and federated learning [30,46] are becoming more prevalent [9,50]. Moreover, with the proliferation of on-device data, on-device personalisation [42] is also gaining traction.
These tasks will create a different workload to be optimised for on-device runtime, for which current or future tools will need to provide support.
Limitations
In this work we have shed light on the use and performance of DNNs in real-world applications. However, we only focused on the Android smartphone landscape due to its larger market share and wide device fragmentation. These findings might only partially hold for other mobile ecosystems.
Furthermore, we have analysed the models that could be identified as DNN models. Obfuscated and encrypted models, or models that are downloaded outside of the Google Play store, were not benchmarked, despite us tracking the respective application as ML-powered. While there might be a different distribution of obfuscated models in the wild, the results from [57] indicate otherwise.
Our analysis included both offline introspection and dynamic benchmarking of the models. However, we did not investigate particular invocation paths or the frequency of inference per app. We expect that some of these models are rarely used (e.g. credit card scanning) while others are utilised more frequently (e.g. activity detection). However, capturing the real-world usage of these models requires device instrumentation and collecting telemetry data over a large user base. While previous works [2,48] have proposed large-scale crowd-testing of virtualised mobile apps with real user interaction, these generally preclude testing sensor input-dependent functionality, on which DNNs depend. We leave this as future work.
Last, while we characterise DNN cloud offloading, we acknowledge that we miss any developers who use their own custom (e.g., REST-based) APIs to access remote execution.
CONCLUSION
In this work, we have carried out a comprehensive empirical study of the most popular DNN-powered mobile apps. Using gaugeNN, we analyse thousands of mobile apps in the wild and identify a significant chasm between the deployed models and the state-of-the-art architectures and optimisation techniques. This is the first work to dig deeper into these aspects so as to provide guidelines for both the mobile application and the DNN-framework developer communities.
A ADDITIONAL PLATFORM INFORMATION
DNN Model extraction
In Sec. 3.1 of the paper, we stated that gaugeNN supports file extraction from i) the base apk, ii) expansion files (OBBs) and iii) Android App Bundles. The extracted files are matched against a compiled list of known DNN framework formats and validation rules to identify potential DNN models. The complete list of formats is shown in Table 5.

As per Sec. 6.3, we run our TFLite models against alternative backends, namely NNAPI, XNNPACK and SNPE. Below we provide additional information for each one:

NNAPI (https://developer.android.com/ndk/guides/neuralnetworks). Neural Networks API (NNAPI) is a middleware-level library in Android that sits between the machine learning framework library used by an application (e.g. TFLite) and the Android Hardware Acceleration Layer (HAL). It essentially provides an abstraction layer, handling hardware acceleration through vendor- and hardware-specific NN drivers, which provide efficient operator implementations for CPU, GPU, DSP, NPUs or other kinds of specialised hardware. Execution falls back to the CPU in the absence of such drivers or for unsupported operators. TFLite is at the forefront of NNAPI delegation, and PyTorch Mobile has announced support for it. Nonetheless, NNAPI being in its infancy comes with some shortcomings, mainly in the realm of OS version support (Android P and above), NN driver availability and heterogeneity in performance gains.

XNNPACK (https://github.com/google/XNNPACK). XNNPACK provides a low-level, highly optimised library of NN inference operators across platforms. Specifically for ARM, it supports efficient implementation of operators through Neon instructions, as well as inference on sparse networks, which offers a practical solution to the problem described in Sec. 6.1. Despite the claimed performance benefits, operator support is limited and, if not careful, can lead to performance penalties instead of gains when compared to the baseline CPU delegates.

SNPE (https://developer.qualcomm.com/docs/snpe/overview.html). The Snapdragon Neural Processing Engine (SNPE) constitutes a vendor-specific runtime for the execution of DNNs on Qualcomm SoCs, targeting the CPU, Adreno GPU or Hexagon DSP of the SoC and handling quantisation in the proper precision internally. It uses its own representation for NNs (.dlc format) and supports conversion from different frameworks, including caffe and TFLite. However, while SNPE can potentially take advantage of hardware-specific optimisations, it can only target Qualcomm SoCs, trading off generality for performance. Operator support can also be an issue in SNPE, with CPU fallback supported in case of hardware-specific unsupported operations.
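A much-simplified sketch of the extension-matching step is shown below; gaugeNN additionally applies per-format validation rules and handles OBBs and App Bundles, which are omitted here, and the file names and the subset of the mapping are placeholders.

```python
import zipfile
from pathlib import Path

# A subset of the framework/extension mapping from Table 5 (not exhaustive).
KNOWN_FORMATS = {
    ".tflite": "TFLite", ".onnx": "ONNX", ".caffemodel": "Caffe",
    ".dlc": "SNPE", ".param": "ncnn", ".pb": "TensorFlow",
}

def candidate_models(apk_path):
    """List files inside an apk whose extension matches a known DNN model format."""
    hits = []
    with zipfile.ZipFile(apk_path) as apk:
        for name in apk.namelist():
            ext = Path(name).suffix.lower()
            if ext in KNOWN_FORMATS:
                hits.append((name, KNOWN_FORMATS[ext]))
    return hits

print(candidate_models("example.apk"))
```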
Figure 1: Workflow of gaugeNN.
Figure 2: gaugeNN benchmark platform.
Figure 4: Number of models gaugeNN successfully extracted and executed per framework and Google Play category. Categories with less than 20 models are excluded.
Figure 6: Model layer composition per input modality for TFLite, NCNN and caffe.
Last, for our trace, we report ML-powered apps going from 236 to 377 from February 2020 to April 2021. From the previously reported figures, we witness a soaring trajectory of ML apps deployed in the wild, with the adoption rate of ML accelerating. Observations: While there was a big reshuffling in the type of AI models deployed during the pandemic, we observe a considerable general growth in the number of DNN models in AI-powered applications in the past 3 years (from 176 in 2018 [70] to 1,666 in April 2021). These results demonstrate how the proliferation of mobile AI frameworks, the availability of pre-trained models and the constant improvement of mobile hardware have driven this growth and the need to keep up with this ever-increasing adoption.
Figure 7: FLOPs and parameters per DNN task.
Figure 8: Observed relationship between latency and FLOPs across six different devices.
Figure 9: Latency per device ECDF.
Figure 10: Distributions of inference energy, power and efficiency of the collected models when run across 3 generations of Qualcomm SoCs. The lines represent kernel density estimations.
Figure 11: Inference throughput vs. batch size.
The energy results indicate that different tasks and use cases result in very different impact on the battery life.
Figure 12: TFLite's model throughput for different devices and compute targets.
Figure 13: ECDF of TFLite models latency and energy per CPU runtime.
Figure 14: ECDF of TFLite and caffe models latency and energy per hardware target with SNPE.
Figure 15: Number of apps that invoke cloud-based ML APIs. Categories with less than 10 apps are excluded.
Directed Acyclic Graph.
Model FLOPs are estimated as a function of the cumulative Multiply-Accumulate (MAC) operations performed by each of the model's layers.
Figure 3: gaugeNN benchmark workflow (prepare, start experiment, turn off WIFI, wait for power off, turn off power, warm up, run inferences, turn on power, turn on WIFI, notify via WIFI, wait for notification, collect, clean; synchronisation is event-driven over ADB and WIFI).
gaugeNN also recognises calls to known cloud DNN frameworks; in particular, calls to libraries belonging to Google FireBase [17], Google Cloud [18] and Amazon AWS ML services [5].
Table 1: Device specifications.
Energy measurements. Energy on open deck devices is measured via a Monsoon power monitor (AAA10F). To prevent Android's battery saving mechanisms (e.g., Doze [19]) from killing background jobs when the screen goes off or scaling down the CPU frequency, we keep the phone screen on during the benchmark, by interfacing with the Android Power Manager service. We also ensure that the screen is always in a similar state across devices, by developing an app that shows a black background. While the screen does incur extra energy consumption, this is measured and accounted for.
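As a rough sketch of how per-inference energy figures are typically derived from such a power monitor trace (the sampling rate, sample values and baseline below are assumptions), one can integrate the sampled power over the inference window and subtract the measured idle/screen baseline:

```python
def inference_energy_joules(power_samples_w, sample_period_s, baseline_w=0.0):
    """Integrate sampled power over an inference window, minus the idle/screen baseline."""
    return sum(p - baseline_w for p in power_samples_w) * sample_period_s

# Hypothetical 5 kHz trace covering one inference window, with a 0.8 W screen/idle baseline.
sample_period = 1.0 / 5000.0
trace = [2.4, 2.6, 2.5, 2.7, 2.6]  # Watts (placeholder values)
energy = inference_energy_joules(trace, sample_period, baseline_w=0.8)
print(f"{energy * 1e3:.3f} mJ for this window")
```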
Table 2: Dataset snapshots details.
Table 3: DNN task classification.
Table 5: Frameworks and formats validated by gaugeNN.
ONNX: .onnx, .pb, .pbtxt, .prototxt
MXNet: .mar, .model, .json, .params
Keras: .h5, .hd5, .hdf5, .keras, .json, .model, .pb, .pth
Caffe: .caffemodel, .pbtxt, .prototxt, .pt
Caffe2: .pb, .pbtxt, .prototxt
PyTorch: .pt, .pth, .pt1, .pkl, .h5, .t7, .model, .dms, .pth.tar, .ckpt, .bin, .pb, .tar
Torch: .t7, .dat
SNPE: .dlc
FeatherCNN: .feathermodel
TFLite: .tflite, .lite, .tfl, .bin, .pb
TF: .pb, .meta, .pbtxt, .prototxt, .json, .index, .ckpt
Sklearn: .pkl, .joblib, .model
armNN: .armnn
Mnn: .mnn
Ncnn: .param, .bin, .cfg.ncnn, .weights.ncnn, .ncnn
Tengine: .tmfile
Flux: .bson
Chainer: .npz, .h5, .hd5, .hdf5, .chainermodel

B ADDITIONAL EXPERIMENT INFORMATION
Hardware-specific acceleration frameworks
Samsung S7 edge -SM-G935F, released in February'16, three years before the S10 5G.
Most apps distribute the model weights in their apk, either in a single file, along with the DNN graph, or in separate files (e.g. caffe). In either case, we perform an md5 checksum on both the model and weights.
An Empirical Study of Android Alarm Usage for Application Scheduling. Mario Almeida, Muhammad Bilal, Jeremy Blackburn, Konstantina Papagiannaki, Passive and Active Measurement. Thomas Karagiannis and Xenofontas DimitropoulosChamSpringer International PublishingMario Almeida, Muhammad Bilal, Jeremy Blackburn, and Konstantina Papa- giannaki. 2016. An Empirical Study of Android Alarm Usage for Application Scheduling. In Passive and Active Measurement, Thomas Karagiannis and Xeno- fontas Dimitropoulos (Eds.). Springer International Publishing, Cham, 373-384.
Chimp: Crowdsourcing human inputs for mobile phones. Mario Almeida, Muhammad Bilal, Alessandro Finamore, Ilias Leontiadis, Yan Grunenberger, Matteo Varvello, Jeremy Blackburn, Proceedings of the 2018 World Wide Web Conference. the 2018 World Wide Web ConferenceMario Almeida, Muhammad Bilal, Alessandro Finamore, Ilias Leontiadis, Yan Grunenberger, Matteo Varvello, and Jeremy Blackburn. 2018. Chimp: Crowd- sourcing human inputs for mobile phones. In Proceedings of the 2018 World Wide Web Conference. 45-54.
EmBench: Quantifying Performance Variations of Deep Neural Networks across Modern Commodity Devices. Mario Almeida, Stefanos Laskaridis, Ilias Leontiadis, I Stylianos, Nicholas D Venieris, Lane, The 3rd International Workshop on Deep Learning for Mobile Systems and Applications (EMDL. Mario Almeida, Stefanos Laskaridis, Ilias Leontiadis, Stylianos I Venieris, and Nicholas D Lane. 2019. EmBench: Quantifying Performance Variations of Deep Neural Networks across Modern Commodity Devices. In The 3rd International Workshop on Deep Learning for Mobile Systems and Applications (EMDL). 1-6.
DynO: Dynamic Onloading of Deep Neural Networks from Cloud to Device. Mario Almeida, Stefanos Laskaridis, Stylianos I Venieris, Ilias Leontiadis, Nicholas D Lane, arXiv:cs.DC/2104.09949Mario Almeida, Stefanos Laskaridis, Stylianos I. Venieris, Ilias Leontiadis, and Nicholas D. Lane. 2021. DynO: Dynamic Onloading of Deep Neural Networks from Cloud to Device. (2021). arXiv:cs.DC/2104.09949
. Amazon, AWS Android SDK. Amazon. 2020. AWS Android SDK. https://docs.aws.amazon.com/
Appbrain, Number of Android apps on Google Play. AppBrain. 2020. Number of Android apps on Google Play. https://www.appbrain. com/stats/number-of-android-apps. (2020).
. Arm, Arm. 2021. Ethos NPU. https://developer.arm.com/ip-products/processors/ machine-learning/arm-ethos-n. (2021). Accessed: September 30, 2021.
Valentin Bazarevsky, Yury Kartynnik, Andrey Vakunov, Karthik Raveendran, Matthias Grundmann, arXiv:1907.05047Blazeface: Sub-millisecond neural face detection on mobile gpus. arXiv preprintValentin Bazarevsky, Yury Kartynnik, Andrey Vakunov, Karthik Raveendran, and Matthias Grundmann. 2019. Blazeface: Sub-millisecond neural face detection on mobile gpus. arXiv preprint arXiv:1907.05047 (2019).
Towards Federated Learning at Scale: System Design. Keith Bonawitz, Hubert Eichner, Wolfgang Grieskamp, Dzmitry Huba, Alex Ingerman, Vladimir Ivanov, Chloé Kiddon, Jakub Konečný, Stefano Mazzocchi, Brendan Mcmahan, Timon Van Overveldt, David Petrou, Daniel Ramage, Jason Roselander, Proceedings of Machine Learning and Systems. A. Talwalkar, V. Smith, and M. ZahariaMachine Learning and Systems1Keith Bonawitz, Hubert Eichner, Wolfgang Grieskamp, Dzmitry Huba, Alex Ingerman, Vladimir Ivanov, Chloé Kiddon, Jakub Konečný, Stefano Mazzocchi, Brendan McMahan, Timon Van Overveldt, David Petrou, Daniel Ramage, and Jason Roselander. 2019. Towards Federated Learning at Scale: System Design. In Proceedings of Machine Learning and Systems, A. Talwalkar, V. Smith, and M. Zaharia (Eds.), Vol. 1. 374-388. https://proceedings.mlsys.org/paper/2019/file/ bd686fd640be98efaae0091fa301e613-Paper.pdf
William Chan, Navdeep Jaitly, V Quoc, Oriol Le, Vinyals, arXiv:1508.01211Listen, attend and spell. arXiv preprintWilliam Chan, Navdeep Jaitly, Quoc V Le, and Oriol Vinyals. 2015. Listen, attend and spell. arXiv preprint arXiv:1508.01211 (2015).
FasterSeg: Searching for Faster Real-time Semantic Segmentation. Wuyang Chen, Xinyu Gong, Xianming Liu, Qian Zhang, Yuan Li, Zhangyang Wang, International Conference on Learning Representations. Wuyang Chen, Xinyu Gong, Xianming Liu, Qian Zhang, Yuan Li, and Zhangyang Wang. 2020. FasterSeg: Searching for Faster Real-time Semantic Segmentation. In International Conference on Learning Representations.
Two Billion Users -Connecting the World Privately. Facebook, Facebook. 2020. Two Billion Users -Connecting the World Privately. https: //about.fb.com/news/2020/02/two-billion-users/. (2020).
Searching for Winograd-aware Quantized Networks. Javier Fernandez-Marques, Paul Whatmough, Andrew Mundy, Matthew Mattina, Proceedings of Machine Learning and Systems. I. Dhillon, D. Papailiopoulos, and V. SzeMachine Learning and Systems2Javier Fernandez-Marques, Paul Whatmough, Andrew Mundy, and Matthew Mattina. 2020. Searching for Winograd-aware Quantized Networks. In Pro- ceedings of Machine Learning and Systems, I. Dhillon, D. Papailiopoulos, and V. Sze (Eds.), Vol. 2. 14-29. https://proceedings.mlsys.org/paper/2020/file/ 45c48cce2e2d7fbdea1afc51c7c6ad26-Paper.pdf
Mobile operating systems' market share worldwide from. Globalstats, GlobalStats. 2020. Mobile operating systems' market share worldwide from April 2020 to April 2021. https://gs.statcounter.com/os-market-share/mobile/ worldwide. (2020).
Android Runtime and Dalvik. Google, Google. 2020. Android Runtime and Dalvik. https://source.android.com/devices/ tech/dalvik. (2020).
Google Cloud APIs. Google, Google. 2020. Google Cloud APIs. https://firebase.google.com/docs/ml. (2020).
Google Cloud APIs. Google, Google. 2020. Google Cloud APIs. https://cloud.google.com/apis. (2020).
Google. 2020. Optimize for Doze and App Standby. Google. 2020. Optimize for Doze and App Standby. https://developer.android. com/training/monitoring-device-state/doze-standby. (2020).
Google. 2021. About Android App Bundles. Google. 2021. About Android App Bundles. https://developer.android.com/guide/ app-bundle. (2021).
Google. 2021. APK Expansion Files. Google. 2021. APK Expansion Files. https://developer.android.com/google/play/ expansion-files. (2021).
Google. 2021. Tensorflow: Clustering. Google. 2021. Tensorflow: Clustering. https://www.tensorflow.org/model_ optimization/guide/clustering. (2021).
Tensorflow: pruning with keras. Google, Google. 2021. Tensorflow: pruning with keras. https://www.tensorflow.org/ model_optimization/guide/pruning/pruning_with_kera. (2021).
An Empirical Study towards Characterizing Deep Learning Development and Deployment across Different Frameworks and Platforms. Qianyu Guo, Sen Chen, Xiaofei Xie, Lei Ma, Qiang Hu, Hongtao Liu, Yang Liu, Jianjun Zhao, Xiaohong Li, Proceedings of the 34th IEEE/ACM International Conference on Automated Software Engineering (ASE. the 34th IEEE/ACM International Conference on Automated Software Engineering (ASEQianyu Guo, Sen Chen, Xiaofei Xie, Lei Ma, Qiang Hu, Hongtao Liu, Yang Liu, Jianjun Zhao, and Xiaohong Li. 2019. An Empirical Study towards Characterizing Deep Learning Development and Deployment across Different Frameworks and Platforms. In Proceedings of the 34th IEEE/ACM International Conference on Automated Software Engineering (ASE). 810-822.
Characterizing the Deployment of Deep Neural Networks on Commercial Edge Devices. R Hadidi, J Cao, Y Xie, B Asgari, T Krishna, H Kim, 2019 IEEE International Symposium on Workload Characterization (IISWC). R. Hadidi, J. Cao, Y. Xie, B. Asgari, T. Krishna, and H. Kim. 2019. Characterizing the Deployment of Deep Neural Networks on Commercial Edge Devices. In 2019 IEEE International Symposium on Workload Characterization (IISWC). 35-48.
Song Han, Huizi Mao, William J Dally, Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. International Conference on Learning Representations (ICLR. Song Han, Huizi Mao, and William J Dally. 2016. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. International Conference on Learning Representations (ICLR) (2016).
Latency and Throughput Characterization of Convolutional Neural Networks for Mobile Computer Vision. Jussi Hanhirova, Teemu Kämäräinen, Sipi Seppälä, Matti Siekkinen, Vesa Hirvisalo, Antti Ylä-Jääski , Proceedings of the 9th ACM Multimedia Systems Conference (MMSys). the 9th ACM Multimedia Systems Conference (MMSys)ACMJussi Hanhirova, Teemu Kämäräinen, Sipi Seppälä, Matti Siekkinen, Vesa Hirvisalo, and Antti Ylä-Jääski. 2018. Latency and Throughput Characterization of Convolutional Neural Networks for Mobile Computer Vision. In Proceedings of the 9th ACM Multimedia Systems Conference (MMSys). ACM, 204-215.
Applied Machine Learning at Facebook: A Datacenter Infrastructure Perspective. K Hazelwood, S Bird, D Brooks, S Chintala, U Diril, D Dzhulgakov, M Fawzy, B Jia, Y Jia, A Kalro, J Law, K Lee, J Lu, P Noordhuis, M Smelyanskiy, L Xiong, X Wang, 2018 IEEE International Symposium on High Performance Computer Architecture (HPCA. K. Hazelwood, S. Bird, D. Brooks, S. Chintala, U. Diril, D. Dzhulgakov, M. Fawzy, B. Jia, Y. Jia, A. Kalro, J. Law, K. Lee, J. Lu, P. Noordhuis, M. Smelyanskiy, L. Xiong, and X. Wang. 2018. Applied Machine Learning at Facebook: A Datacenter Infras- tructure Perspective. In 2018 IEEE International Symposium on High Performance Computer Architecture (HPCA). 620-629.
Deep Residual Learning for Image Recognition. K He, S Zhang, J Ren, Sun, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR. K He, X Zhang, S Ren, and J Sun. 2016. Deep Residual Learning for Image Recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 770-778.
FjORD: Fair and Accurate Federated Learning under heterogeneous targets with Ordered Dropout. Samuel Horvath, Stefanos Laskaridis, Mario Almeida, Ilias Leontiadis, I Stylianos, Nicholas D Venieris, Lane, arXiv:2102.13451arXiv preprintSamuel Horvath, Stefanos Laskaridis, Mario Almeida, Ilias Leontiadis, Stylianos I Venieris, and Nicholas D Lane. 2021. FjORD: Fair and Accurate Federated Learning under heterogeneous targets with Ordered Dropout. arXiv preprint arXiv:2102.13451 (2021).
G Andrew, Menglong Howard, Bo Zhu, Dmitry Chen, Weijun Kalenichenko, Tobias Wang, Marco Weyand, Hartwig Andreetto, Adam, arXiv:1704.04861Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprintAndrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. 2017. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861 (2017).
Densely connected convolutional networks. Gao Huang, Zhuang Liu, Laurens Van Der Maaten, Kilian Q Weinberger, 10.1109/CVPR.2017.243arXiv:1608.06993Proceedings -30th IEEE Conference on Computer Vision and Pattern Recognition. -30th IEEE Conference on Computer Vision and Pattern RecognitionGao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q. Weinberger. 2017. Densely connected convolutional networks. Proceedings -30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017 2017-Janua (2017), 2261-2269. https://doi.org/10.1109/CVPR.2017.243 arXiv:1608.06993
AI Benchmark: All About Deep Learning on Smartphones in 2019. Andrey Ignatov, Radu Timofte, Andrei Kulik, Seungsoo Yang, Ke Wang, Felix Baum, Max Wu, Lirong Xu, Luc Van Gool, International Conference on Computer Vision (ICCV) Workshops. Andrey Ignatov, Radu Timofte, Andrei Kulik, Seungsoo Yang, Ke Wang, Felix Baum, Max Wu, Lirong Xu, and Luc Van Gool. 2019. AI Benchmark: All About Deep Learning on Smartphones in 2019. In International Conference on Computer Vision (ICCV) Workshops.
Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference. B Jacob, S Kligys, B Chen, M Zhu, M Tang, A Howard, H Adam, D Kalenichenko, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR. B. Jacob, S. Kligys, B. Chen, M. Zhu, M. Tang, A. Howard, H. Adam, and D. Kalenichenko. 2018. Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2704-2713.
Neurosurgeon: Collaborative Intelligence Between the Cloud and Mobile Edge. Yiping Kang, Johann Hauswald, Cao Gao, Austin Rovinski, Trevor Mudge, Proceedings of the Twenty-Second International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS). the Twenty-Second International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS)Jason Mars, and Lingjia TangYiping Kang, Johann Hauswald, Cao Gao, Austin Rovinski, Trevor Mudge, Jason Mars, and Lingjia Tang. 2017. Neurosurgeon: Collaborative Intelligence Between the Cloud and Mobile Edge. In Proceedings of the Twenty-Second International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS). 615-629.
Enhancing energy efficiency of multimedia applications in heterogeneous mobile multi-core processors. Minyong Young Geun Kim, Sung Woo Kim, Chung, IEEE Trans. Comput. 66Young Geun Kim, Minyong Kim, and Sung Woo Chung. 2017. Enhancing en- ergy efficiency of multimedia applications in heterogeneous mobile multi-core processors. IEEE Trans. Comput. 66, 11 (2017), 1878-1889.
Adaptive Inference through Early-Exit Networks: Design, Challenges and Directions. Stefanos Laskaridis, Alexandros Kouris, Nicholas D Lane, 10.1145/3469116.3470012Proceedings of the 5th International Workshop on Embedded and Mobile Deep Learning (EMDL'21). the 5th International Workshop on Embedded and Mobile Deep Learning (EMDL'21)New York, NY, USAAssociation for Computing MachineryStefanos Laskaridis, Alexandros Kouris, and Nicholas D. Lane. 2021. Adaptive Inference through Early-Exit Networks: Design, Challenges and Directions. In Proceedings of the 5th International Workshop on Embedded and Mobile Deep Learning (EMDL'21). Association for Computing Machinery, New York, NY, USA, 1-6. https://doi.org/10.1145/3469116.3470012
SPINN: Synergistic Progressive Inference of Neural Networks over Device and Cloud. Stefanos Laskaridis, Stylianos I Venieris, Mario Almeida, Ilias Leontiadis, Nicholas D Lane, The 26th Annual International Conference on Mobile Computing and Networking (MobiCom). Stefanos Laskaridis, Stylianos I. Venieris, Mario Almeida, Ilias Leontiadis, and Nicholas D. Lane. 2020. SPINN: Synergistic Progressive Inference of Neural Networks over Device and Cloud. In The 26th Annual International Conference on Mobile Computing and Networking (MobiCom).
HAPI: Hardware-Aware Progressive Inference. Stefanos Laskaridis, Stylianos I Venieris, Hyeji Kim, Nicholas D Lane, IEEE/ACM International Conference on Computer-Aided Design. ICCADStefanos Laskaridis, Stylianos I. Venieris, Hyeji Kim, and Nicholas D. Lane. 2020. HAPI: Hardware-Aware Progressive Inference. In IEEE/ACM International Con- ference on Computer-Aided Design (ICCAD).
On-Device Neural Net Inference with Mobile GPUs. Juhyun Lee, Nikolay Chirkov, Ekaterina Ignasheva, Yury Pisarchyk, Mogan Shieh, Fabio Riccardi, Raman Sarokin, Andrei Kulik, Matthias Grundmann, Efficient Deep Learning for Computer Vision CVPR 2019 (ECV2019). Juhyun Lee, Nikolay Chirkov, Ekaterina Ignasheva, Yury Pisarchyk, Mogan Shieh, Fabio Riccardi, Raman Sarokin, Andrei Kulik, and Matthias Grundmann. 2019. On-Device Neural Net Inference with Mobile GPUs. In Efficient Deep Learning for Computer Vision CVPR 2019 (ECV2019).
SNIP: Single-Shot Network Pruning based on Connection Sensitivity. Namhoon Lee, Thalaiyasingam Ajanthan, Philip Torr, International Conference on Learning Representations. ICLRNamhoon Lee, Thalaiyasingam Ajanthan, and Philip Torr. 2019. SNIP: Single-Shot Network Pruning based on Connection Sensitivity. In International Conference on Learning Representations (ICLR).
It's Always Personal: Using Early Exits for Efficient On-Device CNN Personalisation. Ilias Leontiadis, Stefanos Laskaridis, Stylianos I Venieris, Nicholas D Lane, 10.1145/3446382.3448359Proceedings of the 22nd International Workshop on Mobile Computing Systems and Applications (HotMobile '21). the 22nd International Workshop on Mobile Computing Systems and Applications (HotMobile '21)New York, NY, USAAssociation for Computing MachineryIlias Leontiadis, Stefanos Laskaridis, Stylianos I. Venieris, and Nicholas D. Lane. 2021. It's Always Personal: Using Early Exits for Efficient On-Device CNN Personalisation. In Proceedings of the 22nd International Workshop on Mobile Computing Systems and Applications (HotMobile '21). Association for Computing Machinery, New York, NY, USA, 15-21. https://doi.org/10.1145/3446382.3448359
FSSD: feature fusion single shot multibox detector. Zuoxin Li, Fuqiang Zhou, arXiv:1712.00960arXiv preprintZuoxin Li and Fuqiang Zhou. 2017. FSSD: feature fusion single shot multibox detector. arXiv preprint arXiv:1712.00960 (2017).
DaVinci: A Scalable Architecture for Neural Network Computing. H Liao, J Tu, J Xia, X Zhou, 2019 IEEE Hot Chips 31 Symposium (HCS. H. Liao, J. Tu, J. Xia, and X. Zhou. 2019. DaVinci: A Scalable Architecture for Neural Network Computing. In 2019 IEEE Hot Chips 31 Symposium (HCS). 1-44.
Fully convolutional networks for semantic segmentation. Jonathan Long, Evan Shelhamer, Trevor Darrell, 10.1109/CVPR.2015.72989652015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Jonathan Long, Evan Shelhamer, and Trevor Darrell. 2015. Fully convolutional networks for semantic segmentation. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 3431-3440. https://doi.org/10.1109/CVPR.2015. 7298965
Communication-efficient learning of deep networks from decentralized data. Brendan Mcmahan, Eider Moore, Daniel Ramage, Seth Hampson, Blaise Aguera Y Arcas, Artificial Intelligence and Statistics. PMLR. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. 2017. Communication-efficient learning of deep net- works from decentralized data. In Artificial Intelligence and Statistics. PMLR, 1273-1282.
NAS-Bench-ASR: Reproducible Neural Architecture Search for Speech Recognition. Abhinav Mehrotra, Alberto Gil, C P Ramos, Sourav Bhattacharya, Łukasz Dudziak, Ravichander Vipperla, Thomas Chau, Mohamed S Abdelfattah, Samin Ishtiaq, Nicholas Donald Lane, 2021 9th International Conference on Learning Representations. ICLRAbhinav Mehrotra, Alberto Gil C. P. Ramos, Sourav Bhattacharya, Łukasz Dudziak, Ravichander Vipperla, Thomas Chau, Mohamed S Abdelfattah, Samin Ishtiaq, and Nicholas Donald Lane. 2021. NAS-Bench-ASR: Reproducible Neural Architecture Search for Speech Recognition. In 2021 9th International Conference on Learning Representations (ICLR).
A Family of Droids-Android Malware Detection via Behavioral Modeling: Static vs Dynamic Analysis. Lucky Onwuzurike, Mario Almeida, Enrico Mariconti, Jeremy Blackburn, Gianluca Stringhini, Emiliano De Cristofaro, 16th Annual Conference on Privacy, Security and Trust (PST). IEEE. Lucky Onwuzurike, Mario Almeida, Enrico Mariconti, Jeremy Blackburn, Gian- luca Stringhini, and Emiliano De Cristofaro. 2018. A Family of Droids-Android Malware Detection via Behavioral Modeling: Static vs Dynamic Analysis. In 2018 16th Annual Conference on Privacy, Security and Trust (PST). IEEE, 1-10.
A survey on transfer learning. Qiang Sinno Jialin Pan, Yang, IEEE Transactions on knowledge and data engineering. 22Sinno Jialin Pan and Qiang Yang. 2009. A survey on transfer learning. IEEE Transactions on knowledge and data engineering 22, 10 (2009), 1345-1359.
Matthias Paulik, Matt Seigel, Henry Mason, Dominic Telaar, Joris Kluivers, Chi Wai Rogier Van Dalen, Luke Lau, Filip Carlson, Granqvist, arXiv:2102.08503Chris Vandevelde, et al. 2021. Federated Evaluation and Tuning for On-Device Personalization: System Design & Applications. arXiv preprintMatthias Paulik, Matt Seigel, Henry Mason, Dominic Telaar, Joris Kluivers, Rogier van Dalen, Chi Wai Lau, Luke Carlson, Filip Granqvist, Chris Vandevelde, et al. 2021. Federated Evaluation and Tuning for On-Device Personalization: System Design & Applications. arXiv preprint arXiv:2102.08503 (2021).
Scaling Up Online Speech Recognition Using ConvNets. Vineel Pratap, Qiantong Xu, Jacob Kahn, Gilad Avidov, Tatiana Likhomanenko, Awni Hannun, Vitaliy Liptchinsky, Gabriel Synnaeve, Ronan Collobert, 10.21437/Interspeech.2020-2840Proc. Interspeech 2020. Interspeech 2020Vineel Pratap, Qiantong Xu, Jacob Kahn, Gilad Avidov, Tatiana Likhomanenko, Awni Hannun, Vitaliy Liptchinsky, Gabriel Synnaeve, and Ronan Collobert. 2020. Scaling Up Online Speech Recognition Using ConvNets. In Proc. Interspeech 2020. 3376-3380. https://doi.org/10.21437/Interspeech.2020-2840
Qualcomm. 2021. Snapdragon Neural Processing Engine. Qualcomm. 2021. Snapdragon Neural Processing Engine. https://developer. qualcomm.com/docs/snpe/snapdragon_npe_runtime.html. (2021). Accessed: September 30, 2021.
. Lutz Roeder, Lutz Roeder. 2020. Netron. https://github.com/lutzroeder/netron. (2020).
A Study of WhatsApp Usage Patterns and Prediction Models without Message Content. Avi Rosenfeld, Sigal Sina, David Sarne, Or Avidov, Sarit Kraus, arXiv:1802.03393arXiv preprintAvi Rosenfeld, Sigal Sina, David Sarne, Or Avidov, and Sarit Kraus. 2015. A Study of WhatsApp Usage Patterns and Prediction Models without Message Content. arXiv preprint arXiv:1802.03393 (2015).
Very Deep Convolutional Networks for Large-Scale Image Recognition. K Simonyan, Zisserman, International Conference on Learning Representations. ICLRK Simonyan and A Zisserman. 2015. Very Deep Convolutional Networks for Large-Scale Image Recognition. In International Conference on Learning Represen- tations (ICLR).
Mobile operating systems' market share worldwide from. Statista, Statista. 2020. Mobile operating systems' market share worldwide from January 2012 to July 2020. https://www.statista.com/statistics/272698/ global-market-share-held-by-mobile-operating-systems-since-2009/. (2020).
Mind Your Weight(s): A Large-scale Study on Insufficient Machine Learning Model Protection in Mobile Apps. Zhichuang Sun, Ruimin Sun, Long Lu, Alan Mislove, 30th USENIX Security Symposium (USENIX Security 21). USENIX Association. Zhichuang Sun, Ruimin Sun, Long Lu, and Alan Mislove. 2021. Mind Your Weight(s): A Large-scale Study on Insufficient Machine Learning Model Pro- tection in Mobile Apps. In 30th USENIX Security Symposium (USENIX Security 21). USENIX Association. https://www.usenix.org/conference/usenixsecurity21/ presentation/sun-zhichuang
Ilya Sutskever, Oriol Vinyals, Quoc V Le, arXiv:1409.3215Sequence to sequence learning with neural networks. arXiv preprintIlya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. arXiv preprint arXiv:1409.3215 (2014).
MnasNet: Platform-Aware Neural Architecture Search for Mobile. Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, V Quoc, Le, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, and Quoc V. Le. 2019. MnasNet: Platform-Aware Neural Architecture Search for Mobile. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Faceter Team. 2020. Pay Cards Recognizer. Faceter Team. 2020. Pay Cards Recognizer. https://github.com/faceterteam/ PayCards_iOS_Source. (2020).
Example on-device model personalization with TensorFlow Lite. Org Tensorflow, Tensorflow.org. 2019. Example on-device model personaliza- tion with TensorFlow Lite. https://blog.tensorflow.org/2019/12/ example-on-device-model-personalization.html. (2019).
Trim insignificant weights. Tensorflow, Org, Tensorflow.org. 2021. Trim insignificant weights. https://www.tensorflow.org/ model_optimization/guide/pruning. (2021).
. Connor Tumbleson, Connor Tumbleson. 2020. apktool. https://ibotpeaches.github.io/Apktool/. (2020).
A Measurement Study of Google Play. Nicolas Viennot, Edward Garcia, Jason Nieh, The 2014 ACM International Conference on Measurement and Modeling of Computer Systems (SIGMETRICS). Nicolas Viennot, Edward Garcia, and Jason Nieh. 2014. A Measurement Study of Google Play. In The 2014 ACM International Conference on Measurement and Modeling of Computer Systems (SIGMETRICS). 221-233.
Neural Network Inference on Mobile SoCs. S Wang, A Pathania, T Mitra, IEEE Design Test. S. Wang, A. Pathania, and T. Mitra. 2020. Neural Network Inference on Mobile SoCs. IEEE Design Test (2020).
Whatsapp. 2021. Whatsapp daily messages. Whatsapp. 2021. Whatsapp daily messages. https://twitter.com/wcathcart/status/ 1321949078381453314. (2021).
Machine Learning at Facebook: Understanding Inference at the Edge. C Wu, 2019 IEEE International Symposium on High Performance Computer Architecture (HPCA. C. Wu et al. 2019. Machine Learning at Facebook: Understanding Inference at the Edge. In 2019 IEEE International Symposium on High Performance Computer Architecture (HPCA). 331-344.
Machine Learning at Facebook: Understanding Inference at the Edge. C Wu, D Brooks, K Chen, D Chen, S Choudhury, M Dukhan, K Hazelwood, E Isaac, Y Jia, B Jia, T Leyvand, H Lu, Y Lu, L Qiao, B Reagen, J Spisak, F Sun, A Tulloch, P Vajda, X Wang, Y Wang, B Wasti, Y Wu, R Xian, S Yoo, P Zhang, 2019 IEEE International Symposium on High Performance Computer Architecture (HPCA. C. Wu, D. Brooks, K. Chen, D. Chen, S. Choudhury, M. Dukhan, K. Hazelwood, E. Isaac, Y. Jia, B. Jia, T. Leyvand, H. Lu, Y. Lu, L. Qiao, B. Reagen, J. Spisak, F. Sun, A. Tulloch, P. Vajda, X. Wang, Y. Wang, B. Wasti, Y. Wu, R. Xian, S. Yoo, and P. Zhang. 2019. Machine Learning at Facebook: Understanding Inference at the Edge. In 2019 IEEE International Symposium on High Performance Computer Architecture (HPCA). 331-344.
Quantized Convolutional Neural Networks for Mobile Devices. Jiaxiang Wu, Cong Leng, Yuhang Wang, Qinghao Hu, Jian Cheng, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR. the IEEE Conference on Computer Vision and Pattern Recognition (CVPRJiaxiang Wu, Cong Leng, Yuhang Wang, Qinghao Hu, and Jian Cheng. 2016. Quantized Convolutional Neural Networks for Mobile Devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 4820- 4828.
A first look at deep learning apps on smartphones. Mengwei Xu, Jiawei Liu, Yuanqiang Liu, Felix Xiaozhu Lin, Yunxin Liu, Xuanzhe Liu, The World Wide Web Conference. Mengwei Xu, Jiawei Liu, Yuanqiang Liu, Felix Xiaozhu Lin, Yunxin Liu, and Xuanzhe Liu. 2019. A first look at deep learning apps on smartphones. In The World Wide Web Conference. 2125-2136.
Yepkit YKUSH 3 USB 3.1 Switchable Hub. Yepkit, Yepkit. 2020. Yepkit YKUSH 3 USB 3.1 Switchable Hub. https://www.yepkit.com/ product/300110/YKUSH3. (2020).
Towards Memory Friendly Long-Short Term Memory Networks (LSTMs) on Mobile GPUs. Xingyao Zhang, Chenhao Xie, Jing Wang, Weidong Zhang, Xin Fu, 10.1109/MICRO.2018.00022Proceedings of the 51st Annual IEEE/ACM International Symposium on Microarchitecture (MICRO-51). the 51st Annual IEEE/ACM International Symposium on Microarchitecture (MICRO-51)IEEE PressXingyao Zhang, Chenhao Xie, Jing Wang, Weidong Zhang, and Xin Fu. 2018. Towards Memory Friendly Long-Short Term Memory Networks (LSTMs) on Mo- bile GPUs. In Proceedings of the 51st Annual IEEE/ACM International Symposium on Microarchitecture (MICRO-51). IEEE Press, 162-174. https://doi.org/10.1109/ MICRO.2018.00022
ICNet for Real-Time Semantic Segmentation on High-Resolution Images. Hengshuang Zhao, Xiaojuan Qi, Xiaoyong Shen, Jianping Shi, Jiaya Jia, Computer Vision -ECCV 2018. Vittorio Ferrari, Martial Hebert, Cristian Sminchisescu, and Yair WeissChamSpringer International PublishingHengshuang Zhao, Xiaojuan Qi, Xiaoyong Shen, Jianping Shi, and Jiaya Jia. 2018. ICNet for Real-Time Semantic Segmentation on High-Resolution Images. In Com- puter Vision -ECCV 2018, Vittorio Ferrari, Martial Hebert, Cristian Sminchisescu, and Yair Weiss (Eds.). Springer International Publishing, Cham, 418-434.
| [
"https://github.com/google/XNNPACK",
"https://github.com/lutzroeder/netron.",
"https://github.com/faceterteam/"
]
|
[
"Space-Air-Ground Integrated Multi-domain Network Resource Orchestration based on Virtual Network Architecture: a DRL Method",
"Space-Air-Ground Integrated Multi-domain Network Resource Orchestration based on Virtual Network Architecture: a DRL Method"
]
| [
"Peiying Zhang [email protected]. ",
"Chao Wang ",
"Senior Member, IEEENeeraj Kumar [email protected]. ",
"Lei Liu [email protected]. ",
"Chao Wang ",
"\nState Key Laboratory of Networking and Switching Technology\nChina University of Petroleum (East China)\n266580QingdaoChina\n",
"\nCollege of Computer Science and Technology\nBeijing University of Posts and Telecommunications\n100876BeijingChina., China\n",
"\nUniversity of Petroleum (East China)\n266580QingdaoChina\n",
"\nDepartment of Computer Science and Information Engineering\nbe University\n147004PatialaIndia\n",
"\nSchool of Computer Science\nAsia University\n41354TaichungTaiwan\n",
"\nLei Liu is with the State Key Laboratory of Integrated Services Networks\nUniversity of Petroleum and Energy Studies\n248007DehradunIndia\n",
"\nXidian University\nXi'an 710071China\n"
]
| [
"State Key Laboratory of Networking and Switching Technology\nChina University of Petroleum (East China)\n266580QingdaoChina",
"College of Computer Science and Technology\nBeijing University of Posts and Telecommunications\n100876BeijingChina., China",
"University of Petroleum (East China)\n266580QingdaoChina",
"Department of Computer Science and Information Engineering\nbe University\n147004PatialaIndia",
"School of Computer Science\nAsia University\n41354TaichungTaiwan",
"Lei Liu is with the State Key Laboratory of Integrated Services Networks\nUniversity of Petroleum and Energy Studies\n248007DehradunIndia",
"Xidian University\nXi'an 710071China"
]
| [
"IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS"
]
| Traditional ground wireless communication networks cannot provide high-quality services for artificial intelligence (AI) applications such as intelligent transportation systems (ITS) due to deployment, coverage and capacity issues. The spaceair-ground integrated network (SAGIN) has become a research focus in the industry. Compared with traditional wireless communication networks, SAGIN is more flexible and reliable, and it has wider coverage and higher quality of seamless connection. However, due to its inherent heterogeneity, time-varying and selforganizing characteristics, the deployment and use of SAGIN still faces huge challenges, among which the orchestration of heterogeneous resources is a key issue. Based on virtual network architecture and deep reinforcement learning (DRL), we model SAGIN's heterogeneous resource orchestration as a multi-domain virtual network embedding (VNE) problem, and propose a SAGIN cross-domain VNE algorithm. We model the different network segments of SAGIN, and set the network attributes according to the actual situation of SAGIN and user needs. In DRL, the agent is acted by a five-layer policy network. We build a feature matrix based on network attributes extracted from SAGIN and use it as the agent training environment. Through training, the probability of each underlying node being embedded can be derived. In test phase, we complete the embedding process of virtual nodes and links in turn based on this probability. Finally, we verify the effectiveness of the algorithm from both training and testing. | 10.1109/tits.2021.3099477 | [
"https://arxiv.org/pdf/2202.02459v1.pdf"
]
| 238,809,892 | 2202.02459 | d28e41cc2acc3e4fc0fea0688ff15868de2015be |
Space-Air-Ground Integrated Multi-domain Network Resource Orchestration based on Virtual Network Architecture: a DRL Method
Peiying Zhang [email protected].
Chao Wang
Senior Member, IEEENeeraj Kumar [email protected].
Lei Liu [email protected].
Chao Wang
State Key Laboratory of Networking and Switching Technology
China University of Petroleum (East China)
266580QingdaoChina
College of Computer Science and Technology
Beijing University of Posts and Telecommunications
100876BeijingChina., China
University of Petroleum (East China)
266580QingdaoChina
Department of Computer Science and Information Engineering
be University
147004PatialaIndia
School of Computer Science
Asia University
41354TaichungTaiwan
Lei Liu is with the State Key Laboratory of Integrated Services Networks
University of Petroleum and Energy Studies
248007DehradunIndia
Xidian University
Xi'an 710071China
Space-Air-Ground Integrated Multi-domain Network Resource Orchestration based on Virtual Network Architecture: a DRL Method
IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS
(Corresponding authors: Neeraj Kumar and Peiying Zhang.) Peiying Zhang is with the College of Computer Science and Technology; Neeraj Kumar is with the Department of Computer Science and Engineering, Thapar Institute of Engineering and Technology, Deemed to be University.
Index Terms: Wireless Communication Network; Space-air-ground Integrated Network; Virtual Network Architecture; Virtual Network Embedding; Deep Reinforcement Learning
Traditional ground wireless communication networks cannot provide high-quality services for artificial intelligence (AI) applications such as intelligent transportation systems (ITS) due to deployment, coverage and capacity issues. The spaceair-ground integrated network (SAGIN) has become a research focus in the industry. Compared with traditional wireless communication networks, SAGIN is more flexible and reliable, and it has wider coverage and higher quality of seamless connection. However, due to its inherent heterogeneity, time-varying and selforganizing characteristics, the deployment and use of SAGIN still faces huge challenges, among which the orchestration of heterogeneous resources is a key issue. Based on virtual network architecture and deep reinforcement learning (DRL), we model SAGIN's heterogeneous resource orchestration as a multi-domain virtual network embedding (VNE) problem, and propose a SAGIN cross-domain VNE algorithm. We model the different network segments of SAGIN, and set the network attributes according to the actual situation of SAGIN and user needs. In DRL, the agent is acted by a five-layer policy network. We build a feature matrix based on network attributes extracted from SAGIN and use it as the agent training environment. Through training, the probability of each underlying node being embedded can be derived. In test phase, we complete the embedding process of virtual nodes and links in turn based on this probability. Finally, we verify the effectiveness of the algorithm from both training and testing.
I. INTRODUCTION
In recent years, with the vigorous development of the artificial intelligence (AI) industry, intelligent transportation systems (ITS) have entered a stage of rapid development [1], [2]. Among them, as the main part of ITS, vehicular communication networks (VCNs) mainly rely on the communication services provided by 802.11p networks and cellular networks, which can complete vehicular functions such as road safety, entertainment interaction and location awareness to a certain extent [3], [4]. The development potential of VCN is huge. It is estimated that the number of vehicles connected to the Internet will reach 286 million by 2025. However, the deployment of VCN is facing a series of inevitable problems. First of all, 802.11p networks and cellular networks only provide dedicated short-distance communication services. The rapid movement of vehicles may cause frequent interruptions of network connections, thereby reducing service quality [5]. Secondly, the deployment of ground communication facilities (base stations (BSs), roadside units (RSUs)) is expensive and takes a long time to deploy [6], [7]. It is impossible to achieve high coverage in rural or remote mountainous areas. Finally, ground communication facilities are easily damaged by natural disasters such as earthquakes or floods, and cannot provide stable communication services for vehicles at any time [8]. Therefore, VCN deployment, coverage and capacity issues still need to be resolved urgently [9], [10]. Radio network resource management faces severe challenges, including storage, spectrum, computing resource allocation, and joint allocation of multiple resources [11], [12]. With the rapid development of communication networks, the integrated space-ground network has also become a key research object [13].
Space-air-ground integrated networks (SAGIN) can provide three-dimensional network connection for vehicles anytime and anywhere, which has become the key research direction of the next generation of ITS [14]. For example, Tesla plans to launch a certain number of commercial satellites to provide global connectivity services for new energy vehicles. The mTenna equipped with Toyota's Mirai car can provide it with a transfer rate of 50 MB/s. Google and Facebook also plan to deploy balloons and unmanned aerial vehicles (UAVs) to provide Internet services in remote areas, respectively. As a promising network architecture, SAGIN can provide seamless global connectivity and efficient, reliable low-latency services for emerging applications including VCN [15]. SAGIN is essentially a layered network architecture, which is mainly composed of three network segments, as shown in Fig. 1. Satellites can be divided into three categories according to the height above the ground: geosynchronous orbit (GEO), medium earth orbit (MEO) and low earth orbit (LEO) satellites. Air networks can be divided into high altitude platform (HAP) and low altitude platform (LAP). UAVs, balloons and airships are its main components [16]. Ground networks mainly refer to traditional communication networks such as cellular networks and wireless local area networks (WLAN). SAGIN is a layered network architecture, and different network segments are quite different. The network nodes composed of satellites or UAVs are always in a mobile state, so SAGIN has inherent characteristics such as heterogeneity, time-varying and self-organizing [17]. SAGIN is restricted by many factors such as traffic distribution, routing scheduling, power control, spectrum allocation, and load balancing [18]. Among them, the allocation and scheduling of heterogeneous physical network resources is a key issue. A reliable idea is to adopt a new architecture to enhance SAGIN, focusing on solving the allocation and scheduling of SAGIN's heterogeneous physical network resources, i.e., the problem of network resource orchestration. Network virtualization (NV) is a technology that logically abstracts physical networks [19], [20]. It can solve the problem of resource allocation in heterogeneous networks by providing intelligent and flexible management and orchestration systems. Virtual network embedding (VNE) is the core issue of NV research, and its essence is the orchestration of network resources [21]. Therefore, the resource allocation problem of SAGIN can be transferred to the research of VNE algorithms. In SAGIN, physical network resources may come from different network segments, so we consider implementing a multi-domain VNE algorithm based on a virtual network architecture.
AI technology solves many problems in daily production and life with its superior performance, especially for perception and decision-making problems in high-dimensional spaces [22]. The rapid development and universal application of deep learning (DL) and reinforcement learning (RL) are the key to the success of AI [23], [24]. The former has strong perception ability, while the latter has strong decision-making ability. The product of the combination of the two is deep reinforcement learning (DRL). DRL is essentially an end-to-end perception and control system, which has strong versatility and is usually used to solve decision-making problems in high-dimensional spaces. Scholars have already used this technology to improve network performance [25], [26]. VNE is NP-hard [27]. DRL has better performance than optimization methods or heuristic methods when solving such problems [28], [29]. Therefore, we consider using DRL methods to optimize the resource scheduling problem of SAGIN. The main contributions of this paper are as follows.

1) In order to improve the efficiency and rationality of the allocation of heterogeneous network resources in SAGIN, based on the virtual network architecture, we model the resource scheduling problem of SAGIN as a multi-domain VNE problem, and provide a solution for resource allocation of SAGIN from the perspective of VNE.

2) We use DRL to improve the performance of the multi-domain VNE algorithm. Specifically, we use a self-built policy network as the agent, and form a feature matrix by extracting SAGIN resource attributes to provide an environment for agent training. The DRL method derives the node embedding probability, which then drives the entire multi-domain VNE algorithm.

3) We verify the performance of the proposed algorithm through experimental simulations. According to the actual network characteristics of different network segments of SAGIN, we set differentiated network attributes for the network topology. Experimental results show that the proposed algorithm performs well on multiple network performance indicators.

The rest of this paper is organized as follows. Section II reviews the related research work carried out on SAGIN, including SAGIN based on virtual network architecture. Section III describes related issues and system models. Section IV gives the constraints and performance indicators of the algorithm. Section V introduces the algorithm realization process. We show and analyze the experimental results in Section VI. Section VII summarizes the full paper.
II. RELATED WORK
A. Overview of Research Status of SAGIN Technology

SAGIN, as an important form of future wireless network communication system, has become a research hotspot in industry and academia. Scholars have carried out a lot of research on SAGIN related technologies. References [30] and [31] have carried out research from the perspectives of satellite systems and UAV communication networks respectively. They focused on summarizing a series of problems (dynamic topology, energy loss and capacity limitation) faced by satellite communications and UAV communications. Spectrum allocation, traffic offloading, routing strategy and system integration are key issues in SAGIN research. Li et al. [32] developed a spectrum allocation scheme for cognitive satellite networks. This solution used Bayesian equalization as the final spectrum allocation strategy, and improved spectrum efficiency by making full use of spectrum resources and overall user demand. The authors of [33] studied the spectrum allocation problem when multiple UAVs were used as relay nodes. The authors aimed to maximize the service efficiency of the integrated IoT network system, and solved the joint optimization problem of bandwidth allocation, gateway selection, and UAV deployment based on simulated annealing and continuous convex planning. In order to provide an economical, reliable, and efficient resource management solution for air vehicles, Varasteh et al. [34] modeled routing and service placement problems as virtual machine placement problems. After weighing different optimization solutions, the authors decided on the routing, service placement and service migration of the aircraft in SAGIN, realizing the dynamic adjustment of the service network. Ruan et al. [35] studied the issue of spectrum efficiency between satellite networks and ground networks. The authors proposed an adaptive transmission scheme with symbol error rate (SER) constraints. Finally, they took the SER as a constraint and discussed the trade-off between energy efficiency and spectrum efficiency.
B. Research Status of SAGIN based on Virtual Network Architecture
The virtual network architecture has begun to be applied in SAGIN. As a future network architecture, virtual networks have significant advantages in the development and management of heterogeneous resources, and have been favored by researchers. As excellent representatives of NV, software defined networking (SDN) and network function virtualization (NFV) are considered to be enabling technologies for the flexible and effective integration of heterogeneous networks, and can provide innovative solutions for the orchestration of heterogeneous network resources.
The authors of [36] proposed a software defined SAGIN architecture based on reviewing the motivation and challenges of SAGIN integration. In order to protect the traditional services in different segmented networks, the authors used network slicing to slice the resources of each network segment, and then put the available resources into a public resource pool, which provided reliable services for the Internet of Vehicles. In order to optimize the load balancing of network communication, reference [37] studied a software defined SAGIN routing algorithm. Based on the characteristics of the SDN model and the dynamic changes of SAGIN topology, the authors considered the multidimensionality of resources and energy consumption, which effectively reduced the end-to-end delay and packet loss rate. Wang et al. [38] proposed a SAGIN reconfigurable service framework based on service function chain (SFC). The framework modeled the realization of SFC and virtual network functions (VNFs) as integer nonlinear programming problems. The authors proposed a heuristic greedy algorithm to balance the resource consumption of different network nodes. The results proved that this algorithm can improve resource utilization efficiency. Du et al. [39] studied spectrum sharing and interference control technology based on SDN. They proposed a spectrum sharing and service offloading mechanism to realize the cooperative relationship between the ground BSs and the beam group of the satellite ground communication system. In this mode, the communication between satellites and the ground effectively realized frequency sharing and traffic offloading.
C. Research Status Analysis
Based on the comprehensive analysis of SAGIN related research, it is found that they all have the following problems.
1) In the technical research field of SAGIN, existing studies only consider one of the space, air, and ground network segments, or a combination of any two segments, and do not pay attention to the integration of the three-dimensional space-air-ground network. 2) In the research field of SAGIN technology based on virtual network architecture, the existing work usually adopts optimization methods or heuristic methods to solve SAGIN resource management problems, and does not apply AI algorithms to SAGIN resource orchestration problems.
3) The existing work only analyzes the possible impact of the heterogeneity and time-varying nature of SAGIN on network resource allocation, and does not model the physical network resources of different network segments, so it cannot fully reflect the dynamic changes of network resources when SAGIN provides services for end users.

Therefore, on the basis of analyzing the inherent characteristics of SAGIN, such as heterogeneity, time-variability and self-organization, this paper proposes a SAGIN resource orchestration algorithm based on the virtual network architecture and a DRL method, which is essentially a multi-domain VNE algorithm.
III. PROBLEM DESCRIPTION AND SYSTEM MODEL
A. Problem Description
One of the most prominent features of SAGIN is that it is time-varying. Since satellites and aerial vehicles are constantly moving, the network topology is always changing. For example, a vehicle may be receiving the positioning service provided by satellite A, but due to the orbital motion of the satellite, the vehicle leaves the service coverage area of satellite A and enters the service coverage area of satellite B instead; the way network resources are provided then changes. In addition, due to the heterogeneity of SAGIN, the network resources of different network segments are also heterogeneous. Satellite nodes or aerial vehicle nodes have small capacity due to volume limitations, so their computing resources are often limited. It should be noted that the delay attributes of channel links in different network segments are often quite different [40], [41].
We assume that the network topology of different network segments of SAGIN is relatively unchanged within a certain period of time, and end users are always within the service range of one or several satellites or aircraft. Under the virtual network architecture, the network resource request sent by the end user to SAGIN forms a virtual network request (VNR). SAGIN will allocate network resources of different network segments according to the actual needs of the VNR. We regard T as the reconstruction period of SAGIN resources, and a VNR may arrive at any point within T. The process of serving a VNR will not be interrupted by the deadline T. If the VNR cannot be completed within one period, it will continue to be served in the next reconstruction period T. When the VNR leaves, the network resources occupied by it are released. The ultimate goal is to increase the revenue of network service providers on the basis of accepting as many VNRs as possible. Therefore, the problem of VNE across multiple network domains is finally formed. TABLE I summarizes the notations used in the multi-domain VNE problem of SAGIN based on the virtual network architecture.
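To make the arrival and departure dynamics described above concrete, the following minimal Python sketch simulates VNRs arriving at random instants within one reconstruction period T and releasing their resources on departure. The uniform arrival times, the lifetime range and all function names are illustrative assumptions, not taken from the paper.

```python
import random

def simulate_vnr_lifecycle(period_T=100, n_requests=10, seed=0):
    """Toy event-driven view of VNR arrivals/departures within one
    reconstruction period T (assumption: uniform arrivals and random
    lifetimes that may spill into the next period)."""
    random.seed(seed)
    events = []
    for vnr_id in range(n_requests):
        arrival = random.uniform(0, period_T)
        lifetime = random.uniform(10, 150)          # may exceed period_T
        events.append((arrival, "arrive", vnr_id))
        events.append((arrival + lifetime, "depart", vnr_id))
    active = set()
    for t, kind, vnr_id in sorted(events):
        if kind == "arrive":
            active.add(vnr_id)                      # embed and occupy resources
        else:
            active.discard(vnr_id)                  # release occupied resources
        print(f"t={t:7.2f}  {kind:6s} VNR {vnr_id}  active={len(active)}")

if __name__ == "__main__":
    simulate_vnr_lifecycle()
```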
B. System Model
1) Physical Network Model:
The physical network is modeled as an undirected weighted graph $G^P = \{N^P, E^P, A^P\}$, where $N^P$ represents the network node set of SAGIN, $E^P$ represents the network link set of SAGIN, and $A^P$ represents the network attribute set of SAGIN. The network node set is $N^P = \{N^P_S, N^P_A, N^P_G\}$, where $N^P_S$ is the satellite node set, $N^P_A$ is the aerial node set, and $N^P_G$ is the ground node set. The network link set is $E^P = \{E^P_S, E^P_A, E^P_G, E^P_{S,A}, E^P_{S,G}, E^P_{A,G}\}$, where $E^P_S$, $E^P_A$ and $E^P_G$ are the physical links among satellite nodes, air nodes and ground nodes respectively. In particular, $E^P_{S,A}$ is the set of inter-domain links between satellite nodes and air nodes, $E^P_{S,G}$ is the set of inter-domain links between satellite nodes and ground nodes, and $E^P_{A,G}$ is the set of inter-domain links between aerial nodes and ground nodes. The network attribute set is $A^P = \{CPU_{N^P_S}, CPU_{N^P_A}, CPU_{N^P_G}, BW_{E^P_S}, BW_{E^P_A}, BW_{E^P_G}, D_{E^P_S}, D_{E^P_A}, D_{E^P_G}\}$, where $CPU_{N^P_S}$ represents the computing resource attributes of satellite nodes, $CPU_{N^P_A}$ represents the computing resource attributes of air nodes, and $CPU_{N^P_G}$ represents the computing resource attributes of ground nodes. $BW_{E^P_S}$, $BW_{E^P_A}$ and $BW_{E^P_G}$ are the bandwidth resource attributes of satellite, air and ground network links respectively. $D_{E^P_S}$, $D_{E^P_A}$ and $D_{E^P_G}$ are the delay attributes of satellite, air and ground network links respectively. We use $\{(N^P_m, N^P_n) \in E^P \mid N^P_m, N^P_n \in N^P\}$ to indicate that there is a link between nodes $N^P_m$ and $N^P_n$. Thus, the bandwidth resource attribute between nodes $N^P_m$ and $N^P_n$ can be expressed as $BW^P_{N^P_m,N^P_n}$, and the single-hop delay between nodes $N^P_m$ and $N^P_n$ can be expressed as $D^P_{N^P_m,N^P_n}$.
2) Virtual Network Model: VNRs are modeled as an undirected weighted graph $G^V = \{N^V, E^V, A^V\}$, where $N^V$ represents the virtual node set, $E^V$ represents the virtual link set, and $A^V$ represents the attribute set of VNRs. The attribute set is $A^V = \{CPU_{N^V}, BW_{E^V}, D_{E^V}\}$, where $CPU_{N^V}$ represents the computing resource requirements of virtual nodes, $BW_{E^V}$ represents the bandwidth resource requirements of virtual links, and $D_{E^V}$ represents the delay requirements of virtual links. In particular, we use $\{(N^V_j, N^V_k) \in E^V \mid N^V_j, N^V_k \in N^V\}$ to indicate that there is a link between virtual nodes $N^V_j$ and $N^V_k$. Thus, the bandwidth resource requirement between virtual nodes $j$ and $k$ can be expressed as $BW^V_{N^V_j,N^V_k}$, and the delay requirement between virtual nodes $j$ and $k$ can be expressed as $D^V_{N^V_j,N^V_k}$.

Fig. 2 shows an example of a VNR embedded in SAGIN. The ellipses in the figure represent network nodes, and the connections between nodes represent network links. In SAGIN, the number on a node represents the amount of computing resources, and the numbers on a link represent the amount of bandwidth resources and the delay value respectively. In the VNR, the number on a node represents the computing resource demand, and the numbers on a link represent the bandwidth resource demand and the maximum tolerable delay respectively. In the feasible VNE scheme, virtual node a is mapped to satellite node B, virtual node b is mapped to aerial node D, and virtual node c is mapped to ground node G. The CPU resource capacity of each mapped physical node meets the requirements of the corresponding virtual node, and the link resource conditions between the nodes are also satisfied. If instead virtual node a were mapped to satellite node A, virtual node b to aerial node C, and virtual node c to ground node F, the delay of the inter-domain link between A and C would exceed the delay requirement of the virtual link between a and b, and the delay of the inter-domain link between A and F would exceed the delay requirement of the virtual link between a and c, so this is not a feasible VNE scheme.
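The graph model above can be represented directly with standard graph tooling. The sketch below is a hedged illustration, not the authors' code: it builds a small physical graph $G^P$ and a VNR $G^V$ in the spirit of Fig. 2 using networkx, and all concrete CPU, bandwidth and delay numbers are invented for illustration.

```python
import networkx as nx

# Physical SAGIN graph G^P: nodes carry CPU, links carry bandwidth and delay.
GP = nx.Graph()
GP.add_node("A", cpu=30, segment="satellite")
GP.add_node("B", cpu=35, segment="satellite")
GP.add_node("C", cpu=25, segment="air")
GP.add_node("D", cpu=28, segment="air")
GP.add_node("F", cpu=80, segment="ground")
GP.add_node("G", cpu=90, segment="ground")
GP.add_edge("A", "B", bw=80, delay=30)   # intra-domain satellite link
GP.add_edge("C", "D", bw=70, delay=20)   # intra-domain air link
GP.add_edge("F", "G", bw=90, delay=10)   # intra-domain ground link
GP.add_edge("B", "D", bw=60, delay=45)   # satellite-air inter-domain link
GP.add_edge("D", "G", bw=60, delay=50)   # air-ground inter-domain link
GP.add_edge("A", "C", bw=60, delay=70)   # higher-delay inter-domain link

# Virtual network request G^V: CPU demands on nodes, (bw, max delay) on links.
GV = nx.Graph()
GV.add_node("a", cpu=10)
GV.add_node("b", cpu=8)
GV.add_node("c", cpu=12)
GV.add_edge("a", "b", bw=15, delay=50)
GV.add_edge("b", "c", bw=10, delay=55)

print(GP.nodes(data=True))
print(GV.edges(data=True))
```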
IV. CONSTRAINTS AND PERFORMANCE INDICATORS
A. Attribute Constraints
We use the binary variable $x_{n^v,n^p}$ to indicate whether the virtual node $n^v$ is embedded on the physical node $n^p$, as shown below.

$$x_{n^v,n^p} = \begin{cases} 1, & n^v \text{ is embedded on } n^p, \\ 0, & \text{otherwise}. \end{cases} \tag{1}$$
The binary variable $y^{N^V_j,N^V_k}_{N^P_m,N^P_n}$ is also used to indicate whether the virtual link $(N^V_j, N^V_k)$ is embedded on the physical link $(N^P_m, N^P_n)$, as shown below.

$$y^{N^V_j,N^V_k}_{N^P_m,N^P_n} = \begin{cases} 1, & (N^V_j, N^V_k) \text{ is embedded on } (N^P_m, N^P_n), \\ 0, & \text{otherwise}. \end{cases} \tag{2}$$

Each physical node may be embedded by multiple virtual nodes from different VNRs, expressed as follows.

$$\sum_{n^v \uparrow n^p} x_{n^v,n^p} \geq 1, \; n^v \in G^V_i, \; i = 1, 2, \ldots, |VNR|; \qquad \sum_{n^v \uparrow n^p} x_{n^v,n^p} = 1, \; n^v \in G^V. \tag{3}$$
Each virtual link may be embedded on multiple physical links. Expressed as follows.
$$\sum_{(N^V_j,N^V_k) \uparrow (N^P_m,N^P_n)} y^{N^V_j,N^V_k}_{N^P_m,N^P_n} \geq 1. \tag{4}$$
If the virtual node n v is embedded on the physical node n p , the computing resource capacity of n p should meet the computing resource consumption of n v , which is expressed as follows.
$$CPU_{n^p} \geq CPU_{n^v}, \quad \text{if } n^v \uparrow n^p. \tag{5}$$
For the physical node n p , the total consumption of computing resource requirements of all virtual nodes embedded in n p cannot exceed the total computing resource of n p .
$$\sum_{i=1}^{|VNR|} \sum_{n^v \uparrow n^p} CPU_{n^v_i} \leq CPU_{n^p}. \tag{6}$$
If the virtual link (N V j , N V k ) is embedded on the physical link (N P m , N P n ), the bandwidth resource capacity of (N P m , N P n ) should not be less than the bandwidth resource demand of (N V j , N V k ).
$$BW_{(N^V_j,N^V_k)} \leq BW_{(N^P_m,N^P_n)}, \quad \text{if } (N^V_j, N^V_k) \uparrow (N^P_m, N^P_n). \tag{7}$$
For the physical link (N P m , N P n ), the total bandwidth resource demand of all virtual links embedded in the physical link (N P m , N P n ) cannot exceed the total bandwidth resource of the physical link (N P m , N P n ).
$$\sum_{i=1}^{|VNR|} \sum_{(N^V_j,N^V_k) \uparrow (N^P_m,N^P_n)} BW_{(N^V_j,N^V_k)_i} \leq BW_{(N^P_m,N^P_n)}. \tag{8}$$
In SAGIN, the transmission delay of link within different network segments are different. In satellite network, the link delay in the satellite network domain is usually large due to the interference of the propagation medium, radiation and temperature. Moreover, the delay of inter-domain links is often greater than the delay of intra-domain links. We set the delay attribute for physical links and virtual links, and the virtual link can only be embedded on the physical link that is not greater than its delay requirement, as shown below.
$$D_{(N^V_j,N^V_k)} \geq D_{(N^P_m,N^P_n)}, \quad \text{if } (N^V_j, N^V_k) \uparrow (N^P_m, N^P_n). \tag{9}$$
In the graph model, the transmission of traffic must comply with the law of conservation of traffic, which is a necessary condition for establishing a routing path, i.e., the traffic flowing into the physical node N P m must be equal to the traffic flowing out of the physical node N P n , as shown below.
$$\sum_{N^P_m \in N^P} y^{(N^V_j,N^V_k)}_{(N^P_m,N^P_n)} - \sum_{N^P_m \in N^P} y^{(N^V_j,N^V_k)}_{(N^P_n,N^P_m)} = x_{N^V_j,N^P_m} - x_{N^V_k,N^P_m}, \quad \forall N^P_n \in N^P, \; \forall (N^V_j, N^V_k) \in E^V. \tag{10}$$
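A compact way to read the constraints in Eqs. (5)-(9) is as a feasibility check over a candidate mapping. The sketch below assumes the networkx-style `GP`/`GV` graphs from the earlier illustration, a `node_map` from virtual to physical nodes and a `link_map` from virtual links to physical paths; it interprets the delay constraint as a bound on the accumulated path delay, which is one possible reading of Eq. (9).

```python
# Hedged feasibility check corresponding to Eqs. (5)-(9); not the authors' code.
def mapping_is_feasible(GP, GV, node_map, link_map):
    # Eqs. (5)-(6): the total CPU demand placed on each physical node must fit.
    used_cpu = {}
    for nv, host in node_map.items():
        used_cpu[host] = used_cpu.get(host, 0) + GV.nodes[nv]["cpu"]
    if any(demand > GP.nodes[host]["cpu"] for host, demand in used_cpu.items()):
        return False

    used_bw = {}
    for (vj, vk), path in link_map.items():
        demand_bw = GV.edges[vj, vk]["bw"]
        max_delay = GV.edges[vj, vk]["delay"]
        path_delay = 0.0
        for m, n in zip(path, path[1:]):
            key = frozenset((m, n))
            # Eqs. (7)-(8): accumulated bandwidth on each physical link must fit.
            used_bw[key] = used_bw.get(key, 0) + demand_bw
            if used_bw[key] > GP.edges[m, n]["bw"]:
                return False
            path_delay += GP.edges[m, n]["delay"]
        # Eq. (9), read here as a per-path delay bound for the virtual link.
        if path_delay > max_delay:
            return False
    return True
```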
B. Performance Indicators
The resource consumption cost of VNE embedded in multi domain SAGIN is calculated as follows.
$$Cost_{G^V \uparrow G^P} = \sum_{i=1, n^v_i \in N^V}^{|n^v|} CPU_{n^v_i} + \sum_{i=1, (N^V_j,N^V_k)_i \in E^V}^{|e^v|} BW_{(N^V_j,N^V_k)_i} \times hops[(N^V_j, N^V_k)], \tag{11}$$
where $hops[(N^V_j, N^V_k)]$ represents the number of physical hops of the virtual link $(N^V_j, N^V_k)$. The goal of VNE is to increase the revenue on the basis of accepting as many VNRs as possible. The revenue of VNE is calculated as follows.
$$Revenue_{G^V \uparrow G^P} = \sum_{i=1, n^v_i \in N^V}^{|n^v|} CPU_{n^v_i} + \sum_{i=1, (N^V_j,N^V_k)_i \in E^V}^{|e^v|} BW_{(N^V_j,N^V_k)_i}. \tag{12}$$
We use the VNE long-term average revenue, the long-term revenue-cost ratio and the VNR acceptance rate to evaluate the performance of the SAGIN cross-domain VNE algorithm. The long-term average revenue is calculated as follows.
$$R = \lim_{T \to \infty} \frac{\sum_{t=0}^{T} Revenue_{G^V \uparrow G^P, t}}{T}. \tag{13}$$
The long-term revenue-cost ratio is calculated as follows.
$$R/C = \lim_{T \to \infty} \frac{\sum_{t=0}^{T} Revenue_{G^V \uparrow G^P, t}}{\sum_{t=0}^{T} Cost_{G^V \uparrow G^P, t}}. \tag{14}$$
The VNR acceptance rate is calculated as follows.
$$ACC = \lim_{T \to \infty} \frac{\sum_{t=0}^{T} G^V_{acc}}{\sum_{t=0}^{T} G^V_{arr}}, \tag{15}$$
where $G^V_{arr}$ represents the number of arrived virtual network requests, and $G^V_{acc}$ represents the number of successfully embedded virtual network requests.
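The indicators in Eqs. (11)-(15) can be computed from the accepted embeddings as in the hedged sketch below; the finite-horizon averages stand in for the limits, and the time horizon T is approximated by the number of arrived requests, which is an assumption rather than the paper's exact bookkeeping.

```python
# Sketch of the performance indicators in Eqs. (11)-(15); `GV` graphs and
# `link_map` paths follow the earlier networkx-style sketches.
def revenue(GV):
    # Eq. (12): sum of virtual CPU demands plus virtual bandwidth demands.
    cpu = sum(d["cpu"] for _, d in GV.nodes(data=True))
    bw = sum(d["bw"] for _, _, d in GV.edges(data=True))
    return cpu + bw

def cost(GV, link_map):
    # Eq. (11): bandwidth is paid once per physical hop of the embedded path.
    cpu = sum(d["cpu"] for _, d in GV.nodes(data=True))
    bw = sum(GV.edges[vj, vk]["bw"] * (len(path) - 1)
             for (vj, vk), path in link_map.items())
    return cpu + bw

def long_term_metrics(accepted, arrived):
    # Eqs. (13)-(15) over a finite horizon: `accepted` is a list of
    # (revenue, cost) pairs for embedded VNRs, `arrived` a list of all VNRs.
    total_rev = sum(r for r, _ in accepted)
    total_cost = sum(c for _, c in accepted)
    avg_revenue = total_rev / max(len(arrived), 1)
    rc_ratio = total_rev / max(total_cost, 1e-9)
    acceptance = len(accepted) / max(len(arrived), 1)
    return avg_revenue, rc_ratio, acceptance
```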
V. ALGORITHM IMPLEMENTATION
A. Feature Matrix and Policy Network
The implementation of the SAGIN cross-domain VNE algorithm based on the virtual network architecture is divided into node embedding stage and link embedding stage. We apply the DRL method to the virtual node embedding stage to derive the probability of each SAGIN node being embedded. The key to achieve the desired effect of the DRL method is the effective interaction between the agent and the environment, i.e., the agent needs to be trained in the SAGIN environment as real as possible. Therefore, we use the feature matrix extracted from SAGIN as the input of the agent, and train the agent according to the changes in the underlying resources of SAGIN.
We extract the following four network attributes for each physical node of satellite networks, air networks and ground networks: computing resources, sum of connected link bandwidth, sum of connected link delay and average distance to other non embedded nodes. The above four attributes not only focus on the local characteristics of SAGIN, but also take into account the global characteristics of SAGIN, so they can characterize the underlying network more comprehensively. Among them, the link connected to a physical node refers to the intra domain links, and the sum of bandwidth of the link connected to the physical node is calculated as follows.
$$SUM(n^p)_{BW} = \sum_{(N^P_m,N^P_n) \in E^P_{n^p}} BW[(N^P_m, N^P_n)], \tag{16}$$
where E P n p represents the physical link connected to the physical node n p . In the same way, the sum of delay of the links connected to a physical node is calculated as follows.
$$SUM(n^p)_{D} = \sum_{(N^P_m,N^P_n) \in E^P_{n^p}} D[(N^P_m, N^P_n)]. \tag{17}$$
A larger value of $SUM(n^p)_{BW}$ means that when the virtual node $n^v$ is embedded on the physical node $n^p$, richer bandwidth resources are available and more links can be embedded. Conversely, a smaller value of $SUM(n^p)_{D}$ means less delay interference when the virtual node $n^v$ is embedded on the physical node $n^p$. The average distance to the other non-embedded nodes in the domain is calculated based on the number of link hops; the smaller the value, the lower the bandwidth resource cost and delay limitation of link embedding. The calculation method is as follows.
$$AVG(n^p)_{DST} = \frac{\sum_{n^p_i \in N^P} DST(n^p, n^p_i)}{|N^P| + 1}, \tag{18}$$
where $n^p_i$ refers to those physical nodes that have not yet been embedded by virtual nodes, and $DST(n^p, n^p_i)$ refers to the distance from node $n^p$ to the other non-embedded nodes in the domain.
It should be noted that the underlying network attributes that can be extracted are far more than the above four. Other network attributes such as node degree, storage resources, etc. are all network attributes that can be extracted. Extracting more network attributes means that more detailed information about the underlying network resources can be provided to the agent, but the computational complexity of the algorithm will also increase. Therefore, after comprehensively considering the actual situation of SAGIN and multi-domain VNE, it is more appropriate to extract the above four features. After extracting the feature of each physical node, the normalized value is concatenated into a feature vector. For nodes in different network segments of SAGIN, the feature vectors are expressed as follows.
$$\begin{aligned}
&(CPU(n^p_s), SUM(n^p_s)_{BW}, SUM(n^p_s)_{D}, AVG(n^p_s)_{DST})^T, \; n^p_s \in N^P_S, \\
&(CPU(n^p_a), SUM(n^p_a)_{BW}, SUM(n^p_a)_{D}, AVG(n^p_a)_{DST})^T, \; n^p_a \in N^P_A, \\
&(CPU(n^p_g), SUM(n^p_g)_{BW}, SUM(n^p_g)_{D}, AVG(n^p_g)_{DST})^T, \; n^p_g \in N^P_G.
\end{aligned} \tag{19}$$
Combine all the feature vectors extracted from SAGIN into a four-dimensional feature matrix. Each row of the matrix is the feature vector of a certain physical node. Agent extracts a feature matrix from SAGIN every time it is trained, so the feature matrix is constantly changing as the underlying network resources change.
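A possible implementation of the four-feature extraction in Eqs. (16)-(18) and of the feature-matrix construction is sketched below; it assumes the networkx-style physical graph `GP` used earlier and applies a simple min-max normalization, which the paper does not specify in detail.

```python
import numpy as np
import networkx as nx

def node_features(GP, embedded=frozenset()):
    """Build the (num_nodes x 4) feature matrix of Eqs. (16)-(18);
    `embedded` holds physical nodes already occupied by virtual nodes."""
    rows = []
    free = [n for n in GP.nodes if n not in embedded]
    dist = dict(nx.all_pairs_shortest_path_length(GP))   # hop distances
    for n in GP.nodes:
        cpu = GP.nodes[n]["cpu"]
        sum_bw = sum(GP.edges[n, m]["bw"] for m in GP.neighbors(n))        # Eq. (16)
        sum_delay = sum(GP.edges[n, m]["delay"] for m in GP.neighbors(n))  # Eq. (17)
        # Eq. (18); unreachable nodes are simply ignored in this sketch.
        avg_dst = sum(dist[n].get(m, 0) for m in free) / (GP.number_of_nodes() + 1)
        rows.append([cpu, sum_bw, sum_delay, avg_dst])
    feats = np.asarray(rows, dtype=np.float32)
    # Column-wise min-max normalization before feeding the policy network.
    feats = (feats - feats.min(0)) / (feats.max(0) - feats.min(0) + 1e-9)
    return feats
```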
The key to DRL method with good perception and decisionmaking ability is the design and selection of agent. In the proposed algorithm, the agent is assumed by a five-layer policy network, which is composed of the basic elements of neural network. They are extraction layer, convolution layer, probabilistic layer, filtering layer and output layer, respectively. As shown in Fig. 3. The extraction layer is used to extract the feature matrix from SAGIN during agent training. The convolution layer performs a convolution operation on each feature vector in the feature matrix to obtain the available resource vector form of each feature vector. In probabilistic layer, we use softmax function to generate a probability for each feature vector, i.e., the probability that each physical node is embedded. The filtering layer is used to filter those physical nodes that do not meet the embedding requirements due to excessive resource consumption. The output layer is used to output an available physical node, which is sorted according to the probability of being embedded from large to small. Among them, the convolution operation method is as follows,
$$ARV^{cov}_i = \omega \times v_i + b. \tag{20}$$
The calculation method of softmax function is as follows,
$$p_i = \frac{e^{ARV^{cov}_i}}{\sum_n e^{ARV^{cov}_n}}, \tag{21}$$

where $ARV^{cov}_i$ represents the $i$-th output of the convolution layer. In this way, the embedding probability of the $i$-th node can be calculated.
B. Training and Testing
The training of agent is realized in the process of interaction with the environment. Initializing the parameters of policy network, we assume that all VNRs in each VNR period T follow a constant distribution. For each request period T , the policy network will extract a feature matrix from SAGIN as input. After the embedding probabilities of all physical nodes are output, the embedding of virtual nodes is completed in a predetermined order. Then we use the breadth first search strategy to complete the embedding of intra-domain links, and finally complete the embedding of inter-domain links.
In multi-domain VNE, we use the revenue-consumption ratio as the agent's reward signal. The revenue-consumption ratio can fully reflect the utilization of the underlying network resources. When the reward signal is large, it means that the node selection strategy currently adopted by the agent can obtain large VNE revenue, i.e., the current action is effective. On the contrary, the agent needs to adjust its actions. The learning rate of agent is also involved in the training phase, and the learning rate will directly affect the gradient of policy network. If the parameter gradient of policy network is large, the training may not derive a better embedding strategy, and the training will be meaningless. In contrast, the training process will be very slow, reducing the efficiency of the algorithm. Therefore, we explore the optimal gradient value by manually adjusting the learning rate. The training process of SAGIN cross-domain VNE algorithm based on virtual network architecture and DRL method is shown in Algorithm 1.
Algorithm 1 Training
Input: $G^P$, $G^V$, policy network parameters;
Output: Probability of SAGIN nodes being embedded;
1: Random initialization of the policy network;
2: while iteration < epoch do
   ...
   if isMapped($\forall n^v \in G^V$) then ...
   if isMapped($\forall n^v \in G^V, \forall e^v \in G^V$) then ...
   iteration++;
16: end while
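The policy-gradient update behind Algorithm 1 can be illustrated with the following self-contained REINFORCE-style toy loop, in which the reward stands in for the revenue-cost ratio. The environment here is synthetic (random features and rewards), so the sketch only shows the update structure, not the SAGIN simulator.

```python
import torch
import torch.nn as nn

class TinyPolicy(nn.Module):
    def __init__(self, n_features=4):
        super().__init__()
        self.score = nn.Linear(n_features, 1)            # per-node score, cf. Eq. (20)

    def forward(self, feats):                            # feats: (num_nodes, n_features)
        return torch.softmax(self.score(feats).squeeze(-1), dim=0)   # cf. Eq. (21)

policy = TinyPolicy()
optimizer = torch.optim.SGD(policy.parameters(), lr=0.005)   # manually tuned learning rate

for step in range(200):
    feats = torch.rand(10, 4)                            # stand-in feature matrix
    probs = policy(feats)
    choice = torch.multinomial(probs, 1)                 # sample a physical node
    reward = torch.rand(())                              # stand-in revenue-cost ratio
    loss = -(reward * torch.log(probs[choice])).sum()    # policy-gradient loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```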
In the testing phase, we directly complete the embedding of virtual nodes based on the embedding probability of SAGIN nodes derived from policy network. Then, the breadth first search strategy is used to sequentially complete the embedding of intra-domain links and inter-domain links. The test process is shown in Algorithm 2.
Algorithm 2 Testing
Input: testset;
Output: Three performance indexes;
1: Random initialization of the policy network;
2: for request ∈ testset and $n^v$ ∈ request do
3:   Virtual node embedding;
4:   Use the BFS strategy to find the shortest path;
5:   Virtual link embedding;
6:   if isMapped($\forall n^v \in G^V, \forall e^v \in G^V$) then
7:     return (success);
8:   end if
9: end for
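The test-phase procedure of Algorithm 2 amounts to greedy node placement by probability followed by BFS path search for each virtual link, as in the sketch below. It reuses the networkx-style graphs from the earlier illustrations, and `node_probs` is assumed to come from the trained policy network.

```python
import networkx as nx

def test_embed(GP, GV, node_probs):
    """Greedy node placement plus BFS (hop-count) link embedding;
    node_probs: dict physical_node -> embedding probability."""
    node_map = {}
    for nv, attrs in GV.nodes(data=True):
        candidates = [p for p in GP.nodes
                      if GP.nodes[p]["cpu"] >= attrs["cpu"] and p not in node_map.values()]
        if not candidates:
            return None                                  # VNR rejected
        node_map[nv] = max(candidates, key=lambda p: node_probs.get(p, 0.0))

    link_map = {}
    for vj, vk, d in GV.edges(data=True):
        # Restrict to physical links with enough bandwidth, then BFS shortest path.
        usable = [(m, n) for m, n, e in GP.edges(data=True) if e["bw"] >= d["bw"]]
        H = GP.edge_subgraph(usable)
        try:
            path = nx.shortest_path(H, node_map[vj], node_map[vk])
        except (nx.NetworkXNoPath, nx.NodeNotFound):
            return None                                  # VNR rejected
        link_map[(vj, vk)] = path
    return node_map, link_map
```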
C. Complexity Analysis
The time complexity of the cross-domain VNE algorithm for SAGIN based on DRL is mainly generated from the two stages of DRL agent training and cross-domain VNE (testing). Since the agent training is performed online and the test phase is performed offline, only the time complexity of the training phase can be considered. For a 4 × n feature matrix, the complexity of extracting it from the underlying network is O(n), and the time complexity of solving all feature vectors is O(n 2 ). When a new VNR arrives, the feature matrix needs to be updated once, so for all VNRs, the complexity of updating the feature matrix is O(kn 2 ). Therefore, the time complexity of training stage is O(n + n 2 + kn 2 ), which can be regarded as the final complexity of the algorithm, where n represents the number of nodes in the SAGIN and k is the number of nodes in the successfully embedded VNR.
VI. EXPERIMENTAL SETUP AND RESULT ANALYSIS

A. Simulation Parameters and Environment
In order to simulate SAGIN, we generate a layered physical network with 100 physical nodes and about 600 physical links, of which 10 physical nodes are used as satellite network nodes, 30 physical nodes are used as air network nodes, and the remaining 60 are used as ground network nodes. There are two inter-domain links between each pair of the three network segments, and the physical nodes connected to the inter-domain links are called boundary nodes. In satellite networks, the computing resources of each physical node are randomly distributed between 20 Tflops and 40 Tflops, the bandwidth resources of each physical link are randomly distributed between 50 Mbps and 100 Mbps, and the delay values are randomly distributed between 20 ms and 40 ms. In air networks, the computing resources of each physical node are randomly distributed between 20 Tflops and 40 Tflops, the bandwidth resources of each physical link are randomly distributed between 50 Mbps and 100 Mbps, and the delay values are randomly distributed between 10 ms and 30 ms. In ground networks, the computing resources of each physical node are randomly distributed between 50 Tflops and 100 Tflops, the bandwidth resources of each physical link are randomly distributed between 50 Mbps and 100 Mbps, and the delay values are randomly distributed between 1 ms and 20 ms. The bandwidth resources of inter-domain links are randomly distributed between 50 Mbps and 100 Mbps, and the delay values are randomly distributed between 40 ms and 60 ms. Besides, we generate 2,000 VNRs, 1,000 of which are used as the training set and 1,000 as the test set. Each VNR randomly contains 2 to 10 virtual nodes. The computing resource demand of each node is randomly distributed between 1 Tflops and 20 Tflops, the bandwidth resource demand of each link is randomly distributed between 1 Mbps and 20 Mbps, and the delay demand is randomly distributed between 1 ms and 50 ms. A virtual link can only be embedded in a physical link that can meet its bandwidth and delay requirements. Each virtual node randomly chooses which SAGIN segment to embed in. We set the batch size to 100, i.e., we update the parameters of the policy network once after every 100 VNRs and re-extract a feature matrix from the underlying network. We summarize the parameter settings of the experimental simulation in TABLE II.
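The topology generation described above (and summarized in TABLE II) can be reproduced approximately with the following hedged sketch; the random wiring of roughly six links per node is an assumption, since the paper does not state how the ~600 links are placed.

```python
import random
import networkx as nx

def build_sagin(seed=0):
    """Approximate 100-node SAGIN substrate split 10/30/60 across segments,
    with uniformly sampled CPU, bandwidth and delay values (cf. TABLE II)."""
    random.seed(seed)
    GP = nx.Graph()
    segments = [("sat", 10, (20, 40), (20, 40)),    # (prefix, count, cpu range, delay range)
                ("air", 30, (20, 40), (10, 30)),
                ("gnd", 60, (50, 100), (1, 20))]
    for prefix, count, cpu_rng, _ in segments:
        for i in range(count):
            GP.add_node(f"{prefix}{i}", cpu=random.uniform(*cpu_rng))
    # Roughly 600 intra-domain links, U[50,100] Mbps bandwidth.
    for prefix, count, _, dly_rng in segments:
        nodes = [n for n in GP.nodes if n.startswith(prefix)]
        for _ in range(6 * count):
            m, n = random.sample(nodes, 2)
            GP.add_edge(m, n, bw=random.uniform(50, 100), delay=random.uniform(*dly_rng))
    # Two inter-domain links between every pair of segments, U[40,60] ms delay.
    for a, b in [("sat", "air"), ("sat", "gnd"), ("air", "gnd")]:
        for _ in range(2):
            m = random.choice([n for n in GP.nodes if n.startswith(a)])
            n = random.choice([n for n in GP.nodes if n.startswith(b)])
            GP.add_edge(m, n, bw=random.uniform(50, 100), delay=random.uniform(40, 60))
    return GP

print(build_sagin().number_of_nodes())
```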
B. Results Display and Analysis
Since multi-domain VNE is a decision-making problem and is NP-hard, it is necessary to verify the training convergence of the agent under different performance indicators. Fig. 4 shows the changes in agent training from three aspects: VNE long-term average revenue, VNR acceptance rate and revenue-cost ratio. In the initial training stage, since the policy network has just been initialized and the agent is not familiar with SAGIN resources, the VNE strategy adopted is random, with low and unstable performance on all indicators. As the training progresses, the agent continues to explore efficient and reasonable VNE strategies, and the accumulated rewards continue to increase. After that, the agent continues to take similar actions to accumulate rewards, so the performance of the three training indicators continues to improve and gradually stabilizes. Therefore, from the training results, the DRL method based on the policy network is effective.
In order to show that the agent can flexibly adjust the strategy when the environment changes, i.e., the performance of the algorithm may be different when SAGIN resource attribute or the demand of VNR changes, we take the delay factor in VNR as a variable to explore the impact on the algorithm when the user's delay demand changes. In the training phase, the delay requirements of all VNRs in the training set are set according to the pre parameters, which is fixed to 50ms. The virtual network can only be embedded in the physical network which is not higher than the delay value. TABLE III shows the average results of the three performance indicators on the training set when the maximum delay requirements of VNR are 50ms, 40ms, 30ms, and 20ms. It can be seen that with the continuous improvement of delay performance requirements (the delay value is getting lower and lower), the revenue of VNE and the VNR acceptance rate are significantly reduced. This is consistent with the facts, because with the continuous increase of VNR delay requirements, there are fewer and fewer SAGIN links that can meet the delay requirements, so fewer and fewer VNRs can be successfully embedded, and the revenue and acceptance rate will decline. The revenue-cost ratio does not show a downward trend, because this index has nothing to do with the amount of VNR embedded, it only depends on the revenue and cost of SAGIN resource consumption. Therefore, when the VNR acceptance rate decreases, the revenue cost ratio will not decrease.
After verifying the effectiveness of the training method, we test the algorithm based on the test set composed of 1,000 VNRs. According to the embedding probability of the physical node derived from training, we directly use the greedy strategy to embed the virtual node [42], and then use the breadth first search strategy to embed the virtual link. Fig. 5, Fig. 6 and Fig. 7 respectively show the test results of the above three indicators under different delay requirements.
From the algorithm test results, it can be seen that the overall changes of the three performances are similar to the training results and are in line with expectations. Since the acceptance rate of VNRs is related to the number of resources of SAGIN, as VNRs continue to be embedded, the available underlying network resources continue to decrease, and the embedding success rate of VNRs continues to decrease. Therefore, the long-term revenue of VNE and the VNR acceptance rate continue to decrease over time. Both VNR acceptance rate and VNE long-term average revenue are affected by changes in delay requirements. As the number of VNRs that can be successfully embedded decreases, both indicators continue to decrease. The test result of VNE revenue-cost ratio shows that this indicator will not change significantly due to changes in delay requirements. In addition, as the delay requirements of VNRs continue to increase, the number of SAGIN nodes and links that can meet the delay requirements decreases, so the performance of the three indicators is continuously reduced. In summary, the experimental results have successfully demonstrated the effectiveness of the DRL-based cross-domain VNE algorithm in the SAGIN resource orchestration field.
In order to further reflect the performance of the algorithm, we compare the algorithm proposed in this paper with the two baseline algorithms proposed in reference [27]. The SAGIN resource orchestration algorithm based on virtual network architecture is essentially a cross-domain VNE algorithm. In order to ensure the fairness of the comparison, we set the same network resource attributes as in this paper for the two comparison algorithms. The NRM-VNE algorithm first calculates a resource metric value for each network node, and then arranges the physical nodes and virtual nodes from large to small according to this metric value. The virtual nodes complete the mapping in this order. In the link mapping stage, the authors arrange the physical links in order from largest to smallest, and then the virtual links complete the mapping in this order. The RCR-VNE algorithm does not perform the sorting process of virtual nodes. In the link mapping stage, the shortest path algorithm is used to complete the mapping process after sorting according to the bandwidth size. Specifically, we compare the algorithm revenue and virtual request acceptance rate, as shown in Fig. 8.
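For orientation, the greedy node-ranking idea behind the NRM-VNE baseline can be sketched as follows; the concrete resource metric (CPU times adjacent bandwidth) is an assumption for illustration, not the exact metric of reference [27].

```python
# Hedged sketch of a greedy resource-metric node ranking; `GP`/`GV` are the
# networkx-style graphs from the earlier sketches.
def nrm_rank(GP):
    score = {}
    for n in GP.nodes:
        adj_bw = sum(GP.edges[n, m]["bw"] for m in GP.neighbors(n))
        score[n] = GP.nodes[n]["cpu"] * adj_bw          # assumed resource metric
    return sorted(GP.nodes, key=lambda n: score[n], reverse=True)

def greedy_node_map(GP, GV):
    ranked = nrm_rank(GP)
    node_map, used = {}, set()
    # Map virtual nodes in descending order of CPU demand onto the
    # best-ranked feasible physical nodes.
    for nv in sorted(GV.nodes, key=lambda v: GV.nodes[v]["cpu"], reverse=True):
        host = next((p for p in ranked
                     if p not in used and GP.nodes[p]["cpu"] >= GV.nodes[nv]["cpu"]), None)
        if host is None:
            return None
        node_map[nv] = host
        used.add(host)
    return node_map
```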
In general, our algorithm achieves better experimental results than the other two benchmark algorithms. At the beginning of the experiment, because the other two benchmark algorithms adopt a greedy strategy that preferentially selects the physical nodes with abundant free resources for mapping, their resource revenue and acceptance rate are relatively high. In the subsequent experimental process, the experimental effect of our algorithm is always better than that of the other two algorithms. On the one hand, the NRM-VNE algorithm and the RCR-VNE algorithm are two heuristic VNE algorithms. They mainly rely on manual mapping rules (such as setting the sorting method of nodes) to implement the VNE algorithm, which greatly limits the flexibility of the algorithm. Our algorithm is a VNE algorithm based on machine learning. This shows that the performance of the VNE algorithm based on the machine learning method is better than that based on the heuristic method. On the other hand, we create a training environment close to the real network for the DRL agent. The agent has fully learned the attributes of SAGIN, so it can make better decisions.

VII. CONCLUSION

SAGIN can take advantage of high flexibility, high reliability and high coverage by integrating the space network, air network and ground network. However, the seamless integration of the three networks and the orchestration of heterogeneous resources are still a problem. Based on the virtual network architecture, this paper proposes a DRL-based SAGIN multi-domain VNE algorithm. The essence of the algorithm is the allocation of heterogeneous network resources. In DRL, we use the basic elements of neural networks to build a five-layer policy network and use it as the DRL agent. In order to enable the agent to train in a realistic SAGIN environment, we extract four important network attributes for each SAGIN node to form a feature matrix. Through training, the policy network can output the probability of each SAGIN node being embedded. Based on this probability, we complete the VNR embedding in the testing phase. In the experimental phase, we test the performance of the algorithm from both training and testing.
In addition, we also analyze the flexibility of the algorithm in dealing with changes in network attributes. Gratifying experimental results show the effectiveness of the DRL-based SAGIN multi-domain VNE algorithm in the arrangement of heterogeneous network resources. As a part of our future work, we will explore more effective and flexible modeling methods of SAGIN, and set more reasonable resource attributes for network topology. In addition, we will follow the latest research progress in this field, and try to use more comprehensive data to train intelligent agents, so as to obtain better experimental results.
Fig. 1: A typical space-air-ground integrated network architecture.
Fig. 2: Example of cross-domain VNE. On the left is a layered SAGIN architecture, and on the right is a VNR.
Fig. 3: Policy network. (A) Extraction layer (B) Convolution layer (C) Probabilistic layer (D) Filtering layer (E) Output layer.
Fig. 4: The results of the algorithm on the training set.
Fig. 5: VNE long-term average revenue.
Fig. 6: VNR acceptance rate.
Fig. 7: VNE long-term revenue-cost ratio.
Fig. 8: Comparison results with benchmark algorithms.
TABLE I: Notations

Notation            Description
G^P                 Physical network
N^P                 Physical network nodes
  N^P_S             Satellite network nodes
  N^P_A             Air network nodes
  N^P_G             Ground network nodes
E^P                 Physical network links
  E^P_S             Satellite network links
  E^P_A             Air network links
  E^P_G             Ground network links
  E^P_{S,A}         Satellite network and air network inter-domain links
  E^P_{S,G}         Satellite network and ground network inter-domain links
  E^P_{A,G}         Air network and ground network inter-domain links
A^P                 Physical network attributes
  CPU_{N^P_S}       Satellite network node computing resources
  CPU_{N^P_A}       Air network node computing resources
  CPU_{N^P_G}       Ground network node computing resources
  BW_{E^P_S}        Satellite network link bandwidth resources
  BW_{E^P_A}        Air network link bandwidth resources
  BW_{E^P_G}        Ground network link bandwidth resources
  D_{E^P_S}         Satellite network link delay attributes
  D_{E^P_A}         Air network link delay attributes
  D_{E^P_G}         Ground network link delay attributes
G^V                 Virtual network requests
N^V                 Virtual nodes
E^V                 Virtual links
A^V                 Virtual network request attributes
  CPU_{N^V}         Computing resource requests of virtual nodes
  BW_{E^V}          Bandwidth resource requests of virtual links
  D_{E^V}           Delay requests of virtual links
TABLE II: Parameter Setting

Parameter                                     Value
Physical nodes                                100
Physical links                                600
Satellite network nodes                       10
Air network nodes                             30
Ground network nodes                          60
Computing resources of satellite nodes        U[20,40] Tflops
Bandwidth resources of satellite links        U[50,100] Mbps
Delay values of satellite links               U[20,40] ms
Computing resources of air nodes              U[20,40] Tflops
Bandwidth resources of air links              U[50,100] Mbps
Delay values of air links                     U[10,30] ms
Computing resources of ground nodes           U[50,100] Tflops
Bandwidth resources of ground links           U[50,100] Mbps
Delay values of ground links                  U[1,20] ms
Bandwidth resources of inter-domain links     U[50,100] Mbps
Delay values of inter-domain links            U[40,60] ms
Number of VNRs                                2,000
Number of training sets                       1,000
Number of testing sets                        1,000
Number of virtual nodes                       U[2,10]
Computing requirements of virtual nodes       U[1,20] Tflops
Bandwidth requirements of virtual links       U[1,20] Mbps
Delay requirements of virtual links           U[1,50] ms
TABLE III: Average Performance

Max delay    Average Revenue    Acceptance Rate    R/C
50 ms        1064.929           0.644              0.343
40 ms        1069.818           0.636              0.338
30 ms        1024.469           0.61               0.347
20 ms        894.03             0.561              0.352
DwaRa: A Deep Learning-Based Dynamic Toll Pricing Scheme for Intelligent Transportation Systems. A Shukla, P Bhattacharya, S Tanwar, N Kumar, M Guizani, IEEE Transactions on Vehicular Technology. 6911A. Shukla, P. Bhattacharya, S. Tanwar, N. Kumar and M. Guizani, "DwaRa: A Deep Learning-Based Dynamic Toll Pricing Scheme for Intelligent Transportation Systems," IEEE Transactions on Vehicular Technology, vol. 69, no. 11, pp. 12510-12520, Nov. 2020.
Parallel Transportation Systems: Toward IoT-Enabled Smart Urban Traffic Control and Management. F Zhu, Y Lv, Y Chen, X Wang, G Xiong, F.-Y. Wang, IEEE Transactions on Intelligent Transportation Systems. 2110F. Zhu, Y. Lv, Y. Chen, X. Wang, G. Xiong and F.-Y. Wang, "Parallel Transportation Systems: Toward IoT-Enabled Smart Urban Traffic Con- trol and Management," IEEE Transactions on Intelligent Transportation Systems, vol. 21, no. 10, pp. 4063-4071, Oct. 2020.
A Novel UAV-Enabled Data Collection Scheme for Intelligent Transportation System Through UAV Speed Control. X Li, J Tan, A Liu, P Vijayakumar, N Kumar, M Alazab, IEEE Transactions on Intelligent Transportation Systems. 224X. Li, J. Tan, A. Liu, P. Vijayakumar, N. Kumar and M. Alazab, "A Novel UAV-Enabled Data Collection Scheme for Intelligent Transportation System Through UAV Speed Control," IEEE Transactions on Intelligent Transportation Systems, vol. 22, no. 4, pp. 2100-2110, Apr. 2021.
On the Content Delivery Efficiency of NOMA Assisted Vehicular Communication Networks With Delay Constraints. S Fang, H Chen, Z Khan, P Fan, IEEE Wireless Communications Letters. 96S. Fang, H. Chen, Z. Khan and P. Fan, "On the Content Delivery Efficiency of NOMA Assisted Vehicular Communication Networks With Delay Constraints," IEEE Wireless Communications Letters, vol. 9, no. 6, pp. 847-850, Jun. 2020.
DENet: A Universal Network for Counting Crowd With Varying Densities and Scales. L Liu, J Jiang, W Jia, S Amirgholipour, Y Wang, M Zeibots, X He, IEEE Transactions on Multimedia. 23L. Liu, J. Jiang, W. Jia, S. Amirgholipour, Y. Wang, M. Zeibots and X. He, "DENet: A Universal Network for Counting Crowd With Varying Densities and Scales," IEEE Transactions on Multimedia, vol. 23, pp. 1060-1068, 2021.
Design and Prototyping of a Software Defined Vehicular Networking. O Sadio, I Ngom, C Lishou, IEEE Transactions on Vehicular Technology. 691O. Sadio, I. Ngom and C. Lishou, "Design and Prototyping of a Soft- ware Defined Vehicular Networking," IEEE Transactions on Vehicular Technology, vol. 69, no. 1, pp. 842-850, Jan. 2020.
Deep Reinforcement Learning Assisted Federated Learning Algorithm for Data Management of IIoT. P Zhang, C Wang, C Jiang, Z Han, 10.1109/TII.2021.3064351IEEE Transactions on Industrial Informatics. P. Zhang, C. Wang, C. Jiang and Z. Han, "Deep Reinforcement Learn- ing Assisted Federated Learning Algorithm for Data Management of IIoT," IEEE Transactions on Industrial Informatics, pp. 1-1, 2021, doi: 10.1109/TII.2021.3064351.
Blockchain-Enabled Secure Data Sharing Scheme in Mobile-Edge Computing: An Asynchronous Advantage Actor-Critic Learning Approach. L Liu, J Feng, Q Pei, C Chen, Y Ming, B Shang, M Dong, IEEE Internet of Things Journal. 84L. Liu, J. Feng, Q. Pei, C. Chen, Y. Ming, B. Shang and M. Dong, "Blockchain-Enabled Secure Data Sharing Scheme in Mobile-Edge Com- puting: An Asynchronous Advantage Actor-Critic Learning Approach," IEEE Internet of Things Journal, vol. 8, no. 4, pp. 2342-2353, Feb. 2021.
An Efficient Spam Detection Technique for IoT Devices Using Machine Learning. A Makkar, S Garg, N Kumar, M S Hossain, A Ghoneim, M Alrashoud, IEEE Transactions on Industrial Informatics. 172A. Makkar, S. Garg, N. Kumar, M. S. Hossain, A. Ghoneim and M. Alrashoud, "An Efficient Spam Detection Technique for IoT Devices Using Machine Learning," IEEE Transactions on Industrial Informatics, vol. 17, no. 2, pp. 903-912, Feb. 2021.
Deep-Reinforcement-Learning-Based Proportional Fair Scheduling Control Scheme for Underlay D2D Communication. I Budhiraja, N Kumar, S Tyagi, IEEE Internet of Things Journal. 85I. Budhiraja, N. Kumar and S. Tyagi, "Deep-Reinforcement-Learning- Based Proportional Fair Scheduling Control Scheme for Underlay D2D Communication," IEEE Internet of Things Journal, vol. 8, no. 5, pp. 3143-3156, Mar. 2021.
Renewal-Theoretical Dynamic Spectrum Access in Cognitive Radio Network with Unknown Primary Behavior. C Jiang, Y Chen, K J R Liu, Y Ren, IEEE Journal on Selected Areas in Communications. 313C. Jiang, Y. Chen, K. J. R. Liu and Y. Ren, "Renewal-Theoretical Dynamic Spectrum Access in Cognitive Radio Network with Unknown Primary Behavior," IEEE Journal on Selected Areas in Communica- tions, vol. 31, no. 3, pp. 406-416, 2013.
Joint Spectrum Sensing and Access Evolutionary Game in Cognitive Radio Networks. C Jiang, Y Chen, Y Gao, K J R Liu, IEEE Transactions on Wireless Communications. 125C. Jiang, Y. Chen, Y. Gao and K. J. R. Liu, "Joint Spectrum Sensing and Access Evolutionary Game in Cognitive Radio Networks," IEEE Transactions on Wireless Communications, vol. 12, no. 5, pp. 2470-2483, 2013.
Non-orthogonal Multiple Access Based Integrated Terrestrial-Satellite Networks. X Zhu, C Jiang, L Kuang, N Ge, J Lu, IEEE Journal on Selected Areas in Communications. 3510X. Zhu, C. Jiang, L. Kuang, N. Ge and J. Lu, "Non-orthogonal Multiple Access Based Integrated Terrestrial-Satellite Networks," IEEE Journal on Selected Areas in Communications, vol. 35, no. 10, pp. 2253-2267, Oct. 2017.
Space-Air-Ground Integrated Network: A Survey. J Liu, Y Shi, Z M Fadlullah, N Kato, IEEE Communications Surveys & Tutorials. 204J. Liu, Y. Shi, Z. M. Fadlullah and N. Kato, "Space-Air-Ground Inte- grated Network: A Survey," IEEE Communications Surveys & Tutorials, vol. 20, no. 4, pp. 2714-2741, Fourthquarter 2018.
Guest Editorial: Service-Oriented Space-Air-Ground Integrated Networks. J Ren, N Zhang, Y Gao, Y Wang, M Ismail, J Kimery, IEEE Wireless Communications. 276J. Ren, N. Zhang, Y. Gao, Y. Wang, M. Ismail and J. Kimery, "Guest Ed- itorial: Service-Oriented Space-Air-Ground Integrated Networks," IEEE Wireless Communications, vol. 27, no. 6, pp. 10-11, Dec. 2020.
A Deep Reinforcement Learning Approach to Energy-harvesting UAV-aided Data Collection. N Zhang, J Liu, L Xie, P Tong, 2020 International Conference on Wireless Communications and Signal Processing. NanjingN. Zhang, J. Liu, L. Xie and P. Tong, "A Deep Reinforcement Learning Approach to Energy-harvesting UAV-aided Data Collection," 2020 Inter- national Conference on Wireless Communications and Signal Processing (WCSP), Nanjing, 2020, pp. 93-98.
Corrections to "Energy-Efficient and Secure Air-to-Ground Communication With Jittering UAV. H Wu, Y Wen, J Zhang, Z Wei, N Zhang, X Tao, IEEE Transactions on Vehicular Technology. 699H. Wu, Y. Wen, J. Zhang, Z. Wei, N. Zhang and X. Tao, "Corrections to "Energy-Efficient and Secure Air-to-Ground Communication With Jittering UAV"," IEEE Transactions on Vehicular Technology, vol. 69, no. 9, pp. 10397-10397, Sept. 2020.
Secrecy Performance Analysis of Air-to-Ground Communication with UAV Jitter and Multiple Random Walking Eavesdroppers. H Wu, H Li, Z Wei, N Zhang, X Tao, IEEE Transactions on Vehicular Technology. 701H. Wu, H. Li, Z. Wei, N. Zhang and X. Tao, "Secrecy Performance Analysis of Air-to-Ground Communication with UAV Jitter and Multi- ple Random Walking Eavesdroppers," IEEE Transactions on Vehicular Technology, vol. 70, no. 1, pp. 572-584, Jan. 2021.
IoV Scenario: Implementation of a Bandwidth Aware Algorithm in Wireless Network Communication Mode. P Zhang, C Wang, G Singh, N Kumar, M Guizani, IEEE Transactions on Vehicular Technology. 6912P. Zhang, C. Wang, G. Singh, N. Kumar and M. Guizani, "IoV Scenario: Implementation of a Bandwidth Aware Algorithm in Wireless Network Communication Mode," IEEE Transactions on Vehicular Technology, vol. 69, no. 12, pp. 15774-15785, Dec. 2020.
Peiying Zhang is currently an Associate Professor with the College of Computer Science and Technology, China University of Petroleum (East China). He received his Ph.D. from the School of Information and Communication Engineering at Beijing University of Posts and Telecommunications in 2019. He has published multiple IEEE/ACM transactions, journal, and magazine papers since 2016, in venues such as IEEE TII, IEEE TVT, IEEE TNSE, IEEE TNSM, IEEE TETC, IEEE Network, IEEE Access, IEEE IoT-J, ACM TALLIP, Computer Communications, and IEEE Communications Magazine. He served on the Technical Program Committees of ISCIT 2016, ISCIT 2017, ISCIT 2018, ISCIT 2019, Globecom 2019, COMNETSAT 2020, SoftIoT 2021, IWCMC-Satellite 2019, and IWCMC-Satellite 2020. His research interests include semantic computing, future internet architecture, network virtualization, and artificial intelligence for networking.
Chao Wang is a graduate student in the College of Computer Science and Technology, China University of Petroleum (East China). His research interests include network artificial intelligence, network virtualization, and wireless networks.
| []
|
[
"Apparent universality of 1/ f spectra as an artifact of finite-size effects",
"Apparent universality of 1/ f spectra as an artifact of finite-size effects"
]
| [
"M A Korzeniowska \nDepartment of Physics and Technology\nUiT The Arctic University of Norway\nN-9037TromsøNorway\n",
"A Theodorsen \nDepartment of Physics and Technology\nUiT The Arctic University of Norway\nN-9037TromsøNorway\n",
"M Rypdal \nDepartment of Mathematics and Statistics\nThe Arctic University of Norway\nN-9037TromsøUiT, Norway\n",
"O E Garcia \nDepartment of Physics and Technology\nUiT The Arctic University of Norway\nN-9037TromsøNorway\n"
]
| [
"Department of Physics and Technology\nUiT The Arctic University of Norway\nN-9037TromsøNorway",
"Department of Physics and Technology\nUiT The Arctic University of Norway\nN-9037TromsøNorway",
"Department of Mathematics and Statistics\nThe Arctic University of Norway\nN-9037TromsøUiT, Norway",
"Department of Physics and Technology\nUiT The Arctic University of Norway\nN-9037TromsøNorway"
]
| []
| Power spectral density scaling with frequency f as 1/ f β and β ≈ 1 is widely found in natural and socioeconomic systems. Consequently, it has been suggested that such self-similar spectra reflect universal dynamics of complex phenomena. Here we show that for a superposition of uncorrelated pulses with a power-law distribution of duration times the estimated scaling exponentsβ depend on the system size. We derive a parametrized, closed-form expression for the power spectral density, and demonstrate that for β ∈ [0, 2] the estimated scaling exponents have a bias towardsβ = 1. For β = 0 and β = 2 the explicit logarithmic corrections to frequency scaling are derived. The bias is particularly strong when the scale invariance spans less than four decades in frequency. Since this is the case for the majority of empirical data, the boundedness of systems well-described by superposition of uncorrelated pulses may contribute to overemphasizing the universality of 1/ f . | null | [
"https://export.arxiv.org/pdf/2304.08371v2.pdf"
]
| 258,179,185 | 2304.08371 | 28922735d47c45355d6500f8e2cd5dbeb5cebffd |
Apparent universality of 1/ f spectra as an artifact of finite-size effects
26 May 2023
M A Korzeniowska
Department of Physics and Technology
UiT The Arctic University of Norway
N-9037TromsøNorway
A Theodorsen
Department of Physics and Technology
UiT The Arctic University of Norway
N-9037TromsøNorway
M Rypdal
Department of Mathematics and Statistics
The Arctic University of Norway
N-9037TromsøUiT, Norway
O E Garcia
Department of Physics and Technology
UiT The Arctic University of Norway
N-9037TromsøNorway
Apparent universality of 1/ f spectra as an artifact of finite-size effects
26 May 2023 (Dated: May 29, 2023). arXiv:2304.08371v2 [physics.data-an]
Power spectral density scaling with frequency f as 1/ f β and β ≈ 1 is widely found in natural and socioeconomic systems. Consequently, it has been suggested that such self-similar spectra reflect universal dynamics of complex phenomena. Here we show that for a superposition of uncorrelated pulses with a power-law distribution of duration times the estimated scaling exponentsβ depend on the system size. We derive a parametrized, closed-form expression for the power spectral density, and demonstrate that for β ∈ [0, 2] the estimated scaling exponents have a bias towardsβ = 1. For β = 0 and β = 2 the explicit logarithmic corrections to frequency scaling are derived. The bias is particularly strong when the scale invariance spans less than four decades in frequency. Since this is the case for the majority of empirical data, the boundedness of systems well-described by superposition of uncorrelated pulses may contribute to overemphasizing the universality of 1/ f .
Introduction.-A wide range of complex systems display spatial or temporal scale invariance, fractality and long-range dependence (LRD) [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16]. In particular, the emergence of self-similar frequency power spectral density scaling 1/ f β has been of interest since the discovery of a 1/ f -type noise in vacuum tubes almost a century ago [17,18]. Reports of scaling exponents β close to unity in various systems have led to questions about universality. Theoretical ideas such as self-organized criticality (SOC) have been put forward [19]. However, identifying a general mechanism for the observed variety of self-similar behavior has proved difficult [20][21][22][23][24][25].
In this paper, we demonstrate that an apparent 1/ f universality arises in a generalized filtered Poisson process subject to finite-size effects [26,27]. The shot-noise approach is canonical for phenomenological modeling of LRD statistics of fluctuating systems, from background noise to violent bursts [28][29][30][31][32][33]. We derive a closed-form expression for the parametrized power spectral density of a finite-size system and explore its scale invariance while varying the self-similarity range and the exponent β ∈ [0, 2]. We assess the finite-size effects by comparing the asymptotic scaling relations with the effective scaling of the analytical power spectral density. Our results show that the observed scaling is always biased towards β = 1 in the presence of finite-size effects, and the bias is most substantial when the scaling range is narrow.
Filtered Poisson process.-Let us first introduce the theoretical framework for our analysis. Consider a stochastic process given by a super-position of K uncorrelated, independent and identically distributed pulses φ (θ ), occurring as a random sequence in a time interval of duration T [34],
\Phi_K(t) = \sum_{k=1}^{K(T)} A_k\, \phi\!\left(\frac{t - t_k}{s_k}\right). \tag{1}
Each pulse labelled k is characterized by an amplitude A_k, a duration time s_k, and an arrival time t_k distributed uniformly on the interval T. The pulse duration times are assumed to be randomly distributed with probability density P_s(s), and an average pulse duration time ⟨s⟩ = ∫_0^∞ ds s P_s(s). Given the distribution of pulse amplitudes P_A(A), we use Campbell's theorem to compute the moments and the auto-correlation function of the process (1) by averaging over all random variables for the case of exactly K pulses [34,35], and subsequently averaging over the randomly distributed number of pulses K. This yields the rigorous characteristics of the stationary process Φ(t) [36]. The power spectral density follows directly as the Fourier transform of the auto-correlation function. For the standardized process Φ̃ = (Φ − ⟨Φ⟩)/Φ_rms, and with a normalized, dimensionless duration time τ = s/⟨s⟩, the power spectral density is expressed in a non-dimensional form as
\Omega_{\widetilde{\Phi}}(\omega) = \int_0^{\infty} d\tau\, \tau^{2}\, P_{\tau}(\tau)\, \varrho_{\phi}(\tau\omega), \tag{2}
where ω = 2π f ⟨s⟩ denotes the dimensionless angular frequency, \varrho_{\phi}(\tau\omega) = \int_{-\infty}^{\infty} d\theta\, \rho_{\phi}(\theta)\, \exp(-i\tau\omega\theta) is the Fourier transform of the normalized auto-correlation function ρ_φ of the pulse function φ, and P_τ(τ) = ⟨s⟩ P_s(s) is the normalized probability density function for pulse durations [34].
Pareto distributed durations.-Equation (2) holds for an arbitrary finite-mean distribution P_τ(τ) of pulse durations. In particular, it holds for a bounded Pareto distribution with exponent α and a finite support [τ_↓, τ_↑], normalized by a factor η(τ_↓, τ_↑, α) such that ∫_0^∞ dτ P_τ(τ) = 1,
P_{\tau}(\tau;\, \tau_{\downarrow}, \tau_{\uparrow}, \alpha) = \begin{cases} \eta\, \tau^{-\alpha} & \text{if } \tau_{\downarrow} \le \tau \le \tau_{\uparrow}, \\ 0 & \text{otherwise.} \end{cases} \tag{3}
The normalization of P_τ and the inherent property of a normalized-variable mean ⟨τ⟩ = ∫_{τ_↓}^{τ_↑} dτ τ P_τ(τ) = 1 put two constraints on the three parameters {τ_↓, τ_↑, α} in Eq. (3). Defining a dimensionless ratio parameter ∆ = τ_↑/τ_↓ and solving the resulting system of three constraints, we obtain τ_↓, τ_↑ and η in terms of α and ∆ as
\tau_{\downarrow}(\Delta,\alpha) = \frac{(\alpha-2)\,\left(1-\Delta^{1-\alpha}\right)}{(\alpha-1)\,\left(1-\Delta^{2-\alpha}\right)}, \tag{4a}
\tau_{\uparrow}(\Delta,\alpha) = \Delta\, \tau_{\downarrow}, \tag{4b}
\eta(\Delta,\alpha) = \frac{\alpha-1}{1-\Delta^{1-\alpha}}\, \tau_{\downarrow}^{\alpha-1}, \tag{4c}
with well-defined limits for α → 1 and α → 2. Given Eqs. (4), the probability distribution given by Eq. (3) is parametrized as P_τ = P_τ(τ; ∆, α). We note that a finite, non-divergent mean ⟨τ⟩ = 1 is a requirement for the stationarity of the process given by Eq. (1), and for the well-defined normalization of the power spectral density given by Eq. (2). With the chosen parametrization P_τ(τ; ∆, α) and the condition ⟨τ⟩ = 1, the effect of the increase in ∆ on the boundaries τ_↓ and τ_↑ depends on the value of α. When α < 1 the divergence ∆ → ∞ is driven by the decrease τ_↓ → 0, rather than by the increase of τ_↑, thus hindering long-range correlations. As α → 0, P_τ(τ) given by Eq. (3) reduces to a uniform distribution, with finite mean and variance [34].
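As an illustrative aid (ours, not part of the original text), the parametrization (4a)-(4c) can be evaluated numerically. The following Python sketch transcribes Eqs. (4); the α → 1 and α → 2 branches use limiting expressions we obtained ourselves by l'Hôpital's rule and should be checked against Eqs. (4):

import numpy as np

def pareto_parameters(delta, alpha, tol=1e-9):
    """Return (tau_down, tau_up, eta) of Eqs. (4a)-(4c) for Delta > 1 and exponent alpha.

    The generic expressions are 0/0 at alpha = 1 and alpha = 2, so those limits
    are evaluated explicitly (our own evaluation of the limits)."""
    if abs(alpha - 1.0) < tol:
        tau_down = np.log(delta) / (delta - 1.0)
        eta = 1.0 / np.log(delta)
    elif abs(alpha - 2.0) < tol:
        tau_down = (delta - 1.0) / (delta * np.log(delta))
        eta = 1.0 / np.log(delta)
    else:
        tau_down = ((alpha - 2.0) * (1.0 - delta**(1.0 - alpha))
                    / ((alpha - 1.0) * (1.0 - delta**(2.0 - alpha))))
        eta = (alpha - 1.0) / (1.0 - delta**(1.0 - alpha)) * tau_down**(alpha - 1.0)
    return tau_down, delta * tau_down, eta

# quick consistency check: the mean <tau> should equal 1 for any admissible (Delta, alpha)
td, tu, eta = pareto_parameters(1.0e4, 1.5)
tau = np.linspace(td, tu, 2_000_001)
print(np.sum(eta * tau**(1.0 - 1.5)) * (tau[1] - tau[0]))  # close to 1.0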
Scale invariance.-In the unbounded limit,
P_τ(τ) defined by Eq. (3) displays self-similar scaling
\lim_{\tau_{\downarrow}\to 0,\; \tau_{\uparrow}\to\infty} P_{\tau}(\lambda\tau) = \lim_{\tau_{\downarrow}\to 0,\; \tau_{\uparrow}\to\infty} \lambda^{-\alpha}\, P_{\tau}(\tau), \tag{5}
which together with Eq. (2) implies a power-law scaling relation for the power spectral density;
\lim_{\tau_{\downarrow}\to 0,\; \tau_{\uparrow}\to\infty} \Omega_{\widetilde{\Phi}}(\lambda\omega) = \lim_{\tau_{\downarrow}\to 0,\; \tau_{\uparrow}\to\infty} \lambda^{\alpha-3}\, \Omega_{\widetilde{\Phi}}(\omega). \tag{6}
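For readability we add a short verification (ours, not part of the original text) that Eq. (6) follows from Eq. (5): substituting u = λτ in Eq. (2) and using the asymptotic self-similarity of P_τ,

\Omega_{\widetilde{\Phi}}(\lambda\omega)
  = \int_0^{\infty}\! d\tau\, \tau^{2} P_{\tau}(\tau)\, \varrho_{\phi}(\tau\lambda\omega)
  = \lambda^{-3}\!\int_0^{\infty}\! du\, u^{2} P_{\tau}(u/\lambda)\, \varrho_{\phi}(u\omega)
  \;\longrightarrow\; \lambda^{-3}\lambda^{\alpha}\!\int_0^{\infty}\! du\, u^{2} P_{\tau}(u)\, \varrho_{\phi}(u\omega)
  = \lambda^{\alpha-3}\, \Omega_{\widetilde{\Phi}}(\omega),

where the arrow uses P_τ(u/λ) → λ^α P_τ(u) from Eq. (5) in the unbounded limit.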
Equation (6) suggests the existence of a universal 1/ω^β self-similarity of the power spectral density given by Eq. (2), with β(α) = 3 − α. Strictly, the probability distribution given by Eq. (3) is not well-defined in the asymptotic limit, but bounding τ_↓ at an arbitrarily small value results in a finite variance of the process for α > 3, and an infinite variance otherwise. In order to ensure a finite pulse-duration mean in the asymptotic limit, α ≥ 1 is required. Thus, we conjecture that if Ω_Φ̃ displays a power-law signature in the limit when ∆ → ∞, then it does so for Pareto exponents 1 ≤ α ≤ 3. The resulting power spectral density scaling exponents range within 0 ≤ β(α) ≤ 2. Exponents α = 1, α = 2 and α = 3 characterize Brownian, pink and white noise signatures with β = 2, β = 1 and β = 0, respectively. The spectral scale invariance of a finite-size system is confined to the frequency range limited by the cutoff values ωτ_↑ = 1 and ωτ_↓ = 1, ranging over log_10 ∆ decades in frequency. Outside this range the power spectral density assumes the shape determined by the power spectrum of the pulse function φ, following a broken power-law with the associated break points to and from the 1/ω^β scaling.
Power-law spectra.-The asymptotic scaling relation β = 3 − α is verified for a one-sided exponential pulse function φ ,
\phi(\theta) = \begin{cases} \exp(-\theta) & \text{if } \theta \ge 0, \\ 0 & \text{otherwise,} \end{cases} \tag{7}
whose power spectral density follows to be a Lorentzian function \varrho_{\phi}(\vartheta) = 2/(1+\vartheta^{2}) [34]. For a constant pulse duration τ the power spectral density given by Eq. (2) inherits the Lorentzian shape Ω_Φ̃(ω) = 2τ/(1 + τ²ω²), flat for low frequencies and with a 1/ω² tail for high frequencies, consistent with β → 0 and β → 2, respectively. For distributed pulse durations, Eqs. (2), (3) and (4) yield an explicit, closed-form expression for the frequency power spectral density parametrized by ∆ and α:
\Omega_{\widetilde{\Phi}}(\omega;\Delta,\alpha) =
\begin{cases}
\dfrac{1}{\ln\Delta\,\omega^{2}}\,\ln\!\left[\dfrac{(\Delta-1)^{2}+\Delta^{2}\ln^{2}\!\Delta\,\omega^{2}}{(\Delta-1)^{2}+\ln^{2}\!\Delta\,\omega^{2}}\right] & \text{if } \alpha = 1,\\[2ex]
\dfrac{2}{\ln\Delta\,\omega}\left[\arctan\dfrac{(\Delta-1)\,\omega}{\ln\Delta}-\arctan\dfrac{(\Delta-1)\,\omega}{\Delta\ln\Delta}\right] & \text{if } \alpha = 2,\\[2ex]
\dfrac{2}{(\Delta^{\alpha}-\Delta)\,\omega^{2}}\left[\Delta^{\alpha}\,{}_{2}F_{1}\!\left(1,\tfrac{\alpha-1}{2};\tfrac{\alpha+1}{2};-\tfrac{1}{\tau_{\downarrow}^{2}\omega^{2}}\right)-\Delta\,{}_{2}F_{1}\!\left(1,\tfrac{\alpha-1}{2};\tfrac{\alpha+1}{2};-\tfrac{1}{\tau_{\uparrow}^{2}\omega^{2}}\right)\right] & \text{otherwise,}
\end{cases} \tag{8}
where {}_2F_1 is a hypergeometric function defined by the Gauss series [37]. The expected frequency scaling 1/ω^{3−α} is manifested by considering the compensated spectra in the limit of an infinitely broad distribution of duration times. For several values of α representing the LRD regime 1 ≤ α ≤ 3, the following equations (9) present both the prefactors and the powers of ω which together satisfy the compensation of the power spectral density Ω_Φ̃(ω; ∆, α) given by Eq. (8),
\lim_{\Delta\to\infty} \Omega_{\widetilde{\Phi}}(\omega;\Delta,1)\, \frac{\ln\Delta}{\ln\!\left(\ln^{2}\!\Delta\,\omega^{2}\right)}\, \omega^{2} = 1, \tag{9a}
\lim_{\Delta\to\infty} \Omega_{\widetilde{\Phi}}(\omega;\Delta,\tfrac{3}{2})\, \frac{\sqrt{2}\,(\sqrt{\Delta}-1)}{\pi\,\sqrt[4]{\Delta}}\, |\omega|^{3/2} = 1, \tag{9b}
\lim_{\Delta\to\infty} \Omega_{\widetilde{\Phi}}(\omega;\Delta,2)\, \frac{\ln\Delta}{\pi}\, |\omega| = 1, \tag{9c}
\lim_{\Delta\to\infty} \Omega_{\widetilde{\Phi}}(\omega;\Delta,\tfrac{5}{2})\, \frac{\sqrt{6}\,(\sqrt{\Delta}-1)}{\pi\,\sqrt{1+\sqrt{\Delta}+\Delta}}\, |\omega|^{1/2} = 1, \tag{9d}
\lim_{\Delta\to\infty} \Omega_{\widetilde{\Phi}}(\omega;\Delta,3)\, \frac{2}{\ln\!\left(1+4/\omega^{2}\right)} = 1. \tag{9e}
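As a quick numerical illustration (ours, not part of the original), the limit (9c) can be checked directly from the α = 2 branch of Eq. (8); the values printed below approach unity as ∆ grows:

import numpy as np

def psd_alpha2(omega, delta):
    # alpha = 2 branch of Eq. (8)
    return (2.0 / (np.log(delta) * omega)
            * (np.arctan((delta - 1.0) * omega / np.log(delta))
               - np.arctan((delta - 1.0) * omega / (delta * np.log(delta)))))

omega = 0.5
for delta in (1.0e2, 1.0e4, 1.0e8, 1.0e16):
    print(delta, psd_alpha2(omega, delta) * np.log(delta) * abs(omega) / np.pi)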
Equation (9c) reveals the 1/ω signature of the pink noise, obtained for α = 2. Logarithmic corrections to the theoretical frequency scaling are present at the LRD-regime boundaries, α = 1 and α = 3. Similar logarithmic corrections have been linked to phase transitions and critical behavior of certain statistical-mechanical systems [38][39][40], as well as demonstrated for a renewal process with power-law distributed waiting times [41]. The parameters α and ∆ represent two mechanisms shaping the power spectral density in the range of self-similarity: logarithmic corrections and boundedness. Figures 1(a) and 1(c) present plots of the power spectral density Ω Φ (ω; ∆, α) given by Eq. (8) for multiple choices of α and ∆, respectively. The corresponding compensated spectra are presented in Figs. 1(b) and 1(d). The chosen values of α span the entire LRD regime, and are aligned to Eqs. (9). The selected values of ∆ allow for examining the scaling behaviour of Ω Φ (ω; ∆, α) over different ranges of self-similarity. Compensated spectra aid the identification of the power-law scaling.
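To make the curves of Fig. 1 reproducible in principle, here is a self-contained sketch (ours; the parameter values are illustrative) that evaluates the generic-α branch of Eq. (8) with the Gauss hypergeometric function from scipy and cross-checks it against direct numerical quadrature of Eq. (2); the two rightmost printed columns should agree to within quadrature accuracy:

import numpy as np
from scipy.special import hyp2f1
from scipy.integrate import quad

def pareto_parameters(delta, alpha):
    # Eqs. (4a)-(4c), generic case (alpha different from 1 and 2)
    td = ((alpha - 2.0) * (1.0 - delta**(1.0 - alpha))
          / ((alpha - 1.0) * (1.0 - delta**(2.0 - alpha))))
    eta = (alpha - 1.0) / (1.0 - delta**(1.0 - alpha)) * td**(alpha - 1.0)
    return td, delta * td, eta

def psd_closed_form(omega, delta, alpha):
    # generic-alpha branch of Eq. (8)
    td, tu, _ = pareto_parameters(delta, alpha)
    F = lambda t: hyp2f1(1.0, (alpha - 1.0) / 2.0, (alpha + 1.0) / 2.0,
                         -1.0 / (t * omega) ** 2)
    return 2.0 / ((delta**alpha - delta) * omega**2) * (delta**alpha * F(td) - delta * F(tu))

def psd_quadrature(omega, delta, alpha):
    # direct integration of Eq. (2) with the Lorentzian pulse spectrum 2/(1 + tau^2 omega^2)
    td, tu, eta = pareto_parameters(delta, alpha)
    integrand = lambda t: 2.0 * eta * t ** (2.0 - alpha) / (1.0 + (t * omega) ** 2)
    return quad(integrand, td, tu, limit=500)[0]

for w in (1.0e-2, 1.0, 1.0e2):
    print(w, psd_closed_form(w, 1.0e4, 1.5), psd_quadrature(w, 1.0e4, 1.5))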
Logarithmic corrections.- Figure 1(b) confirms the existence of power-law scaling for α = 3 /2, α = 2 and α = 5 /2, as well as the logarithmic corrections to scaling at the boundaries of the LRD regime, α = 1 and α = 3. The curvature of the compensated spectra increases as α moves away from the center of the LRD regime, α = 2, causing gradual shortening of the power-law scaling ranges. The dashed colored lines in Fig. 1(b) reveal the shape of the compensated spectra for α = 2 ± 6 /7 (β = 1 ∓ 6 /7), equivalent to 1/7 away from the nearest LRD-regime boundary. These two cases demonstrate that the loss of power-law scaling occurs already inside the LRD regime, not only at its boundaries.
Boundedness.-The theoretical boundaries of the powerlaw scaling ranges, given by Eq. (4), are marked with dots in Figs. 1(b) and 1(d). The broken power-laws affect the spectral scaling in the vicinity of ωτ ↑ = 1 and ωτ ↓ = 1 by reducing the effective ranges of self-similarity. Figure 1(d) shows that in the center of the LRD regime, α = 2, the reduction is by approximately one and a half frequency decades on each side of the self-similarity range, for any of the considered values of ∆. Power-law scaling does not emerge unless the underlying process is characterized by at least four decades (∆ ≥ 10 4 ) of scale invariance.
The empirical power spectral densities obtained for realizations of the stochastic process given by Eq. (1) expectedly match the corresponding analytical predictions given by Eq. (8). Examples for α = 2 and different values of ∆ are shown in the inset in Fig. 1(c).
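For completeness we include a minimal Monte Carlo sketch (ours, with deliberately modest illustrative parameter values; the exponential amplitude distribution is an arbitrary choice, since the normalized spectrum does not depend on it) that generates one realization of the process (1) and estimates its power spectral density with a periodogram:

import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(1)

def sample_bounded_pareto(n, td, tu, alpha, rng):
    # inverse-CDF sampling of P_tau proportional to tau^(-alpha) on [td, tu] (alpha != 1)
    a = 1.0 - alpha
    u = rng.uniform(size=n)
    return (td**a + u * (tu**a - td**a)) ** (1.0 / a)

# bounded Pareto durations with alpha = 2 and Delta = 1e4, cf. Eqs. (4)
delta = 1.0e4
td = (delta - 1.0) / (delta * np.log(delta))
tu = delta * td

T, dt, K = 2.0e3, 5.0e-2, 2000           # record length, time step, number of pulses
t = np.arange(0.0, T, dt)
arrivals = rng.uniform(0.0, T, K)
durations = sample_bounded_pareto(K, td, tu, 2.0, rng)
amplitudes = rng.exponential(1.0, K)      # illustrative amplitude distribution

signal = np.zeros_like(t)
for tk, sk, ak in zip(arrivals, durations, amplitudes):
    theta = (t - tk) / sk
    signal += ak * np.exp(-np.clip(theta, 0.0, None)) * (theta >= 0.0)

signal = (signal - signal.mean()) / signal.std()
freq, pxx = periodogram(signal, fs=1.0 / dt)
# the shape of pxx versus omega = 2*pi*freq can be compared with Eq. (8) for alpha = 2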
Apparent universality.-The combined effect of the logarithmic corrections to frequency scaling and the boundedness of the self-similarity range is gauged by comparing the effective scaling of the analytical power spectral density Ω_Φ̃(ω; ∆, α) given by Eq. (8) for various combinations of the parameters α and ∆, to the asymptotic scaling relation lim_{∆→∞} β(α) = 3 − α. In order to reduce the effect of the break-point curvature, half a decade is discarded on each side of the theoretical self-similarity range, shifting the boundaries of the power-law fitting ranges to ωτ_↑ = 10^{1/2} and ωτ_↓ = 10^{−1/2}, respectively. Linear least-square fits are made to logarithmically-spaced points in double-logarithmic coordinates. The resulting estimations of power-law scaling exponents β̂ are presented in Fig. 2. As α approaches any of the LRD-regime boundaries, the effective β̂(α) relation diverges from the asymptotic limit β(α) = 3 − α towards the central value β̂ = 1. The divergence is stronger for small ∆.
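The fitting procedure can be paraphrased in a few lines of Python (our sketch; the quadrature-based evaluation of Eq. (2) and the particular values of ∆ and α are illustrative):

import numpy as np
from scipy.integrate import quad

def pareto_parameters(delta, alpha):
    # Eqs. (4a)-(4c); the alpha = 2 limit is handled separately
    if abs(alpha - 2.0) < 1e-9:
        td, eta = (delta - 1.0) / (delta * np.log(delta)), 1.0 / np.log(delta)
    else:
        td = ((alpha - 2.0) * (1.0 - delta**(1.0 - alpha))
              / ((alpha - 1.0) * (1.0 - delta**(2.0 - alpha))))
        eta = (alpha - 1.0) / (1.0 - delta**(1.0 - alpha)) * td**(alpha - 1.0)
    return td, delta * td, eta

def psd(omega, delta, alpha):
    td, tu, eta = pareto_parameters(delta, alpha)
    return quad(lambda t: 2.0 * eta * t**(2.0 - alpha) / (1.0 + (t * omega)**2),
                td, tu, limit=500)[0]

def beta_hat(delta, alpha, npts=50):
    # least-squares slope in log-log coordinates on the restricted fitting range,
    # from omega*tau_up = 10**(1/2) to omega*tau_down = 10**(-1/2)
    td, tu, _ = pareto_parameters(delta, alpha)
    w = np.logspace(np.log10(np.sqrt(10.0) / tu), np.log10(1.0 / (np.sqrt(10.0) * td)), npts)
    s = np.array([psd(wi, delta, alpha) for wi in w])
    return -np.polyfit(np.log10(w), np.log10(s), 1)[0]

for alpha in (1.2, 1.5, 2.0, 2.5, 2.8):
    print(alpha, 3.0 - alpha, round(beta_hat(1.0e4, alpha), 2))
# per Fig. 2, the estimates are expected to be biased towards beta_hat = 1 relative to 3 - alpha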
The colored side-bars in Fig. 2 mark the ranges of the estimated exponentsβ for different values of ∆. For ∆ = 10 8 the range isβ ≈ 1 ± 0.86. We recall that Fig. 1(b) demonstrates a notable curvature of the compensated spectra for ∆ = 10 8 and α = 2 ± 6 /7 (β = 1 ∓ 0.86). For ∆ = 10 2 and ∆ = 10 4 we further recall that even at the center of the LRD regime, α = 2 (β = 1), the compensated spectra in Fig. 1(d) reveal none, or very short power-law scaling ranges, respectively. The lack of power-law scaling does not affect the power-law fitting procedure. The estimated exponents range withinβ ≈ 1 ± 0.56 for ∆ = 10 2 , andβ ≈ 1 ± 0.75 for ∆ = 10 4 .
The findings presented in Figs. 1 and 2 indicate that the effective spectral scaling is biased towardsβ = 1, and the bias increases with the decrease of ∆, or with α approaching the LRD-regime boundaries. Specifically: (1) For the ranges of the underlying scale invariance shorter than approximately four decades (∆ < 10 4 ) the power spectral density does not display power-law scaling. (2) For the longer ∆-ranges the spectral power-law scaling is manifested only for a sub-range of exponents centered around α = 2 (β = 1). (3) The extent of this sub-range increases with the increase of ∆, up to the asymptotic limit α ∈ (1, 3) [β ∈ (0, 2)] when ∆ → ∞.
Discussion.-The results presented in Fig. 2 are obtained under favourable conditions: power-law fitting is made to logarithmicly-spaced data points following analytical curves, exact boundaries of the self-similarity ranges are known, and symmetric cutoffs are applied to reduce the effect of the breakpoint curvature. Despite these measures the effectiveβ (α) relation is biased towardsβ = 1 with respect to the asymptotic lim ∆→∞ β (α) = 3 − α. The scaling exponents close to the LRD-regime boundaries β = 0 and β = 2 are not observed for any of the investigated finite values of ∆.
The power spectral density of a one-sided exponential pulse has asymptotic scaling as 1/ω 0 for low frequencies and 1/ω 2 for high frequencies. The associated break points in the spectrum affect the self-similarity range, biasing the underlying 1/ω β scaling towardsβ = 1. The wider the range for powerlaw fitting, the more weight is put on the break-point curvature. Experiments show that discarding significant margins on both sides of the fitting range reduces the bias, yielding more accurate scaling estimations when compared with the theoretical predictions. However, for relatively narrow ranges of scale invariance the break-point curvature affects the entire 1/ω β range, inflicting a bias too extensive to retrieve the underlying 1/ω β scaling. Consulting compensated spectra allows for scrutinizing the effective scale-invariance.
Narrow ranges of scale invariance prone to the β̂ → 1 bias may overemphasize the universality of 1/f-type scaling. Observing long ranges of scale invariance demands both that the underlying process is long-range self-similar, and that it is measured with precision and scope satisfying the long-range extent [27]. Estimating power-law statistics of unequally sampled or merged data sets has been addressed in [42,43].
If the exact boundaries of the self-similarity range are not known, the choice of the power-law fitting range is arbitrary, and possibly biased towards either low or high frequencies. Different methods of spectral scaling estimation may increase the bias, or compensate for it. The smoothness of the effectivē β (α) relations presented in Fig. 2 suggests that knowing the boundaries of the self-similarity range might facilitate tracing back from the observed scaling to the underlying scaling of the studied process.
Conclusions.-The results presented here demonstrate that the estimated spectral scaling of long-range dependent processes may be biased towards 1/ f in the presence of finitesize effects. This bias results from the curvature in the spectra due to broken power-law scaling, as well as the logarithmic corrections associated with long range dependence. Identification of the true power-law scaling requires scale invariance over several decades in frequency in the underlying process, as shown in Fig. 1(d). Empirical data seldom displays accordingly broad ranges of self-similarity [7][8][9][10][11][12], suggesting a spectral scaling bias at least in the case of processes that are well-described by a superposition of uncorrelated pulses. Considering that a variety of physical phenomena has been canonically modelled in this way [28][29][30][31][32][33], the observed 1/ f universality may be overstated. Whether a similar bias is present for other complex-dynamics systems requires further investigation.
FIG. 1. (Color online) Frequency power spectral density of the filtered Poisson process with one-sided exponential pulse shape and Pareto-distributed pulse duration times. Legend color coding applies per row. Top row: varied α at fixed ∆ = 10^8. Bottom row: varied ∆ at fixed α = 2. Left column: uncompensated spectra Ω_Φ̃(ω; ∆, α) given by Eq. (8). Dashed lines represent Lorentzian-function spectra. Right column: compensated spectra ω^{3−α} Ω_Φ̃(ω; ∆, α). The horizontal dashed black lines spanning the entire ω-range mark the inverse of the compensating prefactors according to Eqs. (9). The regions where the dashed black lines overlap with the colored lines indicate the ranges of power-law scaling. Colored dots mark the theoretical boundaries of the self-similarity ranges, ωτ_↑ = 1 and ωτ_↓ = 1. (a) The inset presents the spectra at the boundaries of the LRD regime, α = 1 (β = 2) and α = 3 (β = 0), where logarithmic corrections to 1/ω^β scaling apply. The domain represented in the inset is shaded in the outer plot. (b) Two ancillary α-cases plotted with dashed colored lines showcase the reduction in the range of self-similarity when α is 1/7 away from the nearest LRD boundary. (c) The inset presents the empirical power spectra obtained for realizations of the process, shifted vertically by a factor √∆ to avoid overlapping. The color coding of the empirical spectra is aligned to the legend. The overlying solid black lines represent the corresponding analytical results. An additional empirical case ∆ = 0, representing a constant pulse duration, is plotted in black and overlaid by a dashed-white Lorentzian.
FIG. 2. (Color online) Estimated power-law scaling exponents β̂ of the analytical power spectral density curves Ω_Φ̃(ω; ∆, α) given by Eq. (8) for various ranges ∆ of the underlying scale invariance, and in the entire LRD regime 1 ≤ α ≤ 3. The dashed gray line marks the asymptotic scaling relation lim_{∆→∞} β(α) = 3 − α. The solid gray line marks β̂ = 1, representative of the 1/f noise. The colorful vertical sidebars mark the range of β̂ observed for different values of ∆. Legend color coding is aligned to Fig. 1(d).
ACKNOWLEDGMENTS. This work was supported by the UiT Aurora Centre Program, UiT The Arctic University of Norway (2020). A. T. was supported by Tromsø Research Foundation under grant number 19 SG AT.
[1] B. R. Dennis, "Solar Hard X-Ray Bursts," Sol. Phys. 100, 465 (1985).
[2] G. Boffetta, V. Carbone, P. Giuliani, P. Veltri, and A. Vulpiani, "Power laws in solar flares: Self-organized criticality or turbulence?," Phys. Rev. Lett. 83, 4662 (1999).
[3] R. Sánchez, B. P. van Milligen, D. E. Newman, and B. A. Carreras, "Quiet-time statistics of electrostatic turbulent fluxes from the JET tokamak and the W7-AS and TJ-II stellarators," Phys. Rev. Lett. 90, 185005 (2003).
[4] L. de Arcangelis, C. Godano, E. Lippiello, and M. Nicodemi, "Universality in solar flare and earthquake occurrence," Phys. Rev. Lett. 96, 051102 (2006).
[5] M. J. Aschwanden, "Finite system-size effects in self-organized criticality systems," Astrophys. J. 909, 69 (2021).
[6] M. Paczuski, S. Boettcher, and M. Baiesi, "Interoccurrence times in the Bak-Tang-Wiesenfeld sandpile model: A comparison with the observed statistics of solar flares," Phys. Rev. Lett. 95, 181102 (2005).
[7] E. Tindale, S. C. Chapman, N. R. Moloney, and N. W. Watkins, "The dependence of solar wind burst size on burst duration and its invariance across solar cycles 23 and 24," J. Geophys. Res. Space Phys. 123, 7196 (2018).
[8] B. Pellegrini, R. Saletti, P. Terreni, and M. Prudenziati, "1/f^γ noise in thick-film resistors as an effect of tunnel and thermally activated emissions, from measures versus frequency and temperature," Phys. Rev. B 27, 1233 (1983).
[9] G. Liu, S. Rumyantsev, M. S. Shur, and A. A. Balandin, "Origin of 1/f noise in graphene multilayers: Surface vs. volume," Appl. Phys. Lett. 102, 093111 (2013).
[10] B. Tadić, "Self-organised criticality and emergent hyperbolic networks: blueprint for complexity in social dynamics," Eur. J. Phys. 40, 024002 (2019).
[11] C. L. E. Franzke, S. Barbosa, R. Blender, H.-B. Fredriksen, T. Laepple, F. Lambert, T. Nilsen, K. Rypdal, M. Rypdal, M. G. Scotto, S. Vannitsem, N. W. Watkins, L. Yang, and N. Yuan, "The structure of climate variability across scales," Rev. Geophys. 58, e2019RG000657 (2020).
[12] M. Rypdal and K. Rypdal, "Late Quaternary temperature variability described as abrupt transitions on a 1/f noise background," Earth Syst. Dyn. 7, 281 (2016).
[13] P. Huybers and W. Curry, "Links between annual, Milankovitch and continuum temperature variability," Nature 441, 329 (2006).
[14] B. B. Mandelbrot, The Fractal Geometry of Nature (W. H. Freeman, 1983).
[15] P. Bak, How Nature Works: The Science of Self-Organized Criticality (Oxford University Press, Oxford, UK, 1997).
[16] M. R. Schroeder, Fractals, Chaos, Power Laws: Minutes from an Infinite Paradise (Freeman, 1991).
[17] W. Schottky, "Small-shot effect and flicker effect," Phys. Rev. 28, 74 (1926).
[18] J. B. Johnson, "The Schottky effect in low frequency circuits," Phys. Rev. 26, 71 (1925).
[19] P. Bak, C. Tang, and K. Wiesenfeld, "Self-organized criticality: An explanation of the 1/f noise," Phys. Rev. Lett. 59, 381 (1987).
[20] P. De Los Rios and Y.-C. Zhang, "Universal 1/f noise from dissipative self-organized criticality models," Phys. Rev. Lett. 82, 472 (1999).
[21] R. V. Chamberlin and D. M. Nasir, "1/f noise from the laws of thermodynamics for finite-size fluctuations," Phys. Rev. E 90, 012142 (2014).
[22] A. C. Yadav, R. Ramaswamy, and D. Dhar, "General mechanism for the 1/f noise," Phys. Rev. E 96, 022215 (2017).
[23] I. Eliazar and J. Klafter, "Universal generation of 1/f noises," Phys. Rev. E 82, 021109 (2010).
[24] A. De, "1/f flux noise in low-Tc SQUIDs due to superparamagnetic phase transitions in defect clusters," Phys. Rev. B 99, 024305 (2019).
[25] M. Nardone, V. I. Kozub, I. V. Karpov, and V. G. Karpov, "Possible mechanisms for 1/f noise in chalcogenide glasses: A theoretical description," Phys. Rev. B 79, 165206 (2009).
[26] E. S. Loscar and C. M. Horowitz, "Size effects in finite systems with long-range interactions," Phys. Rev. E 97, 032103 (2018).
[27] M. Niemann, H. Kantz, and E. Barkai, "Fluctuations of 1/f noise and the low-frequency cutoff paradox," Phys. Rev. Lett. 110, 140603 (2013).
[28] P. Bak, C. Tang, and K. Wiesenfeld, "Self-organized criticality," Phys. Rev. A 38, 364 (1988).
[29] H. J. Jensen, K. Christensen, and H. C. Fogedby, "1/f noise, distribution of lifetimes, and a pile of sand," Phys. Rev. B 40, 7425 (1989).
[30] S. Lowen and M. Teich, Fractal-Based Point Processes (Wiley, 2005), Chap. 9.
[31] M. J. Aschwanden, Self-Organized Criticality in Astrophysics (Springer Berlin, Heidelberg, 2011), Chap. 4.8, pp. 129-135.
[32] G. Samorodnitsky, Stochastic Processes and Long Range Dependence (Springer, 2016), Chap. 3.4.
[33] V. Pipiras and M. S. Taqqu, Long-Range Dependence and Self-Similarity (Cambridge University Press, 2017).
[34] O. E. Garcia and A. Theodorsen, "Auto-correlation function and frequency spectrum due to a super-position of uncorrelated exponential pulses," Phys. Plasmas 24, 032309 (2017).
[35] N. Campbell, "The study of discontinuous phenomena," Proc. Cambridge Philos. Soc. 15, 117 (1909).
[36] A. R. Butz, "A theory of 1/f noise," J. Stat. Phys. 4, 199-216 (1972).
[37] NIST Digital Library of Mathematical Functions, Chapter 15: Hypergeometric Function (2022), accessed 3 Nov. 2022.
[38] R. Kenna, D. A. Johnston, and W. Janke, "Scaling relations for logarithmic corrections," Phys. Rev. Lett. 96, 115701 (2006).
[39] A. W. Sandvik, "Continuous quantum phase transition between an antiferromagnet and a valence-bond solid in two dimensions: Evidence for logarithmic corrections to scaling," Phys. Rev. Lett. 104, 177201 (2010).
[40] S. Hong and D.-H. Kim, "Logarithmic finite-size scaling correction to the leading Fisher zeros in the p-state clock model: A higher-order tensor renormalization group study," Phys. Rev. E 101, 012124 (2020).
[41] S. B. Lowen and M. C. Teich, "Fractal renewal processes generate 1/f noise," Phys. Rev. E 47, 992 (1993).
[42] S. Lovejoy, "A voyage through scales, a missing quadrillion and why the climate is not what you expect," Clim. Dyn. 44, 3187 (2014).
[43] V. Navas-Portella, A. González, I. Serra, E. Vives, and A. Corral, "Universality of power-law exponents by means of maximum-likelihood estimation," Phys. Rev. E 100, 062106 (2019).
| []
|
[
"INFINITE MATROIDS IN TROPICAL DIFFERENTIAL ALGEBRA",
"INFINITE MATROIDS IN TROPICAL DIFFERENTIAL ALGEBRA"
]
| [
"F Aroca \nVALENCIA NEGRETE\n\n",
"L Bossinger \nVALENCIA NEGRETE\n\n",
"S Falkensteiner \nVALENCIA NEGRETE\n\n",
"C Garay Lopez \nVALENCIA NEGRETE\n\n",
"L R Gonzalez-Ramirez \nVALENCIA NEGRETE\n\n",
"C V \nVALENCIA NEGRETE\n\n"
]
| [
"VALENCIA NEGRETE\n",
"VALENCIA NEGRETE\n",
"VALENCIA NEGRETE\n",
"VALENCIA NEGRETE\n",
"VALENCIA NEGRETE\n",
"VALENCIA NEGRETE\n"
]
| []
We consider a finite-dimensional vector space W ⊂ K E over an arbitrary field K and an arbitrary set E. We show that the set C(W ) ⊂ 2 E consisting of the minimal supports of W forms the set of circuits of a matroid on E. In particular, we show that this matroid is cofinitary (hence, tame). When the cardinality of K is large enough (with respect to the cardinality of E), then the set T(W ) ⊂ 2 E consisting of all the supports of W is a matroid itself. Afterwards we apply these results to tropical differential algebraic geometry and study the set of supports T(Sol(Σ)) ⊂ (2 N m ) n of spaces of formal power series solutions Sol(Σ) of systems of linear differential equations Σ in differential variables x1, . . . , xn having coefficients in the ring K [[t1, . . . , tm]]. If Σ is of differential type zero, then the set C(Sol(Σ)) ⊂ (2 N m ) n of minimal supports defines a matroid on E = N mn , and if the cardinality of K is large enough, then the set of supports φ • T(Sol(Σ)) itself is a matroid on E as well. By applying the fundamental theorem of tropical differential algebraic geometry (fttdag), we give a necessary condition for the set of solutions Sol(U ) of a system U of tropical linear differential equations to be a matroid. We also give a counterexample to the fttdag for systems Σ of linear differential equations over countable fields. In this case, the set φ • T(Sol(Σ)) may not form a matroid. | 10.48550/arxiv.2305.04784 | [
"https://export.arxiv.org/pdf/2305.04784v2.pdf"
]
| 258,557,906 | 2305.04784 | f6cbc298efb75ef58b9085ccc52e7d1348b874f3 |
INFINITE MATROIDS IN TROPICAL DIFFERENTIAL ALGEBRA
29 May 2023
F Aroca
VALENCIA NEGRETE
L Bossinger
VALENCIA NEGRETE
S Falkensteiner
VALENCIA NEGRETE
C Garay Lopez
VALENCIA NEGRETE
L R Gonzalez-Ramirez
VALENCIA NEGRETE
C V
VALENCIA NEGRETE
INFINITE MATROIDS IN TROPICAL DIFFERENTIAL ALGEBRA
29 May 2023
We consider a finite-dimensional vector space W ⊂ K E over an arbitrary field K and an arbitrary set E. We show that the set C(W ) ⊂ 2 E consisting of the minimal supports of W forms the set of circuits of a matroid on E. In particular, we show that this matroid is cofinitary (hence, tame). When the cardinality of K is large enough (with respect to the cardinality of E), then the set T(W ) ⊂ 2 E consisting of all the supports of W is a matroid itself. Afterwards we apply these results to tropical differential algebraic geometry and study the set of supports T(Sol(Σ)) ⊂ (2 N m ) n of spaces of formal power series solutions Sol(Σ) of systems of linear differential equations Σ in differential variables x1, . . . , xn having coefficients in the ring K [[t1, . . . , tm]]. If Σ is of differential type zero, then the set C(Sol(Σ)) ⊂ (2 N m ) n of minimal supports defines a matroid on E = N mn , and if the cardinality of K is large enough, then the set of supports φ • T(Sol(Σ)) itself is a matroid on E as well. By applying the fundamental theorem of tropical differential algebraic geometry (fttdag), we give a necessary condition for the set of solutions Sol(U ) of a system U of tropical linear differential equations to be a matroid. We also give a counterexample to the fttdag for systems Σ of linear differential equations over countable fields. In this case, the set φ • T(Sol(Σ)) may not form a matroid.
Introduction
A fundamental concept in tropical algebraic geometry is the tropicalization trop(X, v) ⊆ (R ∪ {−∞})^n of an algebraic variety X ⊂ K^n defined over a valued field K = (K, v). Assume X is defined by an ideal I with constant coefficients (i.e. I ⊂ k[x_1, . . . , x_n] where k ⊂ K is a field with trivial valuation v_0 : k → {−∞, 0}). In this case we may consider the K-points of X and tropicalize with respect to v, which yields a polyhedral fan trop(X, v). A classical result from tropical algebraic geometry states that trop(X, v) coincides with the Bergman fan B(X) [Stu02, Theorem 9.6]. If X is a linear space, the Bergman fan can be obtained from its matroid M(X) = ({1, . . . , n}, C(X)) (see e.g. [MS15, p. 165] or [AK06]). On the other hand, we may consider the trivial valuation v = v_0. Note that the set
v_0(X) := \{(v_0(p_1), \ldots, v_0(p_n)) \in \{-\infty, 0\}^n : (p_1, \ldots, p_n) \in X\}, \tag{1}
consists of all the supports of the points of X: it suffices to consider the identification {−∞, 0}^n ≅ 2^[n] as the set of all the indicator functions of the subsets of [n]. In general we have v_0(X) ⊆ trop(X, v_0), and this inclusion may be proper; for example, if X ⊂ F_2^3 is the linear space spanned by {(0, 1, 1), (1, 0, 1)}, then v_0(X) ∪ {(0, 0, 0)} = trop(X, v_0), which says that (1) carries potentially more information than the matroid M(X). We show that, if X ⊂ K^E is a finite-dimensional vector space, then the above situation cannot happen if the cardinality of K is large enough with respect to the cardinality of the set E; that is, the set of supports v_0(X) ⊂ 2^E ≅ {0, −∞}^E and the matroid M(X) = (E, C(X)) associated to X can be identified with each other via
\underbrace{(E, C(X)) = M(X)}_{\text{circuits}} \qquad\qquad \underbrace{v_0(X) = (E, S(X))}_{\text{scrawls}}. \tag{2}
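To keep the example above concrete, here is a small enumeration in Python (ours, purely illustrative; coordinates are 0-indexed in the code, and the "tropical" condition used below — a support is a tropical solution of x_1 + x_2 + x_3 under the trivial valuation iff it meets the three coordinates in a set of size different from one — is our paraphrase of the vanishing condition for this hyperplane):

from itertools import combinations, product

gens = [(0, 1, 1), (1, 0, 1)]                    # spanning set of X over F_2
X = {tuple((c1 * a + c2 * b) % 2 for a, b in zip(*gens))
     for c1, c2 in product((0, 1), repeat=2)}

supports = {frozenset(i for i, v in enumerate(p) if v) for p in X}

# supports allowed by the tropicalization of x1 + x2 + x3 (trivial valuation):
# the minimum must be attained at least twice, i.e. the support has size != 1
trop = {frozenset(s) for r in (0, 2, 3) for s in combinations(range(3), r)}

print(sorted(map(sorted, supports)))             # supports of points of X
print(sorted(map(sorted, trop - supports)))      # the extra support [[0, 1, 2]]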
To do this, we use the notion of infinite matroids under the cryptomorphisms of circuits [BDK + 13] and scrawls [BC18]. Up to our knowledge, this result is new in both cases when E is finite and when E is infinite. Our interest in this type of questions comes from the theory of tropical differential algebraic geometry, where the tropicalization of the set of formal power series solutions of systems of homogeneous linear differential equations appears as a set of the form v 0 (X) ⊂ 2 E where E is infinite, as in (6). This is why in this paper we deal mainly with duals of representable matroids over an infinite set of labels E (see Theorem 2.17), which are not representable in the usual sense (see Remark 2.18), but we show that under the above hypothesis they do arise as the semigroup of the set of supports of a vector space.
Also, we aim to further study the Boolean formal power series solutions of systems of tropical homogeneous linear differential equations. This study was initiated in [Gri17] and it is a natural continuation for the tropical aspects of the differential algebraic theory of such systems, following the development of classical tropical geometry.
1.1. Analogies between classical and tropical differential algebraic geometry. The theory of tropical differential algebraic geometry was initiated by Grigoriev [Gri17], the fundamental theorem was proved by Aroca, Garay and Toghani in [AGT16], and is becoming an established active field of research with several contributions such as [GM21a,FT22,GM21b,Mer23,HG21].
Since the beginning, a natural question arose: which concepts of classical tropical algebraic geometry over a valued field (K, v) can be generalized to the differential setting? More specifically, can tropical differential algebraic geometry be regarded as an infinite version of classical tropical algebraic geometry?
A cornerstone of classical tropical algebraic geometry is the fundamental theorem of tropical algebraic geometry (fttag, [MS15,Theorem 3.2.3], a generalization of Kapranov's Theorem [EKL06] for hypersurfaces), which describes in three ways the tropicalization trop(X, v) ⊂ R n coming from algebraic subvarieties X ⊂ (K * ) n , where K is an algebraically closed field and v : K * − → R is a non-trivial valuation. A tropical (partial) differential analogue of this result (fttdag) was successfully constructed in [AGT16, FGLH + 20, FGLH + 23] giving three different descriptions of the tropical space of formal Boolean formal power series v 0 (X) ⊂ B[[t 1 , . . . , t m ]] n that come from differential algebraic (DA) varieties X ⊂ K[[t 1 , . . . , t m ]] n , where K is an uncountable algebraically closed field of characteristic zero and v 0 is the trivial valuation. Note that in [BFNS21] it is shown that the uncountability condition can be replaced by countably infinite transcendence degree over the field of definiton of X.
A natural source of finite-dimensional vector spaces X ⊂ K E with E infinite are the sets of solutions of systems of homogeneous linear differential equations of differential type zero with coefficients in K[[t 1 , . . . , t m ]]. We explore its consequences for distinct fields K satisfying or not the conditions of the (fttdag). Our main results on matroids (Theorem 2.15 and Theorem 3.5) lead to the following result (Theorem 4.5):
Theorem. Let E = N m for m, n ≥ 1 fixed and let T : K E −→ 2 E be the support map (Definition 2.4). Let Σ ⊂ K m,n be a system of homogeneous linear differential equations of differential type zero and let T(Sol(Σ)) ⊂ (2 E ) n be the set of supports of Sol(Σ). Then (i) the minimal elements C(Sol(Σ)) of T(Sol(Σ)) define the circuits of a matroid on N mn ; (ii) if the cardinality of K is large enough, then T(Sol(Σ)) is the set of scrawls of C(Sol(Σ)).
Obtaining that T(Sol(Σ)) is the set of scrawls of C(Sol(Σ)) (see item (i)) implies in particular that, if we have two supports of solutions of Σ, then their union also appears as the support of a solution of Σ; that is, T(Sol(Σ)) is a semigroup (with respect to the operation of union of sets). In item (ii), the assumption on the cardinality of K is strict, as we show in Section 6.
The previous result is about the structure of the tropicalization of the set of formal solutions of a classical system. On the other hand, following D. Grigoriev [Gri17], one can study from an algebraic and combinatorial perspective the set of solutions X = Sol(U) ⊂ B[[t_1, . . . , t_m]]^n associated to a system U ⊂ B_{m,n} of tropical linear differential equations, disregarding whether U is realizable in some field K, that is, independently of the existence of linear systems Σ ⊂ K_{m,n} such that U = trop(Σ) as in Definition 5.5. These sets of solutions are always semigroups, even in the case of partial differential equations m > 0, as we show in Theorem 5.4.
1.2. Statement of results. In Theorem 2.15 we show that if {0} ≠ W ⊂ K^E is a finite-dimensional K-vector subspace, then the pair M(W) = (E, C(W)) is a matroid, where C(W) ⊂ 2^E consists of the minimal (nonempty) supports of vectors in W. This seems to be known to experts, but we provide a rigorous proof of it. We also show that any element of T(W) is a union of circuits, and in Theorem 3.5 we show that if #E < #S(K), where S(K) = K ∪ {K} denotes the set-successor of K, then also the converse holds true. In Theorem 5.4 we show that the set of solutions Sol(U) ⊂ B[[T]]^n associated to a system U ⊂ B_{m,n} of homogeneous linear tropical differential equations is a semigroup. In particular, using the fttdag, we give in Corollary 5.8 a necessary condition for the set of solutions X = Sol(U) ⊂ B[[t_1, . . . , t_m]]^n to be a matroid (or a matroid of scrawls, as in Definition 2.3). As a consequence of the previous results, in Section 6, we give a counterexample for the fttdag in the case of linear differential equations over countable fields.
1.3. Roadmap. The paper is organized as follows. In Section 2 we introduce standard preliminary material on matroid theory and we prove Theorem 2.15. In Section 3 we study the matroid of scrawls and prove Theorem 3.5. In Section 4 we discuss the theory of algebraic differential equations with coefficients in the ring K[[t 1 , . . . , t m ]] over an arbitrary field K, and we recast the result of the previous two sections for the case of homogeneous systems of linear differential equations of differential type zero. In Section 5 we discuss tropical differential equations, go further and analyze the special case in which K satisfies the hypotheses of the fttdag.
The infinite matroid induced by a finite dimensional subspace of a vector space
In this section we denote by E ≠ ∅ an arbitrary set and by 2^E the power set of E, which is ordered by inclusion. We consider 2^E as a semigroup endowed with set union as operation.
2.1. Basic theory of infinite matroids. A matroid on E may be given in terms of different collections of subsets of E: the circuits, the independent sets, or the bases. The matroid axioms were shown to be equivalent by Whitney [Whi35] in the finite case. For the infinite case, the Whitney axioms need to be completed, see [BDK + 13]. In [BC18], another possibility to define matroids is exhibited by using scrawls.
2.1. Definition. Let C ⊂ 2 E . We call C the set of circuits if it satisfies the following axioms:
(i) ∅ ∉ C;
(ii) no element of C is a subset of another;
(iii) whenever X ⊆ C ∈ C and {C_x : x ∈ X} is a family of elements of C such that x ∈ C_y iff x = y for all x, y ∈ X, then for every z ∈ C \ (∪_{x∈X} C_x) there is an element C′ ∈ C such that z ∈ C′ ⊆ (C ∪ ∪_{x∈X} C_x) \ X;
(iv) every subset of E containing no element of C extends, within any subset X ⊆ E that contains it, to a maximal subset of X containing no element of C.
In this case, M = (E, C) is a matroid and C is called the set of circuits of M.
2.2. Definition. Let M = (E, C) be a matroid given in terms of its circuits. An independent set is a subset of E that contains no circuits (compare item 4 in Definition 2.1). A basis is a maximal independent set. If all the circuits of a matroid M (respectively M * ) are finite then M is called finitary (respectively cofinitary). These matroids are tame [HB15]. The matroids considered in this paper are cofinitary as we will show in Theorem 2.17.
2.2. The set of supports of a finite-dimensional vector space. In this section, we consider the vector space K E , where E is as above and K is any field. We will consider finite-dimensional
K-vector subspaces {0} ≠ W ⊂ K^E.
2.4. Definition. The support map T : K^E → 2^E is the mapping (a_i)_{i∈E} → {i ∈ E : a_i ≠ 0}. We call T(v) the support of v.
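For concreteness, a minimal Python sketch of the support map of Definition 2.4; the dictionary encoding of vectors indexed by E and the helper name are illustrative choices, not notation from the text.

def support(vector):
    """T(v) = {i in E : a_i != 0}; a vector is given as a dict {index: coefficient}."""
    return frozenset(i for i, a in vector.items() if a != 0)

phi = {0: 3, 1: 0, 2: -2}          # 3*e_0 + 0*e_1 - 2*e_2 over E = {0, 1, 2}
assert support(phi) == frozenset({0, 2})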
It seems commonly known among experts in matroid theory that T(W ) has a natural matroid structure by considering the elements with minimal support as the circuits. For the sake of completeness, we provide a proof by describing the circuits.
2.5. Definition. Let {0} ≠ W ⊂ K^E be a finite-dimensional K-vector subspace. We define C(W) ⊂ 2^E to be the sets in T(W) \ {∅} that are minimal with respect to set inclusion.
Note that, by definition, C(W ) satisfies item (i) and (ii) from Definition 2.1 by explicitly excluding the empty set and only considering minimal sets which cannot be subsets of other minimal sets. The next result shows that C(W ) = ∅.
2.6. Lemma. Let W be as above. For every element 0 ≠ ϕ ∈ W there is 0 ≠ ψ ∈ W of minimal support such that T(ψ) ⊂ T(ϕ).
Proof. Since {0} ≠ W, we have s = dim_K(W) > 0. If s = 1, then W = K·ϕ_1 with 0 ≠ ϕ_1 ∈ K^E, and it is clear that C(W) = {S_1 = T(ϕ_1) ≠ ∅}.
Suppose that s > 1. Consider ϕ_1 ∈ W and let S_1 = T(ϕ_1). If S_1 is minimal, we are done. Otherwise, there exists ∅ ≠ S_2 ⊊ S_1 corresponding to some 0 ≠ ϕ_2 ∈ W. Then {ϕ_1, ϕ_2} ⊂ W is linearly independent, otherwise λ_1ϕ_1 + λ_2ϕ_2 = 0 with (λ_1, λ_2) ≠ (0, 0) would yield S_1 = S_2. Now repeat the process: if S_2 is minimal, we are done; otherwise, there exists ∅ ≠ S_3 ⊊ S_2 corresponding to some 0 ≠ ϕ_3 ∈ W. Then the chain ∅ ≠ S_3 ⊊ S_2 ⊊ S_1 implies that {ϕ_1, ϕ_2, ϕ_3} ⊂ W is linearly independent, otherwise λ_1ϕ_1 + λ_2ϕ_2 + λ_3ϕ_3 = 0 with (λ_1, λ_2, λ_3) ≠ (0, 0, 0) would yield S_3 ⊂ S_2. This process eventually finishes since the dimension of W is finite.
Given W as above, below we show that C(W ) also satisfies (iii) and (iv) from Definition 2.1 such that M (W ) = (E, C(W )) is indeed a matroid.
2.7. Remark. Suppose that W = K · ϕ_1 with 0 ≠ ϕ_1 ∈ K^E. Then C(W) = {T(ϕ_1)}, and so (E, C(W)) is a matroid.
Take a basis {ϕ_1, . . . , ϕ_s} ⊂ W ⊂ K^E of W as a K-vector space. Then W = {λ · ϕ := Σ_{i=1}^{s} λ_i ϕ_i : λ ∈ K^s}. Each element of the basis has an expression in the standard basis of K^E of the form ϕ_i = (a_{ij})_{j∈E} with a_{ij} ∈ K. For each j ∈ E, set u^(j) := (a_{1j}, . . . , a_{sj}) ∈ K^s. With this notation, an index j ∈ E lies in the support of λ · ϕ ∈ W if and only if λ · u^(j) ≠ 0, that is:
T(λ · ϕ) = {j ∈ E : λ · u^(j) ≠ 0}.    (3)
We keep this notation for the following lemmata.
2.8. Lemma. Given X ⊂ E, there exists 0 ≠ φ ∈ W with T(φ) ⊂ X if and only if the K-linear subspace of K^s generated by {u^(j)}_{j∉X} is a proper subspace of K^s.
Proof. As a consequence of (3), we have that T(λ · ϕ) ⊂ X if and only if 0 ≠ λ ∈ K^s is a solution of the system {u^(i) · λ = 0}_{i∉X}. This system has nonzero solutions if and only if the K-linear subspace of K^s generated by {u^(j)}_{j∉X} is a proper subspace of K^s. Since W = {λ · ϕ : λ ∈ K^s}, the statement follows, as the dimension of the space of solutions of a linear system in s unknowns is s minus the rank of the matrix.
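The criterion of Lemma 2.8 is easy to test in examples: some nonzero φ ∈ W is supported inside X exactly when the columns u^(j) with j outside X fail to span K^s. A small sympy sketch over the rationals, with a toy two-dimensional W; the sample matrix and all names are illustrative choices.

import sympy as sp

A = sp.Matrix([[1, 0, 1, 1],
               [0, 1, 1, 2]])      # rows phi_1, phi_2; u^(j) is the j-th column
s = A.rows

def has_nonzero_element_supported_in(X):
    outside = [j for j in range(A.cols) if j not in X]
    if not outside:
        return True                 # X = E: any nonzero element of W works
    return A.extract(list(range(s)), outside).rank() < s

print(has_nonzero_element_supported_in({0, 1, 2}))   # True: 2*phi_1 - phi_2 = (2, -1, 1, 0)
print(has_nonzero_element_supported_in({0, 2}))      # False: only 0 is supported inside {0, 2}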
2.9. Lemma. Given 0 ≠ φ ∈ W, let L ⊂ K^s be the K-linear subspace generated by {u^(i)}_{i∉T(φ)}. Then L is a proper subspace of K^s and u^(i) ∉ L for all i ∈ T(φ).
Proof. As a direct consequence of Lemma 2.8, L is a proper subspace of K^s. Now, φ is an element of W if and only if it is of the form λ · ϕ, and an element i ∈ E is in the support of λ · ϕ if and only if λ · u^(i) ≠ 0. Since L is generated by the set {u^(i)}_{i∉T(φ)}, we have that λ · v = 0 for every v ∈ L; hence u^(i) ∉ L whenever λ · u^(i) ≠ 0, that is, for all i ∈ T(φ).
2.10. Lemma. Given C ⊂ E, let L ⊂ K^s be the space spanned by {u^(i)}_{i∉C}. Then C ∈ C(W) if and only if L is (s − 1)-dimensional and u^(i) ∉ L for all i ∈ C.
Proof. The proof is divided in three cases:
(i) If L = K^s, then by Lemma 2.8, C is not the support of any element of W.
(ii) If L is (s − 1)-dimensional: then a solution of the system {u^(i) · λ = 0}_{i∉C} is unique up to scalar multiplication. Take such a solution 0 ≠ λ ∈ K^s. Then the only elements of W with support contained in C are scalar multiples of φ := λ · ϕ and have the same support. Then C is a minimal set if and only if C = T(φ), and one implication follows from Lemma 2.9. Moreover, if λ is such a solution, since L is of codimension one, then λ · u ≠ 0 for any u ∉ L. If for all i ∈ C it holds that u^(i) ∉ L, then C = T(λ · ϕ). Now, if there is i ∈ C with u^(i) ∈ L, then i ∉ T(λ · ϕ) and C ≠ T(λ · ϕ).
(iii) If the dimension of L is d ≤ s − 2: take i_1, i_2, . . . , i_{s−d} in C such that the subspace spanned by L and {u^(i_k)}_{k=1,...,s−d} is K^s. The system {u^(i) · λ = 0}_{i∉C} ∪ {u^(i_k) · λ = 0}_{k=2,...,s−d} ∪ {u^(i_1) · λ = 1} has a unique solution 0 ≠ λ ∈ K^s. For this solution, T(λ · ϕ) ⊂ C. Since {i_k}_{k=2,...,s−d} ⊂ C and i_k ∉ T(λ · ϕ) for k = 2, . . . , s − d, the set inclusion is strict.
2.11. Lemma. Given 0 ≠ φ ∈ W, for every z ∈ T(φ) ⊂ E there exists a minimal element C ∈ T(W) \ {∅} such that z ∈ C ⊂ T(φ).
Proof. Let L be the subspace generated by
{u (i) } i / ∈T(φ) . By Lemma 2.8, the subspace L is of dimension 0 < d < s. Moreover, since φ is a combination of the ϕ i 's, for all i ∈ T(φ) it holds that u (i) / ∈ L. (i) If d = s − 1, then, by Lemma 2.10, T(φ) is minimal. (ii) Suppose that the dimension of L is d ≤ s − 2. Take i 2 , . . . i s−d in C such that the subspace generated by L, u (z) and {u (i k ) } k=2,...,s−d is K s . The system {u (i) · λ = 0} i / ∈C ∪ {u (i k ) · λ = 0} k=2,...,s−d ∪ {u (z) · λ = 1} has a unique solution 0 = λ ∈ K s . For this solution, z ∈ T(λ · ϕ) ⊂ C \ {i k )} k=2,...,s−d .
Since the subspace generated by L and
{u (i k ) } k=2,...,s−d is of dimension s − 1, again by Lemma 2.10, T(λ · ϕ) is minimal. 2.12. Lemma. Given X ⊂ E and a set {φ x : x ∈ X} ⊂ W such that T(φ x ) ∩ X = {x} for all x ∈ X. Then the vectors {u (x) : x ∈ X} are linearly independent.
Proof. For each x ∈ X, let λ x ∈ K s be such that φ x = λ x · ϕ and let L x be the linear subspace generated by
{u (i) } i / ∈T(φx) . By Lemma 2.9, since x ∈ T(φ x ), then u (x) / ∈ L x . Now, T(φ x ) ∩ X = {x} implies that u (y) ∈ {u (i) } i / ∈T(φx) for y ∈ X \ {x}, then u (x) is not in the space generated by {u (i) } i∈X\{x} ⊂ L x .
2.13. Lemma. Property 3 of Definition 2.1 holds for (E, C(W )).
Proof. Suppose that X ⊂ C ∈ C and {C x : x ∈ X} is a family of elements of C such that x ∈ C y iff x = y for all x, y ∈ X. Then, since W is finitely generated, by Lemma 2.12, X is a finite set.
Choose
ϕ = (b_j)_{j∈E} ∈ W such that T(ϕ) = C and, for each x ∈ X, choose ϕ_x = (a_{xj})_{j∈E} ∈ W such that T(ϕ_x) = C_x. Set φ := ϕ − Σ_{x∈X} (b_x / a_{xx}) ϕ_x. We have that C \ (∪_x C_x) ⊂ T(φ) ⊂ (C ∪ ∪_x C_x) \ X, and the result follows from Lemma 2.11.
2.14. Lemma. Property 4 of Definition 2.1 holds for (E, C(W )).
Proof. Given I ⊂ X ⊂ E with I ∈ I. If X ∈ I, then X is the maximal element we are looking for. Otherwise, there exists ϕ ∈ W such that T(ϕ) ⊂ X. Then, by Lemma 2.8, the dimension of the subspace of K^s generated by {u^(j) : j ∉ I} is s, and the dimension of the subspace of K^s generated by {u^(j) : j ∉ X} is r with r < s. Then there exist {i_{r+1}, . . . , i_s} ⊂ X \ I such that {u^(i_{r+1}), . . . , u^(i_s)} together with {u^(j) : j ∉ X} generate K^s. If Ī := X \ {i_{r+1}, . . . , i_s}, then Ī is a maximal element of {I′ ∈ I : I ⊂ I′ ⊂ X}.
2.15. Theorem. Let {0} ≠ W ⊂ K^E be a finite-dimensional K-vector subspace. Then M(W) = (E, C(W)) is a matroid. Moreover, any element of T(W) is a union of circuits.
Proof. Items (i) and (ii) are fulfilled as stated after Definition 2.5, and Lemmata 2.13 and 2.14 show the remaining items. The fact that any element of T(W ) is a union of circuits follows from Lemma 2.11.
2.16. Example. The set C may consist of infinitely many elements. Consider for instance W generated by
ϕ_1 = 1 + Σ_{i≥2} t^i,   ϕ_2 = Σ_{i≥1} i t^i.
Then, for every n > 1, we obtain that φ n := n · ϕ 1 − ϕ 2 ∈ W has the support T(φ n ) = N \ {n}. Since there is no non-zero element in W whose support is a subset of T(φ n ), T(ϕ 1 ) or T(ϕ 2 ), we obtain that
C(W ) = {N \ {n} : n ∈ N}.
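A finite sanity check of this example, truncating the series to their first N coefficients (the truncation order is an arbitrary illustrative choice): the support of n·ϕ_1 − ϕ_2 misses exactly the index n.

N = 12                                    # truncation order for the check
phi1 = [1] + [0] + [1] * (N - 2)          # 1 + t^2 + t^3 + ...
phi2 = list(range(N))                     # t + 2t^2 + 3t^3 + ...

def support(coeffs):
    return {i for i, a in enumerate(coeffs) if a != 0}

for n in range(2, 6):
    phi_n = [n * a - b for a, b in zip(phi1, phi2)]
    assert support(phi_n) == set(range(N)) - {n}

assert support(phi1) == set(range(N)) - {1}
assert support(phi2) == set(range(N)) - {0}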
Nevertheless, the matroid (E, C(W )) has some finite structure in the following sense.
Scrawls and cardinality
In the previous section we have shown that the minimal elements of T(W ) satisfy the axioms of circuits. The scrawls of the matroid on E given in terms of these circuits are, by definition, unions of circuits. In this section we investigate the conditions on E and W upon which the collection of scrawls coincide with T(W ). For this purpose, let us denote by #C the cardinality of a set C and by S(C) = C ∪ {C} the successor-set. Note that #S(C) = #C + 1 for finite sets C and #S(C) = #C if C is infinite.
3.1. Matroids of scrawls. Denote by Lin(W) the set of K-linear subspaces L ⊂ K^s with L ≠ K^s such that L is generated by a set of the form {u^(i)}_{i∈X} for some X ⊂ E. We will denote by Ψ_W the map given by
Ψ_W : Lin(W) → 2^E,   L → {i ∈ E : u^(i) ∉ L}.    (4)
Notice that Lin(W ) is in fact independent of the choice of basis {ϕ 1 , . . . , ϕ s }. To see this consider another basis {ψ 1 , . . . , ψ s } of W and let ψ i = (b ij ) j∈E . Denote for j ∈ E, w (j) := (b 1j , . . . , b sj ). Then there exists an invertible s × s matrix λ = (λ ij ) 1≤i,j≤s encoding the change of basis: ψ i = λ ij ϕ j . In particular, for all j ∈ E we have
λu (j) = w (j) .
Now consider X ⊂ E and L = u (j) : j ∈ X . We have
u (i) i∈X = λ −1 w (i) i∈X = u (j) j∈T({λ −1 w (i) } i∈X )
. So the base change matrix λ induces a natural bijection between the linear spaces generated by subsets of {u (i) } i∈E and those of {w (i) } i∈E .
3.1. Lemma. The morphism Ψ W induces a one to one correspondence between the circuits of the matroid induced by W and the spaces of codimension one in Lin W .
Proof. This is a direct consequence of Lemma 2.10.
3.2. Lemma. The collection T(W) ⊂ 2^E is closed under union if and only if Ψ_W is a natural one-to-one correspondence between Lin(W) and T(W).
Proof. Lemma 2.9 implies that T(W) ⊂ Ψ_W(Lin(W)). Now Lin(W) is closed under intersection and Ψ_W(L ∩ L′) = Ψ_W(L) ∪ Ψ_W(L′). Then Ψ_W(Lin(W)) is closed under union. All elements of Lin(W) can be written as intersections of elements of codimension one. Then all the elements in Ψ_W(Lin(W)) may be written in terms of images of elements of codimension one. By Lemma 3.1, images of elements of codimension one are circuits. By Theorem 2.15, all elements of T(W) are unions of circuits.
3.3. Lemma. Let L be a K-linear subspace of K^s with dim_K(L) = d ≤ s − 2 and let {u^(i)}_{i∈X} ⊂ K^s with X ⊂ E be such that {u^(i)}_{i∈X} ∩ L = ∅. If #X < #S(K), then there exists a K-linear subspace L̄ ⊃ L of K^s of dimension d + 1 with {u^(i)}_{i∈X} ∩ L̄ = ∅.
Proof. Let Lin^{d+1}_L(K^s) be the collection of K-linear subspaces of dimension d + 1 of K^s containing L. The collection Lin^{d+1}_L(K^s) is isomorphic to P^{s−d−1}_K, where P_K denotes the projective space over K. Then the cardinality of Lin^{d+1}_L(K^s) is greater or equal to the cardinality of S(K). For every non-zero vector u^(i) ∉ L, i ∈ X, since the dimension of L is d, there exists exactly one L_i ∈ Lin^{d+1}_L(K^s) such that u^(i) ∈ L_i. Thus, if #X < #S(K), there is L̄ ∈ Lin^{d+1}_L(K^s) \ {L_i}_{i∈X}, and L̄ does not contain any of the u^(i) ∉ L.
Let us note that for infinite K the proof could be simplified by using #P s−d−1
K = #P K = #K.
3.4. Lemma. Let W be a subspace of K E and let Ψ W be as in (4). If #E < #S(K), then Ψ W is a natural one to one correspondence between Lin(W ) and T(W ).
Proof. We start by showing that, for L ∈ Lin(W ), Ψ W (L) is in T(W ).
(i) If L is of codimension one it is a consequence of Lemma 2.10 (with no assumption about cardinality). (ii) Suppose that the dimension of L is d < s − 1. Then the cardinality of X := {i ∈ E : u (i) / ∈ L} satisfies #S d (X) ≤ #E where S d denotes the d-th successor-set. Applying s − d − 1 times Lemma 3.3 , there exists a K-linear hyperplane L ⊃ L that does not contain any element of {u (i) } i∈X . Take λ ∈ K s such that L = {v ∈ K s : λ · v = 0}. Since L ⊂ L and {u (i) } i∈E,u (i) / ∈L ∩ L = ∅ we have that λ · u (i) = 0 for all u (i) ∈ L and λ · u (i) = 0 for all u (i) / ∈ L. Then, by (3), we have that T(λ · ϕ) = Ψ W (L).
That the mapping is injective is straightforward and, to show surjectivity it is enough to see that T(φ) = Ψ W (L) where L is the subspace generated by {u (i) } i∈E\T(φ) .
3.5. Theorem. Let W be a subspace of K E . If #E < #S(K), then T(W ) is a set of scrawls for E.
Proof. By Theorem 2.15, any element of T(W ) is a union of circuits. That any union of circuits is in T(W ) is a consequence of Lemma 3.4 together with Lemma 3.2.
The following example shows that the condition in Theorem 3.5 about cardinality is optimal.
3.6. Example. Let E be a set and let K be a field with #S(K) ≤ #E. Let i 0 ∈ E and set a i 0 := 1. Choose, for i ∈ E, the a i ∈ K such that K = {a i } i∈E\{i 0 } . Let i 1 ∈ E with i 1 = i 0 be such that a i 1 = 0. Set ϕ 1 := {b i } i∈E ∈ K E where b i := 1 for i = i 0 and b i 0 = 0 and set ϕ 2 := {a i } i∈E ∈ K E and let W be the subspace of K E spanned by ϕ 1 and ϕ 2 . In this case, the image of the mapping (4) is not contained in T(W ) since Ψ({(0, . . . , 0)}) = E and E is not the support of any element of W . In order to see this, take an element φ := λ 1 ϕ 1 + λ 2 ϕ 2 ∈ W . If λ 1 λ 2 = 0 then either
i 0 / ∈ T(φ) or i 1 / ∈ T(φ). If λ 1 λ 2 = 0, let i ∈ E be such that a i = − λ 1 λ 2 (it exists because K = {a i } i∈E\{i 0 } ). Since T(φ) = T( 1 λ 2 φ) = T( λ 1 λ 2 ϕ 1 + ϕ 2 ) = T({ λ 1 λ 2 + a i } i∈E ) ∪ {i 0 } we have that i / ∈ T(φ).
Note that if K, E and W ⊂ K E satisfy the conditions of Theorem 3.5, then T(W ) is in particular a semigroup. 3.7. Corollary. Let K and E be arbitrary. Let s ∈ N, for each j ∈ E, set u (j) := (a 1j , . . . , a sj ) ∈ K s , such that {ϕ i = (a ij ) j∈E : i = 1, . . . , s} is linearly independent.
Let M be the representable matroid induced by the family {u (j) : j ∈ E}. Then M * is the matroid of scrawls of LinSpan{ϕ 1 , . . . , ϕ s } if #E < #S(K).
Proof. Let W = LinSpan{ϕ 1 , . . . , ϕ s }. By Theorem 2.17, if M is the representable matroid induced by the family {u (j) : j ∈ E}, then M * is its matroid of scrawls, and M * = T(W ) if #E < #S(K) by Theorem 3.5.
3.8. Remark. We have that Corollary 3.7 says that even if a cofinitary matroid M is representable by a family of column vectors, it does not follow automatically that its dual M * is the matroid of scrawls of the linear span of the row vectors.
3.9. Remark. Our concept of set of scrawls is stronger than that of a semigroup of (2 E , ∪), since it is clear that a semigroup, being closed under unions, is spanned by its minimal elements, but it does not necessarily follow that this set of minimal elements satisfy the axioms of the circuits of a matroid. The concept of set of scrawls is also stronger than that of the circuits of a matroid on 2 E , since it may happen that a, b ∈ C(W ), but a ∪ b / ∈ T(W ) (see Theorem 3.5).
Tropical linear spaces in the differential algebra setting
We apply the previous theory to the case in which the set of formal solutions of a homogeneous system of linear differential equations is a finite dimensional vector space. The dimension of such solution spaces could be stated with the usage of D-modules [SS19] or jet-spaces [KL06]. When considering differential equations with polynomial coefficients instead of formal power series coefficients, one would speak of D-finite solutions [Lip89]. In this paper, however, we give a presentation by using the so-called differential type applicable to every system of algebraic (non-linear) differential polynomials as those considered in the fttdag [FGLH + 23].
Throughout this section, we will consider K to be a field of characteristic zero and m, n ∈ N \ {0}.
4.1. Preliminaries on differential algebra. We start with some preliminaries. We denote by K[[T]] = K[[t_1, . . . , t_m]] the ring of multivariate formal power series, by D = {∂/∂t_1, . . . , ∂/∂t_m} the set of standard partial derivatives, and for J ∈ N^m we denote the differential operator Θ(J) = ∂^{|J|}/∂t_1^{j_1} · · · ∂t_m^{j_m}. We denote by K_{m,n} the polynomial ring K[[T]][x_{i,J} : i = 1, . . . , n, J ∈ N^m], where the x_i are differential indeterminates and x_{i,J} := Θ(J)x_i. For Σ ⊂ K_{m,n}, we set ΘΣ = {Θ(J)f : J ∈ N^m, f ∈ Σ}.
4.1. Definition. Let Σ ⊂ K_{m,n}. The differential ideal [Σ] ⊂ K_{m,n} spanned by Σ is the minimal ideal containing Σ and being closed under taking derivatives.
An element P ∈ K m,n is called a differential polynomial, and the order of P is defined as the maximum of the |J| = J 1 + · · · + J m effectively appearing in P . The variables x i,J , i = 1, . . . , n, J ∈ N m , in K m,n denote differential variables. We can define a map P :
K[[T]]^n → K[[T]] in which a monomial E_M = Π_{i,J} x_{i,J}^{m_{i,J}} sends the vector ϕ = (ϕ_1, . . . , ϕ_n) ∈ K[[T]]^n to E_M(ϕ) = Π_{i,J} (∂^{|J|}ϕ_i / ∂t_1^{j_1} · · · ∂t_m^{j_m})^{m_{i,J}}.
4.2. Definition. We say that ϕ = (ϕ 1 , . . . , ϕ n ) ∈ K[[T ]] n is a solution of P ∈ K m,n if P (ϕ) = 0.
We denote by Sol(P ) the set of solutions of P ∈ K m,n . Let Σ ⊂ K m,n . The differential (algebraic) variety defined by Σ is the set of common solutions Sol(Σ)
= P ∈Σ Sol(P ) ⊂ K[[T ]] n .
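As an illustration of Definition 4.2 in the ordinary case m = 1, n = 1, the following sympy sketch evaluates a linear differential polynomial on a power series and checks a candidate solution; the dictionary representation of P is an illustrative encoding, not notation from the text.

import sympy as sp

t = sp.symbols('t')

# P = t*x_{(1)} - 2*x_{(0)}, a linear differential polynomial of order 1,
# stored as {J: alpha_J} with coefficients in K[[t]]
P = {1: t, 0: -2}

def evaluate(P, phi):
    # P(phi) = sum_J alpha_J * d^J phi / dt^J
    return sp.expand(sum(aJ * sp.diff(phi, t, J) for J, aJ in P.items()))

print(evaluate(P, t**2))      # 0, so phi = t^2 is a solution of P
print(evaluate(P, t**3))      # t^3, so phi = t^3 is not a solution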
Let Σ ⊂ K m,n 1 be a system of differential equations such that the radical differential ideal generated by Σ is prime. Let Ω be an autoreduced set of Σ with respect to some ranking [Kol73]. Let L denote the set of leaders of Ω. Then the transcendence degree of the general solution of Σ is equal to the cardinality d of the set Θ{x 1 , . . . , x n }\ΘL, which is independent of the actual choice of the ranking and the autoreduced set. We say that Σ is of differential type zero if and only if d is finite. Note that autoreduced sets can be computed algorithmically for algebraic differential equations (e.g. with the MAPLE-command DifferentialAlgebra:-RosenfeldGroebner) and thus, it can be decided whether Σ is of differential type zero or not. In the case of linear differential equations, several simplifications occur (cf. Remark 4.4).
The linear case.
In this section we apply the results of the previous sections to the space of solutions Sol(Σ) of a homogeneous linear system of differential equations Σ ⊂ K m,n . We start with some definitions.
4.3.
Definition. An (algebraic) linear differential equation is a linear polynomial P ∈ K m,n , i.e.
P = i,J α i,J x i,J + α, with α i,J , α ∈ K[[T ]
]. We say that P is homogeneous if α = 0.
If P ∈ K m,n is linear and homogeneous, then it is easy to see that Sol(P ) ⊂ K[[T ]] n is a K-vector space. Thus if Σ ⊂ K m,n is a system of homogeneous linear differential equations, then Sol(Σ) is also a K-vector space.
4.4.
Remark. Let us note that for homogeneous linear systems of differential equations Σ, the differential ideal generated by Σ is always prime and every autoreduced set Ω of Σ is homogeneous and linear as well. Moreover, the transcendence degree of the general solution d, if it is finite, is the dimension of Sol(Σ) [Kol73, Chapter 3, Section 5].
With abuse of notation, we will denote by N m the idempotent monoid (N m , ∪, ∅), and we will denote by T : K[[T ]] − → 2 N m the support map 2 . If n ≥ 1 and X ⊂ K[[T ]] n , set of supports (see [FGLH + 20]) is
T(X) = {(T(w 1 ), . . . , T(w n )) ∈ (2 N m ) n : (w 1 , . . . , w n ) ∈ X}.(5)
Note that if n ≥ 2, then the order in (2 N m ) n is not the one induced by inclusion, but rather the product order. Thus, in order to be able to apply the theory of the previous sections, first we need to perform the following transformation.
We construct an injective map Φ :
K[[t 1 , . . . , t m ]] n → K[[t i,j : 1 ≤ i ≤ m, 1 ≤ j ≤ n]]
of linear spaces as follows. For i = 1, . . . , n we denote by {t i,1 , . . . , t i,m } the set of variables of the i−th copy of the n−fold product, then we send the vector (ϕ 1 , . . . , ϕ n ) to ϕ 1 + · · · + ϕ n . This in turn induces an injective homomorphism of monoids φ : (2 N m ) n → 2 N mn , where 2 N mn is ordered by inclusion.
Then we have φ • T(X) = {T(w 1 ) ∪ · · · ∪ T(w n ) ∈ 2 N mn : (w 1 , . . . , w n ) ∈ X}.
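A small sketch of the relabelling homomorphism φ on finite supports, encoding an n-tuple of supports in N^m as a single support on n tagged copies of N^m; tagging each exponent tuple with its component index is an illustrative encoding of the variables t_{i,j}.

def phi_map(tuple_of_supports):
    # (S_1, ..., S_n)  |->  union of {(i, J) : J in S_i}, a subset of {1..n} x N^m
    return {(i, J) for i, S in enumerate(tuple_of_supports, start=1) for J in S}

S1 = {(0,), (2,)}                    # support of w_1 in N^1
S2 = {(1,)}                          # support of w_2
print(sorted(phi_map((S1, S2))))     # [(1, (0,)), (1, (2,)), (2, (1,))]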
1 The set Σ can be seen as a system over the smallest differential field containing K[[T ]] which is necessary for some of the following arguments. 2 The formal power series in the argument of T can be identified as the list of coefficients such that this is consistent with Definition 2.4.
Thus, if W ⊂ K[[t 1 , . . . , t m ]] n is a linear space, then Φ(W ) ⊂ K[[t i,j : 1 ≤ i ≤ m, 1 ≤ j ≤ n]]
is also a linear space which is isomorphic to W , and we have that φ • T(W ) = T • Φ(W ), so we can apply the theory of the previous sections to the images under φ of set of supports (5) associated to finitely-dimensional vector spaces W ⊂ K[[t 1 , . . . , t m ]] n . 4.5. Theorem. Let Σ ⊂ K m,n be a system of homogeneous linear differential equations of differential type zero and let T(Sol(Σ)) ⊂ (2 N m ) n be the set of supports of Sol(Σ). Then (i) the minimal elements C(Sol(Σ)) of T(Sol(Σ)) define the circuits of a matroid M (Sol(Σ)) on N mn ; (ii) If #E m < #S(K), then φ • T(Sol(Σ)) ⊂ 2 N mn is a set of scrawls of the matroid M (Sol(Σ)).
Proof. Since Σ is of differential type zero, we have that
{0} = Sol(Σ) ⊂ K[[T ]] n is a finite dimensional K-vector space.
We also have that the minimal elements of φ • T(Sol(Σ)) ⊂ (2 N mn , ⊆) coincide with the minimal elements of T(Sol(Σ)) ⊂ ((2 N m ) n , ≤ prod ), where ≤ prod is the product order, since they are isomorphic as posets. Thus φ(C(Sol(Σ))) = C(φ(Sol(Σ))), so (i) follows from Theorem 2.15, and (ii) follows from Theorem 3.5 after applying the inverse homomorphism φ −1 to the semigroup φ • T(Sol(Σ)) ⊂ 2 N mn . 4.6. Remark. If n ≥ 2, on must use the homomorphism φ : (2 N m ) n → 2 N mn in order to unveil the matroidal structures of the sets Sol(Σ). The isomorphism of posets yields that condition (iii) in Definition 2.1 can be stated directly in terms of the poset T(Sol(Σ)) ⊂ ((2 N m ) n , ≤ prod ). It should be interesting to see if the same can be done for condition (iv) in Definition 2.1.
Applications to tropical differential algebraic geometry
We have that Theorem 4.5 from Section 4 is valid for the set of supports T(Sol(Σ)) ⊂ (2 N m ) n of a system Σ ⊂ K m,n of homogeneous linear differential equations of differential type zero over an arbitrary field K. An important case occurs when K satisfies the hypotheses of the fttdag [BFNS21], namely when K is an algebraically closed field of characteristic zero and has infinite transcendence degree over the field of definiton of Σ 3 . 5.1. Tropical algebra preliminaries. If K satisfies the hypotheses of the fttdag, then we can express our results in a more algebraic form using the formalism of tropical algebra. Recall that 2 N m = (2 N m , ∪) is a semigroup. The tropical counterparts of the underlying algebraic structures are as follows.
(i) Consider the Minkowski set sum + : 2 N m × 2 N m − → 2 N m ; (ii) Define the (tropical) differential operators D = { ∂ ∂t i : N m − → N m : i = 1, . . . , m} by shifting the support accordingly as ∂ ∂t i (S) := {(j 1 , . . . , j i−1 , j i − 1, j i+1 , . . . , j m ) : (j 1 , . . . , j m ) ∈ S, j i > 0}. Then the tuple (2 N m , ∪, +, D) is an (idempotent) differential semiring, see [FGLH + 20].
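These operations are straightforward to realize on finite supports; in the following Python sketch (an illustration, with sets of exponent tuples standing for elements of 2^{N^m}), union is the tropical sum, the Minkowski sum is the tropical product, and ∂/∂t_i is the shift described above.

def trop_add(A, B):
    return A | B

def trop_mul(A, B):
    # Minkowski sum of supports
    return {tuple(a + b for a, b in zip(I, J)) for I in A for J in B}

def trop_diff(A, i):
    # shift the i-th exponent down by one, dropping tuples whose i-th exponent is 0
    return {J[:i] + (J[i] - 1,) + J[i + 1:] for J in A if J[i] > 0}

# m = 2: A = supp(1 + t1*t2), B = supp(t1)
A = {(0, 0), (1, 1)}
B = {(1, 0)}
print(trop_mul(A, B))      # {(1, 0), (2, 1)}
print(trop_diff(A, 0))     # {(0, 1)}: d/dt1 kills the constant term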
We give an alternative presentation of this algebraic structure. More details and the connection to the approach above is presented in [CGL20].
Let B = {0 < 1} be the Boolean semifield with the usual tropical addition a + b := min(a, b) and tropical multiplication ab := a + b. If B[[t_1, . . . , t_m]] denotes the semiring of Boolean formal power series endowed with the standard operations of sum and product of Boolean power series, and D = {∂/∂t_1, . . . , ∂/∂t_m} denotes the set of standard partial derivations, then we have an isomorphism of (idempotent) differential semirings (B[[t_1, . . . , t_m]], +, ×, D) ≅ (2^{N^m}, ∪, +, D).    (7)
We denote by B_{m,n} the polynomial semiring B[[T]][x_{i,J} : i = 1, . . . , n, J ∈ N^m]. An element P ∈ B_{m,n} is called a differential polynomial, and the variables x_{i,J}, i = 1, . . . , n, J ∈ N^m in B_{m,n} denote differential variables. We define a map P : B[[T]]^n → B[[T]] in which a monomial E_M = Π_{i,J} x_{i,J}^{m_{i,J}} sends the vector ϕ = (ϕ_1, . . . , ϕ_n) ∈ B[[T]]^n to E_M(ϕ) = Π_{i,J} (∂^{|J|}ϕ_i / ∂t_1^{j_1} · · · ∂t_m^{j_m})^{m_{i,J}}.
5.1. Definition. Given A ∈ 2^{N^m}, its Newton polyhedron New(A) is the convex hull of the set {I + J : I ∈ A, J ∈ N^m} ⊆ R^m_{≥0}. We define the Newton polyhedron New(ϕ) of ϕ ∈ B[[t_1, . . . , t_m]] by using the isomorphism from (7). The semiring of vertex polynomials is defined as the quotient V_B[T] := B[[T]]/New, where New ⊂ B[[T]] × B[[T]] denotes the semiring congruence comprised of pairs of Boolean power series with equal Newton polyhedra. We denote by V : B[[T]] → V_B[T] the resulting quotient homomorphism of semirings.
In the following we will denote the sum of equivalence classes in V_B[T] by "⊕".
5.2. Definition. Given a sum s = a_1 ⊕ · · · ⊕ a_k (8) in V_B[T] involving k ≥ 2 summands, let s_i := a_1 ⊕ · · · ⊕ â_i ⊕ · · · ⊕ a_k denote the sum obtained by omitting the i-th summand, i = 1, . . . , k.
The sum (8) tropically vanishes in V B[T ] if s = s i for every i = 1, . . . , k.
Given P ∈ K m,n , the definition of a solution of P can be given in a tropical way as follows.
5.3. Definition. We say that ϕ = (ϕ_1, . . . , ϕ_n) ∈ B[[T]]^n is a solution of P = Σ_M a_M E_M ∈ B_{m,n} if V(P(ϕ)) = ⊕_M V(a_M E_M(ϕ)) vanishes tropically in V_B[T]. We denote by Sol(P) the set of solutions of P.
Given U ⊂ B m,n a system of tropical differential equations, we will denote by p∈U Sol(p) = Sol(U ) ⊂ B[[T ]] n its set of common solutions.
We will be mostly interested in the case where P ∈ B m,n is linear as in Definition 4.3 where α i,J , α ∈ B[[T ]]. The tropical analogue of the fact that the set of solutions of a system of homogeneous linear differential equations is a vector space also holds for the case of homogeneous linear tropical differential equations in B m,n . Note that under the isomorphism (7), the closure under taking unions translates to the closure under taking sums. 5.4. Theorem. Let U ⊂ B m,n be a system of homogeneous linear tropical differential equations. Then Sol(U ) ⊂ B[[T ]] n is a semigroup.
Proof. Since Sol(U ) = p∈U Sol(p), it suffices to show that Sol(p) is a semigroup for linear p ∈ B m,n . Let ϕ, ψ ∈ Sol(p) and set φ = ϕ + ψ, then p(φ) = p(ϕ) + p(ψ), and it follows that V (p(φ)) = V (p(ϕ) + p(ψ)) = V (p(ϕ)) ⊕ V (p(ψ)) ⊂ V (p(ϕ)) ∪ V (p(ψ)), which finishes the proof.
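This closure property can be checked concretely in the ordinary case m = 1, where the Newton polyhedron of a nonempty support is determined by its minimum; under this reading of Definition 5.2, a tropical sum vanishes exactly when all terms are empty or the minimum of the nonempty terms is attained at least twice. In the sketch below (an illustration; the truncation order and the choice of the equation y'' − y, whose tropicalization has term supports {0} + d^2 S and {0} + S, are illustrative choices), unions of solutions are again solutions.

def d(S, k=1):
    # k-th tropical derivative of a support S subset of N (m = 1)
    for _ in range(k):
        S = {i - 1 for i in S if i >= 1}
    return S

def vanishes(terms):
    # tropical vanishing in V_B[t]: every term empty, or the minimum attained twice
    mins = [min(T) for T in terms if T]
    if not mins:
        return True
    return mins.count(min(mins)) >= 2

def is_solution_of_ypp_minus_y(S):
    # trop(y'' - y): both coefficients have support {0}, so the two terms are
    # {0} + d^2(S) and {0} + S (Minkowski sum with {0} changes nothing)
    return vanishes([d(S, 2), S])

N = 30                                   # truncation for the finite check
even = set(range(0, N, 2))               # support of sum t^{2i}, i.e. of cosh t
odd  = set(range(1, N, 2))               # support of sum t^{2i+1}, i.e. of sinh t

print(is_solution_of_ypp_minus_y(even))          # True
print(is_solution_of_ypp_minus_y(odd))           # True
print(is_solution_of_ypp_minus_y(even | odd))    # True: the union again solves
print(is_solution_of_ypp_minus_y({0}))           # False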
So, if U ⊂ B m,n is as in Theorem 5.4, then Sol(U ) ⊂ B[[T ]] n is a semigroup and thus, the union of tropical solutions are again tropical solutions. The structure of the semigroup Sol(U ) was studied in [Gri17] for the ordinary case (m = 1) and U finite. Following Remark 4.6, if n ≥ 2, we can consider the image of Sol(U ) under the map φ :
B[[T ]] n → B[[t i,j : 1 ≤ i ≤ m, 1 ≤ j ≤ n]]
, which is also a semigroup, but it is not necessarily the set of scrawls of a matroid (cf. Theorem 4.5).
In Corollary 5.8, we give a necessary condition for a set φ(Sol(U )) to be the set of scrawls of a matroid. It would be interesting to find sufficient conditions under which a semigroup φ(Sol(U )) is the set of scrawls of a matroid.
5.2.
Connections with the Fundamental Theorem. In this section we analyze the special case when the coefficient field K fulfills the hypotheses of the fttdag.
To start with, by (7) we have an isomorphism of semirings 2^{N^m} ≅ B[[t_1, . . . , t_m]], and we will denote by T : K[[t_1, . . . , t_m]] → B[[t_1, . . . , t_m]] the support map. If n ≥ 1 and X ⊂ K[[T]]^n, its set of supports T(X) ⊂ B[[t_1, . . . , t_m]]^n is defined as in (5).
5.5. Definition. Given P = Σ_M a_M E_M in K_{m,n}, we denote by trop(P) the polynomial trop(P) = Σ_M T(a_M) E_M in B_{m,n}.
Consider now a system Σ ⊂ K_{m,n} of homogeneous linear differential equations of differential type zero over K. Then, by Theorem 4.5, the set φ ∘ T(Sol(Σ)) is the set of scrawls of a matroid. Now the fttdag can be used to give a necessary condition for the solution set of a system of tropical differential equations to be the set of scrawls of a matroid, see Theorem 5.6. This theorem gives an equality between the set T(Sol(Σ)) and the set of formal Boolean power series solutions Sol(trop([Σ])). Recall that if Σ ⊂ K_{m,n}, then [Σ] denotes the differential ideal spanned by it. The following result defines DA tropical varieties in three different ways, and also justifies the name.
5.6. Theorem. [FGLH+23, Fundamental Theorem] Let Σ ⊂ K_{m,n}. Then the following three subsets of B[[t_1, . . . , t_m]]^n coincide:
(i) X = T(Sol(Σ));
(ii) X = Sol(trop([Σ])) = ∩_{P∈[Σ]} Sol(trop(P));
(iii) X = {w ∈ B[[T]]^n : in_w([Σ]) contains no monomial}.
5.7. Definition. Any subset X ⊂ B[[t_1, . . . , t_m]]^n satisfying one of the characterizations of the above theorem is called a DA tropical variety. For a variety of examples and further discussion on the fttdag, see [BFNS21]. We have the following result.
5.8. Corollary. Let U ⊂ B_{m,n} and X = ∩_{p∈U} Sol(p). If U = trop([Σ]), where Σ ⊂ K_{m,n} is a system of homogeneous linear differential equations of differential type zero, then φ(X) is the set of scrawls of φ(C(Sol(Σ))).
Proof. Follows from the above result after applying the Fundamental Theorem 5.6, since T(Sol(Σ)) = Sol(trop([Σ])) = P ∈[Σ] Sol(trop(P )) = X. Then we apply φ to both ends of the equality.
We have shown that if W = Sol([Σ]), where Σ is as in Corollary 5.8, then M(W) = (2^{N^{mn}}, φ ∘ T(W)) is a matroid. Since duals exist for infinite matroids, it should be interesting to know if the set of circuits C*(W) of the dual matroid M(W)* of W relates to some notion of tropical basis for the ideal [Σ].
5.9. Example. Let Σ ⊂ K_{1,1} have the solutions W generated by ϕ_1 = Σ_{i≥0} t^{2i}, ϕ_2 = Σ_{i≥0} t^{2i+1} ∈ K[[t]]. Since n = 1, there is no need to consider the map φ, and it follows that T(W) = {T(ϕ_1) = 2·N, T(ϕ_2) = 2·N + 1, ∅, N}. Thus, C = {C_1 = T(ϕ_1), C_2 = T(ϕ_2)}. The bases of T(W) are the maximal subsets of N that do not contain the set of even numbers nor the set of odd numbers, so they are complements of sets of the form {e, o} where e is an even number and o an odd number. In particular, the bases of the dual are these pairs {e, o}. The circuits of the dual (cocircuits) are then given as the pairs {e_1, e_2}, {o_1, o_2} where e_1, e_2 are even and o_1, o_2 are odd numbers.
6. Counterexample: the fundamental theorem of tropical differential algebraic geometry over a countable field
The article [AGT16] by Aroca, Garay and Toghani proves a fundamental theorem for tropical differential algebraic geometry over uncountable fields. Using a non-constructive result in [DL84], it was shown that there exists a system of algebraic partial differential equations over a countable field for which the fundamental theorem does not hold [FGLH+20, Remark 7.3]. The question of whether the fundamental theorem holds for ordinary or even linear differential equations over countable fields remained open. As a consequence of the results given in the previous sections, we give a negative answer to this question by constructing a counterexample as follows.
Fix a countable field k = {a_0, a_1, a_2, . . . }, and consider the linear differential polynomial Σ := {y'' + γ(t)y' + β(t)y} ⊂ k_{1,1}    (9)
with γ(t) = Σ_{i≥0} c_i t^i and β(t) = Σ_{i≥0} b_i t^i. Suppose that Sol(Σ) is generated by the two power series solutions (see Example 3.6) ϕ_1 = 1 + Σ_{i≥2} t^i and ϕ_2 = t + Σ_{i≥2} a_i t^i.
Their supports are T(ϕ_1) = N \ {1} and T(ϕ_2) = N \ {0}. For every λ_1, λ_2 ∈ k, also λ_1ϕ_1 − λ_2ϕ_2 is a solution of Σ and T(λ_1ϕ_1 − λ_2ϕ_2) = N \ {j} for some j ∈ N (or ∅ in the case of λ_1 = λ_2 = 0). Note that there are no further solutions of Σ. The union T(ϕ_1) ∪ T(ϕ_2) = N is a tropical solution (see Theorem 5.4), but cannot be realized by any solution of (9). Let us now construct the coefficients b_i, c_i such that Σ has a solution set generated by ϕ_1, ϕ_2. By plugging ϕ_1 into (9), we obtain
0 = i≥0 (i + 2)(i + 1)t i + ( i≥0 c i t i ) · ( i≥0 (i + 1)t i − 1) + ( i≥0 b i t i ) · ( i≥0 t i − t) = i≥0 ((i + 2)(i + 1) + i j=0 (i + 1 − j)c j − c i + i j=0 b j − b i+1 )t i .
Similarly, by plugging-in ϕ 2 into (9) and setting a 0 = 0, a 1 = 1,
0 = i≥0 (i + 2)(i + 1)a i+2 t i + ( i≥0 c i t i ) · ( i≥0 (i + 1)a i+1 t i ) + ( i≥0 b i t i ) · ( i≥0 a i t i ) = i≥0 ((i + 2)(i + 1)a i+2 + c i + i−1 j=0 (i + 1 − j)c j a i+1−j + i−1 j=0 b j a i−j )t i .
By coefficient comparison in both equations, seen as polynomials in t, we obtain the system of recurrence equations
b_{i+1} = (i + 2)(i + 1) + Σ_{j=0}^{i} (i + 1 − j)c_j − c_i + Σ_{j=0}^{i} b_j,
c_{i+1} = −(i + 3)(i + 2)a_{i+3} − Σ_{j=0}^{i} (i + 2 − j)c_j a_{i+2−j} − Σ_{j=0}^{i} b_j a_{i+1−j}
for every i ∈ N. By choosing b_0, c_0 as any element in k, γ(t), β(t) are uniquely determined from the given solutions ϕ_1, ϕ_2. Let us fix b_0, c_0 ∈ k.
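The construction can be checked to any truncation order with sympy. Rather than transcribing the displayed recurrences (whose indexing conventions may differ), the sketch below solves the coefficient equations for γ and β order by order from the two prescribed solutions; the sample choice a_i = i and all variable names are illustrative assumptions.

import sympy as sp

t = sp.symbols('t')
N = 8                                            # truncation order of the check
a = {0: 0, 1: 1}
a.update({i: i for i in range(2, N + 3)})        # an illustrative choice of the a_i

phi1 = 1 + sum(t**i for i in range(2, N + 3))
phi2 = t + sum(a[i] * t**i for i in range(2, N + 3))

b = sp.symbols(f'b0:{N + 1}')                    # unknown coefficients of beta
c = sp.symbols(f'c0:{N + 1}')                    # unknown coefficients of gamma
beta  = sum(b[i] * t**i for i in range(N + 1))
gamma = sum(c[i] * t**i for i in range(N + 1))

def residue(phi):
    return sp.expand(sp.diff(phi, t, 2) + gamma * sp.diff(phi, t) + beta * phi)

eqs = [sp.Eq(residue(phi).coeff(t, i), 0)
       for phi in (phi1, phi2) for i in range(N + 1)]
sol = sp.solve(eqs, b + c, dict=True)[0]

# both residues vanish up to order t^N for the computed gamma, beta
assert all(sp.expand(residue(phi).subs(sol)).coeff(t, i) == 0
           for phi in (phi1, phi2) for i in range(N + 1))
print(gamma.subs(sol))
print(beta.subs(sol))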
Then, since ϕ 1 , ϕ 2 are linearly independent and Σ consists of a single linear ordinary differential equation of order two, the solution set is indeed given exactly as the linear combinations of ϕ 1 and ϕ 2 . 6.1. Remark. Notice that if k is the algebraic closure of the rational numbers, then γ(t) and β(t) are (non-convergent) formal power series and thus, Σ is a system of linear differential equations involving formal power series coefficients. The question of whether the fundamental theorem holds for holononomic systems, i.e. when all coefficients of the elements in Σ are polynomial, remains open.
With abuse of notation, a matroid M is also denoted by M = (E, B), where the elements in B are the bases of M . We also have the following definition of matroid via sets of scrawls, which are the unions of circuits [BC18, Section 2.2]. 2.3. Definition. Let S ⊂ 2 E . We call S the set of scrawls if it satisfies the following axioms: (i) S is a semigroup, i.e. any union of elements in S is in S; (ii) S satisfies the conditions (iii) and (iv) in Definition 2.1. In this case, M = (E, S) is a matroid and S is called the set of scrawls of M . Notice that a set of scrawls naturally carries the structure of a semigroup and simultaneously encodes the information of a matroid. Let M = (E, B) be a matroid with basis elements B. By [BDK + 13, Theorem 3.1], the complements B * := {E \ B : B ∈ B} form another matroid M * = (E, B * ), called the dual matroid to M . The circuits of M * are called cocircuits of M . A matroid M is called tame if every intersection of a circuit and a cocircuit is finite [BDK + 13], otherwise M is called wild.
2. 17 .
17Theorem. The matroid M = (E, C(W )) is cofinitary. In particular, M is tame.Proof. By Lemma 2.8, we have that X ⊂ E is independent if and only if the K-linear subspace generated by {u (j) } j / ∈X is K s , which means that there exist {j 1 , . . . , j s } ⊂ E \ X such that {u (j 1 ) , . . . , u (js) } is a linearly independent set. Then the bases of the matroid are exactly B M = {E \ {j 1 , . . . , j s } : Span{u (j 1 ) , . . . , u (js) } = K s }, and B * M = {{j 1 , . . . , j s } : Span{u (j 1 ) , . . . , u (js) } = K s }. Thus, the circuits of the dual matroid will have at most s + 1 elements.2.18. Remark. The fact that B * M = {{j 1 , . . . , j s } : Span{u (j 1 ) , . . . , u (js) } = K s } says that M * is precisely the representable matroid associated to the family of vectors {u (j) : j ∈ E}, see [HB15, Definition 2.6].The matroid M * is always finitary (even if W is infinite-dimensional, see [HB15, p. 1], [BDK + 13, Section 2.6]). Thus, the original matroid M = (M * ) * will in general not be representable unless E is finite.
Note that the latter is always fulfilled for uncountable fields.
AcknowledgmentsThe author F.A. was supported by the PAPIIT projects IN108320 and IN113323 dgapa UNAM. L.B. is partially supported by the PAPIIT project IA100122 dgapa UNAM 2022. S.F. is partially supported by the grant PID2020-113192GB-I00 (Mathematical Visualization: Foundations, Algorithms and Applications) from the Spanish MICINN and by the OeAD project FR 09/2022. C.G. wishes to thank David Fernández-Bretón for valuable conversations.
The fundamental theorem of tropical differential algebraic geometry. Fuensanta Aroca, Cristhian Garay, Zeinab Toghani, Pacific J. Math. 2832Fuensanta Aroca, Cristhian Garay, and Zeinab Toghani. The fundamental theorem of tropical differ- ential algebraic geometry. Pacific J. Math., 283(2):257-270, 2016.
The Bergman complex of a matroid and phylogenetic trees. Federico Ardila, Caroline J Klivans, J. Combin. Theory Ser. B. 961Federico Ardila and Caroline J. Klivans. The Bergman complex of a matroid and phylogenetic trees. J. Combin. Theory Ser. B, 96(1):38-49, 2006.
An excluded minors method for infinite matroids. Nathan Bowler, Johannes Carmesin, Journal of Combinatorial Theory, Series B. 128Nathan Bowler and Johannes Carmesin. An excluded minors method for infinite matroids. Journal of Combinatorial Theory, Series B, 128:104-113, 2018.
Axioms for infinite matroids. Reinhard Henning Bruhn, Matthias Diestel, Rudi Kriesell, Paul Pendavingh, Wollan, Adv. Math. 239+ 13] Henning Bruhn, Reinhard Diestel, Matthias Kriesell, Rudi Pendavingh, and Paul Wollan. Axioms for infinite matroids. Adv. Math., 239:18-46, 2013.
On the relationship between differential algebra and tropical differential algebraic geometry. François Boulier, Sebastian Falkensteiner, Marc Paul Noordman, Omar Leon Sanchez, Computer Algebra in Scientific Computing: 23rd International Workshop, CASC 2021. Sochi, RussiaSpringer23François Boulier, Sebastian Falkensteiner, Marc Paul Noordman, and Omar Leon Sanchez. On the relationship between differential algebra and tropical differential algebraic geometry. In Computer Al- gebra in Scientific Computing: 23rd International Workshop, CASC 2021, Sochi, Russia, September 13-17, 2021, Proceedings 23, pages 62-77. Springer, 2021.
Exploring tropical differential equations. Ethan Cotterill, Cristhian Garay, Johana Luviano, arXiv:2012.14067arXiv preprintEthan Cotterill, Cristhian Garay, and Johana Luviano. Exploring tropical differential equations. arXiv preprint arXiv:2012.14067, 2020.
Power series solutions of algebraic differential equations. J Denef, L Lipshitz, Math. Ann. 2672J. Denef and L. Lipshitz. Power series solutions of algebraic differential equations. Math. Ann., 267(2):213-238, 1984.
Non-Archimedean amoebas and tropical varieties. Manfred Einsiedler, Mikhail Kapranov, Douglas Lind, J. Reine Angew. Math. 601Manfred Einsiedler, Mikhail Kapranov, and Douglas Lind. Non-Archimedean amoebas and tropical varieties. J. Reine Angew. Math., 601:139-157, 2006.
The fundamental theorem of tropical partial differential algebraic geometry. Cristhian Sebastian Falkensteiner, Mercedes Garay-López, Marc Haiech, Zeinab Paul Noordman, François Toghani, Boulier, Proceedings of the 45th International Symposium on Symbolic and Algebraic Computation. the 45th International Symposium on Symbolic and Algebraic Computation+ 20] Sebastian Falkensteiner, Cristhian Garay-López, Mercedes Haiech, Marc Paul Noordman, Zeinab Toghani, and François Boulier. The fundamental theorem of tropical partial differential algebraic ge- ometry. In Proceedings of the 45th International Symposium on Symbolic and Algebraic Computation, pages 178-185, 2020.
On initials and the fundamental theorem of tropical partial differential algebraic geometry. Cristhian Sebastian Falkensteiner, Mercedes Garay-López, Marc Haiech, François Paul Noordman, Zeinab Boulier, Toghani, Journal of Symbolic Computation. 115+ 23] Sebastian Falkensteiner, Cristhian Garay-López, Mercedes Haiech, Marc Paul Noordman, François Boulier, and Zeinab Toghani. On initials and the fundamental theorem of tropical partial differential algebraic geometry. Journal of Symbolic Computation, 115:53-73, 2023.
Initial forms and a notion of basis for tropical differential equations. A Fink, Z Toghani, Pacific J. Math. 3182A. Fink and Z. Toghani. Initial forms and a notion of basis for tropical differential equations. Pacific J. Math., 318(2):453-468, 2022.
A general framework for tropical differential equations. J Giansiracusa, S Mereta, arXiv:2111.039252021math.AGJ. Giansiracusa and S. Mereta. A general framework for tropical differential equations. arXiv:2111.03925 [math.AG], 2021.
A general framework for tropical differential equations. Jeffrey Giansiracusa, Stefano Mereta, arXiv:2111.03925arXiv preprintJeffrey Giansiracusa and Stefano Mereta. A general framework for tropical differential equations. arXiv preprint arXiv:2111.03925, 2021.
Tropical differential equations. Dima Grigoriev, Adv. in Appl. Math. 82Dima Grigoriev. Tropical differential equations. Adv. in Appl. Math., 82:120-128, 2017.
Thin sums matroids and duality. S Hadi Afzali, Nathan Borujeni, Bowler, Advances in Mathematics. 271S. Hadi Afzali Borujeni and Nathan Bowler. Thin sums matroids and duality. Advances in Mathe- matics, 271:1-29, 2015.
Tropical differential gröbner bases. Y Hu, X S Gao, Math.Comput.Sci. 15Y. Hu and XS Gao. Tropical differential gröbner bases. Math.Comput.Sci., 15:255-269, 2021.
Dimension of the solutions space of pdes. Boris Kruglikov, Valentin Lychagin, math/0610789arXiv preprintBoris Kruglikov and Valentin Lychagin. Dimension of the solutions space of pdes. arXiv preprint math/0610789, 2006.
Differential algebra & algebraic groups. Ellis Robert Kolchin, Academic pressEllis Robert Kolchin. Differential algebra & algebraic groups. Academic press, 1973.
D-finite power series. Leonard Lipshitz, Journal of algebra. 1222Leonard Lipshitz. D-finite power series. Journal of algebra, 122(2):353-373, 1989.
The fundamental theorem of tropical differential algebra over nontrivially valued fields and the radius of convergence of nonarchimedean differential equations. S Mereta, arXiv:2303.121242023math.AGS. Mereta. The fundamental theorem of tropical differential algebra over nontrivially valued fields and the radius of convergence of nonarchimedean differential equations. arXiv:2303.12124 [math.AG], 2023.
Introduction to tropical geometry. Diane Maclagan, Bernd Sturmfels, Graduate Studies in Mathematics. 161American Mathematical SocietyDiane Maclagan and Bernd Sturmfels. Introduction to tropical geometry, volume 161 of Graduate Studies in Mathematics. American Mathematical Society, Providence, RI, 2015.
Anna-Laura Sattelberger, Bernd Sturmfels, arXiv:1910.01395D-modules and holonomic functions. arXiv preprintAnna-Laura Sattelberger and Bernd Sturmfels. D-modules and holonomic functions. arXiv preprint arXiv:1910.01395, 2019.
Solving systems of polynomial equations. Bernd Sturmfels, CBMS Regional Conference Series in Mathematics. Published for the Conference Board of the Mathematical Sciences. Providence, RIAmerican Mathematical Society97Washington, DC; by theBernd Sturmfels. Solving systems of polynomial equations, volume 97 of CBMS Regional Conference Series in Mathematics. Published for the Conference Board of the Mathematical Sciences, Washing- ton, DC; by the American Mathematical Society, Providence, RI, 2002.
On the abstract properties of linear dependence. Hassler Whitney, American Journal of Mathematics. 573Hassler Whitney. On the abstract properties of linear dependence. American Journal of Mathematics, 57(3):509-533, 1935.
| []
|
[
"Quartic diophantine equation X",
"Quartic diophantine equation X"
]
| [
"S Muthuvel ",
"R Venkatraman ",
"\nDepartment of Mathematics\nCollege of Engineering and Technology\nSRM Institute of Science and Technology\nVadapalani Campus\n\n",
"\nJawaharlal Nehru Salai\nChennai-600026Vadapalani, TamilnaduIndia\n"
]
| [
"Department of Mathematics\nCollege of Engineering and Technology\nSRM Institute of Science and Technology\nVadapalani Campus\n",
"Jawaharlal Nehru Salai\nChennai-600026Vadapalani, TamilnaduIndia"
]
| []
| In this paper, we deal with the quartic diophantine equation X 4 − Y 4 = R 2 − S 2 to present its infinitely many integer solutions. | null | [
"https://export.arxiv.org/pdf/2303.13366v1.pdf"
]
| 257,687,626 | 2303.13366 | 8e1e7fbcac468b76c442a10d5d0b172028914954 |
Quartic diophantine equation X
20 Mar 2023
S Muthuvel
R Venkatraman
Department of Mathematics
College of Engineering and Technology
SRM Institute of Science and Technology
Vadapalani Campus
Jawaharlal Nehru Salai
Chennai-600026Vadapalani, TamilnaduIndia
Quartic diophantine equation X
20 Mar 2023Quartic diophantine equationElementary method 2020 MSC: 11D2511D45
In this paper, we deal with the quartic diophantine equation X 4 − Y 4 = R 2 − S 2 to present its infinitely many integer solutions.
Introduction
In the general form of the Diophantine equation
x^n + y^n = u^n + v^n, n ∈ N,
the case n = 2 has been treated in [6,12,13]. For n = 4, parametric solutions of the above equation are described in [4,8,9]. More general Diophantine equations, with more variables or with integer coefficients that are not all equal to one, were considered by several researchers [5,7,10,11].
The authors of [3] provide an infinite number of positive integral solutions for various powers. The objective of this work is to obtain infinitely many integral solutions of
X^4 − Y^4 = R^2 − S^2    (1)
for each one of the parametric methods. In [2], the equations
a(X'_1^5 + X'_2^5) + Σ_{i=0}^{m} a_i X_i^5 = b(Y'_1^3 + Y'_2^3) + Σ_{i=0}^{n} b_i Y_i^3    (2)
where m, n ∈ N ∪ {0}, a, b ≠ 0, and a_i, b_i are fixed arbitrary rational numbers, are examined. Equation (2) is converted into a cubic or a quartic elliptic curve with positive rank, and its solutions are found using the theory of elliptic curves.
The authors of [1, Main Theorem 2] demonstrate that Σ_{i=1}^{n} p_i x_i^{a_i} = Σ_{j=1}^{m} q_j y_j^{b_j}, with m, n, a_i, b_j ∈ N, p_i, q_j ∈ Z, i = 1, 2, . . . , n, j = 1, 2, . . . , m, has a parametric solution and infinitely many solutions in nonzero integers if there exists an i such that p_i = 1 and (a_i, a_1 a_2 . . . a_{i−1} a_{i+1} . . . a_n b_1 b_2 . . . b_m) = 1, or there exists a j such that q_j = 1 and (b_j, a_1 . . . a_n b_1 . . . b_{j−1} b_{j+1} . . . b_m) = 1. Although linear transformations are also employed in this article, we propose a different strategy and some different conditions for the integer coefficients in order to solve (1).
Solving the Diophantine equation
X 4 − Y 4 = R 2 − S 2
The trivial solution of the equation (1) is (X, Y, R, S) = (m, n, m 2 , n 2 ) for m, n ∈ Z. Four different linear transformations are considered and for each one of them, we give a different class of infinitely many integer solutions of equation
(1).
Method-1
Consider the linear transformations,
X = px + u,  Y = qx − u,  R = x + v,  S = px + v,    (3)
p, x, u, v ∈ Z. Introducing (3) in (1), we get
αx^4 + βx^3 + γx^2 + δx = 0    (4)
where
α = p^4 − q^4,  β = 4p^3u + 4q^3u,  γ = 6p^2u^2 − 6q^2u^2 + p^2 − 1,  δ = 4pu^3 + 4qu^3 − 2v + 2pv.    (5)
For δ = 0 in (5), we obtain
(2p + 2q)u^3 = v(1 − p).
Further, we put u = t, v = t^3 and get p = (1 − 2q)/3. In (5), equating γ to zero,
(q + 1)((2 − 15t^2)q + 3t^2 − 4) = 0.
Simplifying the above expression, we have q = (4 − 3t^2)/(2 − 15t^2), and therefore (4) becomes αx^4 + βx^3 = 0, so that
x = −t(405t^8 + 459t^6 − 1404t^4 + 600t^2 − 56) / (3(27t^6 − 27t^4 + 36t^2 − 10)).
Plugging the values p, q, u, v in (3), we acquire
X = t 1215t 10 − 1782t 8 + 4671t 6 − 774t 4 − 366t 2 + 52 3(2 − 15t 2 ) (27t 6 − 27t 4 + 36t 2 − 10) Y = t 1215t 10 − 1782t 8 + 4671t 6 − 5634t 4 + 1902t 2 − 164 3(2 − 15t 2 ) (27t 6 − 27t 4 + 36t 2 − 10) R = −t 324t 8 − 378t 6 + 1296t 4 − 570t 2 + 56 3 (27t 6 − 27t 4 + 36t 2 − 10) S = t 810t 8 + 1512t 6 + 1674t 4 − 1092t 2 + 112 3(2 − 15t 2 ) (27t 6 − 27t 4 + 36t 2 − 10)(6)
Eliminating the denominators from the above equations,
X = 1215t 11 − 1782t 9 + 4671t 7 − 774t 5 − 366t 3 + 52t Y = 1215t 11 − 1782t 9 + 4671t 7 − 5634t 5 + 1902t 3 − 164t R = 5904900t 19 − 14368590t 17 + 41898546t 15 − 55842858t 13 + 58236894t 11 − 36547200t 9 + 12314916t 7 − 2186784t 5 + 193392t 3 − 6720t S = 984150t 17 + 721710t 15 + 1395306t 13 − 1476954t 11 + 3664440t 9 − 3124332t 7 + 1027296t 5 − 140112t 3 + 6720t
We get an integer solution (X, Y, R, S) of equation (1) for every t ∈ Z. So the presented method generates infinitely many integer solutions of the initial equation (1).
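The elimination behind this method can also be reproduced symbolically. The sympy sketch below re-derives a rational parametrization by following the steps above (with S = px + v, the sign forced by (5)), rather than copying the long displayed polynomials, which are prone to transcription slips; the same recipe covers Methods 2 to 4.

import sympy as sp

x, p, q, t = sp.symbols('x p q t')
u, v = t, t**3                                     # the choice made in the text

X, Y, R, S = p*x + u, q*x - u, x + v, p*x + v      # Method 1 transformations (3)
F = sp.expand(X**4 - Y**4 - R**2 + S**2)

p_val = sp.solve(sp.Eq(F.coeff(x, 1), 0), p)[0]    # delta = 0 gives p = (1 - 2q)/3
gamma2 = sp.factor(F.coeff(x, 2).subs(p, p_val))
q_val = [r for r in sp.solve(sp.Eq(gamma2, 0), q) if r != -1][0]   # q = (4 - 3t^2)/(2 - 15t^2)
p_val = sp.simplify(p_val.subs(q, q_val))

x_val = sp.simplify((-F.coeff(x, 3) / F.coeff(x, 4)).subs({p: p_val, q: q_val}))
vals = {p: p_val, q: q_val, x: x_val}
Xs, Ys, Rs, Ss = [expr.subs(vals) for expr in (X, Y, R, S)]
assert sp.simplify(Xs**4 - Ys**4 - Rs**2 + Ss**2) == 0
print(sp.simplify(Xs), sp.simplify(Ys), sp.simplify(Rs), sp.simplify(Ss))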
Method-2
In this method, we deal with different transformation in (1). Let
X = px + u, Y = qx − u, R = x + v, S = px − v(7)
p, x, u, v ∈ Z. In previous subsection, by introducing these linear transformations in (1), leads us to the equation of the form
Ax^4 + Bx^3 + Cx^2 + Dx = 0    (8)
where
A = p^4 − q^4,  B = 4p^3u + 4q^3u,  C = 6p^2u^2 − 6q^2u^2 + p^2 − 1,  D = 4pu^3 + 4qu^3 − 2v − 2pv.    (9)
For D = 0 in (9), we get
(2p + 2q)u 3 = v(1 + p)
Further, we set u = t, v = t^3 and obtain p = 1 − 2q. In (9), equating C to zero,
(q − 1)((9t^2 + 2)q − 3t^2) = 0.
Using the above equation, we obtain q = 3t^2/(9t^2 + 2), and therefore (8) becomes
Ax 4 + Bx 3 = 0 x = −t 243t 8 + 297t 6 + 216t 4 + 72t 2 + 8 (27t 6 + 27t 4 + 12t 2 + 2)
Applying the values p, q, u, v in (7), we acquire X = −3t 27t 8 + 36t 6 + 27t 4 + 12t 2 + 2 (27t 6 + 27t 4 + 12t 2 + 2) Y = −t 81t 8 + 108t 6 + 81t 4 + 24t 2 + 2 (27t 6 + 27t 4 + 12t 2 + 2) R = −2t 108t 8 + 135t 6 + 102t 4 + 35t 2 + 4 (27t 6 + 27t 4 + 12t 2 + 2) S = −2t 54t 8 + 81t 6 + 60t 4 + 25t 2 + 4 (27t 6 + 27t 4 + 12t 2 + 2)
By cancelling the denominators in (10), X = 2187t 15 + 5103t 13 + 6075t 11 + 4617t 9 + 2322t 7 + 756t 5 + 144t 3 + 12t Y = 2187t 15 + 5103t 13 + 6075t 11 + 4293t 9 + 1890t 7 + 504t 5 + 72t 3 + 4t
R =+ 125496t 7 + 17664t 5 + 1552t 3 + 64t
For any t ∈ Z, we obtain an integer solution (X, Y, R, S) to equation (1). Consequently, the proposed method yields an infinite number of integer solutions to the starting equation (1).
Method-3
In this method, we deal with different transformation in (1). Let
X = v, Y = px + v, R = qx + u, S = x + u(11)
p, x, u, v ∈ Z. In subsection-1, by introducing these linear transformations in (1), leads us to the equation of the form
ax 4 + bx 3 + cx 2 + dx = 0(12)
where a = p 4 ,
c = 6p^2v^2 + q^2 − 1,  b = 4p^3v,  d = 4pv^3 − 2u + 2qu.    (13)
For d = 0 in (13), we obtain
(2p)v 3 = u(1 − q)
Additionally, we put u = t^3, v = t and get p = (1 − q)/2. In (13), equating c to zero,
(q − 1)((3t^2 + 2)q − (3t^2 − 2)) = 0.
Thus, we get q = (3t^2 − 2)/(3t^2 + 2), and therefore (12) becomes
ax 4 + bx 3 = 0 x = −2t 81t 8 + 216t 6 + 216t 4 + 96t 2 + 16 (27t 6 + 54t 4 + 36t 2 + 8)
Taking the values p, q, u, v and applying it in (11), we get
X = t Y = −3t 81t 8 + 216t 6 + 216t 4 + 96t 2 + 16 (3t 2 + 2) (27t 6 + 54t 4 + 36t 2 + 8) R = −t 405t 10 + 756t 8 + 216t 6 − 384t 4 − 304t 2 − 64 (3t 2 + 2) (27t 6 + 54t 4 + 36t 2 + 8) S = −t 135t 8 + 358t 6 + 396t 4 + 184t 2 + 32 (27t 6 + 54t 4 + 36t 2 + 8)(14)
Neglecting the denominators from the above equation, X = 81t 10 + 216t 8 + 216t 6 + 96t 4 + 16t 2 Y = 243t 10 + 648t 8 + 648t 6 + 288t 4 + 48t 2 R = 32805t 21 + 148716t 19 + 268272t 17 + 217728t 15 + 18144t 13 − 120960t 11 − 112896t 9 − 49152t 7 − 11008t 5 − 1024t 3 S = 32805t 21 + 196344t 19 + 532008t 17 + 849312t 15 + 874656t 13 + 600000t 11 + 273536t 9 + 79872t 7 + 13568t 5 + 1024t 3
For each t ∈ Z, equation (1) has an integer solution (X, Y, R, S). The resulting method produces an infinite number of integer solutions to the initial equation (1).
Method-4
In this method, we deal with different transformation in (1). Let
X = −v, Y = px − v, R = qx − u, S = x + u(15)
p, x, u, v ∈ Z. Proceeding as in Method 1, introducing these linear transformations in (1) leads us to an equation of the form
Mx^4 + Nx^3 + Px^2 + Qx = 0    (16)
where
M = p^4,  N = −4p^3v,  P = 6p^2v^2 + q^2 − 1,  Q = −4pv^3 − 2qu − 2u.    (17)
For Q = 0 in (17), we put u = t^3, v = t and get q = −1 − 2p. Equating P to zero then gives p = −2/(3t^2 + 2), so that (16) becomes Mx^4 + Nx^3 = 0 and x = 4v/p = −6t^3 − 4t. Substituting the values p, q, u, v in (15), we obtain
X = −t,  Y = 3t,  R = 5t^3 − 4t,  S = −5t^3 − 4t.
We get a integer solution (X, Y, R, S) of equation (1) for every t ∈ Z. So, the presented method generates infinitely many integer solutions of the initial equation (1).
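Indeed, X^4 − Y^4 = t^4 − 81t^4 = −80t^4 and R^2 − S^2 = (R − S)(R + S) = (10t^3)(−8t) = −80t^4. A one-line symbolic confirmation (an illustrative check, not part of the original derivation):

import sympy as sp

t = sp.symbols('t')
X, Y, R, S = -t, 3*t, 5*t**3 - 4*t, -5*t**3 - 4*t
assert sp.expand(X**4 - Y**4 - R**2 + S**2) == 0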
A note on the high power diophantine equations. M Baghlaghdam, F Izadi, Proc. Math. Sci. 12914M. Baghlaghdam, F. Izadi, A note on the high power diophantine equations, Proc. Math. Sci., 129(14), (2019).
On the diophantine equation in the form that a sum of cubes equals a sum of quantics. M Baghlaghdam, F Izadi, Math. J. Okaama Univ. 61M. Baghlaghdam, F. Izadi, On the diophantine equation in the form that a sum of cubes equals a sum of quantics, Math. J. Okaama Univ., 61, (2019) 75-84.
On some Diophantine equations. S Babić, K Nabardi, Miskolc Mathematical Notes. 22S. Bujačić Babić, K. Nabardi, On some Diophantine equations, Miskolc Mathematical Notes, 22(1), (2021), 65-75.
The diophantine equation A 4 + B 4 = C 4 + D 4. A Choudhry, Indian J. Pure Appl. Math. 221A. Choudhry, The diophantine equation A 4 + B 4 = C 4 + D 4 , Indian J. Pure Appl. Math., 22(1), (1991) 9-11.
On the Diophantine equation A 4 + hB 4 = C 4 + hD 4. A Choudhry, Indian J. Pure Appl. Math. 2611A. Choudhry, On the Diophantine equation A 4 + hB 4 = C 4 + hD 4 , Indian J. Pure Appl. Math., 26(11), (1995) 1057-1061.
On generating solutions of the Diophantine equation x 2 + y 2 = u 2 + v 2. H B Davies, Int. J. Math. Educ. Sci. Technol. 151H.B. Davies, On generating solutions of the Diophantine equation x 2 + y 2 = u 2 + v 2 , Int. J. Math. Educ. Sci. Technol., 15(1), (1984) 43-46.
On A 4 + B 4 + C 4 = D 4. N Elkies, Math. Comput. 51184N. Elkies, On A 4 + B 4 + C 4 = D 4 , Math. Comput., 51(184), (1988) 825-835.
. L Euler, Novi Comm, Acad. Petrop. vL. Euler, Novi Comm. Acad. Petrop. v(17), (1772).
An Introduction to the Theory of Numbers. G H Hardy, E M Wright, Oxford University PressLondonG. H. Hardy,E. M. Wright, An Introduction to the Theory of Numbers, Oxford University Press, London (1960).
F Izadi, K Nabardi, Diophantine equation X 4 + Y 4 = 2. 66F. Izadi, K. Nabardi, Diophantine equation X 4 + Y 4 = 2(U 4 + V 4 ), Math. Slovaca., 66(3), (2016) 557-560.
. A S Janfada, N Nabardi, Math. Slovaca. 696On Diophantine equation x 4 + y 4 = n(u 4 + v 4A. S. Janfada, N. Nabardi, On Diophantine equation x 4 + y 4 = n(u 4 + v 4 ), Math. Slovaca., 69(6), (2019) 1245-1248.
The Elements of Algebra. J Kersey, LondonJ. Kersey, The Elements of Algebra, London (1674).
. P Pasternak, Zeitschr, Math. Naturw. Unterrieht. 37P. Pasternak, Zeitschr, Math. Naturw. Unterrieht., 37, (1906) 33-35.
| []
|
[
"Prepared for submission to JHEP de Sitter State in Heterotic String Theory",
"Prepared for submission to JHEP de Sitter State in Heterotic String Theory"
]
| [
"Stephon Alexander stephon−[email protected] \nBrown Theoretical Physics Center\nDepartment of Physics\nBrown University\n02912ProvidenceRIUSA\n",
"Keshav Dasgupta [email protected]−[email protected] \nDepartment of Physics\nMcGill University\nH3A 2T8MontréalQuébecCanada\n",
"Archana Maji \nDepartment of Physics\nIndian Institute of Technology Bombay\n400076MumbaiIndia\n",
"P Ramadevi [email protected] \nDepartment of Physics\nIndian Institute of Technology Bombay\n400076MumbaiIndia\n",
"Radu Tatar [email protected] \nDepartment of Mathematical Sciences\nUniversity of Liverpool\nL69 7ZLLiverpoolUnited Kingdom\n"
]
| [
"Brown Theoretical Physics Center\nDepartment of Physics\nBrown University\n02912ProvidenceRIUSA",
"Department of Physics\nMcGill University\nH3A 2T8MontréalQuébecCanada",
"Department of Physics\nIndian Institute of Technology Bombay\n400076MumbaiIndia",
"Department of Physics\nIndian Institute of Technology Bombay\n400076MumbaiIndia",
"Department of Mathematical Sciences\nUniversity of Liverpool\nL69 7ZLLiverpoolUnited Kingdom"
]
| []
| Recent no-go theorems have ruled out four-dimensional classical de Sitter vacua in heterotic string theory. On the other hand, the absence of a well-defined Wilsonian effective action and other related phenomena also appear to rule out such time-dependent vacua with de Sitter isometries, even in the presence of quantum corrections. In this note, we argue that a four-dimensional de Sitter space can still exist in SO(32) heterotic string theory as a Glauber-Sudarshan state, i.e. as a coherent state, over a supersymmetric Minkowski background, albeit within a finite temporal domain. Borel resummation and resurgence play a crucial role in constructing such a state in the Hilbert space of heterotic theory governed entirely by the IR degrees of freedom. | null | [
"https://export.arxiv.org/pdf/2303.12843v1.pdf"
]
| 257,687,652 | 2303.12843 | 86ab9953e24c7362de736f3d3b41bbffee22f11e |
Prepared for submission to JHEP de Sitter State in Heterotic String Theory
22 Mar 2023
Stephon Alexander stephon−[email protected]
Brown Theoretical Physics Center
Department of Physics
Brown University
02912ProvidenceRIUSA
Keshav Dasgupta [email protected]−[email protected]
Department of Physics
McGill University
H3A 2T8MontréalQuébecCanada
Archana Maji
Department of Physics
Indian Institute of Technology Bombay
400076MumbaiIndia
P Ramadevi [email protected]
Department of Physics
Indian Institute of Technology Bombay
400076MumbaiIndia
Radu Tatar [email protected]
Department of Mathematical Sciences
University of Liverpool
L69 7ZLLiverpoolUnited Kingdom
Prepared for submission to JHEP de Sitter State in Heterotic String Theory
22 Mar 2023
Recent no-go theorems have ruled out four-dimensional classical de Sitter vacua in heterotic string theory. On the other hand, the absence of a well-defined Wilsonian effective action and other related phenomena also appear to rule out such time-dependent vacua with de Sitter isometries, even in the presence of quantum corrections. In this note, we argue that a four-dimensional de Sitter space can still exist in SO(32) heterotic string theory as a Glauber-Sudarshan state, i.e. as a coherent state, over a supersymmetric Minkowski background, albeit within a finite temporal domain. Borel resummation and resurgence play a crucial role in constructing such a state in the Hilbert space of heterotic theory governed entirely by the IR degrees of freedom.
Introduction and summary
Our modern understanding of quantum field theories is based on two recurring themes, one, on the existence of a Wilsonian effective action and two, on the asymptotic nature of the perturbation series. The latter, which was actually known for some time now [1], was surprisingly only appreciated more recently from some remarkable works [2] which showed clearly how non-perturbative effects manifest themselves naturally in correlation functions.
Extending both these themes to a cosmological set-up, wherein temporal dependences appear automatically, is much more non-trivial. Even more challenging is the scenario where string theory is involved. In string theory, where the asymptotic nature of string perturbation theory is well documented, the existence of a Wilsonian effective action over a temporally varying cosmological background is not guaranteed. In fact there is strong evidence to suggest that a Wilsonian effective action may not exist because of the temporal dependences of the fluctuating frequencies, as well as of the massive stringy and the KK modes. For such a background, although we do expect some (as yet unknown) stringy description, it is futile to search for a supergravity description where none exists. Equally futile then is the search for a vacuum solution for such a cosmological background. These and other related arguments form the core of the so-called trans-Planckian problems in string cosmology [3].
The situation, unfortunate as it may seem, is not without hope. Solutions do exist, but not in the way envisioned earlier. Demanding the existence of a Wilsonian effective action then instructs us to realize the cosmological background − which is a de Sitter space in this case − as an excited state over a supersymmetric Minkowski background in string theory. We expect the excited state to break supersymmetry spontaneously, but the question is what kind of excited state we are looking for. Clearly, since the universe we live in is very close to a classical one, the only excited state that has any chance of reproducing the classical behavior is a coherent state, which amounts to shifting the free vacua in field theory. Unfortunately, due to the non-existence of free vacua in string theory, such a state cannot be easily realized, and the closest we can come to realizing a coherent state would be by shifting the interacting vacua. This actually turned out to be a reasonably viable option, as amply demonstrated for the type IIB case in [4,5], and we called such a state the Glauber-Sudarshan state to distinguish it from the coherent state 1 .
Our aim in this paper is to realize a de Sitter state in heterotic string theory. Due to various technical reasons, SO(32) heterotic theory appears to provide a more controlled laboratory than E 8 × E 8 theory to implement the computational technology. This computational technology involves performing a full-fledged path-integral along the lines of [6,7] over a Minkowski saddle which, expectedly, leads us to an asymptotic series of the Gevrey kind [8] thus requiring Borel resummation [9]. The final answer we get matches somewhat with the type IIB case from [4][5][6], but the details are quite different. These differences are important and in section 2.3 we will spell them out.
The note is organized in the following way. In section 2.1 we point out the reason for uplifting the type IIB or the dual heterotic background to M-theory. In section 2.2 we provide the duality chain that relates a type IIB orientifold background to a heterotic SO(32) background, and in section 2.3 we present our main results of constructing the de Sitter Glauber-Sudarshan state using Borel resummation of a Gevrey series and point out the key differences from the type IIB case. We end with a discussion in section 3.
Quantum corrections, Glauber-Sudarshan state and M-theory
In [4,5] we showed how a four-dimensional type IIB background with de Sitter isometries can be realized as a Glauber-Sudarshan state. As we also discussed therein, this background cannot appear as a vacuum configuration in IIB string theory due to numerous issues. The question that we want to ask here is whether a generic background of the form

$$ds^2 = \frac{a^2(t)}{H^2(y)}\Big(-dt^2 + g_{ij}\,dx^i dx^j + g_{33}\,(dx^3)^2\Big) + H^2(y)\Big(F_2(t)\,g_{mn}\,dy^m dy^n + F_1(t)\,g_{\alpha\beta}\,dy^\alpha dy^\beta\Big) \qquad (2.1)$$

can also be realized as a Glauber-Sudarshan state. Here F_i(t) capture the dominant temporal scalings, and in what sense they do so will be elaborated when we lift this configuration to M-theory. Note that a^2(t), with t being the dimensionless conformal time (measured with respect to M_p = 1), is kept arbitrary, the only condition being that it becomes large at late time. This means the background (2.1) naturally expands at late time. For example, when a^2(t) = 1/(Λt^2), with Λ being the cosmological constant, we get an expanding de Sitter space in the flat slicing in type IIB as t → 0 at late time. The other factors, g_ij(x), g_33(x), g_mn(y), g_{αβ}(y) and H^2(y), are the unwarped spatial metric components and the warp-factor respectively. The coordinates are y ≡ (y^m, y^α) ∈ M_4 × M_2 and x = (t, x) ∈ R^{2,1}, so that nothing depends on the third spatial direction parametrized by x^3 here. We will soon make a further restriction by converting g_{αβ} = δ_{αβ}, so that M_2 = T^2/Z_2, where Z_2 ≡ Ω(−1)^{F_L} I_{T^2} will be an orientifolding operation (I_{T^2} is the orbifold action; details are in [14]). Such a choice will give us a way to reach the heterotic background by making a series of duality transformations. In that case y ≡ y^m ∈ M_4. For the time being, however, we will continue with the generic picture.
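As a quick check of the flat-slicing statement above (the proper-time coordinate τ below is introduced purely for illustration and uses nothing beyond the line just quoted), the choice a^2(t) = 1/(Λt^2) indeed gives a four-dimensional de Sitter expansion:

$$ds_4^2 = \frac{1}{\Lambda t^2}\left(-dt^2 + d\mathbf{x}^2\right), \qquad t = -\frac{1}{\sqrt{\Lambda}}\,e^{-\sqrt{\Lambda}\,\tau} \;\Longrightarrow\; ds_4^2 = -d\tau^2 + e^{2\sqrt{\Lambda}\,\tau}\,d\mathbf{x}^2\,,$$

so the scale factor grows as e^{√Λ τ}, i.e. the space expands with Hubble constant √Λ as the conformal time t → 0^-.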
The reason for this genericity is simple. As alluded to above, for various choices of a^2(t), F_i(t) and the internal sub-manifolds, we can study the possibility of realizing a de Sitter state in various string theories (including also in M-theory). Such realizations will involve duality transformations: for example, appropriate T-dualities, with and without orientifolding operations, can give rise to de Sitter states in type I and type IIA theories respectively. With an additional S-duality, as mentioned earlier, we could study a de Sitter state in heterotic SO(32) theory (appropriately broken to a suitable subgroup). We can even dualize to M-theory (see [5]) and from there dualize further to the heterotic E_8 × E_8 theory. The question that we want to investigate here is whether such possibilities can be explicitly realized.
Expectedly, there are also a few other changes from the construction in [4,5]. We no longer impose any constraint on F i (t). This means the four-dimensional Newton constant can become time-dependent in the IIB side. What we do want, however, is that the Newton constant remains time-independent in the dual side (i.e. the dual side where we want to realize the de Sitter state). In a similar vein, the functional form for a 2 (t) will be determined by demanding a de Sitter space in the dual side. The precise conditions on a 2 (t) and F i (t) will be elucidated once we dualize to the corresponding theory.
The dualities to the various string and M-theory sides are more subtle now because the type IIB background (2.1) cannot be realized classically [11]. Quantum mechanically we expect such a background to exist only in the presence of all possible perturbative, non-perturbative and non-local, including topological corrections. Additionally − on one hand − temporal dependences of the underlying degrees of freedom are absolutely essential for an Effective Field Theory (EFT) to exist. (The existence of EFT is in turn related to the existence of four-dimensional Null Energy Condition (NEC) [5].) On the other hand, existence of the temporal degrees of freedom, for example fluxes etc., are tightly constrained by the flux quantization and anomaly cancellation conditions. Thus the system is highly intertwined, and unless we demonstrate that a background like (2.1) can exist (at least as a Glauber-Sudarshan state), the duality chasing will be a meaningless exercise.
The last comment on the existence of the background (2.1) as a Glauber-Sudarshan state deserves some explanation. As we saw in [4,5], when a 2 (t) specifies a given de Sitter slicing, Wilsonian effective action can only be defined properly when the background becomes a Glauber-Sudarshan state 2 . Other issues like the existence of a Trans-Planckian Cosmic Censorship (TCC), moduli stabilization, Faddeev-Popov ghosts, Schwinger-Dyson equations etc., appear much more naturally in this framework. More so, the existence of the Glauber-Sudarshan state tells us how a de Sitter state may exist in the type IIB string landscape (and not in the so-called swampland). The question that we want to ask here is whether such a de Sitter Glauber-Sudarshan state can be found in the dual landscape. The answer, as we shall see, turns out to be more complex in a sense that shall be elaborated soon 3 . But first: does the background (2.1) exist in the IIB landscape as a Glauber-Sudarshan state? This is what we turn to next.
Consistency of M-theory uplift and Glauber-Sudarshan state
As in [4,5], we will lift the background (2.1) to M-theory by T-dualizing along x 3 and then uplifting the configuration to eleven-dimensions. There are various reasons why such an uplift becomes necessary.
• The type IIB background (2.1) is supported at a constant coupling point in F-theory [12]. This means axio-dilaton vanishes and the IIB coupling g b = 1. This is a strong coupling point where S-duality doesn't help. More so, the vanishing axio-dilaton for example is necessary to get a gauge group of D 4 4 = [SO (8)] 4 in the dual heterotic side. The E 8 heterotic theory appears in a more non-trivial way as shown in [13].
• The internal space with topology M 4 × M 2 is not only a non-Kähler manifold but is also non-complex. Additionally, various parts of the internal space evolve differently with time, as shown by the temporal factors F i (t). The space-time has positive cosmological constant, and is therefore highly non-supersymmetric 4 . Putting everything together we see that all conventional techniques that we have learnt so far would fail to quantify the dynamics of the system.
• The unit coupling on the IIB side means that no controlled computation can be performed there. The only leverage we can get is by dualizing to the IIA side, where the IIA coupling becomes g_s/(H H_o) = 1/a(t), with H(y) the warp-factor and H_o(x) related to g_33(x). For an expanding cosmology, the system is then naturally weakly coupled at late times.
• Uplifting this to M-theory converts most of the IIB fluxes to four-form G-fluxes. Moreover, the IIB seven-branes become geometric spaces in M-theory. The D3-branes, which are instantons on the seven-branes, naturally also become geometric. The temporal dependences of the G-flux components will make the D3-branes dynamical.
• In the small-instanton limit, these D3-branes dualize to either D2 or D4-branes in IIA, that are uplifted to M2 and M5-branes respectively. Since both D2 and D4-branes dissolve as instantons or as first Chern classes respectively on the IIA D6-branes, they naturally become dynamical in M-theory 5 . These dynamical M5-branes are responsible for the flux quantization procedure as shown in [5,10].
4 If we can realize the background as a Glauber-Sudarshan state over a supersymmetric Minkowski space (with a non-Kähler non-complex internal manifold), much like in [4,5,10], then the supersymmetry is broken spontaneously. The supersymmetric vacuum appears from the self-dual G-fluxes. Once we take the expectation values of the G-fluxes over the Glauber-Sudarshan state, they no longer remain self-dual and therefore break supersymmetry spontaneously. See [4,5] for details.
5 Recall that the gauge fluxes on the D6-branes appear from localized G-fluxes of the form G_MNab and G_0Mab that are generically time-dependent [4,5,10]. This would make both the world-volume gauge fluxes as well as the branes dynamical.
• The IIB D-string dualize to either D0 or D2-branes in IIA. The D0-branes uplift to the massive gravity multiplet (which makes sense because the magnetic dual of the D0-branes, namely the D6-branes, dualize to Taub-NUT spaces (or KK-monopoles) in M-theory). The type IIB fundamental string dualize to a wrapped M2-brane when uplifted.
• The Glauber-Sudarshan states are most succinctly presented from M-theory perspective as they could easily reproduce any metric and flux configurations. Since most of the IIB branes dualize to either geometry or flux configurations when uplifted to M-theory, the Glauber-Sudarshan states could in principle reproduce these configurations too. From the IIB side we could probably view them from a string field theory set-up, where the shifting of the interacting IIB vacuum could be naturally realized 6 .
• Non-existence of a well-defined action in the IIB side is also another reason for the uplift to M-theory. As we saw in [4,5,10] it is absolutely essential to spell out the precise set of perturbative, non-perturbative, non-local and topological quantum corrections. It is only in M-theory that such a procedure may be explicitly performed. Existence of a Wilsonian effective action − as guaranteed from the existence of a Glauber-Sudarshan state 7 − means that all the short-distance degrees of freedom can be integrated out to express the quantum corrections in the form given in [4,5,10].
• Lifting the metric configuration (2.1) to M-theory, one may easily see that the toroidal direction, parametrized by (x^3, x^{11}), scales as (g_s/(H H_o))^{4/3}, which becomes arbitrarily small in the limit g_s → 0. Since we are always in the limit g_s < 1 8 , the M-theory degrees of freedom do capture the type IIB behavior exactly. This also means that, for example, if we are allowed to keep the M_2 cycle small compared to α′, we can, in the orientifold limit, capture the SO(32) heterotic dynamics using the M-theory degrees of freedom. Thus the M-theory uplift has a dual advantage: it provides both the IIB and the heterotic dynamics under appropriate conditions. In fact, as it turns out, the M-theory configuration is a master theory from which the de Sitter dynamics can be determined for all the string theories (including a de Sitter state directly in M-theory). The proof of this statement is beyond the scope of this paper and will be demonstrated elsewhere.
Interestingly, the form of the M-theory metric always remains the same for any type IIB cosmology expressed using conformal coordinates as in (2.1). The only change is the value of the dual IIA string coupling, g_s/(H H_o) = 1/a(t), which is sensitive to the functional form of a(t). Clearly, and as mentioned earlier, for expanding cosmologies the IIA coupling can be made small. In fact, demanding g_s/(H H_o) < 1 provides the temporal domain in which controlled quantum computations may be performed in M-theory. For the usual de Sitter case, irrespective of the choice of de Sitter slicing, this temporal domain remains perfectly consistent with the so-called Trans-Planckian Cosmic Censorship (TCC) [3], as shown in [4,5,10]. We now proceed to find the functional form of a(t) that provides a de Sitter metric on the dual heterotic side. In what follows we elaborate on this solution.
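As a small worked illustration of the last point (using only the relations already quoted above, and specializing to the simplest flat-slicing case a^2(t) = 1/(Λt^2)), the weak-coupling requirement translates directly into a finite conformal-time window:

$$\frac{g_s}{H H_o} = \frac{1}{a(t)} = \sqrt{\Lambda t^2} < 1 \quad\Longleftrightarrow\quad -\frac{1}{\sqrt{\Lambda}} < t < 0\,,$$

which is the same TCC-compatible temporal domain that reappears in (2.15) below.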
M-theory uplift of a heterotic SO(32) background and dualities
The duality from type IIB to heterotic SO(32) theory, in the presence of orientifolds and fluxes, has been explicitly shown in [14]. We will basically follow similar duality chasing here too, but not before we elucidate the consistency of the IIB background (2.1) from M-theory. In M-theory, the uplifted metric takes the following standard form:
$$ds^2 = \left(\frac{g_s}{H H_o}\right)^{-8/3}\Big(-g_{00}\,dt^2 + \widetilde{g}_{ij}\,dx^i dx^j\Big) + \left(\frac{g_s}{H H_o}\right)^{-2/3}\Big[F_1\!\left(\tfrac{g_s}{H_1}\right)\widetilde{g}_{\alpha\beta}\,dy^\alpha dy^\beta + F_2\!\left(\tfrac{g_s}{H_1}\right)\widetilde{g}_{mn}\,dy^m dy^n\Big] + \left(\frac{g_s}{H H_o}\right)^{4/3}\widetilde{g}_{ab}\,dw^a dw^b\,, \qquad (2.2)$$

where H_1(x, y) ≡ H(y)H_o(x), which means F_i(g_s/H_1) depends on the temporal factor a(t), and we shall discuss their functional form soon. The other metric components may be related to the metric components in (2.1) in the following way:

$$\widetilde{g}_{ab}(x, y) \equiv \big[H(y)H_o(x)\big]^{4/3}\, g_{ab}(x, y)\,, \qquad \widetilde{g}_{\mu\nu}(x, y) \equiv \frac{g_{\mu\nu}(x)}{\big[H^4(y)H_o(x)\big]^{2/3}}\,, \qquad \widetilde{g}_{MN}(x, y) \equiv \left[\frac{H^2(y)}{H_o(x)}\right]^{2/3} g_{MN}(y) \qquad (2.3)$$
where we have taken M, N ∈ M_4 × M_2 and (w^a, w^b) ≡ (x^3, x^{11}). Note that we have taken the un-warped metric components along the toroidal direction to depend on both (x^i, y^M). In fact, for the computations of the curvature scalings, as shown in [5,10], one may take both the un-warped and the warped metric components to depend on all the coordinates (except of course the toroidal direction). Once we go to the heterotic side, we will see that the dependence on the coordinates of M_2 has to be removed. Let us now come to the functional form for the temporal factors F_i(g_s/H_1). In our earlier works [4,5,10], these factors did not change the dominant scalings of the metric components, as they were constrained by F_i(g_s/H_1) → 1, g_s → 0 and F_1 F_2^2 = 1 to preserve the Newton's constant and to avoid late-time singularities. Both of these conditions are not essential now if we want to dualize to any of the other string and M-theories, because only in the dual landscape do we want a time-independent Newton's constant with no late-time singularities. This means the dominant scalings of the internal metric could in principle change, implying changes to the curvature scalings from what we had in [4,5,10]. We can then propose the following scalings:

$$F_1 \equiv \sum_{k=0}^{\infty} A_k \left(\frac{g_s}{H H_o}\right)^{\beta_o + 2k/3}, \qquad F_2 \equiv \sum_{k=0}^{\infty} B_k \left(\frac{g_s}{H H_o}\right)^{\alpha_o + 2k/3}, \qquad \frac{\partial}{\partial t}\left(\frac{g_s}{H H_o}\right) \equiv \sum_{k=0}^{\infty} C_k \left(\frac{g_s}{H H_o}\right)^{\gamma_o + 2k/3}, \qquad (2.4)$$

where (A_k, B_k, C_k) are all integers, positive or negative, with (α_o, β_o, γ_o) being the dominant scalings and k ∈ Z_+. Note that, as we demonstrated rigorously in [5], when γ_o < 0 EFT breaks down along with a violation of the four-dimensional NEC. Here, in the generic setting, we will see whether this continues to hold or not. On the other hand, (α_o, β_o) are not a priori required to be positive definite. An interesting question would be to find whether there is a connection between the three dominant scalings. If there is one, then it would lead to an even deeper connection between three disparate facts: existence of EFT from M-theory, preserving the four-dimensional NEC from IIB, and temporal dependence of the internal six-dimensional manifold. Before going into this, let us clarify a couple more things about the temporal dependence of the internal manifold. One, taking (2.4) at face value might imply that all components of the internal metric should scale in a certain way temporally. This is actually not the case as long as we maintain the dominant scalings. For example, we can start by generalizing the internal metric components as:

$$g_{\alpha\beta}(x, y; g_s) = \sum_{k=0}^{\infty} B^{(\alpha,\beta)}_k \left(\frac{g_s}{H H_o}\right)^{-\frac{2}{3} + \beta_o + \frac{2k}{3}} g^{(k)}_{\alpha\beta}(x, y)\,, \qquad g_{mn}(x, y; g_s) = \sum_{k=0}^{\infty} A^{(m,n)}_k \left(\frac{g_s}{H H_o}\right)^{-\frac{2}{3} + \alpha_o + \frac{2k}{3}} g^{(k)}_{mn}(x, y)\,, \qquad (2.5)$$

where the repeated indices are not summed over, and the g^{(k)}_{MN}(x, y) are the various possible metric components. It is easy to see that, unless we impose g^{(k)}_{MN}(x, y) ≡ \widetilde{g}_{MN}(x, y) from (2.3), it will in general be hard to keep the Newton's constant time-independent even on the type IIB side (although we are not required to do so now). Thus, as long as the dominant scalings are divided into the two sets (α_o, β_o), the system can be generalized without violating the EFT constraints. Note that this implies that further splittings into three or more dominant scalings are not necessary, as higher-order splittings can always be brought back to the two-splitting case by choosing appropriate values for A^{(m,n)}_k and B^{(α,β)}_k. Two, the signs of (α_o, β_o) are important. If β_o > +2/3, it would mean that the size of the two-cycle M_2 shrinks at late time when g_s → 0. Beyond a certain short-distance scale, T-duality would become necessary, and if M_2 ≡ T^2/Z_2 − with Z_2 being the orientifold operation [14] − we see that the late-time physics may be succinctly captured by either the type I or the heterotic theory. On the other hand, if α_o > +2/3, the late-time physics may still be given by the IIB theory, assuming no orientifolding operation, although problems like late-time singularities could arise in the absence of local isometries and in the presence of orientifolds. Note that we did not encounter any of these subtleties in [4,5] because both the space-time and the internal six-manifold had dominant scalings of −8/3 and −2/3, and therefore the late-time physics was well within the supergravity limits 9 . (The subtleties with the M-theory torus were explained earlier.)
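To see where the threshold β_o > 2/3 quoted above comes from, it suffices to insert the leading term of (2.4) into the fibre part of (2.2) (this is just the dominant-scaling bookkeeping, nothing new is assumed):

$$\left(\frac{g_s}{H H_o}\right)^{-2/3} F_1\,\widetilde{g}_{\alpha\beta} \;\sim\; A_0\left(\frac{g_s}{H H_o}\right)^{\beta_o - \frac{2}{3}}\widetilde{g}_{\alpha\beta}\,,$$

which goes to zero as g_s → 0 precisely when β_o > 2/3, i.e. the M_2 cycle shrinks at late time and a T-dual (type I or heterotic) description becomes appropriate.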
With this we are almost ready to make the duality transformations to the heterotic side. We will assume that g αβ = δ αβ specifies a square torus capturing the local metric of M 2 = T 2 Z 2 , and y ≡ y m ∈ M 4 with M 4 being a generic non-Kähler manifold, as mentioned earlier. There are also NS and RR two-forms with components B αm (y, g s ) and C αm (y, g s ) respectively with one of their legs along the toroidal directions. The heterotic metric then takes the following form 10 :
$$ds^2_{\rm het} = F_1(t)\,a^2(t)\Big(-dt^2 + g_{ij}\,dx^i dx^j + g_{33}\,(dx^3)^2\Big) + H^4(y)F_1(t)F_2(t)\,g_{mn}\,dy^m dy^n + \delta_{\alpha\beta}\left(dy^\alpha + B^\alpha_{\;m}\,dy^m\right)\left(dy^\beta + B^\beta_{\;n}\,dy^n\right), \qquad (2.6)$$
where note that the toroidal directions no longer allow a warp-factor, but do allow a nontrivial fibration coming from the NS two-forms. Additionally, since the NS two-forms are time-dependent, the fibration naturally becomes time-dependent too. On the other hand, the temporal dependence of the RR two-forms provides the necessary torsion to support the non-Kähler, time-dependent metric (2.6) in the heterotic side. We now want the four-dimensional part of the metric (2.6) to be a de Sitter metric in some specific slicing. For simplicity we will choose a flat slicing. (We avoid static patch or any other slicings related to the static patch because of the issues mentioned in [5].) We also want the four-dimensional Newton's constant to be time-independent. Putting these two together implies that 11 :
$$a^2(t) = \frac{1}{\Lambda t^2\,F_1(t)}\,, \qquad F_1(t)\,F_2(t) = 1\,, \qquad (2.7)$$
in the original type IIB metric (2.1), where Λ is the cosmological constant, t is the conformal time in flat slicing and (g_ij, g_33) = (δ_ij, 1). (We can keep everything dimensionless by measuring with respect to M_p ≡ 1 as shown in [6,7].) The choice (2.7) tells us that the original type IIB metric (2.1) is now no longer required to have a time-independent four-dimensional Newton's constant, or to have a metric with four-dimensional de Sitter isometries. The M-theory uplift however takes the same form as in (2.2), but with the following choices for g_s, H_o(x), α_o and β_o:

$$\frac{g_s}{H H_o} = t\sqrt{\Lambda F_1(t)}\,, \qquad H_o(x) = 1\,, \qquad \beta_o = -\alpha_o\,, \qquad \beta_o \ge 0\,, \qquad (2.8)$$

showing that there is a dynamical possibility for the type IIB metric (2.1) to go to the heterotic side at late time. This also fixes the form of F_2(t). One might worry that this could in principle over-constrain the system, but a similar computation on the type IIB side done in [10] shows that this may not be the case. Of course we will have to fix the functional form of F_1(t) using the Schwinger-Dyson type equations (in the presence of all non-perturbative and non-local corrections), but there is an additional condition on F_1(t) borne out of the non-violating-NEC criterion from [5], namely:

$$\frac{dF_1}{dt} = \frac{2\sqrt{F_1}}{t\sqrt{\Lambda}}\left(\sum_{k=0}^{\infty} C_k \left(t\sqrt{\Lambda F_1}\right)^{\gamma_o + 2k/3} - \sqrt{\Lambda F_1}\right), \qquad (2.9)$$
where γ o is defined in (2.4). Once we determine the form of F 1 (t), the above equation will then fix the values of the constant coefficients C k and γ o . If γ o < 0, then unfortunately such a system cannot be embedded in a UV complete theory, i.e. the solution (as a Glauber-Sudarshan state) cannot exist in heterotic string theory. There are however reasons to believe that γ o could simply be zero if not a positive integer, but never negative. This is because the solution with β o = α o = 0 has been studied earlier in [10], where no apparent inconsistencies were detected. In the following section, we will demonstrate, under some mild approximations, that γ o can indeed be made positive definite here as long as β o ≥ 0.
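Note that the two conditions in (2.7) are precisely what make F_1 a^2 = 1/(Λt^2), so that the four-dimensional part of (2.6) is the flat-slicing de Sitter metric, while F_1 F_2 = 1 keeps the internal volume, and hence the four-dimensional Newton's constant, time-independent. For completeness, here is the short origin of (2.9), using only (2.4) and (2.8) as inputs: differentiating g_s/(H H_o) = t√(ΛF_1(t)) and equating the result to the last expansion in (2.4) gives

$$\frac{d}{dt}\left(t\sqrt{\Lambda F_1}\right) = \sqrt{\Lambda F_1} + \frac{t\sqrt{\Lambda}}{2\sqrt{F_1}}\,\frac{dF_1}{dt} = \sum_{k=0}^{\infty} C_k \left(t\sqrt{\Lambda F_1}\right)^{\gamma_o + 2k/3}\,,$$

and solving this for dF_1/dt reproduces (2.9).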
The resurgence of de Sitter space as a Glauber-Sudarshan state
Our aforementioned discussion should convince the readers that an M-theory uplift of the heterotic background (2.6) − via type IIB − is possible and is given by (2.2) with the choice of parameters from (2.8). Unfortunately however, such a background cannot be realised as a vacuum solution either classically or quantum mechanically. The former is ruled out in [15] from the constraints coming from the Kac-Moody algebras with SO(4, 1) global symmetries 12 . The latter is ruled out in [4][5][6][7] from the absence of a well-defined Wilsonian effective action for an accelerating background like (2.1) or (2.6) 13 . This means the only way to realize the background (2.2) would be as a Glauber-Sudarshan state over a supersymmetric Minkowski background 14 . How should we go about constructing such a state now?
Fortunately, since the form of the metric (2.2) in M-theory remains unchanged from what we had in [4,5], the procedure to explicitly construct such a state, will follow [6], namely, doing a path-integral over the Minkowski saddle. In other words:
$$\langle g_{\mu\nu}\rangle_\sigma \equiv \frac{\displaystyle\int \left[\mathcal{D}g_{MN}\right]\left[\mathcal{D}C_{MNP}\right]\left[\mathcal{D}\Psi_{M}\right]\left[\mathcal{D}\Psi_{N}\right] e^{iS_{\rm tot}}\, \mathbb{D}^{\dagger}(\alpha,\beta,\gamma)\, \hat{g}_{\mu\nu}(x,y,z)\, \mathbb{D}(\alpha,\beta,\gamma)}{\displaystyle\int \left[\mathcal{D}g_{MN}\right]\left[\mathcal{D}C_{MNP}\right]\left[\mathcal{D}\Psi_{M}\right]\left[\mathcal{D}\Psi_{N}\right] e^{iS_{\rm tot}}\, \mathbb{D}^{\dagger}(\alpha,\beta,\gamma)\, \mathbb{D}(\alpha,\beta,\gamma)}\,, \qquad (2.10)$$

where (M, N, P) ∈ R^{2,1} × M_4 × M_2 × T^2/Z_2 with Z_2 being the orbifold action; x ≡ (x, t) ∈ R^{2,1}, y ∈ M_4 × M_2, z ∈ T^2/Z_2; |σ⟩ = |(α, β, γ)⟩ is the Glauber-Sudarshan state associated with the ({g_MN}, {C_MNP}, {Ψ_M}) degrees of freedom (see also [4,5]); ĝ_μν is the graviton operator related to g_μν; the measure Dg_MN ≡ Dg_μν Dg_AB with (A, B) ∈ M_4 × M_2 × T^2/Z_2; D(σ) is a non-unitary displacement operator, i.e. D†(σ)D(σ) = D(σ)D†(σ) = 1 (see [4] for details); and the total action is

$$S_{\rm tot} \equiv S_{\rm kin} + S_{\rm int} + S_{\rm ghost} + S_{\rm gf}\,, \qquad (2.11)$$

where the perturbative part of S_int comes from an interaction term like eq. (4.81) in [5] and S_gf is the gauge-fixing term. The ghosts and the gauge-fixing terms are necessary to bring the propagators into the standard forms and remove the redundant degrees of freedom. In writing the expectation value, we have suppressed the measure associated with the ghosts (while this is not important for the present work, it will be elaborated in [7]). Unfortunately, dealing with the path-integral structure in (2.10), with 44 metric degrees of freedom, 84 three-form degrees of freedom and 128 Rarita-Schwinger degrees of freedom (plus the ghost terms), is clearly beyond the scope of the present treatment. We will then follow the simplifying procedure adopted in [6], namely, consider three scalar degrees of freedom which are representative samples from the set of 44 gravitons, 84 fluxes and fermionic condensates from the 128 Rarita-Schwinger fermions. The latter is chosen to avoid Grassmannian integrals in (2.10), and the representative sample of gravitons includes the graviton degrees of freedom along the non-compact directions. Even for such a simplifying choice, the analysis of the path-integral is still complicated. In [6], it is shown that the path-integral (2.10) may be analyzed using the so-called nodal diagrams instead of the usual Feynman diagrams because of the shifted vacuum structure. In fact the nodal diagrams show growth of the Gevrey kind [8], implying that Borel resummation [9] of the Gevrey series is now necessary for the path-integral (2.10) to make any sense! Putting everything together, and using the computational procedure outlined in [6], gives us the following:

$$\langle g_{\mu\nu}\rangle_\sigma = \sum_{\{s\}} \frac{1}{g^{1/l}_{(s)}} \int_0^{\infty} dS\, \exp\!\left(-\frac{S}{g^{1/l}_{(s)}}\right) \left[\frac{1}{1 - A_{(s)} S^l}\right]_{\rm P.V} \int_{\mu k_{\rm IR}} d^{11}k\; \frac{\widetilde{\alpha}_{\mu\nu}(k)}{a(k)}\, {\rm Re}\!\left[\psi_k(X)\, e^{-i(k_0 - \kappa_{\rm IR})t}\right], \qquad (2.12)$$

where we restricted the metric function g_μν = g_μν(x, y) with y ∈ M_4 only in (2.10); P.V is the principal value of the enclosed integral; g_(s) is a set of coupling constants defined by inverse powers of M_p in a coupling-constant space parametrized by the set {s} of interactions; the first integral is the result of Borel resumming the Gevrey-l growth, where l is one less than the total number of fields (i.e. l = 2 here); A_(s) is the result of computing all the nodal diagrams, including the NLO diagrams for all interactions in the set {s}, that are combinatorially suppressed but not volume V suppressed, where the volume V refers to the IR volume appearing from the IR/UV mixing [17]; \widetilde{α}_μν(k) = α_μν(k)/V ≡ α(k)η_μν(k)/V and a(k) = k^2/V if we choose the scale to be M_p, and α(k) is related to the Glauber-Sudarshan state |σ⟩ discussed earlier; and ψ_k(X), with X ≡ (x, y) ∈ (R^2, M_4), is the spatial wave-function over the solitonic background, with κ_IR (> k_IR) being an IR scale. (See [6] for details on the aforementioned computations.) The summing over the set {s} of interactions in the coupling constant space means that we are summing over Borel-resummed series. This double-summing is necessary to make the cosmological constant Λ small [7], where the closed form expression of the dimensionless cosmological constant 15 appears from the first integral over dS in (2.12), namely:

$$\frac{1}{\Lambda^{\kappa}} \equiv \sum_{\{s\}} \frac{1}{g^{1/l}_{(s)}} \int_0^{\infty} dS\, \exp\!\left(-\frac{S}{g^{1/l}_{(s)}}\right) \left[\frac{1}{1 - A_{(s)} S^l}\right]_{\rm P.V}\,, \qquad (2.13)$$

with all the parameters appearing above being defined earlier, and κ will be determined later (see (2.17)). This integral form of the cosmological constant has already been shown to be positive definite in [6], irrespective of the signs of A_(s), and in [7] we argue that it can be made small. All in all, although it may appear that the analysis follows closely the one from [6], there are a few key differences. First is the temporal domain of validity of the Glauber-Sudarshan state, i.e. the temporal domain where the state remains approximately coherent, and second is the functional form of α_μν(k). Can they be precisely determined here?
12 See also the last reference in [11] for a different take on the no-go conditions stemming directly from the energy conditions.
13 One might argue that the situation could be dealt with using open quantum field theories [16], which are suited to tackle scenarios where energy is not conserved. However, in string or M-theories it is not a priori clear how to separate the degrees of freedom to implement the energy loss. Plus other issues pointed out in [6] suggest that this may not be a viable option.
14 Being supersymmetric Minkowski (or more appropriately, warped supersymmetric Minkowski with a compact non-Kähler internal eight-manifold), there are no longer any Kac-Moody constraints from [15], or any energy constraints from [11].
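As a small structural illustration (not part of the original derivation, and with the coefficients written only schematically), one can see which divergent series the Borel-type integrals in (2.12)-(2.13) are resumming. For a single coupling g and Gevrey level l, term-by-term integration of the geometric kernel gives

$$\frac{1}{g^{1/l}} \int_0^{\infty} dS\; e^{-S/g^{1/l}}\, \frac{1}{1 - A S^{l}} \;\longleftrightarrow\; \frac{1}{g^{1/l}} \sum_{n=0}^{\infty} A^{n} \int_0^{\infty} dS\; e^{-S/g^{1/l}}\, S^{nl} = \sum_{n=0}^{\infty} (nl)!\; A^{n}\, g^{n}\,,$$

i.e. a factorially divergent (Gevrey-l) series whose Borel resummation is finite; for A > 0 the pole at S = A^{-1/l} lies on the integration contour, which is why the principal-value prescription appears in (2.12)-(2.13).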
The answer turns out to be yes, provided we know the functional form for F_1(t). This is represented by a series in g_s/(H H_o) in (2.4). Compared to the type IIB case studied in [10] the situation is different, because g_s/(H H_o) itself depends on F_1(t) as shown in (2.8). This means, even before we invoke the consequences of the Schwinger-Dyson equations, F_1(t) has to satisfy:

$$F_1(t) = \left(t\sqrt{\Lambda F_1(t)}\right)^{\beta_o} \sum_{k=0}^{\infty} A_k \left(t\sqrt{\Lambda F_1(t)}\right)^{2k/3}, \qquad (2.14)$$

where A_k are time-independent constants; and in the limit M_p ≡ 1 we take both Λ and t to be dimensionless, as explained earlier. In addition to this, there is also a derivative constraint coming from the NEC non-violating criterion [5], as in (2.9). The coefficients A_k can only be fixed using the Schwinger-Dyson equations (see the equivalent example for the type IIB case in [4,5,10]). This is in general a non-trivial exercise because of the mixing of the various degrees of freedom (including ghosts), but we can try a toy example. A simple case would be when A_0 >> A_k for k ≥ 1. For such a case, defining A_0 ≡ 1, we find:

$$F_1(t) = \left(\Lambda t^2\right)^{\frac{\beta_o}{2 - \beta_o}}, \qquad -\frac{1}{\sqrt{\Lambda}} < t \le 0\,, \qquad (2.15)$$

with 0 ≤ β_o < 2 for the system to remain consistent 16 , and not violate the NEC criterion.
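One can verify (2.15) directly (a short check, writing |t| for the negative conformal time of the flat slicing): with A_0 = 1 and A_{k>0} = 0, the ansatz F_1 = (Λt^2)^{β_o/(2-β_o)} gives

$$\left(|t|\sqrt{\Lambda F_1}\right)^{\beta_o} = \left[(\Lambda t^2)^{1/2}\,(\Lambda t^2)^{\frac{\beta_o}{2(2-\beta_o)}}\right]^{\beta_o} = \left[(\Lambda t^2)^{\frac{1}{2-\beta_o}}\right]^{\beta_o} = (\Lambda t^2)^{\frac{\beta_o}{2-\beta_o}} = F_1(t)\,,$$

so (2.14) is satisfied; the same manipulation gives g_s/(H H_o) = |t|√(ΛF_1) = (Λt^2)^{1/(2-β_o)}, the weakly coupled late-time behaviour quoted in footnote 16 below.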
Recall that as long as β_o > +2/3, the type IIB system can dynamically go to SO(32) heterotic (broken to a suitable subgroup that we will discuss soon). The NEC non-violating criterion allows 0 ≤ β_o < 2, so it is compatible with the aforementioned condition. More so, the size of M_4 grows at late time, keeping the four-dimensional Newton's constant time-independent on the heterotic side.
15 Recall that both the cosmological constant Λ and the conformal time t are made dimensionless here using the scale M_p ≡ 1. One could use a different scale, namely the size of the internal eight-manifold from the supersymmetric Minkowski background, but the final answer remains unaffected by this choice [6].
16 Note that, with the choice of F_1(t) from (2.15), both the type IIA coupling g_s/(H H_o) = (Λt^2)^{1/(2-β_o)} and the heterotic coupling g_het/H^2 = (Λt^2)^{β_o/(2-β_o)} are small at late time, i.e. when t → 0. Thus both systems are naturally weakly coupled at late time as long as 0 ≤ β_o < 2. Beyond this regime of β_o, there are multiple issues with the Glauber-Sudarshan states in both the heterotic and the dual type IIB theories.
Interestingly, the temporal domain of validity of our analysis matches exactly with the TCC bound advocated in [3], at least for this simple toy example. Additionally, plugging (2.15) into (2.9) − to see whether the non-violating NEC criterion from [5] is satisfied or not − gives us the following solution:

$$\gamma_o = \frac{\beta_o}{2}\,, \qquad C_0 = \frac{2\sqrt{\Lambda}}{2 - \beta_o}\,, \qquad C_k = 0\,, \;\; \forall\, k > 0\,. \qquad (2.16)$$

This is clearly consistent as long as β_o ≥ 0. Thus combining the compatibility criteria from both TCC [3] and non-violating NEC [5] gives us a stronger reason to justify the existence of a de Sitter space as a Glauber-Sudarshan state in SO(32) heterotic string theory. Two points still remain. One, the functional form of α_μν(k), which would specify the Glauber-Sudarshan state in the supergravity configuration space, and two, the form of the vector bundle over the internal six-manifold. The latter is a bit subtle because, while the four-dimensional base of the internal six-manifold in (2.6) is time-independent, the fibration structure is time-dependent because of its dependence on time-dependent NS two-form fluxes from the dual type IIB side. Nevertheless, the vector bundle can be studied from localized G-flux components whose expectation values may be extracted from the corresponding Glauber-Sudarshan state. The computation should be similar to the path-integral analysis we did for the metric in (2.10), which in turn means Gevrey growth of the corresponding nodal diagrams and the subsequent Borel resummation to extract a finite answer. For the simpler case studied here, we can restrict the D_4^4 bundle, alluded to earlier, to the non-Kähler four-dimensional base of the internal six-manifold. Unfortunately, a generic study of the vector bundle is beyond the scope of this paper and will be dealt with elsewhere 17 .
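To make the origin of (2.16) explicit (again a short check rather than new input), substitute F_1 = (Λt^2)^{β_o/(2-β_o)} into the right-hand side of (2.9) with only C_0 non-zero and γ_o = β_o/2; using |t|√(ΛF_1) = (Λt^2)^{1/(2-β_o)} one finds

$$\left(|t|\sqrt{\Lambda F_1}\right)^{\gamma_o} = (\Lambda t^2)^{\frac{\beta_o}{2(2-\beta_o)}} = \sqrt{F_1} \;\;\Longrightarrow\;\; \frac{2\sqrt{F_1}}{t\sqrt{\Lambda}}\Big[C_0\sqrt{F_1} - \sqrt{\Lambda}\sqrt{F_1}\Big] = \frac{2F_1}{t}\left(\frac{C_0}{\sqrt{\Lambda}} - 1\right) = \frac{2\beta_o}{(2-\beta_o)\,t}\,F_1 = \frac{dF_1}{dt}\,,$$

which is exactly (2.9) with the values quoted in (2.16); in particular γ_o = β_o/2 ≥ 0 for β_o ≥ 0, consistent with the EFT requirement discussed around (2.4).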
Finally let us figure out the functional form for α µν (k). It is now related to the Fourier transform of the temporal part of the M-theory metric (2.2). Since we are only dealing with real fields in the path-integral, it suffices to take the cosine Fourier transform. Doing this gives us both the functional form for α µν (k) and the value of the parameter κ in the expression for the cosmological constant Λ in (2.13). They are:
$$\alpha_{\mu\nu}(k) = \frac{2}{\pi}\left[\Gamma\!\left(1 - \frac{16}{3(2-\beta_o)}\right)\sin\!\left(\frac{8\pi}{3(2-\beta_o)}\right) k^{\,1 + \frac{16}{3(2-\beta_o)}} + \chi(k)\right]\eta_{\mu\nu}\,, \qquad \kappa = \frac{8}{3(2-\beta_o)}\,, \qquad (2.17)$$

where χ(k) may be determined by restricting α_μν(k) to its temporal part. For β_o = 0 we recover exactly the results of [6], showing that the system is consistent. Thus, together with (2.17), (2.16), and (2.15), we have an SO(32) heterotic de Sitter background (2.6) realized as a Glauber-Sudarshan state with a positive cosmological constant given by (2.13).
17 A generic study has to deal with two aspects of the internal space: one, the temporal dependence of the fibration structure, and two, the non-Kähler (and subsequently, non-complex) nature of the six-manifold. The latter has been addressed in the past, for example in [14], but the former is new. Other issues like anomaly cancellations and flux quantizations follow the route laid out in [10].
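For orientation only (this is the standard table identity behind such cosine transforms, not a re-derivation of (2.17) itself): power-law temporal profiles transform as

$$\int_0^{\infty} dt\; t^{\nu - 1}\cos(kt) = \frac{\Gamma(\nu)\,\cos\!\left(\frac{\pi\nu}{2}\right)}{k^{\nu}}\,, \qquad 0 < \nu < 1 \;\; (\text{extended elsewhere by analytic continuation}),$$

and with ν = 1 - 16/[3(2-β_o)] one has cos(πν/2) = sin(8π/[3(2-β_o)]). This is why Gamma-function and trigonometric prefactors with argument 16/[3(2-β_o)], together with a power of k fixed by the dominant temporal scaling (g_s/(H H_o))^{-8/3} ∝ t^{-16/[3(2-β_o)]}, appear in (2.17).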
Discussions and conclusions
The quest for the existence of a four-dimensional de Sitter space in type II string theories is a non-trivial problem [18], partly because of the existence of various no-go theorems, ruling out a classical de Sitter vacuum, and partly because of the absence of a well-defined Wilsonian action over an accelerating background, ruling out a quantum de Sitter vacuum. (Although arguments can be made in favor of a quantum de Sitter vacuum using open QFTs [16], there are numerous issues with this 18 . See footnote 13.) A similar fate befalls de Sitter vacua in heterotic theories. In this paper we have steered clear of both classical and quantum vacuum solutions, and instead argued for the existence of de Sitter space as a Glauber-Sudarshan state. Such a state is the closest we can come to a classical solution, yet the analysis is fully quantum. In fact our analysis reveals that the quantum computations are more subtle because of the asymptotic nature of the perturbation series. The Gevrey growths of the perturbation series then allow us to use the powerful machinery of resurgence and Borel resummations to argue for the existence of such a state.
Our studies have shown that heterotic SO(32) theory does allow such a state to exist within a finite temporal domain given in (2.15), which is consistent with the trans-Planckian bound [3]. Moreover (2.16) reveals that the solution is consistent with the non-violating NEC condition of [5]. With some mild approximations, one can even achieve a surprisingly precise determination of such a state in (2.17) with a closed-form expression for the positive cosmological constant in (2.13).
A few details still remain to be investigated. For example, we have not been able to analyze the case of the E_8 × E_8 heterotic theory. The duality chain connecting this to M-theory from [13] certainly looks promising, but the advantage we had from the eight-manifold in M-theory for the present analysis cannot be replicated so easily using a seven-manifold for the E_8 case. We also have not said much on moduli stabilization, flux quantizations, anomaly cancellations or vector bundles. All of these should follow the path laid out for the type IIB case in [10], because our heterotic background is dual to an orientifold background in type IIB. Nevertheless, a direct analysis from the heterotic side is needed. All these and other related issues will be discussed elsewhere.
Although we shall use both the terminologies variably throughout, it is the former that is always meant.
As alluded to in the introduction, the key differences between the Glauber-Sudarshan state and the standard coherent state are described in [4,5]. We will not go into those details here, and the reader may pick up all the relevant information from the references.
3 For example, one of the questions whose answer we seek is as follows. Since heterotic and type IIB are related by a set of duality transformations, doesn't that naturally guarantee a de Sitter state in the heterotic side? The answer is no. In fact, as we shall see, duality chasing only works if the seed background in the IIB side exists. Our analysis will hopefully reveal that this is not guaranteed a priori.
This shifting is of course an essential ingredient in constructing the Glauber-Sudarshan states. In M-theory this is easily realized at low energies and is directly related to the wave-function renormalization of the usual coherent states. There are quite a few subtleties that we have kept under the rug here, which the reader can find in section 6.1 of the first reference in [5].
7 As demonstrated carefully in [4], these two facts go hand-in-hand. One implies the other. This is also a stronger reason for viewing de Sitter space as a Glauber-Sudarshan state and not as a vacuum configuration.
8 The reason for this is to control the non-perturbative corrections of the form exp[−(g_s/(H H_o))^{−2k/3}], where k ∈ Z_+. So while M-theory is usually defined for g_s >> 1, we want to be in the opposite limit.
This doesn't mean that the dynamics are captured by classical supergravity. The late time physics is captured by weakly curved manifolds, but all perturbative, non-perturbative, non-local and topological corrections are necessary to solve the EOMs as shown in[4,5,10].
This metric was first derived by Evan McDonough in 2015 (and later by Bohdan Kulinich in 2022) by following the duality chasing argument mentioned above. We thank them for many discussions related to heterotic de Sitter solutions.
11 Note that the volume V_6 of the internal non-Kähler six-manifold in (2.6) is in general time-dependent but independent of the fibration structure, and is given by V_6 = H^8(y) F_1^2(t) F_2^2(t) √(det g_mn). Once the second condition of (2.7) is taken into account, the volume becomes time-independent.
See however recent attempts to realize de Sitter vacuum solution using loop-holes in the no-go theorems[19], or using AdS spaces[20]. It will be interesting to find some connections to our work.
Acknowledgements: We would like to thank Heliudson Bernardo, Suddhasattwa Brahma, Mir-Mehedi Faruk, Bohdan Kulinich, Evan McDonough, Brent Pym, Mark Van Raamsdonk, Savdeep Sethi and Alexander Westphal for many discussions related to de Sitter space in heterotic string theory. The work of SA is supported in part by the Simons Foundation award number 896696. The work of KD is supported in part by a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada (NSERC). The work of AM is supported in part by the Prime Minister's Research Fellowship provided by the Ministry of Education, Government of India. PR would like to acknowledge the ICTP's Associate programme where progress on the ongoing work continued during her visit as senior associate. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission.
F. J. Dyson, "Divergence of perturbation theory in quantum electrodynamics," Phys. Rev. 85, 631-632 (1952).
G. V. Dunne and M. Ünsal, "Resurgence and Trans-series in Quantum Field Theory: The CP^{N-1} Model," JHEP 11, 170 (2012) [arXiv:1210.2423 [hep-th]]; "Generating nonperturbative physics from perturbation theory," Phys. Rev. D 89, no.4, 041701 (2014) [arXiv:1306.4405 [hep-th]];
G. Basar, G. V. Dunne and M. Ünsal, "Resurgence theory, ghost-instantons, and analytic continuation of path integrals," JHEP 10, 041 (2013) [arXiv:1308.1108 [hep-th]].
J. Martin and R. H. Brandenberger, "The Trans-Planckian problem of inflationary cosmology," Phys. Rev. D 63, 123501 (2001) [arXiv:hep-th/0005209 [hep-th]];
A. Bedroya and C. Vafa, "Trans-Planckian Censorship and the Swampland," JHEP 09, 123 (2020) [arXiv:1909.11063 [hep-th]];
A. Bedroya, R. Brandenberger, M. Loverde and C. Vafa, "Trans-Planckian Censorship and Inflationary Cosmology," Phys. Rev. D 101, no.10, 103502 (2020) [arXiv:1909.11106 [hep-th]];
S. Brahma, "Trans-Planckian censorship conjecture from the swampland distance conjecture," Phys. Rev. D 101, no.4, 046013 (2020) [arXiv:1910.12352 [hep-th]].
S. Brahma, K. Dasgupta and R. Tatar, "Four-dimensional de Sitter space is a Glauber-Sudarshan state in string theory," JHEP 07, 114 (2021) [arXiv:2007.00786 [hep-th]];
"de Sitter Space as a Glauber-Sudarshan State," JHEP 02, 104 (2021) [arXiv:2007.11611 [hep-th]].
H. Bernardo, S. Brahma, K. Dasgupta, M. M. Faruk and R. Tatar, "Four-Dimensional Null Energy Condition as a Swampland Conjecture," Phys. Rev. Lett. 127, no.18, 181301 (2021) [arXiv:2107.06900 [hep-th]]; "de Sitter Space as a Glauber-Sudarshan State: II," Fortsch. Phys. 69, no.11-12, 2100131 (2021) [arXiv:2108.08365 [hep-th]].
S. Brahma, K. Dasgupta, M. M. Faruk, B. Kulinich, V. Meruliya, B. Pym and R. Tatar, "Resurgence of a de Sitter Glauber-Sudarshan State: Nodal Diagrams and Borel Resummation," [arXiv:2211.09181 [hep-th]].
S. Brahma, J. Chakravarty, K. Dasgupta and B. Kulinich, "Nodal Diagrammar, Borel Resummations and the Smallness of the Positive Cosmological constant," to appear.
M. Gevrey, "Sur la nature analytique des solutions des équations aux dérivées partielles. Premier mémoire," Annales scientifiques de l'École Normale Supérieure 35, 129-190 (1918);
G. Mittag-Leffler, "Sur la représentation arithmétique des fonctions analytiques d'une variable complexe," Atti del IV Congresso Internazionale dei Matematici, Roma, 6-11 (1908).
E. Borel, "Mémoire sur les séries divergentes," Ann. Sci. Éc. Norm. Supér., Series 3, 16, 9-131 (1899).
K. Dasgupta, M. Emelin, M. M. Faruk and R. Tatar, "de Sitter Vacua in the String Landscape," Nucl. Phys. B 969, 115463 (2021) [arXiv:1908.05288 [hep-th]]; "How a four-dimensional de Sitter solution remains outside the swampland," JHEP 07, 109 (2021) [arXiv:1911.02604 [hep-th]]; "de Sitter Vacua in the String landscape: La Petite Version," QTS2019 [arXiv:1911.12382 [hep-th]];
K. Dasgupta, M. Emelin, E. McDonough and R. Tatar, "Quantum Corrections and the de Sitter Swampland Conjecture," JHEP 01, 145 (2019) [arXiv:1808.07498 [hep-th]].
G. W. Gibbons, "Aspects of supergravity theories," print-85-0061 (Cambridge);
J. M. Maldacena and C. Nunez, "Supergravity description of field theories on curved manifolds and a no go theorem," Int. J. Mod. Phys. A 16, 822-855 (2001) [arXiv:hep-th/0007018 [hep-th]];
G. W. Gibbons, "Thoughts on tachyon cosmology," Class. Quant. Grav. 20, S321-S346 (2003) [arXiv:hep-th/0301117 [hep-th]];
K. Dasgupta, R. Gwyn, E. McDonough, M. Mia and R. Tatar, "de Sitter Vacua in Type IIB String Theory: Classical Solutions and Quantum Corrections," JHEP 07, 054 (2014) [arXiv:1402.5112 [hep-th]];
H. Bernardo, S. Brahma and M. M. Faruk, "The inheritance of energy conditions: Revisiting no-go theorems in string compactifications," [arXiv:2208.09341 [hep-th]].
A. Sen, "F theory and orientifolds," Nucl. Phys. B 475, 562-578 (1996) [arXiv:hep-th/9605150 [hep-th]];
K. Dasgupta and S. Mukhi, "F theory at constant coupling," Phys. Lett. B 385, 125-131 (1996) [arXiv:hep-th/9606044 [hep-th]].
P. Horava and E. Witten, "Heterotic and type I string dynamics from eleven-dimensions," Nucl. Phys. B 460, 506-524 (1996) [arXiv:hep-th/9510209 [hep-th]]; "Eleven-dimensional supergravity on a manifold with boundary," Nucl. Phys. B 475, 94-114 (1996) [arXiv:hep-th/9603142 [hep-th]].
K. Dasgupta, G. Rajesh and S. Sethi, "M theory, orientifolds and G-flux," JHEP 08, 023 (1999) [arXiv:hep-th/9908088 [hep-th]];
K. Becker and K. Dasgupta, "Heterotic strings with torsion," JHEP 11, 006 (2002) [arXiv:hep-th/0209077 [hep-th]];
K. Becker, M. Becker, K. Dasgupta and P. S. Green, "Compactifications of heterotic theory on nonKahler complex manifolds. 1.," JHEP 04, 007 (2003) [arXiv:hep-th/0301161 [hep-th]];
G. Lopes Cardoso, G. Curio, G. Dall'Agata, D. Lust, P. Manousselis and G. Zoupanos, "NonKahler string backgrounds and their five torsion classes," Nucl. Phys. B 652, 5-34 (2003) [arXiv:hep-th/0211118 [hep-th]];
G. Lopes Cardoso, G. Curio, G. Dall'Agata and D. Lust, "BPS action and superpotential for heterotic string compactifications with fluxes," JHEP 10, 004 (2003) [arXiv:hep-th/0306088 [hep-th]];
K. Becker, M. Becker, K. Dasgupta and S. Prokushkin, "Properties of heterotic vacua from superpotentials," Nucl. Phys. B 666, 144-174 (2003) [arXiv:hep-th/0304001 [hep-th]];
K. Becker, M. Becker, P. S. Green, K. Dasgupta and E. Sharpe, "Compactifications of heterotic strings on nonKahler complex manifolds. 2.," Nucl. Phys. B 678, 19-100 (2004) [arXiv:hep-th/0310058 [hep-th]];
M. Becker and K. Dasgupta, "Kahler versus nonKahler compactifications," [arXiv:hep-th/0312221 [hep-th]].
D. Kutasov, T. Maxfield, I. Melnikov and S. Sethi, "Constraining de Sitter Space in String Theory," Phys. Rev. Lett. 115, no.7, 071305 (2015) [arXiv:1504.00056 [hep-th]].
R. P. Feynman and F. L. Vernon, Jr., "The Theory of a general quantum system interacting with a linear dissipative system," Annals Phys. 24, 118-173 (1963);
C. Agon, V. Balasubramanian, S. Kasko and A. Lawrence, "Coarse Grained Quantum Dynamics," Phys. Rev. D 98, no.2, 025019 (2018) [arXiv:1412.3148 [hep-th]].
S. Brahma, A. Berera and J. Calderón-Figueroa, "Quantum corrections to the primordial tensor spectrum: open EFTs & Markovian decoupling of UV modes," JHEP 08, 225 (2022) [arXiv:2206.05797 [hep-th]].
T. Colas, J. Grain and V. Vennin, "Benchmarking the cosmological master equations," [arXiv:2209.01929 [hep-th]].
C. P. Burgess, R. Holman and G. Kaplanek, "Quantum Hotspots: Mean Fields, Open EFTs, Nonlocality and Decoherence Near Black Holes," Fortsch. Phys. 70, no.4, 2200019 (2022) [arXiv:2106.10804 [hep-th]];
C. P. Burgess, R. Holman, G. Kaplanek, J. Martin and V. Vennin, "Minimal decoherence from inflation," [arXiv:2211.11046 [hep-th]].
A. G. Cohen, D. B. Kaplan and A. E. Nelson, "Effective field theory, black holes, and the cosmological constant," Phys. Rev. Lett. 82, 4971-4974 (1999) [arXiv:hep-th/9803132 [hep-th]];
P. Draper, I. G. Garcia and M. Reece, "Snowmass White Paper: Implications of Quantum Gravity for Particle Physics," [arXiv:2203.07624 [hep-ph]];
T. W. Kephart and H. Päs, "UV/IR Mixing, Causal Diamonds and the Electroweak Hierarchy Problem," [arXiv:2209.03305 [hep-ph]].
M. Cicoli, J. P. Conlon, A. Maharana, S. Parameswaran, F. Quevedo and I. Zavala, "String Cosmology: from the Early Universe to Today," [arXiv:2303.04819 [hep-th]];
U. H. Danielsson and T. Van Riet, "What if string theory has no de Sitter vacua?," Int. J. Mod. Phys. D 27, no.12, 1830007 (2018) [arXiv:1804.01120 [hep-th]].
J. M. Leedom, N. Righi and A. Westphal, "Heterotic de Sitter beyond modular symmetry," JHEP 02, 209 (2023) [arXiv:2212.03876 [hep-th]];
E. Silverstein, "Duality, compactification, and e^{-1/λ} effects in the heterotic string theory," Phys. Lett. B 396, 91-96 (1997) [arXiv:hep-th/9611195 [hep-th]].
S. Antonini, P. Simidzija, B. Swingle, M. Van Raamsdonk and C. Waddell, "Accelerating cosmology from Λ < 0 gravitational effective field theory," [arXiv:2212.00050 [hep-th]];
Z. K. Baykara, D. Robbins and S. Sethi, "Non-Supersymmetric AdS from String Theory," [arXiv:2212.02557 [hep-th]].
| []
|
[
"The Kinetics Human Action Video Dataset",
"The Kinetics Human Action Video Dataset"
]
| [
"Will Kay [email protected] ",
"João Carreira ",
"Karen Simonyan [email protected] ",
"Brian Zhang [email protected] ",
"Chloe Hillier [email protected] ",
"Sudheendra Vijayanarasimhan ",
"Fabio Viola [email protected] ",
"Tim Green ",
"Trevor Back [email protected] ",
"Paul Natsev [email protected] ",
"Mustafa Suleyman [email protected] ",
"Andrew Zisserman [email protected] "
]
| []
| []
| We describe the DeepMind Kinetics human action video dataset. The dataset contains 400 human action classes, with at least 400 video clips for each action. Each clip lasts around 10s and is taken from a different YouTube video. The actions are human focussed and cover a broad range of classes including human-object interactions such as playing instruments, as well as human-human interactions such as shaking hands. We describe the statistics of the dataset, how it was collected, and give some baseline performance figures for neural network architectures trained and tested for human action classification on this dataset. We also carry out a preliminary analysis of whether imbalance in the dataset leads to bias in the classifiers. | null | [
"https://arxiv.org/pdf/1705.06950v1.pdf"
]
| 27,300,853 | 1705.06950 | 86e1bdbfd13b9ed137e4c4b8b459a3980eb257f6 |
The Kinetics Human Action Video Dataset
Will Kay [email protected]
João Carreira
Karen Simonyan [email protected]
Brian Zhang [email protected]
Chloe Hillier [email protected]
Sudheendra Vijayanarasimhan
Fabio Viola [email protected]
Tim Green
Trevor Back [email protected]
Paul Natsev [email protected]
Mustafa Suleyman [email protected]
Andrew Zisserman [email protected]
The Kinetics Human Action Video Dataset
We describe the DeepMind Kinetics human action video dataset. The dataset contains 400 human action classes, with at least 400 video clips for each action. Each clip lasts around 10s and is taken from a different YouTube video. The actions are human focussed and cover a broad range of classes including human-object interactions such as playing instruments, as well as human-human interactions such as shaking hands. We describe the statistics of the dataset, how it was collected, and give some baseline performance figures for neural network architectures trained and tested for human action classification on this dataset. We also carry out a preliminary analysis of whether imbalance in the dataset leads to bias in the classifiers.
Introduction
In this paper we introduce a new, large, video dataset for human action classification. We developed this dataset principally because there is a lack of such datasets for human action classification, and we believe that having one will facilitate research in this area -both because the dataset is large enough to train deep networks from scratch, and also because the dataset is challenging enough to act as a performance benchmark where the advantages of different architectures can be teased apart.
Our aim is to provide a large scale high quality dataset, covering a diverse range of human actions, that can be used for human action classification, rather than temporal localization. Since the use case is classification, only short clips of around 10s containing the action are included, and there are no untrimmed videos. However, the clips also contain sound so the dataset can potentially be used for many purposes, including multi-modal analysis. Our inspiration in providing a dataset for classification is ImageNet [18], where the significant benefits of first training deep networks on this dataset for classification, and then using the trained network for other purposes (detection, image segmentation, non-visual modalities (e.g. sound, depth), etc) are well known.
The Kinetics dataset can be seen as the successor to the two human action video datasets that have emerged as the standard benchmarks for this area: HMDB-51 [15] and UCF-101 [20]. These datasets have served the community very well, but their usefulness is now expiring. This is because they are simply not large enough or have sufficient variation to train and test the current generation of human action classification models based on deep learning. Coincidentally, one of the motivations for introducing the HMDB dataset was that the then current generation of action datasets was too small. The increase then was from 10 to 51 classes, and we in turn increase this to 400 classes. Table 1 compares the size of Kinetics to a number of recent human action datasets. In terms of variation, although the UCF-101 dataset contains 101 actions with 100+ clips for each action, all the clips are taken from only 2.5k distinct videos. For example there are 7 clips from one video of the same person brushing their hair. This means that there is far less variation than if the action in each clip was performed by a different person (and different viewpoint, lighting, etc). This problem is avoided in Kinetics as each clip is taken from a different video.
The clips are sourced from YouTube videos. Consequently, for the most part, they are not professionally videoed and edited material (as in TV and film videos). There can be considerable camera motion/shake, illumination variations, shadows, background clutter, etc.
Table 1: Statistics for recent human action recognition datasets. 'Actions' specifies the number of action classes; 'Clips', the number of clips per class; 'Total', the total number of clips; and 'Videos', the total number of videos from which these clips are extracted.

Dataset               Year   Actions   Clips     Total     Videos
HMDB-51 [15]          2011   51        min 102   6,766     3,312
UCF-101 [20]          2012   101       min 101   13,320    2,500
ActivityNet-200 [3]   2015   200       avg 141   28,108    19,994
Kinetics              2017   400       min 400   306,245   306,245
More importantly, there is a great variety of performers (since each clip is from a different video), with differences in how the action is performed (e.g. its speed), clothing, body pose and shape, age, and camera framing and viewpoint. Our hope is that the dataset will enable a new generation of neural network architectures to be developed for video, for example architectures including multiple streams of information (RGB/appearance, optical flow, human pose, object category recognition) or architectures using attention. That will enable the virtues (or otherwise) of the new architectures to be demonstrated. Issues such as the tension between static and motion prediction, and the open question of the best method of temporal aggregation in video (recurrent vs convolutional), may finally be resolved.
The rest of the paper is organized as: Section 2 gives an overview of the new dataset; Section 3 describes how it was collected and discusses possible imbalances in the data and their consequences for classifier bias. Section 4 gives the performance of a number of ConvNet architectures that are trained and tested on the dataset. Our companion paper [5] explores the benefit of pre-training an action classification network on Kinetics, and then using the features from the network for action classification on other (smaller) datasets.
The URLs of the YouTube videos and temporal intervals of the dataset can be obtained from http://deepmind.com/kinetics.
An Overview of the Kinetics Dataset
Content: The dataset is focused on human actions (rather than activities or events). The list of action classes covers: Person Actions (singular), e.g. drawing, drinking, laughing, pumping fist; Person-Person Actions, e.g. hugging, kissing, shaking hands; and, Person-Object Actions, e.g. opening present, mowing lawn, washing dishes. Some actions are fine grained and require temporal reasoning to distinguish, for example different types of swimming. Other actions require more emphasis on the object to distinguish, for example playing different types of wind instruments.
There is not a deep hierarchy, but instead there are several (non-exclusive) parent-child groupings, e.g. Music (playing drums, trombone, violin, . . . ); Personal Hygiene (brushing teeth, cutting nails, washing hands, . . . ); Dancing (ballet, macarena, tap, . . . ); Cooking (cutting, frying, peeling, . . . ). The full list of classes is given in the appendix, together with parent-child groupings. Figure 1 shows clips from a sample of classes.
Statistics:
The dataset has 400 human action classes, with 400-1150 clips for each action, each from a unique video. Each clip lasts around 10s. The current version has 306,245 videos, and is divided into three splits, one for training having 250-1000 videos per class, one for validation with 50 videos per class and one for testing with 100 videos per class. The statistics are given in table 2. The clips are from YouTube videos and have a variable resolution and frame rate.
Train      Validation   Test
250-1000   50           100

Non-exhaustive annotation. Each class contains clips illustrating that action. However, a particular clip can contain several actions. Interesting examples in the dataset include: "texting" while "driving a car"; "Hula hooping" while "playing ukulele"; "brushing teeth" while "dancing" (of some type). In each case both of the actions are Kinetics classes, and the clip will probably appear under only one of these classes, not both, i.e. clips do not have complete (exhaustive) annotation. For this reason when evaluating classification performance, a top-5 measure is more suitable than top-1. This is similar to the situation in ImageNet [18], where one of the reasons for using a top-5 measure is that images are only labelled for a single class, although they may contain multiple classes.
How the Dataset was Built
In this section we describe the collection process: how candidate videos were obtained from YouTube, and then the processing pipeline that was used to select the candidates and clean up the dataset. We then discuss possible biases in the dataset due to the collection process.
Overview: clips for each class were obtained by first searching on YouTube for candidates, and then using Amazon Mechanical Turkers (AMT) to decide if the clip contains the action or not. Three or more confirmations (out of five) were required before a clip was accepted. The dataset was de-duped, by checking that only one clip is taken from each video, and that clips do not contain common video material. Finally, classes were checked for overlap and denoised.
We now describe these stages in more detail.
Stage 1: Obtaining an action list
Curating a large list of human actions is challenging, as there is no single listing available at this scale with suitable visual action classes. Consequently, we had to combine numerous sources together with our own observations of actions that surround us. These sources include: (i) Action datasets - existing datasets like ActivityNet [3], HMDB [15], UCF101 [20], MPII Human Pose [2], ACT [25] have useful classes and a suitable subset of these were used; (ii) Motion capture - there are a number of motion capture datasets which we looked through and extracted file titles. These titles described the motion within the file and were often quite creative; and, (iii) Crowdsourced - we asked Mechanical Turk workers to come up with a more appropriate action if the label we had presented to them for a clip was incorrect.
Stage 2: Obtaining candidate clips
The chosen method and steps are detailed below which combine a number of different internal efforts:
Step 1: obtaining videos. Videos are drawn from the YouTube corpus by matching video titles with the Kinetics actions list.
Step 2: temporal positioning within a video. Image classifiers are available for a large number of human actions. These classifiers are obtained by tracking user actions on Google Image Search. For example, for a search query "climbing tree", user relevance feedback on images is collected by aggregating across the multiple times that that search query is issued. This relevance feedback is used to select a high-confidence set of images that can be used to train a "climbing tree" image classifier. These classifiers are run at the frame level over the videos found in step 1, and clips extracted around the top k responses (where k = 2).
It was found that the action list had a better match to relevant classifiers if action verbs are formatted to end with 'ing'. Thinking back to image search, this makes sense as typically if you are searching for an example of someone performing an action you would issue queries like 'running man' or 'brushing hair' over other tenses like 'man ran' or 'brush hair'.
The output of this stage is a large number of videos and a position in all of them where one of the actions is potentially occurring. 10 second clips are created by taking 5 seconds either side of that position (there are length exceptions when the position is within 5 seconds of the start or end of the video leading to a shorter clip length). The clips are then passed onto the next stage of cleanup through human labelling.
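For concreteness, the following minimal sketch illustrates this clip-selection step, assuming per-frame classifier scores are already available; the function name, frame rate and array layout are assumptions made purely for illustration and are not part of the original pipeline.

```python
# Illustrative sketch of the clip-selection step described above: given
# per-frame scores from an action-specific image classifier, keep the top-k
# scoring positions and cut a ~10 s window (5 s either side), clamped to the
# video boundaries. Names and the frame rate are assumptions, not the
# authors' actual pipeline code.
def select_clips(frame_scores, fps=25.0, k=2, half_window=5.0):
    """Return (start_sec, end_sec) windows around the k highest-scoring frames."""
    ranked = sorted(range(len(frame_scores)), key=lambda i: frame_scores[i], reverse=True)
    duration = len(frame_scores) / fps
    clips = []
    for frame_idx in ranked[:k]:
        center = frame_idx / fps
        start = max(0.0, center - half_window)
        end = min(duration, center + half_window)
        clips.append((start, end))
    return clips

# Example: a fake 60 s video (1500 frames at 25 fps) with two score peaks.
scores = [0.0] * 1500
scores[300] = 0.9   # peak near t = 12 s
scores[1200] = 0.8  # peak near t = 48 s
print(select_clips(scores))  # [(7.0, 17.0), (43.0, 53.0)]
```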
Stage 3: Manual labelling process
The key aim of this stage was to identify whether the supposed action was actually occurring during a clip or not. A human was required in the loop for this phase and we chose to use Amazon's Mechanical Turk (AMT) for the task due to the large numbers of high quality workers using the platform.
A single-page webapp was built for the labelling task and optimised to maximise the number of clips presented to the workers whilst maintaining a high quality of annotation. The labelling interface is shown in figure 2. The user interface design and theme were chosen to differentiate the task from many others on the platform as well as make the task as stimulating and engaging as possible. This certainly paid off as the task was one of the highest rated on the platform and would frequently get more than 400 distinct workers as soon as a new run was launched.
The workers were given clear instructions at the beginning. There were two screens of instruction, the second reinforcing the first. After acknowledging they understood the task they were presented with a media player and several response icons. The interface would fetch a set of videos from the available pool for the worker at that moment and embed the first clip. The task consisted of 20 videos each with a different class where possible; we randomised all the videos and classes to make it more interesting for the workers and prevent them from becoming stuck on classes with low yields. Two of the video slots were used by us to inject groundtruth clips. This allowed us to get an estimate of the accuracy for each worker. If a worker fell below a 50% success rating on these, we showed them a 'low accuracy' warning screen. This helped address many low accuracies.
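The worker-quality check described above can be illustrated with the following toy sketch; the class and data structures are hypothetical and only mirror the logic of injecting ground-truth clips and warning workers whose accuracy on them drops below 50%.

```python
# Toy illustration of the worker-quality check described above: each task of
# 20 clips contains 2 injected ground-truth clips, and a worker whose accuracy
# on those falls below 50% is flagged for a warning. Data structures are
# hypothetical, not the actual labelling backend.
from collections import defaultdict

class WorkerQuality:
    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.correct = defaultdict(int)
        self.seen = defaultdict(int)

    def record(self, worker_id, answer, ground_truth):
        self.seen[worker_id] += 1
        if answer == ground_truth:
            self.correct[worker_id] += 1

    def needs_warning(self, worker_id):
        if self.seen[worker_id] == 0:
            return False
        return self.correct[worker_id] / self.seen[worker_id] < self.threshold

tracker = WorkerQuality()
tracker.record("worker_1", "yes", "yes")
tracker.record("worker_1", "no", "yes")   # mistake on an injected clip
tracker.record("worker_1", "no", "yes")   # another mistake
print(tracker.needs_warning("worker_1"))  # True (1/3 < 0.5)
```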
In the labelling interface, workers were asked the question "Can you see a human performing the action class-name?". The following response options were available on the interface as icons:
• Yes, this contains a true example of the action
• No, this does not contain an example of the action
• You are unsure if there is an example of the action
• Replay the video
• Video does not play, does not contain a human, is an image, cartoon or a computer game

When a worker responded with 'Yes' we also asked the question "Does the action last for the whole clip?" in order to use this signal later during model training. Note that the AMT workers did not have access to the audio, to ensure that the video can be classified purely based on its visual content.
In order for a clip to be added to the dataset, it needed to receive at least 3 positive responses from workers. We allowed each clip to be annotated 5 times except if it had been annotated by more than 2 of a specific response. For example, if 3 out of 3 workers had said it did not contain an example of the action we would immediately remove it from the pool and not continue until 5 workers had annotated it.
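A small sketch of this per-clip aggregation rule is given below; it is an illustrative re-implementation of the logic just described (accept at 3 positive responses, stop early once any response reaches 3, allow at most 5 annotations), not the original system.

```python
# Sketch of the per-clip response aggregation described above: a clip is
# accepted once it has 3 positive responses, annotation stops early once any
# single response has been given 3 times, and at most 5 workers see a clip.
from collections import Counter

def clip_status(responses, accept_label="yes", needed=3, max_annotations=5):
    """Return 'accepted', 'rejected', or 'pending' for a list of worker responses."""
    counts = Counter(responses)
    if counts[accept_label] >= needed:
        return "accepted"
    # Any other response reaching 3 (e.g. three 'no' votes) settles the clip early.
    if any(label != accept_label and count >= needed for label, count in counts.items()):
        return "rejected"
    if len(responses) >= max_annotations:
        return "rejected"
    return "pending"

print(clip_status(["yes", "yes", "yes"]))                     # accepted
print(clip_status(["no", "no", "no"]))                        # rejected early
print(clip_status(["yes", "no"]))                             # pending
print(clip_status(["yes", "no", "unsure", "no", "replay"]))   # rejected (5 used, < 3 yes)
```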
Due to the large scale of the task it was necessary to quickly remove classes that were made up of low quality or completely irrelevant candidates. Failing to do this would have meant that we spent a lot of money paying workers to mark videos as negative or bad. Accuracies for each class were calculated after 20 clips from that class had been annotated. We adjusted the accuracy threshold between runs but would typically start at a high accuracy of 50% (1 in 2 videos were expected to contain the action).
Following annotating, the video ids, clip times and labels were exported from the database and handed on to be used for model training.
What we learnt: We found that more specific classes like 'riding mule' were producing much less noise than more general classes like 'riding'. However, occasionally using more general classes was a benefit as they could subsequently be split into a few distinct classes that were not previously present and the candidates resent out to workers e.g. 'gardening' was split into 'watering plants', 'trimming trees' and 'planting trees'.
The amount of worker traffic that the task generated meant that we could not rely on direct fetching and writes to the database even with appropriate indexes and optimised queries. We therefore created many caches which were made up of groups of clips for each worker. When a worker started a new task, the interface would fetch a set of clips for that specific worker. The cache was replenished often by background processes as clips received a sufficient number of annotations. This also eliminated labelling collisions, where previously more than one worker might pick up the same video to annotate and we would quickly exceed 5 responses for a single clip.
Stage 4: Cleaning up and de-noising
One of the dataset design goals was having a single clip from each given video sequence, different from existing datasets which slice videos containing repetitive actions into many (correlated) training examples. We also employed mechanisms for identifying structural problems as we grew the dataset, such as repeated classes due to synonymy or different word order (e.g. riding motorbike, riding motorcycle), classes that are too general and co-occur with many others (e.g. talking) and which are problematic for typical 1-of-K classification learning approaches (instead of multi-label classification). We will now describe these procedures.
De-duplicating videos. We de-duplicated videos using two complementary approaches. First, in order to have only one clip from each YouTube link, we randomly selected a single clip from amongst those validated by Turkers for that video. This stage filtered out around 20% of Turkerapproved examples, but we visually found that it still left many duplicates. The reason is that YouTube users often create videos reusing portions of other videos, for example as part of video compilations or promotional adverts. Sometimes they are cropped, resized and generally pre-processed in different ways (but, nevertheless, the image classifier could localize the same clip). So even though each clip is from a distinct video there were still duplications.
We devised a process for de-duplicating across YouTube links which operated independently for each class. First we computed Inception-V1 [12] feature vectors (taken after last average pooling layer) on 224 × 224 center crops of 25 uniformly sampled frames from each video, which we then averaged. Afterwards we built a class-wise matrix having all cosine similarities between these feature vectors and thresholded it. Finally, we computed connected components and kept a random example from each. We found this to work well for most classes using the same threshold of 0.97, but adjusted it in a few cases where classes were visually similar, such as some taking place in the snow or in the water. This process reduced the number of Turker-approved examples by a further 15%.
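The following sketch illustrates the de-duplication step just described, assuming the averaged per-video feature vectors have already been computed; the union-find implementation and the toy data are illustrative choices, not the original code.

```python
# Sketch of the per-class de-duplication described above: averaged per-video
# feature vectors are compared by cosine similarity, the similarity matrix is
# thresholded, and only one randomly chosen video is kept from each connected
# component. Feature extraction itself (Inception-V1 on 25 frames) is assumed
# to have happened already; this is illustrative, not the original code.
import numpy as np

def deduplicate(features, threshold=0.97, seed=0):
    """features: (num_videos, dim) array of averaged per-video descriptors.
    Returns indices of the videos to keep (one per near-duplicate group)."""
    rng = np.random.default_rng(seed)
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    similarity = normed @ normed.T                  # cosine similarities
    adjacency = similarity >= threshold

    # Union-find over the thresholded similarity graph.
    parent = list(range(len(features)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            if adjacency[i, j]:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(len(features)):
        groups.setdefault(find(i), []).append(i)
    return sorted(int(rng.choice(members)) for members in groups.values())

# Three videos: the first two are near-duplicates, the third is distinct.
feats = np.array([[1.0, 0.0], [0.999, 0.01], [0.0, 1.0]])
print(deduplicate(feats))  # keeps one of {0, 1} plus video 2
```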
Detecting noisy classes. Classes can be 'noisy' in that they may overlap with other classes or they may contain several quite distinct (in terms of the action) groupings due to an ambiguity in the class name. For example, 'skipping' can be 'skipping with a rope' and also 'skipping stones across water'. We trained two-stream action classifiers [19] repeatedly throughout the dataset development to identify these noise classes. This allowed us to find the top confusions for each class, which sometimes were clear even by just verifying the class names (but went unnoticed due to the scale of the dataset), and other times required eyeballing the data to understand if the confusions were alright and the classes were just difficult to distinguish because of shortcomings of the model. We merged, split or outright removed classes based on these detected confusions.
Final filtering. After all the data was collected, deduplicated and the classes were selected, we ran a final manual clip filtering stage. Here the class scores from the two-stream model were again useful as they allowed sorting the examples from most confident to least confident - a measure of how prototypical they were. We found that noisy examples were often among the lowest ranked examples and focused on those. The ranking also made adjacent any remaining duplicate videos, which made it easier to filter out those too.
Discussion: dataset bias I
We are familiar with the notion of dataset bias leading to lack of generalization: where a classifier trained on one dataset, e.g. Caltech 256 [10], does not perform well when tested on another, e.g. PASCAL VOC [8]. Indeed it is even possible to train a classifier to identify which dataset an image belongs to [22].
There is another sense of bias which could arise from unbalanced categories within a dataset. For example, gender imbalance in a training set could lead to a corresponding performance bias for classifiers trained on this set. There are precedents for this, e.g. in publicly available face detectors not being race agnostic 1 , and more recently in learning a semantic bias in written texts [4]. It is thus an important question as to whether Kinetics leads to such bias.
To this end we carried out a preliminary study on (i) whether the data for each action class of Kinetics is gender balanced, and (ii) if there is an imbalance, whether it leads to a biased performance of the action classifiers.
The outcome of (i) is that in 340 action classes out of the 400, the data is either not dominated by a single gender, or it is mostly not possible to determine the gender -the latter arises in classes where, for example, only hands appear, or the 'actors' are too small or heavily clothed. The classes that do show gender imbalance include 'shaving beard' and 'dunking basketball', that are mostly male, and 'filling eyebrows' and 'cheerleading', that are mostly female.
The outcome of (ii) is that for these classes we found little evidence of classifier bias for action classes with gender imbalance. For example, in 'playing poker', which tends to have more male players, all videos with female players are correctly classified. The same happens for 'Hammer throw'. We can conjecture that this lack of bias is because the classifier is able to make use of both the objects involved in an action as well as the motion patterns, rather than simply physical appearance.
Imbalance can also be examined on other 'axes', for example age and race. Again, in a preliminary investigation we found very little clear bias. There is one exception where there is clear bias to babies -in 'crying', where many of the videos of non-babies crying are misclassified; another example is 'wrestling', where the opposite happens: adults wrestling in a ring seem to be better classified than children wrestling in their homes, but it is hard to tell whether the deciding factor is age or the scenes where the actions happen. Nevertheless, these issues of dataset imbalance and any resulting classifier bias warrant a more thorough investigation, and we return to this in section 5.
Discussion: dataset bias II
Another type of bias could arise because classifiers are involved in the dataset collection pipeline: it could be that these classifiers lead to a reduction in the visual variety of the clips obtained, which in turn leads to a bias in the action classifier trained on these clips. In more detail, although the videos are selected based on their title (which is provided by the person uploading the video to YouTube), the position of the candidate clip within the video is provided by an image (RGB) classifier, as described above. In practice, using a classifier at this point does not seem to constrain the variety of the clips -since the video is about the action, the particular frame chosen as part of the clip may not be crucial; and, in any case, the clip contains hundreds of more frames where the appearance (RGB) and motion can vary considerably. For these reasons we are not so concerned about the intermediate use of image classifiers.
Benchmark Performance
In this section we first briefly describe three standard ConvNet architectures for human action recognition in video. We then use these architectures as baselines and compare their performance by training and testing on the Kinetics dataset. We also include their performance on UCF-101 and HMDB-51.
We consider three typical approaches for video classification: ConvNets with an LSTM on top [7,26]; two-stream networks [9,19]; and a 3D ConvNet [13,21,23]. There have been many improvements over these basic architectures, e.g. [9], but our intention here is not to perform a thorough study on what is the very best architecture on Kinetics, but instead to provide an indication of the level of difficulty of the dataset. A rough graphical overview of the three types of architectures we compare is shown in figure 3, and the specification of their temporal interfaces is given in table 3.
For the experiments on the Kinetics dataset all three architectures are trained from scratch using Kinetics. However, for the experiments on UCF-101 and HMDB-51 the architectures (apart from the 3D ConvNet) are pre-trained on ImageNet (since these datasets are too small to train the architectures from scratch).
ConvNet+LSTM
The high performance of image classification networks makes it appealing to try to reuse them with as minimal change as possible for video. This can be achieved by using them to extract features independently from each frame then pooling their predictions across the whole video [14]. This is in the spirit of bag of words image modeling approaches [16,17,24], but while convenient in practice, it has the issue of entirely ignoring temporal structure (e.g. models can't potentially distinguish opening from closing a door).
In theory, a more satisfying approach is to add a recurrent layer to the model [7,26], such as an LSTM, which can encode state, and capture temporal ordering and long range dependencies. We position an LSTM layer with batch normalization (as proposed by Cooijmans et al. [6]) after the last average pooling layer of a ResNet-50 model [11], with 512 hidden units. We then add a fully connected layer on top of the output of the LSTM for the multi-way classification. At test time the classification is taken from the model output for the last frame.
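As an illustration only, a possible PyTorch rendering of this baseline is sketched below; the original models were implemented in TensorFlow and used a batch-normalized LSTM, so the plain LSTM and the specific input size here are simplifying assumptions.

```python
# Illustrative PyTorch sketch of the ConvNet+LSTM baseline described above:
# per-frame ResNet-50 features feed an LSTM with 512 hidden units, and a fully
# connected layer on the last time step gives the class scores. The original
# models were written in TensorFlow and used a batch-normalized LSTM; a plain
# LSTM is used here purely for illustration.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class ConvNetLSTM(nn.Module):
    def __init__(self, num_classes=400, hidden=512):
        super().__init__()
        backbone = resnet50(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # up to avg pool
        self.lstm = nn.LSTM(input_size=2048, hidden_size=hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, clips):                      # clips: (batch, time, 3, H, W)
        b, t = clips.shape[:2]
        frames = clips.flatten(0, 1)               # (batch*time, 3, H, W)
        feats = self.features(frames).flatten(1)   # (batch*time, 2048)
        feats = feats.view(b, t, -1)
        outputs, _ = self.lstm(feats)
        return self.classifier(outputs[:, -1])     # prediction from the last frame

model = ConvNetLSTM()
scores = model(torch.randn(2, 8, 3, 224, 224))     # two clips of 8 frames each
print(scores.shape)                                # torch.Size([2, 400])
```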
Two-Stream networks
LSTMs on features from the last layers of ConvNets can model high-level variation, but may not be able to capture fine low-level motion which is critical in many cases. It is also expensive to train as it requires unrolling the network through multiple frames for backpropagation-through-time.
A different, very practical approach, introduced by Simonyan and Zisserman [19], models short temporal snapshots of videos by averaging the predictions from a single RGB frame and a stack of 10 externally computed optical flow frames, after passing them through two replicas of an ImageNet-pretrained ConvNet. The flow stream has an adapted input convolutional layer with twice as many input channels as flow frames (because flow has two channels, horizontal and vertical), and at test time multiple snapshots are sampled from the video and the action prediction is averaged. This was shown to get very high performance on existing benchmarks, while being very efficient to train and test.
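A schematic version of this two-stream baseline is sketched below; it only mirrors the structure described above (a widened first convolution for the 20-channel flow stack and averaged predictions) and is not the authors' implementation.

```python
# Illustrative PyTorch sketch of the two-stream baseline described above: one
# ResNet-50 sees a single RGB frame, a second ResNet-50 with a widened first
# convolution sees a stack of 10 optical-flow frames (20 channels), and the
# two softmax predictions are averaged. Schematic only, not the authors' code.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class TwoStream(nn.Module):
    def __init__(self, num_classes=400, flow_frames=10):
        super().__init__()
        self.rgb = resnet50(weights=None, num_classes=num_classes)
        self.flow = resnet50(weights=None, num_classes=num_classes)
        # Adapt the flow stream's first conv to 2 * flow_frames input channels.
        self.flow.conv1 = nn.Conv2d(2 * flow_frames, 64, kernel_size=7,
                                    stride=2, padding=3, bias=False)

    def forward(self, rgb_frame, flow_stack):
        # rgb_frame: (batch, 3, H, W); flow_stack: (batch, 20, H, W)
        p_rgb = self.rgb(rgb_frame).softmax(dim=1)
        p_flow = self.flow(flow_stack).softmax(dim=1)
        return (p_rgb + p_flow) / 2                # averaged class probabilities

model = TwoStream()
probs = model(torch.randn(2, 3, 224, 224), torch.randn(2, 20, 224, 224))
print(probs.shape)                                 # torch.Size([2, 400])
```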
3D ConvNets
3D ConvNets [13,21,23] seem like a natural approach to video modeling. They are just like standard 2D convolutional networks, but with spatio-temporal filters, and have a very interesting characteristic: they directly create hierarchical representations of spatio-temporal data. One issue with these models is that they have many more parameters than 2D ConvNets because of the additional kernel dimension, and this makes them harder to train. Also, they seem to preclude the benefits of ImageNet pre-training and previous work has defined relatively shallow custom architectures and trained them from scratch [13,14,21,23]. Results on benchmarks have shown promise but have not yet matched the state-of-the-art, possibly because they require more training data than their 2D counterparts. Thus 3D ConvNets are a good candidate for evaluation on our larger dataset.
For this paper we implemented a small variation of C3D [23], which has 8 convolutional layers, 5 pooling layers and 2 fully connected layers at the top. The inputs to the model are short 16-frame clips with 112 × 112-pixel crops. Differently from the original paper we use batch normalization after all convolutional and fully connected layers. Another difference to the original model is in the first pooling layer, where we use a temporal stride of 2 instead of 1, which reduces the memory footprint and allows for bigger batches - this was important for batch normalization (especially after the fully connected layers, where there is no weight tying). Using this stride we were able to train with 15 videos per batch per GPU using standard K40 GPUs.
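The following sketch gives one possible PyTorch rendering of this C3D variant; the exact pooling and padding choices are assumptions made to keep the example self-contained, and the original model was implemented in TensorFlow.

```python
# Schematic PyTorch version of the C3D-style baseline described above: 8
# 3x3x3 convolutions, 5 poolings, two 4096-d fully connected layers and a
# classifier, with batch normalization everywhere and a temporal stride of 2
# in the first pooling. Pooling/padding details are assumptions, not the
# authors' TensorFlow implementation.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
                         nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True))

class C3DVariant(nn.Module):
    def __init__(self, num_classes=400):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(3, 64),    nn.MaxPool3d((2, 2, 2)),   # temporal stride 2 here
            conv_block(64, 128),  nn.MaxPool3d((2, 2, 2)),
            conv_block(128, 256), conv_block(256, 256), nn.MaxPool3d((2, 2, 2)),
            conv_block(256, 512), conv_block(512, 512), nn.MaxPool3d((2, 2, 2)),
            conv_block(512, 512), conv_block(512, 512), nn.MaxPool3d((1, 2, 2)),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512 * 1 * 3 * 3, 4096), nn.BatchNorm1d(4096), nn.ReLU(inplace=True),
            nn.Linear(4096, 4096), nn.BatchNorm1d(4096), nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    def forward(self, clip):                       # clip: (batch, 3, 16, 112, 112)
        return self.classifier(self.features(clip))

model = C3DVariant()
print(model(torch.randn(2, 3, 16, 112, 112)).shape)  # torch.Size([2, 400])
```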
At test time, we split the video uniformly into crops of 16 frames and apply the classifier separately on each. We then average the class scores, as in the original paper.
Implementation details
The ConvNet+LSTM and Two-Stream architectures use ResNet-50 as the base architecture. In the case of the Two-Stream architecture, a separate ResNet-50 is trained independently for each stream. As noted earlier, for these architectures the ResNet-50 model is pre-trained on ImageNet for the experiments on UCF-101 and HMDB-51, and trained from scratch for experiments on Kinetics. The 3D-ConvNet is not pre-trained.
We trained the models on videos using standard SGD with momentum in all cases, with synchronous parallelization across 64 GPUs for all models. We trained models on Kinetics for up to 100k steps, with a 10x reduction of learning rate when validation loss saturated, and tuned weight decay and learning rate hyperparameters on the validation set of Kinetics. All the models were implemented in TensorFlow [1].
The original clips have variable resolution and frame rate. In our experiments they are all normalized so that the larger image side is 340 pixels wide for models using ResNet-50 and 128 pixels wide for the 3D ConvNet. We also resample the videos so they have 25 frames per second.
Data augmentation is known to be of crucial importance for the performance of deep architectures. We used random cropping both spatially -randomly cropping a 299 × 299 patch (respectively 112 × 112 for the 3D ConvNet) -and temporally, when picking the starting frame among those early enough to guarantee a desired number of frames. For shorter videos, we looped the video as many times as necessary to satisfy each model's input interface. We also applied random left-right flipping consistently for each video during training.
At test time, we sample from up to 10 seconds of video, again looping if necessary. Better performance could be obtained by also considering left-right flipped videos at test time and by adding additional augmentation, such as photometric, during training. We leave this to future work.
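The temporal cropping, looping and consistent flipping described above can be illustrated as follows; the array representation and helper name are assumptions made for illustration only.

```python
# Minimal sketch of the temporal cropping / looping augmentation described
# above: pick a random start frame early enough to fit the model's input
# length, looping short videos as many times as needed, and apply one
# left-right flip decision consistently to all frames of a clip. Frames are
# represented as a numpy array purely for illustration.
import numpy as np

def sample_clip(video, num_frames, rng=np.random.default_rng(0)):
    """video: (T, H, W, 3) array. Returns (num_frames, H, W, 3)."""
    t = video.shape[0]
    if t < num_frames:                             # loop short videos
        repeats = -(-num_frames // t)              # ceiling division
        video = np.concatenate([video] * repeats, axis=0)
        t = video.shape[0]
    start = rng.integers(0, t - num_frames + 1)    # start early enough to fit
    clip = video[start:start + num_frames]
    if rng.random() < 0.5:                         # consistent flip for the whole clip
        clip = clip[:, :, ::-1, :]
    return clip

video = np.zeros((40, 128, 170, 3), dtype=np.uint8)   # a short toy video
print(sample_clip(video, num_frames=64).shape)         # (64, 128, 170, 3)
```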
Baseline evaluations
In this section we compare the performance of the three baseline architectures whilst varying the dataset used for training and testing. Table 4 shows the classification accuracy when training and testing on either UCF-101, HMDB-51 or Kinetics. We train and test on split 1 of UCF-101 and HMDB-51, and on the train/val set and held-out test set of Kinetics.
There are several noteworthy observations. First, the performance is far lower on Kinetics than on UCF-101, an indication of the different levels of difficulty of the two datasets. On the other hand, the performance on HMDB-51 is worse than on Kinetics - it seems to have a truly difficult test set, and it was designed to be difficult for appearance-centered methods, while having little training data. The parameter-rich 3D-ConvNet model is not pre-trained on ImageNet, unlike the other baselines. This translates into poor performance on all datasets but especially on UCF-101 and HMDB-51 - on Kinetics it is much closer to the performance of the other models, thanks to the much larger training set of Kinetics.
• Class difficulty. We include a full list of Kinetics classes sorted by classification accuracy under the two-stream model in figure 4. Eating classes are among the hardest, as they sometimes require distinguishing what is being eaten, such as hotdogs, chips and doughnuts - and these may appear small and already partially consumed in the video. Dancing classes are also hard, as well as classes centered on a specific body part, such as "massaging feet" or "shaking head".
• Class confusion. The top class confusions are provided in table 5. They mostly correspond to fine-grained distinctions that one would expect to be hard, for example 'long jump' versus 'triple jump', or confusing burgers with doughnuts. The confusion between 'swing dancing' and 'salsa dancing' raises the question of how accurate motion modeling is in the two-stream model, since 'swing dancing' is typically much faster-paced and has a peculiar style that makes it easy for humans to distinguish from salsa.
• Classes where motion matters most. We tried to analyze for which classes motion is more important and which ones were recognized correctly using just appearance information, by comparing the recognition accuracy ratios when using the flow and RGB streams of the two-stream model in isolation. We show the five classes where this ratio is largest and smallest in table 6.
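This per-class ratio analysis amounts to a one-line computation, sketched below with placeholder accuracy values that are not the paper's results.

```python
# Tiny sketch of the flow-vs-RGB analysis described above: per-class accuracy
# ratios between the flow-only and RGB-only streams, sorted to surface the
# classes where motion (high ratio) or appearance (low ratio) dominates.
# The numbers below are placeholders, not the paper's measurements.
flow_acc = {"air drumming": 0.60, "making a cake": 0.05, "sword fighting": 0.50}
rgb_acc = {"air drumming": 0.20, "making a cake": 0.50, "sword fighting": 0.25}

ratios = {cls: flow_acc[cls] / rgb_acc[cls] for cls in flow_acc}
for cls, ratio in sorted(ratios.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{cls}: {ratio:.1f}")
```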
Conclusion
We have described the Kinetics Human Action Video dataset, which has an order of magnitude more videos than previous datasets of its type. We have also discussed the procedures we employed collecting the data and for ensuring its quality. We have shown that the performance of standard existing models on this dataset is much lower than on UCF-101 and on par with HMDB-51, whilst allowing large models such as 3D ConvNets to be trained from scratch, unlike the existing human action datasets.
We have also carried out a preliminary analysis of dataset imbalance and whether this leads to bias in the classifiers trained on the dataset. We found little evidence that the resulting classifiers demonstrate bias along sensitive axes, such as across gender. This is however a complex area that deserves further attention. We leave a thorough analysis for future work, in collaboration with specialists from complementary areas, namely social scientists and critical humanists.
We will release trained baseline models (in TensorFlow), so that they can be used, for example, to generate features for new action classes.
Figure 1: Example classes from the Kinetics dataset. Best seen in colour and with zoom. Note that in some cases a single image is not enough for recognizing the action (e.g. "headbanging") or distinguishing classes ("dribbling basketball" vs "dunking basketball"). The dataset contains: Singular Person Actions (e.g. "robot dancing", "stretching leg"); Person-Person Actions (e.g. "shaking hands", "tickling"); Person-Object Actions (e.g. "riding a bike"); same verb different objects (e.g. "playing violin", "playing trumpet"); and same object different verbs (e.g. "dribbling basketball", "dunking basketball"). These are realistic (amateur) videos - there is often significant camera shake, for instance.

Figure 2: Labeling interface used in Mechanical Turk.

Figure 3: Video architectures used as baseline human action classifiers.

Figure 4: List of 20 easiest and 20 hardest Kinetics classes sorted by class accuracies obtained using the two-stream model.
Table 2: Kinetics Dataset Statistics. The number of clips for each class in the train/val/test partitions.

Table 3: Number of parameters and temporal input sizes of the models. ConvNet+LSTM and Two-Stream use ResNet-50.

Table 4: Baseline comparisons across datasets: (left) training and testing on split 1 of UCF-101; (middle) training and testing on split 1 of HMDB-51; (right) training and testing on Kinetics (showing top-1/top-5 performance). ConvNet+LSTM and Two-Stream use ResNet-50 ConvNet modules, pretrained on ImageNet for the UCF-101 and HMDB-51 experiments but not for the Kinetics experiments. Note that the Two-Stream architecture numbers on individual RGB and Flow streams can be interpreted as a simple baseline which applies a ConvNet independently on 25 uniformly sampled frames then averages the predictions.

Table 5: Top-12 class confusions in Kinetics, using the two-stream model.

Table 6: Classes with largest and smallest ratios of recognition accuracy when using flow and RGB. The highest ratios correspond to when flow does better, the smallest to when RGB does better. We also evaluated the ratios of rgb+flow to rgb accuracies and the ordering was quite similar.

Class                      Flow/RGB accuracy ratio
'rock scissors paper'      5.3
'sword fighting'           3.1
'robot dancing'            3.1
'air drumming'             2.8
'exercising arm'           2.5
'making a cake'            0.1
'cooking sausages'         0.1
'sniffing'                 0.1
'eating cake'              0.0
'making a sandwich'        0.0
1 https://www.media.mit.edu/posts/media-lab-student-recognized-for-fighting-bias-in-machine-learn
Acknowledgements: The collection of this dataset was funded by DeepMind. We are very grateful for help from Andreas Kirsch, John-Paul Holt, Danielle Breen, Jonathan Fildes, James Besley and Brian Carver. We are grateful for advice and comments from Tom Duerig, Juan Carlos Niebles, Simon Osindero, Chuck Rosenberg and Sean Legassick; we would also like to thank Sandra and Aditya for data clean up.

A. List of Kinetics Human Action Classes

This is the list of classes included in the human action video dataset. The number of clips for each action class is given by the number in brackets following each class name (excerpt): abseiling (1146), air drumming (1132), ..., paintball (1140), playing piano (691), playing poker (1134), playing recorder (1148), playing saxophone (916), playing squash or racquetball (980), playing tennis (1144), playing trombone (1149), playing trumpet (989), playing ukulele (1146), playing violin (1142), playing volleyball (804), playing xylophone (746), pole vault (984), presenting weather forecast (1050), pull ups (1121), pumping fist (1009), pumping gas (544), punching bag (1150), punching person (boxing) (483), push up (614), pushing car (1069), pushing cart (1150), pushing wheelchair (465), reading book (1148), ..., riding elephant (1104), riding mechanical bull (698), riding mountain bike (495), riding mule (476), riding or walking with horse (1131), riding scooter (674), ..., zumba (1093).

B. List of Parent-Child Groupings

These lists are not exclusive and are not intended to be comprehensive. Rather, they are a guide for related human action classes.

arts and crafts (12): arranging flowers, blowing glass, brush painting, carving pumpkin, clay pottery making, decorating the christmas tree, drawing, getting a tattoo, knitting, making jewelry, spray painting, weaving basket

athletics - jumping (
References

[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
[2] M. Andriluka, L. Pishchulin, P. Gehler, and B. Schiele. 2D human pose estimation: New benchmark and state of the art analysis. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on. IEEE, 2014.
[3] F. Caba Heilbron, V. Escorcia, B. Ghanem, and J. C. Niebles. ActivityNet: A large-scale video benchmark for human activity understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015.
[4] A. Caliskan, J. J. Bryson, and A. Narayanan. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183-186, 2017.
[5] J. Carreira and A. Zisserman. Quo vadis, action recognition? New models and the Kinetics dataset. In IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[6] T. Cooijmans, N. Ballas, C. Laurent, and A. Courville. Recurrent batch normalization. arXiv preprint arXiv:1603.09025, 2016.
[7] J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2625-2634, 2015.
[8] M. Everingham, S. A. Eslami, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The Pascal visual object classes challenge: A retrospective. International Journal of Computer Vision, 111(1):98-136, 2015.
[9] C. Feichtenhofer, A. Pinz, and A. Zisserman. Convolutional two-stream network fusion for video action recognition. In IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[10] G. Griffin, A. Holub, and P. Perona. Caltech-256 object category dataset. 2007.
[11] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Computer Vision and Pattern Recognition (CVPR), 2016 IEEE Conference on, 2016.
[12] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[13] S. Ji, W. Xu, M. Yang, and K. Yu. 3D convolutional neural networks for human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1):221-231, 2013.
[14] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1725-1732, 2014.
[15] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre. HMDB: a large video database for human motion recognition. In Proceedings of the International Conference on Computer Vision (ICCV), 2011.
[16] I. Laptev, M. Marszalek, C. Schmid, and B. Rozenfeld. Learning realistic human actions from movies. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pages 1-8. IEEE, 2008.
[17] J. C. Niebles, H. Wang, and L. Fei-Fei. Unsupervised learning of human action categories using spatial-temporal words. International Journal of Computer Vision, 79(3):299-318, 2008.
[18] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, S. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg, and F. Li. ImageNet large scale visual recognition challenge. IJCV, 2015.
[19] K. Simonyan and A. Zisserman. Two-stream convolutional networks for action recognition in videos. In Advances in Neural Information Processing Systems, pages 568-576, 2014.
[20] K. Soomro, A. R. Zamir, and M. Shah. UCF101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012.
[21] G. W. Taylor, R. Fergus, Y. LeCun, and C. Bregler. Convolutional learning of spatio-temporal features. In European Conference on Computer Vision, pages 140-153. Springer, 2010.
[22] A. Torralba and A. A. Efros. Unbiased look at dataset bias. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1521-1528. IEEE, 2011.
[23] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri. Learning spatiotemporal features with 3D convolutional networks. In 2015 IEEE International Conference on Computer Vision (ICCV), pages 4489-4497. IEEE, 2015.
[24] H. Wang and C. Schmid. Action recognition with improved trajectories. In International Conference on Computer Vision, 2013.
[25] X. Wang, A. Farhadi, and A. Gupta. Actions ~ transformations. In CVPR, 2016.
[26] J. Yue-Hei Ng, M. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici. Beyond short snippets: Deep networks for video classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4694-4702, 2015.
| []
|
[
"Shuffle algebras for quivers as quantum groups",
"Shuffle algebras for quivers as quantum groups"
]
| [
"Andrei Negut ",
"Francesco Sala ",
"Olivier Schiffmann "
]
| []
| []
| We define a quantum loop group U + Q associated to an arbitrary quiver Q = (I, E) and maximal set of deformation parameters, with generators indexed by I × Z and some explicit quadratic and cubic relations. We prove that U + Q is isomorphic to the (generic, small) shuffle algebra associated to the quiver Q and hence, by [Neg21a], to the localized K-theoretic Hall algebra of Q. For the quiver with one vertex and g loops, this yields a presentation of the spherical Hall algebra of a (generic) smooth projective curve of genus g (invoking the results of [SV12]). We extend the above results to the case of non-generic parameters satisfying a certain natural metric condition. As an application, we obtain a description by generators and relations of the subalgebra generated by absolutely cuspidal eigenforms of the Hall algebra of an arbitrary smooth projective curve (invoking the results of [KSV17]). | null | [
"https://export.arxiv.org/pdf/2111.00249v3.pdf"
]
| 240,354,440 | 2111.00249 | 7fbd648908d1bd03a2ba965579cf1fe6e57d9b75 |
Shuffle algebras for quivers as quantum groups
May 2023
Andrei Negut
Francesco Sala
Olivier Schiffmann
Shuffle algebras for quivers as quantum groups
8 May 2023
We define a quantum loop group U + Q associated to an arbitrary quiver Q = (I, E) and maximal set of deformation parameters, with generators indexed by I × Z and some explicit quadratic and cubic relations. We prove that U + Q is isomorphic to the (generic, small) shuffle algebra associated to the quiver Q and hence, by [Neg21a], to the localized K-theoretic Hall algebra of Q. For the quiver with one vertex and g loops, this yields a presentation of the spherical Hall algebra of a (generic) smooth projective curve of genus g (invoking the results of [SV12]). We extend the above results to the case of non-generic parameters satisfying a certain natural metric condition. As an application, we obtain a description by generators and relations of the subalgebra generated by absolutely cuspidal eigenforms of the Hall algebra of an arbitrary smooth projective curve (invoking the results of [KSV17]).
1. Introduction

1.1. Let Q be a finite quiver, with vertex set I and edge set E; edge loops and multiple edges are allowed. The Hall algebra of the category of representations of Q over a finite field is well-known to contain a copy of the quantized enveloping algebra $U_q^+(\mathfrak{g}_Q)$, where $\mathfrak{g}_Q$ is the Kac-Moody Lie algebra associated to Q (or the Bozec-Kac-Moody Lie algebra when Q has edge loops). Cohomological Hall algebras associated to Q for a Borel-Moore homology theory (including K-theory) were more recently introduced, in relation to Donaldson-Thomas theory on the one hand and Nakajima quiver varieties on the other hand (see [KS11, SV13, YZ18]). More precisely, the K-theoretic Hall algebra of Q is the vector space:
$$K_Q := \bigoplus_{n \in \mathbb{N}^I} K_{\mathsf{T}}(\mathrm{Rep}_{n}\,\Pi_Q)$$
where Π Q is the preprojective algebra of Q, and Rep n Π Q is the stack of complex ndimensional representations of Π Q . The vector space K Q is equipped with a natural Hall multiplication making it into an associative algebra 1 . Here T is a torus acting in a Hamiltonian way on Rep n Π Q by appropriately rescaling the maps attached to the arrows e ∈ E. The algebra K Q acts on the T-equivariant K-theory groups of Nakajima quiver varieties (see [Neg21b] for a review in our language):
$$\mathcal{N}_w = \bigsqcup_{v \in \mathbb{N}^I} \mathcal{N}_{v,w}$$
and is in fact the largest algebra thus acting via Hecke correspondences. When Q is a finite type quiver, K Q is isomorphic to the positive half of the quantum loop algebra (in Drinfeld's sense) of g Q . The situation of an affine quiver, in which case K Q is isomorphic to a quantum toroidal algebra, is studied in detail for the Jordan quiver in [SV13], for cyclic quivers in [Neg15] and for arbitrary affine quivers in [VV20]. More generally, for a quiver without edge loops and a specific one-dimensional torus T, there is an algebra homomorphism:
$$(1.1)\qquad U_q^+(L\mathfrak{g}_Q) \longrightarrow K_Q$$
which recovers Nakajima's construction of representations of quantum affinizations of Kac-Moody algebras on the equivariant K-theory of quiver varieties. The map (1.1) is surjective (under some mild conditions on the torus action on Rep n Π Q ), but it is not known to be injective in general (see [VV20]). Beyond these cases, however, very little is known. Moreover, even though Rep n Π Q is equivariantly formal for any T (and hence K T (Rep n Π Q ) is free as a K T (pt)-module), the structure of K Q as an algebra depends in a rather subtle way on T. Note that there is a natural gauge action of the group (C * ) I on T, but as soon as Q contains edge loops or multiple edges, the quotient of T by this gauge group is nontrivial.
1.2. In the present paper, we consider the case when the torus T = (C * ) |E| × C * is as large as possible (each of the first |E| copies of C * scale anti-diagonally the two coordinates of Π Q corresponding to a given edge, while the last copy of C * scales diagonally one half of the coordinates of Π Q ) and we work over the fraction field:
$$(1.2)\qquad \mathbb{F} = \mathrm{Frac}(K_{\mathsf{T}}(\mathrm{pt})) = \mathbb{Q}(q, t_e)_{e \in E}$$
Our main result provides an explicit description of:
$$K_{Q,\mathrm{loc}} := K_Q \underset{K_{\mathbb{T}}(\mathrm{pt})}{\otimes} \mathbb{F}$$
by generators and relations, which we will now summarize. Let $\overline{E}$ be the "double" of the edge set E, i.e. there are two edges $e = \overrightarrow{ij}$ and $e^* = \overrightarrow{ji}$ in $\overline{E}$ for every edge $e = \overrightarrow{ij} \in E$. The set $\overline{E}$ is equipped with a canonical involution $e \leftrightarrow e^*$. We extend the notation $t_e$ to an arbitrary $e \in \overline{E}$ by the formula:
(1.3)
$$t_{e^*} = \frac{q}{t_e}$$
for any $e \in E$. For any $i, j \in I$, consider the rational function 2:
(1.4)
$$\zeta_{ij}(x) = \Big(\frac{1-xq^{-1}}{1-x}\Big)^{\delta_j^i}\prod_{e=\overrightarrow{ij}\in E}\Big(\frac{1}{t_e}-x\Big)\prod_{e=\overrightarrow{ji}\in E}\Big(1-\frac{t_e}{qx}\Big)$$
and set:
$$\bar\zeta_{ij}(x) = \zeta_{ij}(x)\cdot(1-x)^{\delta_j^i}$$
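For instance, for the Jordan quiver (one vertex $i$ and a single loop $e$), each loop contributes to both products in (1.4), and the formulas above specialize to the following rational functions (a minimal worked example of the definition):
$$\zeta_{ii}(x) = \frac{1-xq^{-1}}{1-x}\Big(\frac{1}{t_e}-x\Big)\Big(1-\frac{t_e}{qx}\Big), \qquad \bar\zeta_{ii}(x) = \big(1-xq^{-1}\big)\Big(\frac{1}{t_e}-x\Big)\Big(1-\frac{t_e}{qx}\Big)$$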
Let $U_Q^+$ be the $\mathbb{F}$-algebra generated by elements $e_{i,d}$ for $i \in I$, $d \in \mathbb{Z}$, subject to the following set of quadratic and cubic relations, in which we set $e_i(z) = \sum_{d\in\mathbb{Z}} e_{i,d}\,z^{-d}$:

• For any pair $(i,j) \in I^2$, the quadratic relation:
(1.5)
$$e_i(z)e_j(w)\,\bar\zeta_{ji}\Big(\frac{w}{z}\Big)z^{\delta_j^i} = e_j(w)e_i(z)\,\bar\zeta_{ij}\Big(\frac{z}{w}\Big)(-w)^{\delta_j^i}$$

• For any edge $E \ni e = \overrightarrow{ij}$, the cubic relation:
(1.6)
$$\frac{\bar\zeta_{ii}\big(\frac{x_2}{x_1}\big)\bar\zeta_{ji}\big(\frac{y}{x_1}\big)\bar\zeta_{ji}\big(\frac{y}{x_2}\big)}{\big(1-\frac{x_2}{x_1q}\big)\big(1-\frac{yq}{x_2t_e}\big)}\,e_i(x_1)e_i(x_2)e_j(y) + \frac{\bar\zeta_{ii}\big(\frac{x_1}{x_2}\big)\bar\zeta_{ji}\big(\frac{y}{x_2}\big)\bar\zeta_{ij}\big(\frac{x_1}{y}\big)\big(-\frac{x_2t_e}{y}\big)\big(-\frac{y}{x_1}\big)^{\delta_j^i}}{\big(1-\frac{yq}{x_2t_e}\big)\big(1-\frac{x_1t_e}{y}\big)}\,e_i(x_2)e_j(y)e_i(x_1) + \frac{\bar\zeta_{ii}\big(\frac{x_2}{x_1}\big)\bar\zeta_{ij}\big(\frac{x_1}{y}\big)\bar\zeta_{ij}\big(\frac{x_2}{y}\big)\frac{x_2t_e}{yq}\big(\frac{y^2}{x_1x_2}\big)^{\delta_j^i}}{\big(1-\frac{x_2}{x_1q}\big)\big(1-\frac{x_1t_e}{y}\big)}\,e_j(y)e_i(x_1)e_i(x_2) = 0$$

Theorem 1.3. There is an algebra isomorphism $K_{Q,\mathrm{loc}} \simeq U_Q^+$.

When Q is a tree, the quotient of $(\mathbb{C}^*)^{|E|}\times\mathbb{C}^*$ by the action of the gauge group $(\mathbb{C}^*)^{|I|}$ is one-dimensional, hence up to renormalization $K_Q$ can be defined as $\mathbb{C}^*$-equivariant K-theory (in other words, we do not lose any information by assuming that $t_e = q^{\frac12}$ for all $e \in E$). In addition, one can check that in the case of an $A_2$ quiver, the cubic relations (1.6) are equivalent to the standard q-Serre relations (see Example 3.9). With this in mind, Theorem 1.3 implies:
Theorem 1.4. Suppose that Q is a tree, and that T scales the symplectic form on Rep n Π Q nontrivially. Then the localization:
$$U_q^+(L\mathfrak{g}_Q)\underset{K_{\mathbb{T}}(\mathrm{pt})}{\otimes}\mathbb{F} \longrightarrow K_{Q,\mathrm{loc}}$$
of the map (1.1) is an algebra isomorphism.
2 Note that this is actually the rational function denoted by ζ ′ ij in [Neg21a].
Cohomological Hall algebras of quivers are known (at least in the case of Borel-Moore homology and K-theory) to embed in a suitable big shuffle algebra V Q , whose multiplication encodes the structure of Q (see [SV20,VV20,YZ18]). In the K-theoretic case and for maximal T, recent work ([Neg21a, Zha19]) identified the image of this embedding as the small shuffle algebra S Q ⊂ V Q determined by the so-called 3-variable wheel conditions. Theorem 1.3 is thus a direct corollary of the following theorem, which is the main result of the present paper.
Theorem 1.5. There is an algebra isomorphism $U_Q^+ \cong S_Q$.
For a general Q and a general choice of T (which satisfies some mild conditions), there is a chain of algebra homomorphisms:
$$U_Q^+ \longrightarrow K_{Q,\mathrm{loc}} \longrightarrow S_Q$$
The content of Theorem 1.5 is that these maps are all isomorphisms for $\mathbb{T}$ maximal.
Our main tool to prove Theorem 1.5 is the combinatorics of words developed in [Neg21a] (which was in turn influenced by [NT21], and further back, by the seminal work of [LR95,Lec04,Ros02]). It would be interesting to extend the above result to the case of a smaller torus T; this would necessitate some more complicated wheel conditions, and result in higher degree relations in U + Q (see the last section of [Neg21a]). For instance, when two vertices are joined by more than one edge, it is customary to set the corresponding weights of the torus action to be in a geometric progression, as this yields the q-Serre relations (which are of degree two more than the number of edges, so more complicated than cubic in the case of multiple edges).
1.6. Let us mention an important application of Theorem 1.5. When Q is the quiver $S_g$ with one vertex and g loops, it is known by combining [SV12] and [Neg21a] that the spherical subalgebra of the Hall algebra $H_X^{\mathrm{sph}}$ of the category of coherent sheaves on a genus g curve X defined over the finite field k of $q^{-1}$ elements is isomorphic to $K_Q$ (extended by a commutative Cartan subalgebra). To make the previous statement precise, the equivariant parameters $t_1,\ldots,t_g, q/t_1,\ldots,q/t_g$ must be set equal to the inverses of the Weil numbers $\sigma_1,\ldots,\sigma_g,\bar\sigma_1,\ldots,\bar\sigma_g$ of X. Thus, the rational function (1.4) for the quiver $Q = S_g$ corresponds to the renormalized zeta function of the curve X:
$$\zeta_X(x) = \frac{1-xq^{-1}}{1-x}\prod_{e=1}^{g}(\sigma_e - x)\big(1 - \bar\sigma_e x^{-1}\big)$$
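For g = 0 the product above is empty and the renormalized zeta function reduces to the first factor of (1.4):
$$\zeta_X(x)\Big|_{g=0} = \frac{1-xq^{-1}}{1-x},$$
consistent with the fact, recalled below, that the g = 0 case of Theorem 1.7 recovers the defining relations of $U_q^+(L\mathfrak{sl}_2)$.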
For any e = 1, . . . , g, set:
$$Q_e(z_1,z_2,z_3) = \prod_{1\le i<j\le 3}\ \prod_{f\ne e}\Big(\sigma_f - \frac{z_j}{z_i}\Big)\Big(1 - \bar\sigma_f\,\frac{z_i}{z_j}\Big)$$
Then we have the following result (see Section 6 for details).
Theorem 1.7. When the curve X has distinct Weil numbers (we will refer to this as the "generic" case), its spherical Hall algebra H sph X is generated by elements κ ±1 , θ 0,l , 1 vec d for l ≥ 1, d ∈ Z, subject to the following set of relations:
(1.7)
$$H^+(z)H^+(w) = H^+(w)H^+(z)$$
(1.8)
$$E(z)H^+(w) = H^+(w)E(z)\,\frac{\zeta_X\big(\frac{z}{w}\big)}{\zeta_X\big(\frac{w}{z}\big)} \qquad (\text{for } |w|\gg|z|)$$
(1.9)
$$E(z)E(w)\,\zeta_X\Big(\frac{w}{z}\Big) = E(w)E(z)\,\zeta_X\Big(\frac{z}{w}\Big)$$
and for all e = 1, . . . , g and m ∈ Z the relation:
(1.10)
$$\Big[(xyz)^m\,(x+z)(xz-y^2)\,Q_e(x,y,z)\,E(x)E(y)E(z)\Big]_{\mathrm{ct}} = 0$$
(see (3.4) for the notation [. . . ] ct ). In the formulas above, we set:
$$E(z) = \sum_{d\in\mathbb{Z}} 1^{\mathrm{vec}}_d\, z^{-d}, \qquad H^+(z) = \kappa\Big(1 + \sum_{l\ge1}\theta_{0,l}\,z^{-l}\Big)$$
For g = 0, one recovers the defining relations for U + q (Lsl 2 ) (see [Kap97]), while for g = 1 one gets the relations describing the elliptic Hall algebra of [BS12]. Note that there is a slight discrepancy between (1.6) and (1.10); this comes from the fact that we added in the Cartan loop generators {θ 0,l } l∈N , see Section 6. From the point of view of function field automorphic forms, Theorem 1.7 says that in the case of the function field of a generic curve, Eisenstein series for the group GL(n) which are induced from the trivial character of the torus satisfy, in addition to the celebrated functional equation (which is equivalent to (1.9)), g families of cubic relations and no higher degree relations.
One might wonder what happens for a non-generic curve, or for the entire Hall algebra H X rather than the spherical subalgebra H sph X . In Section 7, we answer both of these questions using a version of Theorem 1.5 that holds for equivariant parameters t e which satisfy a certain metric condition (Assumption Ъ introduced in [Neg21a]). As it turns out, this metric condition applies to the inverse Weil numbers of X due to the (generalized) Riemann hypothesis and the functional equation for the zeta function (and more generally for the Rankin-Selberg L-functions attached to a pair of absolutely cuspidal eigenforms); this also allows us to give, using [KSV17], a complete presentation for an arbitrary curve of the subalgebra H abs X generated by the coefficients of all absolutely cuspidal eigenforms (Corollary 7.8). As we show, the structure of this algebra only depends on the various orders of vanishing of the Rankin-Selberg L-functions attached to pairs of absolutely cuspidal eigenforms.
1.8. The plan of the present paper is the following. From now on, we will fix the quiver Q and write simply U + instead of U + Q .
• In Section 2, we consider the F-algebra U + generated by elements {e i,d } i∈I,d∈Z modulo the quadratic relations (1.5) and check that it is naturally dual to the so-called big shuffle algebra V, see Proposition 2.9:
(1.11)
$$U^+ \otimes V^{\mathrm{op}} \xrightarrow{\ \langle\cdot,\cdot\rangle\ } \mathbb{F}$$
• In Section 3, we recall the shuffle algebra S ⊂ V of [Neg21a], which is cut out by the 3-variable wheel conditions (3.1). We show that these wheel conditions arise by pairing with certain cubic elements that will be defined in Proposition 3.5:
(1.12)
$$\big\{A^{(e)}_d\big\}_{e\in E,\ d\in\mathbb{Z}} \in U^+$$
This allows us to prove that (1.11) descends to a non-degenerate pairing:
(1.13)
$$U^+\otimes S^{\mathrm{op}} \xrightarrow{\ \langle\cdot,\cdot\rangle\ } \mathbb{F}$$
where U + is the quotient of U + by the ideal generated by the elements (1.12).
Comparing (1.13) with the pairing:
$$S\otimes S^{\mathrm{op}} \xrightarrow{\ \langle\cdot,\cdot\rangle\ } \mathbb{F}$$
that was studied in [Neg21a] allows us to conclude that U + ∼ = S, thus establishing Theorem 1.5.
• In Section 4, we provide a definition of a natural Drinfeld-type double U of U + .
• In Section 5, we consider specializations of the shuffle algebra S when the equivariant parameters satisfy Assumption Ъ of Definition 5.2. We extend the main results (i.e. the presentation by generators and relations) in this context; this involves some new families of (still cubic) relations.
• In Section 6, we recall the basic notions concerning the spherical Hall algebra H sph X of a generic genus g smooth projective curve X/k and prove Theorem 1.7. • Finally, in Section 7, we use Section 5 to extend the results of Section 6 to an arbitrary curve X, and to the (much) larger subalgebra H abs X ⊂ H X (corresponding by Langlands duality to all geometrically irreducible local systems).
2. The big shuffle algebra and the quadratic quantum loop group

2.1. Let us introduce the big shuffle algebra V and the quadratic quantum loop group $U^+$ associated to a quiver. We will define a perfect pairing between them, as well as a homomorphism $\Upsilon: U^+ \to V$.
Remark 2.2. The reader might keep in mind the following analogy: U + is like a Verma module, V is like the corresponding dual Verma module, and Υ is like the canonical map between them. As we will see in Section 3, the image of Υ will be the quantum loop group U + that we are interested in, just like the image of the map from a Verma to the dual Verma is the irreducible highest weight representation.
Alternatively, the more geometry-oriented reader may also think about the inclusion of an affine open subset j : O ֒→ X of an algebraic variety X together with a local system L on O: U + is like j ! L, V is like j * L, and U + is the intermediate extension j ! * L, which is the image of the natural morphism j ! L → j * L.
Throughout the present paper, we fix a finite quiver Q with vertex set I and edge set E. The notation e = − → ij will mean "e is an arrow going from i to j". Let:
(2.1)
$$\#_{\overrightarrow{ij}} = \big|\{\text{arrows } \overrightarrow{ij}\}\big|\,, \qquad \#_{ij} = \#_{\overrightarrow{ij}} + \#_{\overrightarrow{ji}}$$
We identify Z I with i∈I Zς i , where ς i is the vector with a single 1 at the i-th spot, and zeroes everywhere else. In terms of quiver representations, ς i is the dimension vector of the simple quiver representation S i supported at i. The set of natural numbers N will be considered to include 0.
2.3. We begin by recalling the big shuffle algebra, defined over the field (1.2):
Definition 2.4. The big shuffle algebra is the vector space:
(2.2)
$$V = \bigoplus_{\mathbf{n}\in\mathbb{N}^I}\mathbb{F}\Big[\ldots,z_{i1}^{\pm1},\ldots,z_{in_i}^{\pm1},\ldots\Big]^{\mathrm{sym}}$$
endowed with the associative product:
(2.3)
$$R(\ldots,z_{i1},\ldots,z_{in_i},\ldots) * R'(\ldots,z_{i1},\ldots,z_{in'_i},\ldots) =$$
$$= \mathrm{Sym}\left[\frac{R(\ldots,z_{i1},\ldots,z_{in_i},\ldots)\,R'(\ldots,z_{i,n_i+1},\ldots,z_{i,n_i+n'_i},\ldots)}{\prod_{i\in I}n_i!\,\prod_{i\in I}n'_i!}\prod_{i,j\in I}\ \prod_{\substack{1\le a\le n_i\\ n_j<b\le n_j+n'_j}}\zeta_{ij}\Big(\frac{z_{ia}}{z_{jb}}\Big)\right]$$
and the unit being the function 1 in zero variables (ζ ij (x) is defined in (1.4)).
In Definition 2.4, "sym" refers to the set of Laurent polynomials which are symmetric with respect to the variables z i1 , . . . , z ini for each i ∈ I separately, and "Sym" denotes symmetrization with respect to the variables z i1 , . . . , z i,ni+n ′ i for each i ∈ I separately. Note that even though the right-hand side of (2.3) seemingly has simple poles at z ia − z ib for all i ∈ I and all a < b, these poles vanish when taking the Sym, as the orders of such poles in a symmetric rational function must be even.
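In the lowest horizontal degrees the product (2.3) can be written out completely explicitly; for instance, for Laurent polynomials in a single variable one obtains (a small worked example of Definition 2.4):
$$z_{i1}^a * z_{j1}^b = z_{i1}^a\,z_{j1}^b\,\zeta_{ij}\Big(\frac{z_{i1}}{z_{j1}}\Big) \quad (i\ne j), \qquad z_{i1}^a * z_{i1}^b = z_{i1}^a\,z_{i2}^b\,\zeta_{ii}\Big(\frac{z_{i1}}{z_{i2}}\Big) + z_{i2}^a\,z_{i1}^b\,\zeta_{ii}\Big(\frac{z_{i2}}{z_{i1}}\Big)$$
In the second formula the apparent simple poles at $z_{i1}=z_{i2}$ cancel between the two terms, illustrating the remark above.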
The big shuffle algebra is graded by N I × Z, where N I keeps track of the number of variables {n i } i∈I of a Laurent polynomial R, while Z keeps track of the homogeneous degree of R. Under these circumstances, we will write: deg R = (n, d) and refer to:
(2.4) hdeg R = n and vdeg R = d as the horizontal and vertical degree of R, respectively. The graded pieces of the algebra V will be denoted by V n (when grading by horizontal degree only) and by V n,d (when grading by both horizontal and vertical degree).
2.5. Let us now introduce the quadratic quantum loop group associated to Q.
Definition 2.6. The quadratic quantum loop group U + is the F-algebra generated by symbols:
{e i,d } i∈I,d∈Z
modulo the quadratic relations:
(2.5)
$$e_i(z)e_j(w)\,\zeta_{ji}\Big(\frac{w}{z}\Big) = e_j(w)e_i(z)\,\zeta_{ij}\Big(\frac{z}{w}\Big)$$
for all $i, j \in I$, where $e_i(z) = \sum_{d\in\mathbb{Z}} e_{i,d}\,z^{-d}$.
The meaning of relation (2.5) is that one cancels denominators (which yields the equivalent relation (1.5)) and then equates the coefficients of every {z k w l } k,l∈Z in the left and right-hand sides, thus yielding relations between the generators e i,d :
(2.6)
$$e_{i,a}e_{j,b} + \sum_{\bullet=-\#_{ij}-\delta_j^i}^{-1}\mathrm{coeff}\cdot e_{i,a+\bullet}\,e_{j,b-\bullet} = \gamma\cdot e_{j,b+\#_{ij}+\delta_j^i}\,e_{i,a-\#_{ij}-\delta_j^i} + \sum_{\bullet=0}^{\#_{ij}+\delta_j^i-1}\mathrm{coeff}\cdot e_{j,b+\bullet}\,e_{i,a-\bullet}$$
where "coeff" denotes various coefficients in F, arising from the numerator of the functions ζ ij that appear in (2.5); observe that the constant γ in the right-hand side of (2.6) is different from 1 if i = j, which will be important in Proposition 2.16. Just like V, the algebra U + is also N I × Z-graded, with:
deg e i,d = (ς i , d).
Therefore, we will use the terms "horizontal degree" and "vertical degree" pertaining to U + as well, in accordance with (2.4).
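For example, when i = j and Q has no edge loops at i (so that $\zeta_{ii}(x) = \frac{1-xq^{-1}}{1-x}$), clearing denominators in (2.5) yields the relation
$$e_i(z)e_i(w)\big(z-wq^{-1}\big) = e_i(w)e_i(z)\big(zq^{-1}-w\big),$$
whose coefficient of $z^{-a}w^{-b}$ is the instance $e_{i,a+1}e_{i,b} - q^{-1}e_{i,a}e_{i,b+1} = q^{-1}e_{i,b}e_{i,a+1} - e_{i,b+1}e_{i,a}$ of (2.6).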
Proposition 2.7. The assignment e i,d → z d i1 , for i ∈ I, d ∈ Z induces an algebra homomorphism:
(2.7) Υ : U + −→ V.
The proof of Proposition 2.7 is a straightforward computation (indeed, all one needs to show is that relations (2.5) hold in the big shuffle algebra) which we leave as an exercise to the interested reader. We remark that (2.7) is neither injective, nor surjective: its image is the small shuffle algebra, and its kernel is generated by cubic relations, as we will show in Section 3.
2.8. As one of our main tools, we will next define a pairing between U + and V. Let Dz = dz 2πiz . Whenever we write |z1|≫···≫|zn| we are referring to a contour integral taken over concentric circles around the origin in the complex plane (i.e. an iterated residue at 0, or ∞). The following result is close to [Neg21a, Proposition 3.3].
Proposition 2.9. There is a well-defined pairing:
(2.8) U + ⊗ V op ·,· −−→ F
given for all R ∈ V n and all i 1 , . . . , i n ∈ I, d 1 , . . . , d n ∈ Z by:
(2.9)
$$\big\langle e_{i_1,d_1}\cdots e_{i_n,d_n},\, R\big\rangle = \oint_{|z_1|\gg\cdots\gg|z_n|}\frac{z_1^{d_1}\cdots z_n^{d_n}\, R(z_1,\ldots,z_n)}{\prod_{1\le a<b\le n}\zeta_{i_bi_a}\big(\frac{z_b}{z_a}\big)}\ \prod_{a=1}^{n}Dz_a$$
if $\varsigma_{i_1}+\cdots+\varsigma_{i_n} = \mathbf{n}$, and 0 otherwise. Implicit in the notation (2.9) is that the symbol $z_a$ is plugged into one of the variables $z_{i_a\bullet_a}$ of R, for all $a \in \{1,\ldots,n\}$ 3.
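In the simplest case n = 1 there are no ζ factors, and (2.9) reduces to
$$\langle e_{i,d}, R\rangle = \oint z^d\,R(z)\,Dz = \text{coefficient of } z^{-d} \text{ in } R(z), \qquad R\in V_{\varsigma_i},$$
so in horizontal degree $\varsigma_i$ the bases $\{e_{i,d}\}_{d\in\mathbb{Z}}$ and $\{z_{i1}^{-d}\}_{d\in\mathbb{Z}}$ are dual to each other.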
Remark 2.10. While we have presented (2.8) as an F-linear pairing of vector spaces, note that it naturally extends to a bialgebra pairing once one enlarges the algebras involved according to the standard procedure for quantum loop groups (which we recall in Section 4). The notation V op in (2.8) is meant to underscore this fact.
Proof. To check that (2.8) is a well-defined pairing, we need to make sure that any linear relations between the e i,d 's yield linear relations between the right-hand sides of (2.9), for any R ∈ V op . Since the defining relations in the algebra U + are quadratic, the computation reduces to the case n = 2 (indeed, if one were to multiply all quadratic expressions below on the left and on the right by various products of e i,d 's, the following argument would still carry through). In this case, we may rewrite (2.9) in terms of the generating series e i (x) and e j (y) as follows:
e i (x)e j (y), R = |z1|≫|z2| δ z1 x δ z2 y R(z 1 , z 2 ) ζ ji z2 z1
Dz 1 Dz 2
where the δ function is δ(x) = d∈Z x d . Therefore, we also have:
e i (x)e j (y) ζ ji y x x δ i j , R = |z1|≫|z2| δ z1 x δ z2 y ζ ji y x x δ i j R(z 1 , z 2 ) ζ ji z2 z1 Dz 1 Dz 2 = = |z1|≫|z2| δ z1 x δ z2 y ζ ji z2 z1 z δ i j 1 R(x, y) ζ ji z2 z1 Dz 1 Dz 2 = R(x, y)(x − y) δ i j (2.10)
(the equalities above are due to the well-known property:
δ z x P (z) = δ z x P (x)
for any Laurent polynomial P ). Analogously, we have:
(2.11) e j (y)e i (x) ζ ij x y (−y) δ i j , R = R(x, y)(x − y) δ i j
so the two pairings (2.10) and (2.11) are equal, as we needed to show.
3 The choice of •a ∈ {1, . . . , n ia }, ∀a ∈ {1, . . . , n} is immaterial due to the symmetry of R, as long as we ensure that •a = • b for all a = b such that ia = i b .
2.11. We will now provide a basis of U + , following [Neg21a] (which was inspired by ideas of [LR95,Lec04,Ros02] and [NT21]). Consider the set of letters:
i (d) ∀i ∈ I, d ∈ Z
As usual, a word is any sequence of letters:
i (d1) 1 . . . i (dn) n ∀i 1 , . . . , i n ∈ I, d 1 , . . . , d n ∈ Z. If w = i (d1) 1 . . . i (dn) n
is a word we call n its length, define its degree as:
deg w = (ς i1 + · · · + ς in , d 1 + · · · + d n ) ∈ N I × Z
and set:
e w = e i1,d1 . . . e in,dn ∈ U +
We write W for the set of all words. By definition, U + is linearly spanned by the collection of elements e w for w ∈ W, but there are linear relations among them. We will now point out a subset of words, such that the corresponding elements yield a linear basis of U + . For this, we introduce a total order on the set of words. We begin by fixing a total order on the set of vertices I of Q.
Definition 2.12 ([NT21]).
We define a total order on the set of letters as follows:
i (d) < j (e) if d > e or d = e and i < j
We extend this to the total lexicographic order on words by:
i (d1) 1 . . . i (dn) n < j (e1) 1 . . . j (em) m if i (d1) 1 = j (e1) 1 , . . . , i (d k ) k = j (e k ) k and either i (d k+1 ) k+1 < j (e k+1 ) k+1
or k = n < m.
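For example, if $I = \{1, 2\}$ with $1 < 2$, then among letters one has
$$1^{(3)} < 2^{(3)} < 1^{(2)} < 2^{(2)},$$
and consequently $[1^{(3)}1^{(2)}] < [1^{(3)}2^{(2)}]$, while $[1^{(3)}] < [1^{(3)}1^{(0)}]$ (a word is smaller than any word which extends it).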
Definition 2.13 ([Neg21a]). A word w = i (d1) 1 . . . i (dn) n
is called non-increasing if:
(2.12)
$$d_a < d_b + \sum_{a\le s<b}\#_{i_si_b} \qquad\text{or}\qquad \Big(d_a = d_b + \sum_{a\le s<b}\#_{i_si_b}\ \text{ and }\ i_a \ge i_b\Big)$$
for all $1\le a<b\le n$ (see (2.1) for the definition of $\#_{ij}$).
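For example, for the quiver with one vertex and g loops one has $\#_{11} = 2g$, so a length two word is non-increasing precisely when its exponents do not decrease too fast:
$$[1^{(d_1)}1^{(d_2)}]\in W^{\le} \iff d_1 \le d_2 + 2g.$$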
2.14. Let W ≤ denote the set of non-increasing words. The motivation for introducing them is twofold, and is embodied by Lemma 2.15 and Proposition 2.16.
Lemma 2.15 ([Neg21a]). There are finitely many non-increasing words of given degree, which are bounded above by any given word v.
Proof. Let us assume we are counting non-increasing words [i
(d1) 1 . . . i (dn) n
] with d 1 + · · · + d n = d for fixed n and d. The fact that such words are bounded above implies that d 1 is bounded below. But then the inequality (2.12) implies that d 2 , . . . , d n are also bounded below. The fact that d 1 + · · · + d n is fixed implies that there can only be finitely many choices for the exponents d 1 , . . . , d n . Since there are also finitely many choices for i 1 , . . . , i n ∈ I, this concludes the proof.
Proposition 2.16 ([Neg21a]). The set {e w } w∈W ≤ is a linear basis of U + . Proof (sketch).
A similar result appears in [Neg21a] for a quotient of U + . As the proof given in loc. cit. only uses relations (2.5), we may apply it to our situation. More precisely, loc. cit. shows how to iterate formula (2.6) in order to obtain that for any word v, the element e v belongs to the linear span of elements e w with w ∈ W ≤ satisfying:
(2.13)
$$w \ge v, \qquad \min(\bar v) - \beta(n) \le \min(\bar w) \le \max(\bar w) \le \max(\bar v) + \beta(n)$$
where $\bar v = (d_1,\ldots,d_n)$ denotes the sequence of exponents of v and $\beta(n)$ is a universal constant. This implies that:
U + = span{e w } w∈W ≤
Let us briefly recall how [Neg21a] showed that the elements {e w } w∈W ≤ are linearly independent, as this will be useful later. Consider any ordered monomial:
(2.14)
$$\mu = z_{i_1\bullet_1}^{-k_1}\cdots z_{i_n\bullet_n}^{-k_n}$$
where we assume that $\bullet_a \ne \bullet_b$ if $a\ne b$ and $i_a = i_b$.
The associated word of µ is:
(2.15) w µ = i (d1) 1 . . . i (dn) n
where:
(2.16)
$$d_a = k_a + \sum_{t>a}\#_{\overrightarrow{i_ai_t}} - \sum_{s<a}\#_{\overrightarrow{i_si_a}}$$
The lexicographically largest of the associated words of various orderings of a given monomial µ will be called the leading word of µ. It was shown in [Neg21a, Lemma 4.8] that the leading word is the only one among all associated words which is nonincreasing in the sense of (2.12). More generally, the leading word of any non-zero R ∈ V, denoted by lead(R), will be the lexicographically largest of the leading words (2.15) for all the monomials which appear in R with non-zero coefficient. Conversely, any w ∈ W ≤ appears as the leading word:
(2.17) w = lead(Sym µ)
of the monomial µ as in (2.14), chosen such that formula (2.15) holds.
Analogously to [Neg21a, Formula (4.18)], one can show by direct inspection that:
(2.18)
$$\langle e_w, R\rangle\ \ \begin{cases}\ne 0 & \text{if } w = \mathrm{lead}(R)\\ = 0 & \text{if } w > \mathrm{lead}(R)\end{cases}$$
The formula above immediately shows the linear independence of the elements e w , as w runs over non-increasing words. Indeed, if one were able to write such an element e w as a linear combination of elements e v involving only strictly larger non-increasing words v then this would contradict (2.18) for R = Sym µ with µ as in (2.17).
Corollary 2.17. The pairing U + ⊗ V op ·,· −−→ F defined in (2.9) is non-degenerate.
Proof. For non-degeneracy in the second factor, we need to show that any R ∈ V op which pairs trivially with the whole of U + actually vanishes; this is just the obvious fact that if the power series expansion of the rational function:
R(z 1 , . . . , z n ) 1≤a<b≤n ζ i b ia z b za
(in the domain corresponding to an arbitrary order of the variables z 1 , . . . , z n and an arbitrary choice of the indices i 1 , . . . , i n ∈ I) vanishes, then R = 0.
Let us now consider some non-zero element:
$$\phi = \sum_{w\in W^{\le}} a_w\cdot e_w \ \in\ U^+$$
Let w be the smallest word such that $a_w \ne 0$, and choose a monomial $\mu$ whose leading word is w (see (2.14), (2.15) and (2.17)). Then by (2.18), $\langle\phi, \mathrm{Sym}\,\mu\rangle \ne 0$. This gives the non-degeneracy of the pairing in the first factor.
2.18. For future use, we spell out a "finite support" variant of Corollary 2.17. Let T ⊂ W ≤ be a finite set of non-increasing words. We set:
$$U^{+,T} = \bigoplus_{w\in T}\mathbb{F}\cdot e_w \ \subset\ U^+$$
and let V T ⊂ V denote the set of symmetric Laurent polynomials spanned by monomials having the property that their leading word (2.15) lies in T . Then we claim that the restriction of the pairing (2.8) to:
(2.19)
$$U^{+,T}\otimes V_T^{\mathrm{op}} \xrightarrow{\ \langle\cdot,\cdot\rangle\ }\mathbb{F}$$
is non-degenerate. Indeed, the two vector spaces above manifestly have the same finite dimension (due to the uniqueness of the leading word (2.15) associated to any given monomial), so it suffices to show that (2.19) is non-degenerate in the second argument. This follows from the fact that the leading word w of any $0 \ne R \in V_T$ lies in T, and (2.18) implies that $\langle e_w, R\rangle \ne 0$.
3. The small shuffle algebra and the quantum loop group

3.1. We will now define a certain subalgebra of V, determined by the so-called wheel conditions. These first arose in the context of elliptic quantum groups in [FO01], and the version herein is inspired by the particular wheel conditions of [FHH+09] (which correspond to the case when Q is the Jordan quiver).
Definition 3.2 ([Neg21a]). The small shuffle algebra is the subspace $S \subset V$ consisting of Laurent polynomials $R(\ldots,z_{i1},\ldots,z_{in_i},\ldots)$ that satisfy the wheel conditions:
(3.1)
$$R\Big|_{z_{ia}=qz_{ic},\ z_{jb}=t_ez_{ic}} = 0$$
for any edge $E\ni e=\overrightarrow{ij}$ and all $a\ne c$, and furthermore $a\ne b\ne c$ if $i=j$.
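Concretely, for the quiver with one vertex and a single loop e, conditions (3.1) say that a symmetric Laurent polynomial $R(z_1,\ldots,z_n)\in V$ must vanish whenever three of its variables are specialized in the ratio $(q, t_e, 1)$:
$$R(z_1,\ldots,z_n)\Big|_{z_a=qw,\ z_b=t_ew,\ z_c=w} = 0$$
for all distinct indices a, b, c.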
It is well-known (and straightforward to show) that S as defined above is a subalgebra of V. Because the wheel conditions are vacuous for R of horizontal degree {ς i } i∈I , we conclude that the homomorphism (2.7) actually maps into the small shuffle algebra, i.e.:
(3.2) Υ : U + −→ S
The map Υ was shown to be surjective in [Neg21a]. We will obtain in Theorem 3.8 below a generators-and-relations description of the algebra U + /Ker Υ ∼ = S, by describing a set of generators for the kernel of the map Υ.
Example 3.3. Let us consider the quiver Q of type A 2 , consisting of two vertices {i, j} with a single edge, say e : i → j. Up to gauge transformation, we may specialize the equivariant parameter to t e = q 1 2 . In this case, it is known that the small shuffle algebra S is isomorphic to the positive half of the quantum loop group U + q − 1 2 (Lsl 3 ), which is the quotient of U + by the set of cubic q-Serre relations, i.e.:
(3.3)
$$S \simeq U^+\Big/\Big(P_{s,t}(x_1,x_2,y) + P_{s,t}(x_2,x_1,y)\Big)$$
where for any s = t ∈ {i, j}, we have set:
$$P_{s,t}(x_1,x_2,y) = e_s(x_1)e_s(x_2)e_t(y) - \big(q^{\frac12}+q^{-\frac12}\big)\,e_s(x_1)e_t(y)e_s(x_2) + e_t(y)e_s(x_1)e_s(x_2).$$
3.4. The wheel conditions (3.1) can be interpreted as certain linear conditions on elements R ∈ V. As can be expected in light of Corollary 2.17, these linear conditions are given by taking the pairing (2.8) with certain elements of U + , which we now explicitly describe. We first introduce some more notation. Recall that:
$$\bar\zeta_{ij}(x) = \zeta_{ij}(x)\cdot(1-x)^{\delta_j^i} \in \mathbb{F}[x^{\pm1}]$$
Given a Laurent polynomial $P(x,y,z)$ and three formal series $e_i(x), e_j(y), e_k(z)$ as in Definition 2.6, we will write:
(3.4)
$$\Big[P(x,y,z)\,e_i(x)e_j(y)e_k(z)\Big]_{\mathrm{ct}} \in U^+$$
for the constant term of the expression in square brackets in (3.4). For example, if $P(x,y,z) = x^ay^bz^c$ for various integers $a,b,c$, then (3.4) equals $e_{i,a}e_{j,b}e_{k,c}$.
For any edge E ∋ e = − → ij and any triple of integers (a, b, c) ∈ Z 3 , we define:
(3.5)
$$A^{(e)}_{a,b,c} = \left[\frac{x_1^a\big(\frac{x_2}{q}\big)^b\big(\frac{y}{t_e}\big)^c}{(1-q)\Big((1-t_e)\big(1-\frac{t_e}{q}\big)\Big)^{\delta_j^i}}\cdot X^{(e)}(x_1,x_2,y)\right]_{\mathrm{ct}} \in U^+_{2\varsigma_i+\varsigma_j,\,a+b+c}$$
where:
$$X^{(e)}(x_1,x_2,y) = \frac{\bar\zeta_{ii}\big(\frac{x_2}{x_1}\big)\bar\zeta_{ji}\big(\frac{y}{x_1}\big)\bar\zeta_{ji}\big(\frac{y}{x_2}\big)}{\big(1-\frac{x_2}{x_1q}\big)\big(1-\frac{yq}{x_2t_e}\big)}\cdot e_i(x_1)e_i(x_2)e_j(y)\ +\ \frac{\bar\zeta_{ii}\big(\frac{x_1}{x_2}\big)\bar\zeta_{ji}\big(\frac{y}{x_2}\big)\bar\zeta_{ij}\big(\frac{x_1}{y}\big)\big(-\frac{x_2t_e}{y}\big)\big(-\frac{y}{x_1}\big)^{\delta_j^i}}{\big(1-\frac{yq}{x_2t_e}\big)\big(1-\frac{x_1t_e}{y}\big)}\cdot e_i(x_2)e_j(y)e_i(x_1)\ +\ \frac{\bar\zeta_{ii}\big(\frac{x_2}{x_1}\big)\bar\zeta_{ij}\big(\frac{x_1}{y}\big)\bar\zeta_{ij}\big(\frac{x_2}{y}\big)\frac{x_2t_e}{yq}\big(\frac{y^2}{x_1x_2}\big)^{\delta_j^i}}{\big(1-\frac{x_2}{x_1q}\big)\big(1-\frac{x_1t_e}{y}\big)}\cdot e_j(y)e_i(x_1)e_i(x_2)$$
The linear factors in x 1 , x 2 , y in the denominators above are all canceled by some factors in the numerator, hence the expression in the square brackets of (3.5) is a Laurent polynomial in x 1 , x 2 , y times the product of the series e i (x 1 ), e i (x 2 ), e j (y).
Proposition 3.5. The elements $A^{(e)}_{a,b,c}$ only depend on $d = a+b+c$ (and will henceforth simply be denoted $A^{(e)}_d$). Setting $A^{(e)}(x) = \sum_{d\in\mathbb{Z}}A^{(e)}_d\,x^{-d}$, we have:
(3.6)
$$\big\langle A^{(e)}(x),\, R\big\rangle = R\Big|_{z_{i1}=x,\ z_{j1}=t_ex,\ z_{i2}=qx} \qquad \forall R\in V^{\mathrm{op}}_{2\varsigma_i+\varsigma_j}$$
for any edge $E\ni e=\overrightarrow{ij}$.
Proof. We need to show that for any integers a + b + c = d, we have:
(3.7) A (e) d , R = coefficient of x −d in R zi1=x,zj1=tex,zi2=qx
for all R ∈ V op . This will also imply that A (e) a,b,c only depends on d, because of the non-degeneracy of the pairing (Corollary 2.17). We have:
A (e) d = C 1 + C 2 + C 3
where C 1 , C 2 , C 3 correspond to the three terms defining X (e) a,b,c . Abbreviating:
$$D = \frac{z_{i1}^a\big(\frac{z_{i2}}{q}\big)^b\big(\frac{z_{j1}}{t_e}\big)^c}{(1-q)\Big((1-t_e)\big(1-\frac{t_e}{q}\big)\Big)^{\delta_j^i}}\,Dz_{i1}\,Dz_{i2}\,Dz_{j1}$$
we have, using (2.9):
$$\langle C_1, R\rangle = \oint_{|z_{i1}|\gg|z_{i2}|\gg|z_{j1}|} \frac{R(z_{i1},z_{i2},z_{j1})\big(1-\frac{z_{i2}}{z_{i1}}\big)\Big[\big(1-\frac{z_{j1}}{z_{i1}}\big)\big(1-\frac{z_{j1}}{z_{i2}}\big)\Big]^{\delta_j^i}\, D}{\big(1-\frac{z_{i2}}{z_{i1}q}\big)\big(1-\frac{z_{j1}q}{z_{i2}t_e}\big)}$$
$$\langle C_2, R\rangle = \oint_{|z_{i2}|\gg|z_{j1}|\gg|z_{i1}|} \frac{R(z_{i1},z_{i2},z_{j1})\,\frac{z_{i1}t_e}{z_{j1}}\big(1-\frac{z_{i2}}{z_{i1}}\big)\Big[\big(1-\frac{z_{j1}}{z_{i1}}\big)\big(1-\frac{z_{j1}}{z_{i2}}\big)\Big]^{\delta_j^i}\, D}{\big(1-\frac{z_{j1}q}{z_{i2}t_e}\big)\big(1-\frac{z_{i1}t_e}{z_{j1}}\big)}$$
$$\langle C_3, R\rangle = \oint_{|z_{j1}|\gg|z_{i1}|\gg|z_{i2}|} \frac{R(z_{i1},z_{i2},z_{j1})\,\frac{z_{i2}}{z_{i1}q}\,\frac{z_{i1}t_e}{z_{j1}}\big(1-\frac{z_{i2}}{z_{i1}}\big)\Big[\big(1-\frac{z_{j1}}{z_{i1}}\big)\big(1-\frac{z_{j1}}{z_{i2}}\big)\Big]^{\delta_j^i}\, D}{\big(1-\frac{z_{i2}}{z_{i1}q}\big)\big(1-\frac{z_{i1}t_e}{z_{j1}}\big)}$$
(we replace z j1 by z i3 in the formulas above if i = j; note that in the middle formula for C 2 , R , we took the liberty of replacing z i1 ↔ z i2 in the arguments of the symmetric Laurent polynomial R). Then (3.7) is an immediate consequence of the following identity of formal series:
(3.8)
$$\delta\Big(\frac{z_{i2}}{z_{i1}q}\Big)\delta\Big(\frac{z_{j1}}{z_{i1}t_e}\Big) = \mathrm{ev}_{|z_{i1}|\gg|z_{i2}|\gg|z_{j1}|}\frac{1}{\big(1-\frac{z_{i2}}{z_{i1}q}\big)\big(1-\frac{z_{j1}q}{z_{i2}t_e}\big)} + \mathrm{ev}_{|z_{i2}|\gg|z_{j1}|\gg|z_{i1}|}\frac{\frac{z_{i1}t_e}{z_{j1}}}{\big(1-\frac{z_{j1}q}{z_{i2}t_e}\big)\big(1-\frac{z_{i1}t_e}{z_{j1}}\big)} + \mathrm{ev}_{|z_{j1}|\gg|z_{i1}|\gg|z_{i2}|}\frac{\frac{z_{i2}}{z_{i1}q}\cdot\frac{z_{i1}t_e}{z_{j1}}}{\big(1-\frac{z_{i2}}{z_{i1}q}\big)\big(1-\frac{z_{i1}t_e}{z_{j1}}\big)}$$
where ev |x|≫|y|≫|z| is the operator of taking the Laurent expansion of a rational function in the asymptotic region |x| ≫ |y| ≫ |z|. The formula above is a straightforward computation involving formal power series, which we leave as an exercise to the interested reader.
3.6. By Proposition 3.5, different choices of integers a, b, c yield elements A (e) d which are equal modulo the ideal generated by relations (2.5). In order to write A (e) d "canonically", one can expand these elements in the basis {e w } w∈W ≤ of U + (recall Proposition 2.16):
(3.9)
$$A^{(e)}_d = \sum_{w=[i_1^{(\alpha_1)}i_2^{(\alpha_2)}i_3^{(\alpha_3)}]\in W^{\le}} c^{(e)}_{d,w}\cdot e_{i_1,\alpha_1}e_{i_2,\alpha_2}e_{i_3,\alpha_3}$$
The coefficients c (e) d,w occurring in the expression above are not nice (even in relatively simple cases, such as a vertex with a single loop, or two distinct vertices with two edges between them), but it is easy to see that the transformation:
d → d + 3 , w = [i (α1) 1 i (α2) 2 i (α3) 3 ] → [i (α1+1) 1 i (α2+1) 2 i (α3+1) 3
].
rescales c (e) d,w by a monomial in q, t e . This allows us to conclude that there exists a large enough natural number N , which only depends on Q, such that:
|α t − α s | ≤ N, ∀s, t ∈ {1, 2, 3} for all words [i (α1) 1 i (α2) 2 i (α3) 3
] which appear with non-zero coefficient in (3.9), for all edges e ∈ E and all d ∈ Z. We will use this fact in the proof of Theorem 3.8. 3.7. Our main result is that the two-sided ideal J ⊂ U + generated by A (e) d , for all e ∈ E and d ∈ Z, coincides with the kernel of the map (3.2). To this end, we define the (positive half of the) generic quantum loop group as:
(3.10)
$$U^+ = U^+\big/ J$$
where the algebra being quotiented is the quadratic quantum loop group of Definition 2.6.
Note that taking the quotient by J is equivalent to imposing X (e) (x 1 , x 2 , y) = 0 for all edges e, which is precisely the content of (1.6). Our main result, which is simply a restatement of Theorem 1.5, is the following.
Theorem 3.8. The assignment e i,d → z d i1 for i ∈ I, d ∈ Z induces an algebra isomorphism:
Υ : U + ∼ − → S.
Example 3.9. Let us work out the cubic relation (1.6) when Q is the A 2 quiver of Example 3.3, with the goal of recovering the q-Serre relation (3.3). Recall that we set t e = q 1 2 , so relation (1.6) states that:
(3.11)
$$\big(x_1 - yq^{\frac12}\big)e_i(x_1)e_i(x_2)e_j(y) + \big(x_2q^{\frac12} - x_1q^{-\frac12}\big)e_i(x_2)e_j(y)e_i(x_1) + \big(yq^{-\frac12} - x_2\big)e_j(y)e_i(x_1)e_i(x_2) = 0$$
The quadratic relations (2.5) hold both in $U^+$ and in the quotient (3.3), and for $s, t \in \{i,j\}$ they read:
(3.12)
$$e_s(x)e_s(y)\,(xq - y) = e_s(y)e_s(x)\,(x - yq) \qquad (\text{for } s = t)$$
(3.13)
$$e_s(x)e_t(y)\,\big(x - yq^{\frac12}\big) = e_t(y)e_s(x)\,\big(xq^{\frac12} - y\big) \qquad (\text{for } s \ne t)$$
Let us show how to obtain the q-Serre relation (for s = i, t = j) from (3.11), (3.12), (3.13). As a consequence of (3.13), we have:
$$\big(x_2q^{\frac12} - yq\big)\,e_i(x_2)e_j(y)e_i(x_1) = \big(x_2q - yq^{\frac12}\big)\,e_j(y)e_i(x_2)e_i(x_1)$$
$$\big(yq^{-1} - x_1q^{-\frac12}\big)\,e_i(x_2)e_j(y)e_i(x_1) = \big(yq^{-\frac12} - x_1q^{-1}\big)\,e_i(x_2)e_i(x_1)e_j(y)$$
Adding the two relations together, and then subtracting (3.11) from the result yields:
$$y\big(q^{-1}-q\big)\,e_i(x_2)e_j(y)e_i(x_1) = \big(x_2q - yq^{\frac12}\big)e_j(y)e_i(x_2)e_i(x_1) + \big(yq^{-\frac12} - x_1q^{-1}\big)e_i(x_2)e_i(x_1)e_j(y) + \big(x_1 - yq^{\frac12}\big)e_i(x_1)e_i(x_2)e_j(y) + \big(yq^{-\frac12} - x_2\big)e_j(y)e_i(x_1)e_i(x_2)$$
Symmetrizing the relation above with respect to x 1 ↔ x 2 yields:
$$y\big(q^{-1}-q\big)\Big(e_i(x_2)e_j(y)e_i(x_1) + e_i(x_1)e_j(y)e_i(x_2)\Big) =$$
$$= \big(x_2q - yq^{\frac12} + yq^{-\frac12} - x_1\big)e_j(y)e_i(x_2)e_i(x_1) + \big(yq^{-\frac12} - x_1q^{-1} + x_2 - yq^{\frac12}\big)e_i(x_2)e_i(x_1)e_j(y) + \big(x_1 - yq^{\frac12} + yq^{-\frac12} - x_2q^{-1}\big)e_i(x_1)e_i(x_2)e_j(y) + \big(yq^{-\frac12} - x_2 + x_1q - yq^{\frac12}\big)e_j(y)e_i(x_1)e_i(x_2)$$
By applying (3.12), the right-hand side of the expression above is equal to:
$$y\big(q^{-\frac12}-q^{\frac12}\big)\Big(e_j(y)e_i(x_2)e_i(x_1) + e_i(x_2)e_i(x_1)e_j(y) + e_i(x_1)e_i(x_2)e_j(y) + e_j(y)e_i(x_1)e_i(x_2)\Big)$$
Dividing by y(q − 1 2 − q 1 2 ), we obtain precisely the q-Serre relation: (3.14) P s,t (x 1 , x 2 , y) + P s,t (x 2 , x 1 , y) = 0 of (3.3). We have just showed that the usual q-Serre relations hold in the generic quantum loop group. The distinction between the cubic relations (3.14) and the cubic relations (1.6) is explained by the fact that the two sets of relations can be obtained from each other by adding appropriate multiples of the quadratic relations.
Proof of Theorem 3.8. Let us recall the non-degenerate pairing:
(3.15)
$$S \otimes S^{\mathrm{op}} \xrightarrow{\ \langle\cdot,\cdot\rangle'\ } \mathbb{F}$$
defined in Proposition 3.3 of [Neg21a].
. Comparing our formula (2.9) with formula (3.30) of loc. cit., we see that the two pairings are compatible, in the sense that:
(3.16)
$$\langle f, \iota(g)\rangle = \langle\Upsilon(f), g\rangle' \in \mathbb{F}, \qquad \forall\, f\in U^+,\ g\in S^{\mathrm{op}},$$
where $\iota: S^{\mathrm{op}}\to V^{\mathrm{op}}$ is the tautological inclusion. To show that the homomorphism Υ of (3.2) descends to a homomorphism:
$$\Upsilon: U^+ \to S,$$
one needs to prove that for any $e\in E$ and $d\in\mathbb{Z}$ we have $\Upsilon\big(A^{(e)}_d\big) = 0$. By the non-degeneracy of the pairing (3.15), it suffices to show that the elements $\Upsilon\big(A^{(e)}_d\big)$ pair trivially with anything in $S^{\mathrm{op}}$. Using (3.16), this is equivalent to showing that the $A^{(e)}_d$ pair trivially with anything in $S^{\mathrm{op}}$ under the pairing (2.8), which follows from (3.6). Since the homomorphism Υ of (3.2) is surjective (as recalled after (3.2)), so is the descended map; it thus remains to show that the latter is injective. Because the pairing (3.15) is non-degenerate, the task boils down to showing that an element:
(3.17)
$$\psi = \sum_{w\in W^{\le}} c_w\cdot e_w \ \in\ U^+$$
which pairs trivially with $S^{\mathrm{op}}$ lies in J. Clearly, it suffices to do so for a homogeneous ψ. Let us fix a degree $(\mathbf{n},d)\in\mathbb{N}^I\times\mathbb{Z}$, fix some ψ of degree $(\mathbf{n},d)$ as above and define, for a pair of positive integers M, m:
(3.18)
$$T_{M,m} = \Big\{w = [i_1^{(d_1)}\ldots i_n^{(d_n)}]\in W^{\le},\ \deg(w)=(\mathbf{n},d),\ \sum_{s\in A}d_s \ge -M|A| + m|A|^2 - \sum_{A\ni s>t\notin A}\#_{i_si_t}\ \ \forall A\subseteq\{1,\ldots,n\}\Big\}$$
For notational simplicity, we will write $T = T_{M,m}$. Observe that T is finite, and that if M is picked large enough, the set T will contain all the words which appear with non-zero coefficient in (3.17). As we are free to pick M and m arbitrarily large, it might seem strange that we bother to subtract the quantity $\sum_{A\ni s>t\notin A}\#_{i_si_t}$ from the right-hand side of the inequality in (3.18). However, this is because of the following straightforward claim, whose proof we leave as an exercise to the reader: the aforementioned inequality holds for the leading word of a monomial μ if and only if it holds for the associated word (2.15) of any other ordering of the variables of μ. With this in mind, we have reduced our task to proving the following claim:
Claim 3.10. Let S op T = S op ∩ V op T .
For M ≫ m ≫ 1, the pairing:
U +,T J ∩ U +,T ⊗ S op T ·,· ′ T −−−→ F induced by (2.19) is non-degenerate.
Proof of Claim 3.10. Let us denote by ·, · T the restriction of ·, · to U +,T ⊗ V op T . By construction, the pairings ·, · ′ T and ·, · T are compatible in the sense that:
f, ι T (g) T = Υ(f ), g ′ T ∈ F, ∀ f ∈ U +,T , g ∈ S op T ,
where ι : S op T → V op T is the tautological inclusion; observe that Υ( U +,T ) ⊆ S T . By Section 2.18, ·, · T is a non-degenerate pairing of finite-dimensional vector spaces. To show that ·, · ′ T is also non-degenerate, it hence suffices to check that: S op T = (J ∩ U +,T ) ⊥ Note that the inclusion S op T ⊆ (J ∩ U +,T ) ⊥ is obvious, so we will prove the opposite: S op T ⊇ (J ∩ U +,T ) ⊥ Equivalently, this requires us to show that:
(3.19)
$$\forall R\in V_T^{\mathrm{op}}\setminus S_T^{\mathrm{op}},\ \exists\,\psi\in J\cap U^{+,T}\ \text{such that}\ \langle\psi, R\rangle \ne 0$$
Let us fix R as above. As it does not satisfy one of the wheel conditions, we have:
(3.20)
$$R\Big|_{z_{ia}=qx,\ z_{jb}=t_ex,\ z_{ic}=x} \ne 0$$
for some edge $e = \overrightarrow{ij}$. By (2.9) and (3.6), this entails:
(3.21)
$$\big\langle e_{i_1,d_1}\cdots e_{i_{n-3},d_{n-3}}A^{(e)}_d,\ R\big\rangle \ne 0$$
for certain $i_1,\ldots,i_{n-3}\in I$ and $d_1,\ldots,d_{n-3},d\in\mathbb{Z}$ (to see this claim, one must expand the integral (2.9) in the domain where the variables $z_{ia}, z_{jb}, z_{ic}$ of (3.20) have smaller absolute value than all other variables of R). Assume that the word:
(3.22) i (d1) 1 . . . i (dn−3) n−3
is maximal such that (3.21) holds. In particular, this implies that the word (3.22) must be non-increasing (as we have seen in the proof of Proposition 2.16, any e v can be written as a linear combination of e w as w > v runs over non-increasing words). To establish the required (3.19), it therefore suffices to show that:
(3.23)
$$\psi := e_{i_1,d_1}\cdots e_{i_{n-3},d_{n-3}}A^{(e)}_d \ \in\ U^{+,T}$$
To this end, let us label the variables of R other than $z_{ia}, z_{jb}, z_{ic}$ by $z_1,\ldots,z_{n-3}$ (of colors $i_1,\ldots,i_{n-3}$, respectively). Using formula (2.9), relation (3.21) states the non-vanishing (3.24) of a contour integral whose integrand involves $z_1^{d_1}\cdots z_{n-3}^{d_{n-3}}\,x^d\,R(z_1,\ldots,z_{n-3},qx,t_ex,x)$ divided by the corresponding product of ζ factors. As:
$$\zeta_{ij}(u) \in u^{-\#_{\overrightarrow{ji}}}\,\mathbb{F}[[u]]^{\times}, \qquad \forall\, i,j\in I$$
the non-vanishing (3.24) and the maximality of the word (3.22) imply that:
$$\Big[z_1^{k_1}\cdots z_{n-3}^{k_{n-3}}\ x^{\,d+\sum_{a=1}^{n-3}\big(2\#_{\overrightarrow{i_ai}}+\#_{\overrightarrow{i_aj}}\big)}\ R(z_1,\ldots,z_{n-3},qx,t_ex,x)\Big]_{\mathrm{ct}} \ne 0$$
where we write for all a ∈ {1, . . . , n − 3}:
k a = d a + s<a #− − → isia − t>a #− − → iait − 2#− → iai − #− → iaj
(compare with (2.16)). Thus, we conclude that the Laurent polynomial R ∈ V op T must include with non-zero coefficient a monomial of the form:
z −k1 1 . . . z −kn−3 n−3 z −kn−2 ia z −kn−1 jb z −kn jc
where:
k n−2 = d n−2 + n−3 a=1 #− → iai − #− → ij − #− → ii k n−1 = d n−1 + n−3 a=1 #− → iaj + #− → ij − #− → ji k n = d n + n−3 a=1 #− → iai + #− → ji + #− → ii
for some integers d n−2 + d n−1 + d n = d. We will henceforth write i n−2 = i, i n−1 = j and i n = i. As explained in the paragraph following (3.18), although the word:
i (d1) 1 . . . i (dn−3) n−3 i (dn−2) j (dn−1) i (dn)
might fail to be non-increasing (our assumption only states that its prefix obtained by removing the last three letters is non-increasing), its exponents still satisfy the inequality in (3.18). In particular, we have:
(3.25)
$$\sum_{s\in A} d_s \ge -M|A| + m|A|^2 - \sum_{A\ni s>t\notin A}\#_{i_si_t}$$
where A = B or A = B ⊔{n−2, n−1, n}, for arbitrary B ⊆ {1, . . . , n−3}. However, as explained in Subsection 3.6, the element ψ of (3.23) is a linear combination of products of the form:
(3.26) e i1,d1 . . . e in−3,dn−3 e j1,δ1 e j2,δ2 e j3,δ3
where {j 1 , j 2 , j 3 } = {i, i, j} and the numbers δ 1 , δ 2 , δ 3 all satisfy:
δ 1 + δ 2 + δ 3 = d and δ u − d 3 ≤ c 1 , ∀u ∈ {1, 2, 3}
for some global constant c 1 . For every term of the form (3.26) that appears in ψ, let us consider the largest index x ∈ {0, . . . , n − 3} such that:
d x < d x+1 − c 2
where the global constant c 2 is chosen much larger than the number β(n) that appears in (2.13). As explained in Proposition 2.16, we may write:
e ix+1,dx+1 . . . e in−3,dn−3 e j1,δ1 e j2,δ2 e j3,δ3 ∈ j (l x+1 ) x+1 ...j (ln ) n ∈W ≤ F · e jx+1,lx+1 . . . e jn,ln
in such a way that the exponents l x+1 , . . . , l n that appear with non-zero coefficient in the right-hand side satisfy the following properties:
• all the numbers l x+1 , . . . , l n are within a global constant c 3 away from their average, which will be denoted by o,
• the large difference between d x and d x+1 ensures that all concatenated words
(3.27) i (d1) 1 . . . i (dx) x j (lx+1) x+1 . . . j (ln)
n which arise in the procedure above are non-increasing (recall that the word (3.22) was non-increasing to begin with, and thus so are all of its prefixes).
To prove (3.23), it therefore remains to show that the concatenated words that appear in (3.27) lie in T. Property (3.25) implies that:
(3.28)-(3.29)
$$\sum_{s\in A} d_s \ge -M|A| + m|A|^2 - 2n^2|E|$$
for $A = B$ or $A = B\sqcup\{x+1,\ldots,n\}$ respectively, where $B\subseteq\{1,\ldots,x\}$ is arbitrary. Assume for the purpose of contradiction that the defining property of T is violated for $A = B\sqcup C$ with $B\subseteq\{1,\ldots,x\}$ and C a proper subset of $\{x+1,\ldots,n\}$, i.e.:
(3.30)
$$\sum_{s\in B} d_s + \sum_{s\in C} l_s < -M|A| + m|A|^2$$
We claim that (3.28)-(3.29) and (3.30) are incompatible (for m chosen large enough compared to the constant $c_3$ mentioned in the first bullet above). Indeed, the properties listed in the first bullet above allow us to obtain the following inequalities from (3.28), (3.29), (3.30):
(3.31)
$$\sum_{s\in B} d_s \ge -M|B| + m|B|^2 - 2n^2|E|$$
(3.32)
$$\sum_{s\in B} d_s + y(o - c_3) < -M(|B|+y) + m(|B|+y)^2$$
(3.33)
$$\sum_{s\in B} d_s + (n-x)(o + c_3) \ge -M(|B|+n-x) + m(|B|+n-x)^2 - 2n^2|E|$$
where $y = |C|$ lies in $\{1,\ldots,n-x-1\}$. Subtracting (3.31) from (3.32) yields
$$o - c_3 < -M + m(y + 2|B|) + 2n^2|E|$$
and subtracting (3.32) from (3.33) yields
$$o + c_3 \ge -M + m(n - x + y + 2|B|) - 2n^2|E|$$
The two inequalities above are incompatible if m is chosen large enough compared to $c_3$ and $2n^2|E|$, thus yielding the desired contradiction. We have thus proved Claim 3.10, and hence also Theorem 3.8.
4. The Drinfeld double of U + 4.1. We will now recall the standard procedure of upgrading U + ∼ = S to a Hopf algebra isomorphism, by extending and doubling the two algebras involved (we will follow the conventions of [Neg21a]). Consider the opposite algebra:
U − = U +,op ∼ = S op whose generators will be denoted by {f i,d } i∈I,d∈Z ; we set f i (z) = d∈Z f i,d z −d .
Definition 4.2. The extended algebras U ≥ and U ≤ are defined as the semi-direct products:
$$U^{\ge} = U^+ \otimes_{\mathbb{F}}\mathbb{F}\big[h^+_{i,d}\big]_{i\in I,\, d\ge0}, \qquad U^{\le} = U^- \otimes_{\mathbb{F}}\mathbb{F}\big[h^-_{i,d}\big]_{i\in I,\, d\ge0}$$
where the multiplication is governed by the following relations for all i, j ∈ I:
(4.1)
$$e_i(z)h_j^+(w) = h_j^+(w)e_i(z)\,\frac{\zeta_{ij}\big(\frac{z}{w}\big)}{\zeta_{ji}\big(\frac{w}{z}\big)}$$
(4.2)
$$f_i(z)h_j^-(w) = h_j^-(w)f_i(z)\,\frac{\zeta_{ji}\big(\frac{w}{z}\big)}{\zeta_{ij}\big(\frac{z}{w}\big)}$$
Here $h_j^{\pm}(w) = \sum_{d=0}^{\infty}h^{\pm}_{j,d}\,w^{\mp d}$.
The rational functions in the right-hand sides of (4.1) and (4.2) are expanded in the same asymptotic direction of w as h ± j (w).
The algebras U ≥ and U ≤ are actually bialgebras, with respect to the coproduct:
(4.3)
$$\Delta\big(h_i^{\pm}(z)\big) = h_i^{\pm}(z)\otimes h_i^{\pm}(z),$$
(4.4)
$$\Delta\big(e_i(z)\big) = e_i(z)\otimes 1 + h_i^{+}(z)\otimes e_i(z),$$
(4.5)
$$\Delta\big(f_i(z)\big) = f_i(z)\otimes h_i^{-}(z) + 1\otimes f_i(z)$$
(strictly speaking, the coproduct above consists of infinite sums, meaning that U ≥ and U ≤ are topological bialgebras; we will not dwell upon this fact, as it is quite routine in the theory of quantum affinizations). The final step in the construction of the Drinfeld double is to note that there is a (unique) bialgebra pairing:
U ≥ ⊗ U ≤ ·,· −−→ F
which satisfies the following properties:
i) for all i, j ∈ I, we have:
$$\big\langle h_i^+(z),\, h_j^-(w)\big\rangle = \frac{\zeta_{ij}\big(\frac{z}{w}\big)}{\zeta_{ji}\big(\frac{w}{z}\big)}$$
where the right-hand side is expanded as |z| ≫ |w|, and:
ii) for all i, j ∈ I and d, k ∈ Z, we have:
$$\langle e_{i,d},\, f_{j,k}\rangle = \delta_j^i\,\delta_{d+k}^0$$
This property implies that the restriction of ·, · to U + ⊗ U − coincides with (3.15) under the identifications U + ∼ = S and U − ∼ = S op .
Summarizing the discussion above, we may define by the usual Drinfeld double procedure (see [Neg21a, Subsection 4.15] for a review) the following algebra.
Definition 4.3. The generic quantum loop group U is the F-algebra generated by elements: {e i,k , f i,k , h ± i,l |i ∈ I, k ∈ Z, l ≥ 0} satisfying the fact that h ± i,0 are invertible, as well as the quadratic relations:
$$e_i(z)e_j(w)\,\zeta_{ji}\Big(\frac{w}{z}\Big) = e_j(w)e_i(z)\,\zeta_{ij}\Big(\frac{z}{w}\Big), \qquad f_j(w)f_i(z)\,\zeta_{ji}\Big(\frac{w}{z}\Big) = f_i(z)f_j(w)\,\zeta_{ij}\Big(\frac{z}{w}\Big)$$
for all $i,j\in I$, the cubic relations (1.6), as well as their counterparts for the f's:
$$\frac{\bar\zeta_{ii}\big(\frac{x_2}{x_1}\big)\bar\zeta_{ji}\big(\frac{y}{x_1}\big)\bar\zeta_{ji}\big(\frac{y}{x_2}\big)}{\big(1-\frac{x_2}{x_1q}\big)\big(1-\frac{yq}{x_2t_e}\big)}\cdot f_j(y)f_i(x_2)f_i(x_1) + \frac{\bar\zeta_{ii}\big(\frac{x_1}{x_2}\big)\bar\zeta_{ji}\big(\frac{y}{x_2}\big)\bar\zeta_{ij}\big(\frac{x_1}{y}\big)\big(-\frac{x_2t_e}{y}\big)\big(-\frac{y}{x_1}\big)^{\delta_j^i}}{\big(1-\frac{yq}{x_2t_e}\big)\big(1-\frac{x_1t_e}{y}\big)}\cdot f_i(x_1)f_j(y)f_i(x_2) + \frac{\bar\zeta_{ii}\big(\frac{x_2}{x_1}\big)\bar\zeta_{ij}\big(\frac{x_1}{y}\big)\bar\zeta_{ij}\big(\frac{x_2}{y}\big)\frac{x_2t_e}{yq}\big(\frac{y^2}{x_1x_2}\big)^{\delta_j^i}}{\big(1-\frac{x_2}{x_1q}\big)\big(1-\frac{x_1t_e}{y}\big)}\cdot f_i(x_2)f_i(x_1)f_j(y) = 0$$
for any edge E ∋ e = − → ij , as well as:
$$e_i(z)h_j^{\pm}(w) = h_j^{\pm}(w)e_i(z)\,\frac{\zeta_{ij}\big(\frac{z}{w}\big)}{\zeta_{ji}\big(\frac{w}{z}\big)} \qquad (\text{for } |z^{\pm1}|\ll|w^{\pm1}|)$$
$$f_i(z)h_j^{\pm}(w) = h_j^{\pm}(w)f_i(z)\,\frac{\zeta_{ji}\big(\frac{w}{z}\big)}{\zeta_{ij}\big(\frac{z}{w}\big)} \qquad (\text{for } |z^{\pm1}|\ll|w^{\pm1}|)$$
and:
$$[e_{i,d},\, f_{j,k}] = \delta_j^i\cdot\begin{cases}-h^+_{i,d+k} & \text{if } d+k>0\\ h^-_{i,0}-h^+_{i,0} & \text{if } d+k=0\\ h^-_{i,-d-k} & \text{if } d+k<0\end{cases}$$
for all i, j ∈ I. Formulas (4.3), (4.4), (4.5) endow U with the structure of a bialgebra. Finally, there is a triangular decomposition as F-vector spaces:
U ≃ U + ⊗ U 0 ⊗ U − where U + , U 0 , U − are respectively generated by {e i,k } i,k , {h ± i,l } i,l and {f i,k } i,k .
As is standard in the theory of quantum groups, one can write down antipode maps that make $U^{\ge}$ and $U^{\le}$ into Hopf algebras, and also define central extensions by making the series $h^+(z)$ and $h^-(w)$ "almost" commute with each other (see [Neg20] for a survey in the case of the Jordan quiver). We will not describe these in detail.

5.1. An important question is to determine what happens to $U^+$ and S when the parameters q and $\{t_e\}_{e\in E}$ are no longer generic, i.e. when we work over a field different from (1.2). In the present Section, we will provide an answer under Assumption Ъ of [Neg21a], given as follows:
Definition 5.2. Let K be a field endowed with elements q and {t e } e∈E , such that there exists a field homomorphism:
ρ : K −→ C
for which |ρ(q)| < |ρ(t e )| < 1 for all e ∈ E.
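For instance, one may take $\mathbb{K} = \mathbb{C}$ with ρ the identity and real parameters $0 < q < t_e < 1$. The case relevant to Sections 6 and 7 is that of the inverse Weil numbers of a smooth projective curve over a field with $q^{-1}$ elements, for which
$$|q| < |t_e| = |q|^{\frac12} < 1,$$
so Assumption Ъ is satisfied.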
We emphasize the fact that K will henceforth refer to a choice of a field together with the elements q, t e as above. With this in mind, we may define:
K V and K U +
simply by replacing F with K in the definition of V and U + of Definitions 2.4 and 2.6, respectively. All the results of Section 2 will continue to hold in the present context, notably the existence of a non-degenerate pairing:
K U + ⊗ K V op ·,· −−→ K
analogous to (2.8).
5.3. However, as soon as we reach Definition 3.2, we notice that (3.1) cannot be the correct definition of the small shuffle algebra anymore (for one thing, it might be the case that t e = t e ′ for two distinct edges e, e ′ between vertices i and j, in which case two wheel conditions (3.1) actually impose the same restriction on R). As shown in [Neg21a], the way to fix this in the context of Definition 5.2, is to set:
(5.1)
$${}_{\mathbb{K}}V \supset {}_{\mathbb{K}}S = \Big\{R(\ldots,z_{i1},\ldots,z_{in_i},\ldots)\ \text{such that}\ \forall i\in I,\ \ R\big|_{z_{i2}=qz_{i1}}\ \text{is divisible by}\prod_{(j,b)\notin\{(i,1),(i,2)\}}\ \prod_{E\ni e=\overrightarrow{ij}}(z_{jb}-t_ez_{i1})\Big\}$$
It is easy to see that (5.1) is equivalent to the following condition for all i, j ∈ I and all γ ∈ K (below, we replace z j1 by z i3 if i = j):
(5.2) $R\big|_{z_{i2}=qz_{i1}}$ is divisible by $(z_{j1}-\gamma z_{i1})^{\flat_{ij}(\gamma)}$
where ♭ ij (γ) denotes the number of edges e = − → ij in E for which t e = γ. This is why if the parameters {t e } e∈E are all distinct, we recover the wheel conditions (3.1).
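For example, if exactly two edges $e \ne e'$ from i to j carry the same parameter $t_e = t_{e'} = \gamma$ (and no other edge parameter equals γ), then $\flat_{ij}(\gamma) = 2$ and (5.2) requires that $R\big|_{z_{i2}=qz_{i1}}$ be divisible by
$$(z_{j1}-\gamma z_{i1})^2,$$
i.e. the two wheel conditions (3.1) attached to e and e' merge into a single second-order condition.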
As shown in [Neg21a], the definition (5.1) ensures that K S is generated by {z d i1 } d∈Z i∈I , and that there is a non-degenerate pairing:
(5.3)
$${}_{\mathbb{K}}S \otimes {}_{\mathbb{K}}S^{\mathrm{op}} \longrightarrow \mathbb{K}$$
analogous to (3.15).

5.4. We will now define a quotient of ${}_{\mathbb{K}}U^+$ for which the analogue of Theorem 3.8 holds. Let us recall the notation of Subsection 3.4, and define for all $i, j \in I$, $k \in \mathbb{Z}_{>0}$, all $\gamma \in \mathbb{K}$ and all integers $(a, b, c) \in \mathbb{Z}^3$:
(5.4)
$${}_{\mathbb{K}}A^{(i,j,\gamma|k)}_{a,b,c} = \left[\frac{x_1^a\big(\frac{x_2}{q}\big)^b\big(\frac{y}{\gamma}\big)^c}{(1-q)\Big((1-\gamma)\big(1-\frac{\gamma}{q}\big)\Big)^{\delta_j^i}}\cdot {}_{\mathbb{K}}X^{(i,j,\gamma|k)}(x_1,x_2,y)\right]_{\mathrm{ct}} \in {}_{\mathbb{K}}U^+_{2\varsigma_i+\varsigma_j,\,a+b+c}$$
where:
(5.5) K X (i,j,γ|k) (x 1 , x 2 , y) is defined as the formal series:
$$(-1)^{k-1}\,\frac{\bar\zeta_{ii}\big(\frac{x_2}{x_1}\big)\bar\zeta_{ji}\big(\frac{y}{x_1}\big)\bar\zeta_{ji}\big(\frac{y}{x_2}\big)\,\frac{y}{x_1\gamma}}{\big(1-\frac{x_2}{x_1q}\big)\big(1-\frac{yq}{x_2\gamma}\big)^{k}}\cdot e_i(x_1)e_i(x_2)e_j(y)\ +\ \sum_{\substack{k',k''\in\mathbb{Z}_{>0}\\ k'+k''=k+1}}(-1)^{k'-1}\,\frac{\bar\zeta_{ii}\big(\frac{x_1}{x_2}\big)\bar\zeta_{ji}\big(\frac{y}{x_2}\big)\bar\zeta_{ij}\big(\frac{x_1}{y}\big)\big(\frac{x_1\gamma}{y}\big)^{k''-1}\big(-\frac{yq}{x_1\gamma}\big)\big(-\frac{y}{x_1}\big)^{\delta_j^i}}{\big(1-\frac{yq}{x_2\gamma}\big)^{k'}\big(1-\frac{x_1\gamma}{y}\big)^{k''}}\cdot e_i(x_2)e_j(y)e_i(x_1)\ +\ \frac{\bar\zeta_{ii}\big(\frac{x_2}{x_1}\big)\bar\zeta_{ij}\big(\frac{x_1}{y}\big)\bar\zeta_{ij}\big(\frac{x_2}{y}\big)\big(\frac{x_1\gamma}{y}\big)^{k-1}\big(\frac{y^2}{x_1x_2}\big)^{\delta_j^i}}{\big(1-\frac{x_2}{x_1q}\big)\big(1-\frac{x_1\gamma}{y}\big)^{k}}\cdot e_j(y)e_i(x_1)e_i(x_2)$$
If k ≤ ♭ ij (γ), the linear factors in x 1 , x 2 , y in the denominators above are all canceled by similar linear factors in the numerator, hence (5.5) is a Laurent polynomial in x 1 , x 2 , y times the product of the series e i (x 1 ), e i (x 2 ), e j (y). Note that when k = ♭ ij (γ) = 1, the expression (5.4) is slightly different from (3.5), but they both yield equivalent sets of cubic relations modulo the ideal of quadratic relations (2.5). As such, it is straightforward to generalize Proposition 3.5 to the following result.
+ ev |zj1|≫|zi1|≫|zi2| zi1γ zj1 k−1 1 − zi2 zi1q 1 − zi1γ zj1 k (5.7)
If we relabel z i1 = x, z j1 = yγ, z i2 = zq, then formula (5.7) has left-hand side:
(5.8) d1,d2∈Z x −d1−d2 y d1 z d2 −d 1 k − 1
Because of the elementary identity
ev |v|≫|u| 1 1 − u v k = ∞ d=0 (−u) d v d −k d = (−1) k−1 ∞ d=0 u d v d −d − 1 k − 1
the right-hand side of (5.7) is equal to:
∞ d1=1 ∞ d2=1−d1 x −d1−d2 y d1 z d2 −d 1 k − 1 + k ′ ,k ′′ ∈Z>0 k ′ +k ′′ =k+1 −1 d2=−∞ −d2−1 d1=−∞ x −d1−d2 y d1 z d2 d 2 k ′ − 1 −d 1 − d 2 k ′′ − 1 + 0 d1=−∞ ∞ d2=0 x −d1−d2 y d1 z d2 −d 1 k − 1 (5.9)
To obtain (5.9), we relabeled certain indices and used the Taylor series expansion:
1 (1 − u) k = ∞ d=0 (−u) d −k d = (−1) k−1 ∞ d=0 u d −d − 1 k − 1
Using a simple combinatorial identity involving binomial coefficients, equations (5.8) and (5.9) are easily seen to be equal to each other, thus establishing (5.7). 5.6. Formula (5.6) shows that (5.2) holds for R if and only if:
(5.10) K A (i,j,γ|k) a,b,c
, R = 0 ∀k ≤ ♭ ij (γ) and a, b, c ∈ Z Therefore, generalizing (3.10), we define:
(5.11)
$${}_{\mathbb{K}}U^+ = {}_{\mathbb{K}}U^+\big/\,{}_{\mathbb{K}}J$$
where K J denotes the two-sided ideal of the quadratic quantum loop group generated by the elements (5.4), for all i, j ∈ I, γ ∈ K, k ∈ {1, . . . , ♭ ij (γ)} and a, b, c ∈ Z.
Remark 5.7. One might (and should) define K J in (5.11) to be the two-sided ideal generated by finitely many of the elements (5.4) in every fixed degree. Indeed, to ensure that (5.10) holds for any fixed i, j, γ, one needs to consider only finitely many choices of a, b, c ∈ Z for any given a + b + c ∈ Z (compare with Proposition 3.5).
The analogue of Theorem 3.8 in the setting at hand is the following.
Theorem 5.8. The assignment e i,d → z d i1 for all i ∈ I, d ∈ Z induces an algebra isomorphism:
K U + ∼ − → K S.
We will now sketch the main points of the proof of Theorem 5.8, leaving the details to the interested reader. Just like the proof of Theorem 3.8 started with the nondegenerate pairing (3.15), in the case at hand we start from the pairing (5.3), whose non-degeneracy was the main reason for introducing the specific conditions (5.2) in [Neg21a]. Then the proof of Theorem 3.8 carries through as stated, once one replaces the elements (3.5) of the quadratic quantum loop group by the elements (5.4) (in both situations, one need only consider finitely many such elements in any given degree; see Remark 5.7).
6. The spherical Hall algebra of a generic curve

6.1. Let us fix $g \in \mathbb{N}$ and specialize the quiver to $Q = S_g$, the quiver with one vertex and g loops. In this special case, we will show how to connect the extended quantum loop group $U^{\ge}$ of Definition 4.2 with the Hall algebra of a smooth projective curve of genus g over a finite field. In the present Section, we will treat the case of a generic curve (in the sense of having distinct Weil numbers), and then in the next Section we will show how to adapt to the case of an arbitrary curve.
In more detail, let X be a smooth, geometrically connected projective curve of genus g defined over the finite field k = F q −1 (here the unusual choice of notation for the cardinality of the finite field is made for the purpose of compatibility with (1.4)). Let Coh r,d (k) denote the groupoid of coherent sheaves on X of rank r and degree d. We refer to [Sch12b,Lectures 1,4] for the definition of the Hall algebra of the category of coherent sheaves on X. As a vector space:
(6.1) H X = (r,d)∈(Z 2 ) + Fun 0 (Coh r,d (k), Q) ⊗ C[κ ±1 ]
is the space of finitely supported functions on Coh(k) = r,d Coh r,d (k), where:
(Z 2 ) + := {(r, d) ∈ Z 2 | r ≥ 0, d ∈ Z such that d ≥ 0 if r = 0}.
The element κ is usually denoted by κ 1,0 in the literature, and satisfies the following commutation relations :
κf κ −1 = q r(g−1) f, f ∈ Fun 0 (Coh r,d (k), Q);
one often adds a central element (denoted by κ 0,1 ) to H X , although we will not do so, as it only plays a significant part when considering the double Hall algebra.
Definition 6.2. The spherical Hall algebra H sph X is the subalgebra of H X generated by κ ±1 together with the following elements:
1 vec 1,d = χ P ic d (k) , 1 0,l = χ Coh 0,l (k) , (d ∈ Z, l ≥ 1)
where χ Y stands for the characteristic function of a subgroupoid Y ⊂ Coh(k).
Theorem 6.5 ([SV12, Theorem 1]). The assignment 1 vec 1,d → z d 1 ∈ V X|1,d for d ∈ Z extends to an algebra isomorphism H sph,+ X ∼ − → V sph X .
6.6. As long as the Weil numbers of X are distinct (which implies that {t e } e∈E are distinct complex numbers), the small shuffle algebra behaves just like in the case of generic parameters. In the language of Section 5, this is because conditions (5.1) are still none other than the usual 3-variable wheel conditions (3.1). Thus, the following is proved just like [Neg21a, Theorem 1.1].
Theorem 6.7. We have V sph X = S X , namely the subalgebra of V X determined by the 3-variable wheel conditions (3.1) (but with t e specialized as in (6.2)).
Proof of Theorem 1.7. By combining Theorem 6.5, Theorem 6.7 and Theorem 1.5, we only need to check that the collections of relations (1.6) and (1.10) are equivalent (assuming relations (1.7), (1.8), (1.9) hold). By (1.8), taking commutators with the symbols θ 0,l means that formula (1.10) is equivalent to:
p(x, y, z)(x + z)(xz − y 2 )Q e (x, y, z)E(x)E(y)E(z) ct = 0
for any symmetric Laurent polynomial p. Thus, we need to prove that for any given R(z 1 , z 2 , z 3 ) ∈ V X , the wheel conditions (3.1) are equivalent to the relations:
P (x, y, z)(x + z) 1 y − y xz Q e (x, y, z)E(x)E(y)E(z) ct , R = 0
for any symmetric Laurent polynomial P (x, y, z) = p(x, y, z)xyz. By definition of the pairing (2.9), the condition above reads precisely:
(6.4)
$$\oint_{|z_1|\gg|z_2|\gg|z_3|}\frac{Q_e(z_1,z_2,z_3)}{\zeta_X\big(\frac{z_2}{z_1}\big)\zeta_X\big(\frac{z_3}{z_1}\big)\zeta_X\big(\frac{z_3}{z_2}\big)}\,(z_1+z_3)\Big(\frac{1}{z_2}-\frac{z_2}{z_1z_3}\Big)(PR)(z_1,z_2,z_3)\,Dz_1\,Dz_2\,Dz_3 = 0$$
By the definition of $Q_e$, the rational function on the first row above is equal to:
$$\frac{1}{\zeta_1^{(e)}\big(\frac{z_2}{z_1}\big)\,\zeta_1^{(e)}\big(\frac{z_3}{z_1}\big)\,\zeta_1^{(e)}\big(\frac{z_3}{z_2}\big)}$$
where $\zeta_1^{(e)}$
is the rational function (1.4) for the Jordan quiver, associated to the single equivariant parameter t e (besides q). This reduces the problem to the case g = 1, in which case the equivalence of (6.4) and the wheel conditions (3.1) follows from the main results of [Sch12a] and [Neg14]. In more detail, it is straightforward to show that for any symmetric Laurent polynomials P and R, we have:
|z1|≫|z2|≫|z3| (z 1 + z 3 ) 1 z2 − z2 z1z3 (P R)(z 1 , z 2 , z 3 ) ζ (e) 1
Dx
(it suffices to do so when P R(z 1 , z 2 , z 3 ) = Sym z a 1 z b 2 z c 3 for various a, b, c ∈ Z, in which case the formula above is a straightforward computation involving power series). Clearly, the vanishing of the right-hand side of the equation above (for all symmetric Laurent polynomials P ) is equivalent to (3.1).
7. The whole Hall algebra of an arbitrary curve 7.1. We now briefly explain how one can adapt and mix the results in the previous two Sections to give a presentation, first of the spherical Hall algebra of an arbitrary smooth projective curve of genus g, and then of a much larger subalgebra of the Hall algebra of such a curve (under the technical hypothesis that the cotangent bundle Ω X admits a square root). Recall that X is a smooth, projective, geometrically connected curve of genus g defined over a finite field F q −1 , whose Weil numbers are denoted as in (6.2) and whose renormalized ζ function is given in (6.3).
Let us first consider the spherical Hall algebra of X. Observe that because:
|σ e | = q − 1 2 ⇒ |t e | = q 1 2
for all e, we are in the situation of Assumption Ъ (i.e. Definition 5.2). We can therefore apply the results of Theorem 5.8 and obtain a presentation of H sph X by generators and relations, the form of which only depends on the multiplicities of the various Weil numbers, i.e. on the potential numerical coincidences between the elements of the multiset {t e , t e * } e=1,...,g . This leads to the following result. We use the same definitions as in Theorem 1.7 for the generating series E(z), H(z), and we let X (γ|k) (x 1 , x 2 , x 3 ) be defined as in (5.5), but with ζ ij (u) replaced by (1 − u)ζ X (u) and e i (u), e j (u) both replaced by E(u).
Theorem 7.2. Let γ 1 , . . . , γ s be the distinct elements of {t 1 , . . . , t g , q/t 1 , . . . , q/t g }, and let k 1 , . . . , k s stand for their respective multiplicities. Then H sph X is isomorphic to the algebra generated by κ ±1 , θ 0,l , 1 vec d for l ≥ 1, d ∈ Z subject to the relations (1.7), (1.8), (1.9) and the relations:
X (γi|k) (x 1 , x 2 , x 3 ) = 0,
for all i ∈ {1, . . . , s} and k ∈ {1, . . . , k i }.
7.3. Let us keep the same notation as above concerning the curve X, and consider the entire Hall algebra H X (i.e. not just the spherical part). For the most part, we will drop the subscript X from the notation of the Hall algebra, as the curve will be fixed. We will recall a few features of H and refer to [KSV17] for precise definitions and more details. We will assume that X has a theta characteristic, i.e that Ω X admits a square root Ω 1/2 X (see [KSV17,Definition 3.9]). Let H + , H 0 be the subalgebras of functions (6.1) on the stacks of vector bundles and torsion sheaves on X, respectively. We will write H r,d , H + r,d , etc, for the graded pieces of the aforementioned Hall algebras, corresponding to sheaves of rank r and degree d. There is a canonical decomposition of the commutative algebra H 0 according to support:
H 0 = x∈X(k) H x , where H x = C[T x,1 , T x,2 , . . .]
is generated by primitive elements T x,l ∈ H 0,l·deg(x) , for l ≥ 1. Next, there is a left action · of H 0 on H + by Hecke operators, which is defined as the composition:
H 0 ⊗ H + m −→ H π −→ H +
where m is the multiplication map and π is the natural projection map. It satisfies:
T x,l · f = [T x,l , f ] ∀ x ∈ X(k),h · f = χ(h)f, ∀ h ∈ H 0 .
There is an action of C * on the set of pairs (χ, f ) as above given by:
t · χ(h) = t deg(h) χ(h), t · f d = t d f d .
Let Σ r be the set of all eigenvalues of cuspidal eigenforms of rank r. The strong multiplicity one theorem says that Σ r is a finite union of C * -orbits, and that for any χ ∈ Σ r there exists a unique (up to scalar) eigenform f χ of eigenvalue χ. To a pair of characters (χ, χ ′ ) ∈ Σ r × Σ s one associates a Rankin-Selberg L-function LHom(χ, χ ′ ; z) which is known to enjoy the following properties (see [Laf02,App. B] for the formulas and notation below; recall that our base field is k = F q −1 ):
i) LHom(t −deg χ, χ ′ ; z) = LHom(χ, t deg χ ′ ; z) = LHom(χ, χ ′ ; tz).
ii) If $\mathbb{C}^*\cdot\chi \ne \mathbb{C}^*\cdot\chi'$ then $L^{\mathrm{Hom}}(\chi,\chi';z)$ is a polynomial in z of degree $2(g-1)rs$ and constant term 1, all of whose zeros are of complex norm $q^{-1/2}$.
iii) If χ = χ ′ then there exists a positive integer d χ such that:
$$L^{\mathrm{Hom}}(\chi,\chi;z) = \frac{P(z)}{(1-z^{d_\chi})\big(1-(z/q)^{d_\chi}\big)}$$
where P (z) is a polynomial in z of degree 2(g − 1)rs + 2d χ and constant term 1, all of whose zeros are of complex norm q −1/2 . In i) above, t − deg refers to the function on H 0 given by h → t − deg(h) . The statements in ii) and iii) concerning the norms of the zeros of P (z) is known as the generalized Riemann hypothesis and is proved by Lafforgue (see [Laf02, Thm. VI.10]). 7.5. A cuspidal eigenform f χ is called absolutely cuspidal if d χ = 1. We will denote by H abs the subalgebra of H generated by H x for x ∈ X(k) and the collection of Fourier modes of the absolutely cuspidal eigenforms. We let Σ abs r ⊂ Σ r be the collection of eigenvalues of absolutely cuspidal eigenforms. We will now give a full presentation of H abs (see Remark 7.9 for some motivation in considering this specific subalgebra). To this end, for χ ∈ Σ r consider the generating series:
E χ (z) = d∈Z f d z −d ∈ H abs [[z ±1 ]]
where f is the associated cuspidal eigenform (well-defined up to a scalar). The relevance of Rankin-Selberg L-functions in our context stems from the following: as in [Kap97, Theorem 3.3], there exists a series Ψ χ (z) ∈ H 0 [[z −1 ]] such that:
∆(E χ (z)) = E χ (z) ⊗ 1 + κ r Ψ χ (z) ⊗ E χ (z)
and for any χ, χ ′ ∈ Σ := r Σ r ,
E χ (z)Ψ χ ′ (w) = Ψ χ ′ (w)E χ (z)
LHom(χ ′ , χ; qz/w) LHom(χ ′ , χ; z/w) .
The properties above and the shape of the L-functions $L^{\mathrm{Hom}}(\chi,\chi;z)$ for absolutely cuspidal characters χ suggest an isomorphism between $H^{\mathrm{abs}}$ and the shuffle algebras considered in the present paper. To make this precise, consider the following renormalization of $L^{\mathrm{Hom}}$, for a pair of characters $(\chi,\chi')\in\Sigma^{\mathrm{abs}}_r\times\Sigma^{\mathrm{abs}}_s$:
$$\zeta_{\chi\chi'}(z) = \begin{cases}\theta_{\chi,\chi'}\,z^{(g-1)rs}\,L^{\mathrm{Hom}}(\chi,\chi';z^{-1}) & \text{if }\mathbb{C}^*\cdot\chi\ne\mathbb{C}^*\cdot\chi'\\[2pt] \big(1-\frac{z}{q}\big)\big(1-\frac{1}{zq}\big)\,z^{(g-1)r^2}\,L^{\mathrm{Hom}}(\chi,\chi;z^{-1}) & \text{if }\chi=\chi'\end{cases}$$
where $\theta_{\chi,\chi'} = \pi(\chi^*\boxtimes\chi')\big(\Omega_X^{1/2}\big)$ and $\pi(\chi^*\boxtimes\chi')$ is defined as in [KSV17, Remark 3.3]. Moreover, we have for all t:
(7.1) ζ t −deg χ,χ ′ (z) = ζ χ,t deg χ ′ (z) = ζ χχ ′ (t −1 z)
Using the functional equation for Rankin-Selberg L-functions ([KSV17, Proposition 3.7]), it is straightforward to check that:
$$\frac{\zeta_{\chi\chi'}(z)}{\zeta_{\chi'\chi}(z^{-1})} = q^{(g-1)rs}\,\frac{L^{\mathrm{Hom}}(\chi,\chi';z^{-1})}{L^{\mathrm{Hom}}(\chi,\chi';qz^{-1})}$$
and that the sets of zeros $\{u^{\chi,\chi'}_1,\ldots,u^{\chi,\chi'}_{2(g-1)rs}\}$ of the polynomials $\zeta_{\chi\chi'}(z)$ satisfy the following relations for all $\chi,\chi'\in\Sigma^{\mathrm{abs}}$:
(7.2)
$$\Big\{u^{\chi,\chi'}_1,\ldots,u^{\chi,\chi'}_{2(g-1)rs}\Big\} = \Big\{\frac{1}{q\,u^{\chi',\chi}_1},\ldots,\frac{1}{q\,u^{\chi',\chi}_{2(g-1)rs}}\Big\}$$
Special care must be taken with the formula above when $\chi = \chi'$, in which case the zeroes of the polynomial $\zeta_{\chi\chi}(z)\,\frac{1-z}{1-zq^{-1}}$ enjoy the symmetry property above.

7.6. Let $\Sigma^{\mathrm{abs}} = \{\chi\}$ be a fixed set of representatives of $\Sigma^{\mathrm{abs}}/\mathbb{C}^*$, and fix a corresponding set $\{f_\chi\}$ of cuspidal eigenforms (choosing different representatives will modify the rational functions defined in the previous Subsection according to (7.1)).
To this datum, we associate the quiver Q^abs_X, with vertex set:
I = ⊔_{r≥1} Σ^abs_r (see footnote 4)
and edge set E defined as follows:
– if (χ, χ′) ∈ Σ^abs_r × Σ^abs_s are distinct, then we draw (g − 1)rs arrows going from χ to χ′;
– if χ ∈ Σ^abs_r then there are (g − 1)r² + 1 loops at χ.
From the definition of ζ_{χχ′}(z) and properties ii), iii) and (7.2) of Rankin-Selberg L-functions, we see that the function ζ_{χχ′}(z) is, up to the (nonzero) constant factor θ_{χ,χ′}, exactly of the form (1.4), upon the specialization of the parameters. Let us denote by K_S the associated (small) shuffle algebra, as in (5.1).
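To fix ideas with the edge counts above (an illustrative computation of ours, not taken from the text): for a curve of genus g = 2 and absolutely cuspidal characters χ ∈ Σ^abs_1 and χ′ ∈ Σ^abs_2 (distinct vertices of Q^abs_X), the recipe produces (g − 1)rs = 1·1·2 = 2 arrows from χ to χ′ and another 2 in the opposite direction, together with (g − 1)·1² + 1 = 2 loops at χ and (g − 1)·2² + 1 = 5 loops at χ′. The 2(g − 1)rs = 4 zeros of ζ_{χχ′}(z) guaranteed by property ii) then match in number the 4 arrows joining the two distinct vertices, which is consistent with ζ_{χχ′} being of the form (1.4).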
Theorem 7.7. The assignment z^d_χ ↦ f_{χ,d} for χ ∈ Σ^abs extends to an algebra isomorphism:
K_S ≃ H^{+,abs}
Proof. This is essentially [KSV17, Theorem 3.10] (the shuffle kernels differ by the factors θ χ,χ ′ , but this does not affect any of the arguments in the present paper).
Note that [KSV17] does not single out absolutely cuspidal eigenforms; however, the kernels for the shuffle algebras considered there are, for eigenforms that are not absolutely cuspidal, different from the ones considered in the current paper.
In combination with Theorem 5.8, we deduce the following presentation for H abs . Let us set, for x ∈ X(k) and χ ∈ Σ abs :
E_χ(z) = Σ_{d∈Z} f_{χ,d} z^{−d} ,   T_x(z) = Σ_{l≥1} T_{x,l} z^{−l}
Corollary 7.8. The algebra H abs is isomorphic to the algebra generated by elements κ, {T x,l | x ∈ X(k), l ≥ 1} and {f χ,d | χ ∈ Σ abs , d ∈ Z} modulo the following set of relations:
• κ and T_{x,l} all commute (x ∈ X(k), l ≥ 1);
• κ E_χ(z) = q^{(g−1)r} E_χ(z) κ  (χ ∈ Σ^abs_r);
• [T_x(z), E_χ(w)] = Σ_{l≥1} χ(T_{x,l}) (w/z)^l E_χ(w),
Even though the set I is countable, the results in the present paper still hold as stated, because all direct summands of (2.2) and all relations (1.6) only involve finitely many elements of I.
1 There is a specific choice of a line bundle involved in the definition of the multiplication; we refer to [Neg21a] for details.
• For any pair (i, j) ∈ I², the quadratic relation (1.5): e_i(z) e_j(w) ζ_{ji}(w/z) z^{δ_i^j} = e_j(w) e_i(z) ζ_{ij}(z/w) (−w)^{δ_i^j}
• For any edge e = ij ∈ E, the cubic relation: e_i(x_1) e_i(x_2) e_j(y) · · · e_i(x_1) e_i(x_2) = 0
Theorem 1.3. There is an algebra isomorphism K_{Q,loc} ≃ U^+_Q.
b, c only depend on d = a + b + c (and will henceforth simply be denoted A^(e)_d). Setting:
(Lsl_3), and they read: e_s(x) e_s(y) (xq − y) = e_s(y) e_s(x) (x − yq)   (3.12)   e_s(x) e_t(y)
one needs to prove that for any e ∈ E and d ∈ Z, we have: non-degeneracy of the pairing (3.15), it suffices to show that Υ(A(e) d ) pair trivially with anything in S op . Using (3.16), this is equivalent to showing that A (e) d pair trivially with anything in S op under the pairing (2.8), which follows from (3.6).
≤ , deg(w) = (n, d), s∈A d s ≥ −M |A| + m|A| 2 − A∋s>t / ∈A # isit , ∀A ⊆ {1, . . . , n} .
d R(z 1 , . . . , z n−3 , qx, t e x, x)
3.27) are in T . Property (3.25) implies that s∈B d s ≥ −M |A| + m|A| 2 − 2n 2 |E| ≥ −M |A| + m|A| 2 − 2n 2 |E| (3.29) for A = B or A = B ⊔ {x + 1, . . . , n} respectively, where B ⊆ {1, . . . , x} is arbitrary. Assume for the purpose of contradiction that the defining property of T is violated for A = B ⊔ C with B ⊆ {1, . . . , x} and C a proper subset of {x + 1, . . . , n}, i.< −M |A| + m|A| 2 We claim that (3.28)-(3.29) and (3.30) are incompatible (for m chosen large enough compared to the constant c 3 mentioned in the first bullet above). Indeed, the properties listed in the first bullet above allow us to obtain the following inequalities from (3.28), (3.29), (3.30): s∈B d s ≥ −M |B| + m|B| 2 − 2n 2 |E| (3.31) s∈B d s + y(o − c 3 ) < −M (|B| + y) + m(|B| + y) 2 (3.32) s∈B d s + (n − x)(o + c 3 ) ≥ −M (|B| + n − x) + m(|B| + n − x) 2 − 2n 2 |E| (3.33) where y = |C| lies in {1, . . . , n − x − 1}. Subtracting (3.31) from (3.32) yields o − c 3 < −M + m(y + 2|B|) + 2n 2 |E| and subtracting (3.32) from (3.33) yields o + c 3 ≥ −M + m(n − x + y + 2|B|) − 2n 2 |E|The two inequalities above are incompatible if m is chosen large enough compared to c 3 and 2n 2 |E|, thus yielding the desired contradiction.
Remark 4.4. Since it arises as a Drinfeld double, U has an interesting universal R-matrix (up to the usual issues involving completions and central extensions that are required to rigorously define it). This object is studied in [Neg21b], where it is conjectured that it matches the R-matrices defined via Nakajima quiver varieties by Aganagic, Maulik, Okounkov and Smirnov (cf. [MO19, OS16, Oko18, AO21]).
5. Special values of the parameters
5.
l ≥ 1.
7.4. An element f ∈ H^+ is called cuspidal if the standard coproduct on H_X (cf. [Kap97, Theorem 3.3]) satisfies: Δ(f) ∈ (f ⊗ 1) ⊕ (H_0 ⊗ H^+). We will denote by H^cusp the subspace of cuspidal functions. It is finite-dimensional in any fixed rank r and degree d. It is known that H^cusp is a minimal generating subspace of H^+, and that H^cusp is stable under the Hecke action of H_0. Cuspidal functions are also closely related to the following standard construction in the theory of automorphic forms. Let χ : H_0 → C be an algebra character. A cuspidal eigenform of rank r and of eigenvalue χ is a nontrivial formal infinite sum f = Σ_d f_d ∈ ∏_d H^{+,cusp}_{r,d} such that:
Acknowledgements. We would like to thank Alexander Tsymbaliuk for many interesting discussions and substantial feedback. A.N. gratefully acknowledges NSF grants DMS-1760264 and DMS-1845034, as well as support from the Alfred P. Sloan Foundation and the MIT Research Support Committee. The work of F. S. was partially supported by JSPS KAKENHI Grant Number JP21K03197.

Proposition 5.5. For all i, j ∈ I, k ∈ N, γ ∈ K and (a, b, c) ∈ Z³, we have: (the integral refers to the difference of the residues in the variable z_{i1} at 0 and ∞ of the integrand) where:

Proof. Consider the following formal series, for all k ∈ N: The following property is well-known: for all Laurent polynomials P(z_{i1}, z_{i2}, z_{j1}), where [. . .]_{ct} refers to the constant term in the variables z_{i1}, z_{i2}, z_{j1}. With this in mind, formula (5.6) reduces to the following identity of formal power series (whose k = 1 case is a closely related version of (3.8)):

Let us consider the subalgebras: In Proposition 6.4, we will recall the way to realize H^sph_X as a semidirect product of H^{sph,+}_X with H^{sph,0}_X. In order to set this up, it is convenient to introduce new generators {T_{0,l}}, {θ_{0,l}}, l ≥ 1 of H^{sph,0}_X through the relations: is a free commutative polynomial algebra in κ^{±1} and any one of the families of generators {1_{0,l}}_{l≥1}, {T_{0,l}}_{l≥1} or {θ_{0,l}}_{l≥1}.

6.3. Let σ_1, σ̄_1, . . . , σ_g, σ̄_g denote the Weil numbers of X, paired up such that σ_e σ̄_e = q^{−1} for all e = 1, . . . , g. Therefore, we may write: for all e = 1, . . . , g, and this would be compatible with (1.3). With this notation, the particular case of the rational function (1.4) when Q is the g-loop quiver (one vertex with g loops) is a renormalized form of the zeta function of X:

Proof. See [SV12, Corollary 1.4 and Section 1.5]. Note that 1^vec_{1,d} κ = q^{1−g} κ 1^vec_{1,d}.

Concerning the structure of H^{sph,+}_X, we have the following result. Let V_X be the C-algebra defined by the relations (2.3) when Q is the g-loop quiver (and the equivariant parameters t_e are specialized as in (6.2)), and let V^sph_X be the subalgebra of V_X generated by its horizontal degree 1 pieces {V_{X|1,d}}_{d∈Z}.

w and the collection of cubic relations determined by setting (5.5) equal to 0. In particular, all the relations satisfied by Eisenstein series attached to absolutely cuspidal eigenforms are a consequence, in addition to the usual functional equation, of certain cubic relations (and the structure of these only depend on the multiplicities of the zeros of the corresponding Rankin-Selberg L-functions).

Remark 7.9. One might wonder if there is an analogue for H_X of the generic spherical Hall algebra. In the context of a quiver Q, such an analogue takes the form of a quantized graded Borcherds algebra, whose Cartan datum encodes the dimensions of the spaces of cuspidal functions in H_Q, see [BS19]. By a recent theorem of H. Yu, [Yu18], the dimensions of the spaces of absolutely cuspidal functions in H_X are given by some universal polynomials in the Weil numbers of X (depending on the rank of the sheaves considered). This suggests that it is H^abs_X rather than H_X which admits a natural generic form, and that the classical limit of such a generic form of H^abs_X would be a Lie algebra in the category of GSp_{2g}(C)-modules. However, we do not know at the moment how to encode the zeros of the various Rankin-Selberg L-functions (which account for the "quantum" parameters).
Elliptic stable envelopes. M Aganagic, A Okounkov, J. Amer. Math. Soc. 341M. Aganagic and A. Okounkov, Elliptic stable envelopes, J. Amer. Math. Soc. 34 (2021), no. 1, 79-133.
Counting absolutely cuspidals for quivers. T Bozec, O Schiffmann, Math. Z. 2921-2T. Bozec and O. Schiffmann, Counting absolutely cuspidals for quivers, Math. Z. 292 (2019), no. 1-2, 133-149.
On the Hall algebra of an elliptic curve, I, Duke Math. I Burban, O Schiffmann, J. 1617I. Burban and O. Schiffmann, On the Hall algebra of an elliptic curve, I, Duke Math. J. 161 (2012), no. 7, 1171-1231.
A commutative algebra on degenerate CP 1 and Macdonald polynomials. ] B + 09, K Feigin, A Hashizume, J Hoshino, S Shiraishi, Yanagida, J. Math. Phys. 50942+ 09] B. Feigin, K. Hashizume, A. Hoshino, J. Shiraishi, and S. Yanagida, A commutative algebra on degenerate CP 1 and Macdonald polynomials, J. Math. Phys. 50 (2009), no. 9, 095215, 42.
Quantized moduli spaces of the bundles on the elliptic curve and their applications, Integrable structures of exactly solvable two-dimensional models of quantum field theory. B L Feigin, A V Odesskii, NATO Sci. Ser. II Math. Phys. Chem. 35Kluwer Acad. PublB. L. Feigin and A. V. Odesskii, Quantized moduli spaces of the bundles on the elliptic curve and their applications, Integrable structures of exactly solvable two-dimensional models of quantum field theory (Kiev, 2000), NATO Sci. Ser. II Math. Phys. Chem., vol. 35, Kluwer Acad. Publ., Dordrecht, 2001, pp. 123-137.
Eisenstein series and quantum affine algebras. M Kapranov, J. Math. Sci. 84M. Kapranov Eisenstein series and quantum affine algebras, J. Math. Sci. (New York) 84 (1997), pp. 1311-1360.
The Hall algebra of a curve. M Kapranov, O Schiffmann, E Vasserot, Selecta Math. (N.S.). 231M. Kapranov, O. Schiffmann and E. Vasserot, The Hall algebra of a curve, Selecta Math. (N.S.) 23 (2017), no. 1, 117-177.
Cohomological Hall algebra, exponential Hodge structures and motivic Donaldson-Thomas invariants. M Kontsevich, Y Soibelman, Commun. Number Theory Phys. 52M. Kontsevich and Y. Soibelman, Cohomological Hall algebra, exponential Hodge structures and motivic Donaldson-Thomas invariants, Commun. Number Theory Phys. 5 (2011), no. 2, 231-352.
L. Lafforgue, Chtoucas de Drinfeld et correspondance de Langlands, Invent. Math. 147 (2002), no. 1, 1-241.
Dual canonical bases, quantum shuffles and q-characters. B Leclerc, Math. Z. 2464B. Leclerc, Dual canonical bases, quantum shuffles and q-characters, Math. Z. 246 (2004), no. 4, 691-732.
Standard Lyndon bases of Lie algebras and enveloping algebras. P Lalonde, A Ram, Trans. Amer. Math. Soc. 3475P. Lalonde and A. Ram, Standard Lyndon bases of Lie algebras and enveloping alge- bras, Trans. Amer. Math. Soc. 347 (1995), no. 5, 1821-1830.
D Maulik, A Okounkov, Quantum groups and quantum cohomology. 209D. Maulik and A. Okounkov, Quantum groups and quantum cohomology, Astérisque (2019), no. 408, ix+209.
The shuffle algebra revisited. A Negut, Int. Math. Res. Not. 22IssueA. Negut , , The shuffle algebra revisited, Int. Math. Res. Not., Issue 22 (2014), 6242- 6275
arχiv:1504.06525Quantum algebras and cyclic quiver varieties. Ph.D. Thesis Columbia University[Neg15] , Quantum algebras and cyclic quiver varieties, Ph.D. Thesis Columbia Uni- versity (2015), arχiv:1504.06525.
, The R-matrix of the quantum toroidal algebra, arχiv:2005.14182.
[Neg21a] , Shuffle algebras for quivers and wheel conditions, arχiv:2108.08779.
[Neg21b] , Shuffle algebras for quivers and R-matrices, J. Inst. Math. Jussieu (2022), 1-36, doi:10.1017/S1474748022000184.
A Negut, A Tsymbaliuk, arχiv:2102.11269Quantum loop groups and shuffle algebras via Lyndon words. A. Negut , and A. Tsymbaliuk, Quantum loop groups and shuffle algebras via Lyndon words, arχiv:2102.11269.
Enumerative geometry and geometric representation theory, Algebraic geometry: Salt Lake City. A Okounkov, MR 3821158Proc. Sympos. Pure Math. 97Amer. Math. SocA. Okounkov, Enumerative geometry and geometric representation theory, Algebraic geometry: Salt Lake City 2015, Proc. Sympos. Pure Math., vol. 97, Amer. Math. Soc., Providence, RI, 2018, pp. 419-457. MR 3821158
Quantum difference equation for Nakajima varieties. A Okounkov, A Smirnov, Invent. Math. 229A. Okounkov and A. Smirnov, Quantum difference equation for Nakajima varieties, Invent. Math. 229, 1203-1299 (2022).
Lyndon bases and the multiplicative formula for R-matrices. M Rosso, preprintM. Rosso, Lyndon bases and the multiplicative formula for R-matrices, preprint, 2002.
Drinfeld realization of the elliptic Hall algebra. O Schiffmann, J. Algebraic Combin. 35O. Schiffmann, Drinfeld realization of the elliptic Hall algebra, J. Algebraic Combin. 35 (2012), 237-262.
Lectures on Hall algebras, Geometric methods in representation theory. Sémin. Congr. 24-II. , Lectures on Hall algebras, Geometric methods in representation theory, Sémin. Congr. 24-II (2012), 1-141.
Hall algebras of curves, commuting varieties and Langlands duality. O Schiffmann, E Vasserot, Math. Ann. 3534O. Schiffmann and E. Vasserot, Hall algebras of curves, commuting varieties and Lang- lands duality, Math. Ann. 353 (2012), no. 4, 1399-1451.
The elliptic Hall algebra and the K-theory of the Hilbert scheme of A 2. Duke Math. J. 1622, The elliptic Hall algebra and the K-theory of the Hilbert scheme of A 2 , Duke Math. J. 162 (2013), no. 2, 279-366.
On cohomological Hall algebras of quivers: generators. J. Reine Angew. Math. 760, On cohomological Hall algebras of quivers: generators, J. Reine Angew. Math. 760 (2020), 59-132.
K-theoretic Hall algebras, quantum groups and super quantum groups. M Varagnolo, E Vasserot, arχiv:2011.01203M. Varagnolo and E. Vasserot, K-theoretic Hall algebras, quantum groups and super quantum groups, arχiv:2011.01203.
The cohomological Hall algebra of a preprojective algebra. Y Yang, G Zhao, Proc. Lond. Math. Soc. 3Y. Yang and G. Zhao, The cohomological Hall algebra of a preprojective algebra, Proc. Lond. Math. Soc. (3) 116 (2018), no. 5, 1029-1074.
Comptage de systèmes locaux l-adiques sur une courbe. H Yu, arχiv:1807.04659H. Yu, Comptage de systèmes locaux l-adiques sur une courbe, arχiv:1807.04659.
Y. Zhao, The Feigin-Odesskii wheel conditions and sheaves on surfaces, arχiv:1909.07870.
(Andrei Negut) MIT, Department of Mathematics, Cambridge, MA, USA; Simion Stoilow Institute of Mathematics, Bucharest, Romania.
(Francesco Sala) Università di Pisa, Dipartimento di Matematica, Largo Bruno Pontecorvo 5, 56127 Pisa (PI), Italy. Email address: [email protected]
| []
|
[
"The VANDELS survey: the ionizing properties of star-forming galaxies at 3 ≤ ≤ 5 using deep rest-frame ultraviolet spectroscopy",
"The VANDELS survey: the ionizing properties of star-forming galaxies at 3 ≤ ≤ 5 using deep rest-frame ultraviolet spectroscopy"
]
| [
"A Saldana-Lopez \nDepartment of Astronomy\nUniversity of Geneva\n51 Chemin Pegasi1290VersoixSwitzerland\n",
"★ D Schaerer \nDepartment of Astronomy\nUniversity of Geneva\n51 Chemin Pegasi1290VersoixSwitzerland\n\nCNRS\nIRAP\n14 Avenue E. Belin31400ToulouseFrance\n",
"J Chisholm \nDepartment of Astronomy\nThe University of Texas at Austin\nStop C14002515, 78712-1205Speedway, AustinTXUSA\n",
"A Calabrò \nINAF -Osservatorio Astronomico di Roma\nvia Frascati 3300078Monteporzio CatoneItaly\n",
"L Pentericci \nINAF -Osservatorio Astronomico di Roma\nvia Frascati 3300078Monteporzio CatoneItaly\n",
"F Cullen \nInstitute for Astronomy\nUniversity of Edinburgh\nRoyal Observatory\nEH9 3HJEdinburghUK\n",
"A Saxena \nSub-department of Astrophysics\nUniversity of Oxford\nKeble RoadOX1 3RHOxfordUK\n\nDepartment of Physics and Astronomy\nUniversity College London\nGower StreetWC1E 6BTLondonUK\n",
"R Amorín \nInstituto de Investigación Multidisciplinar en Ciencia y Tecnología\nUniversidad de La Serena\nRaúl Bitrán, La Serena1305Chile\n",
"A C Carnall \nInstitute for Astronomy\nUniversity of Edinburgh\nRoyal Observatory\nEH9 3HJEdinburghUK\n",
"F Fontanot \nINAF -Trieste Observatory\nvia GB Tiepolo 1134143TriesteItaly\n",
"J P U Fynbo \nCosmic Dawn Center (DAWN)\nJagtvej 128DK2200Copenhagen NDenmark\n\nNiels Bohr Institute\nUniversity of Copenhagen\nBlegdamsvej 17DK2100Copenhagen ØDenmark\n",
"L Guaita \nDepartamento de Ciencias Fisicas\nFacultad de Ciencias Exactas\nUniversidad Andres Bello\nFernandez Concha 700\n\nLas Condes\nSantiagoChile\n",
"N P Hathi \nSpace Telescope Science Institute\n21218BaltimoreMDUSA\n",
"P Hibon \nEuropean Southern Observatory\nAlonso de Córdova 3107Vitacura, Santiago de ChileChile\n",
"Z Ji \nDepartment of Astronomy\nUniversity of Massachusetts\n710 North Pleasant Street01003-9305Amherst, AmherstMAUSA\n",
"D J Mcleod \nInstitute for Astronomy\nUniversity of Edinburgh\nRoyal Observatory\nEH9 3HJEdinburghUK\n",
"E Pompei \nEuropean Southern Observatory\nAlonso de Córdova 3107Vitacura, Santiago de ChileChile\n",
"G Zamorani \nINAF -Astrophysics and Space Science Observatory\nVia Piero Gobetti 93/340129BolognaItaly\n"
]
| [
"Department of Astronomy\nUniversity of Geneva\n51 Chemin Pegasi1290VersoixSwitzerland",
"Department of Astronomy\nUniversity of Geneva\n51 Chemin Pegasi1290VersoixSwitzerland",
"CNRS\nIRAP\n14 Avenue E. Belin31400ToulouseFrance",
"Department of Astronomy\nThe University of Texas at Austin\nStop C14002515, 78712-1205Speedway, AustinTXUSA",
"INAF -Osservatorio Astronomico di Roma\nvia Frascati 3300078Monteporzio CatoneItaly",
"INAF -Osservatorio Astronomico di Roma\nvia Frascati 3300078Monteporzio CatoneItaly",
"Institute for Astronomy\nUniversity of Edinburgh\nRoyal Observatory\nEH9 3HJEdinburghUK",
"Sub-department of Astrophysics\nUniversity of Oxford\nKeble RoadOX1 3RHOxfordUK",
"Department of Physics and Astronomy\nUniversity College London\nGower StreetWC1E 6BTLondonUK",
"Instituto de Investigación Multidisciplinar en Ciencia y Tecnología\nUniversidad de La Serena\nRaúl Bitrán, La Serena1305Chile",
"Institute for Astronomy\nUniversity of Edinburgh\nRoyal Observatory\nEH9 3HJEdinburghUK",
"INAF -Trieste Observatory\nvia GB Tiepolo 1134143TriesteItaly",
"Cosmic Dawn Center (DAWN)\nJagtvej 128DK2200Copenhagen NDenmark",
"Niels Bohr Institute\nUniversity of Copenhagen\nBlegdamsvej 17DK2100Copenhagen ØDenmark",
"Departamento de Ciencias Fisicas\nFacultad de Ciencias Exactas\nUniversidad Andres Bello\nFernandez Concha 700",
"Las Condes\nSantiagoChile",
"Space Telescope Science Institute\n21218BaltimoreMDUSA",
"European Southern Observatory\nAlonso de Córdova 3107Vitacura, Santiago de ChileChile",
"Department of Astronomy\nUniversity of Massachusetts\n710 North Pleasant Street01003-9305Amherst, AmherstMAUSA",
"Institute for Astronomy\nUniversity of Edinburgh\nRoyal Observatory\nEH9 3HJEdinburghUK",
"European Southern Observatory\nAlonso de Córdova 3107Vitacura, Santiago de ChileChile",
"INAF -Astrophysics and Space Science Observatory\nVia Piero Gobetti 93/340129BolognaItaly"
]
| [
"MNRAS"
]
| The physical properties of Epoch of Reionization (EoR) galaxies are still poorly constrained by observations. To better understand the ionizing properties of galaxies in the EoR, we investigate deep, rest-frame ultraviolet (UV) spectra of 500 star-forming galaxies at 3 ≤ ≤ 5 selected from the public ESO-VANDELS spectroscopic survey. The absolute ionizing photon escape fraction ( abs esc , i.e., the ratio of leaking against produced ionizing photons) is derived by combining absorption line measurements with estimates of the UV attenuation. The ionizing production efficiency ( ion , i.e., the number of ionizing photons produced per non-ionizing UV luminosity) is calculated by fitting the far-UV (FUV) stellar continuum of the VANDELS galaxies. We find that the abs esc and ion parameters increase towards low-mass, blue UV-continuum slopes and strong Ly emitting galaxies, and both are slightly higher-than-average for the UV-faintest galaxies in the sample. Potential Lyman Continuum Emitters (LCEs, abs esc ≥ 5%) and selected Lyman Alpha Emitters (LAEs, Ly ≤ −20Å) show systematically higher ion (log ion (Hz/erg) ≈ 25.38, 25.41) than non-LCEs and non-LAEs (log ion (Hz/erg) ≈ 25.18, 25.14) at similar UV magnitudes. This indicates very young underlying stellar populations (≈ 10 Myr) at relatively low metallicities (≈ 0.2 Z ). The FUV non-ionizing spectra of potential LCEs is characterized by blue UV slopes (≤ −2), enhanced Ly emission (≤ −25Å), strong UV nebular lines (e.g., high C 1550/C 1908 ≥ 0.75 ratios), and weak absorption lines (≤ 1Å). The latter suggests the existence of low gas-column-density channels in the interstellar medium, which enables the escape of ionizing photons. By comparing our VANDELS results against other surveys in the literature, our findings imply that the ionizing budget in the EoR was likely dominated by UV-faint, low-mass and dustless galaxies. | 10.1093/mnras/stad1283 | [
"https://export.arxiv.org/pdf/2211.01351v2.pdf"
]
| 253,255,085 | 2211.01351 | 6ebc81287a9656f76d9a502ee3254bdf3c587977 |
The VANDELS survey: the ionizing properties of star-forming galaxies at 3 ≤ ≤ 5 using deep rest-frame ultraviolet spectroscopy
2022
A Saldana-Lopez
Department of Astronomy
University of Geneva
51 Chemin Pegasi1290VersoixSwitzerland
★ D Schaerer
Department of Astronomy
University of Geneva
51 Chemin Pegasi1290VersoixSwitzerland
CNRS
IRAP
14 Avenue E. Belin31400ToulouseFrance
J Chisholm
Department of Astronomy
The University of Texas at Austin
Stop C14002515, 78712-1205Speedway, AustinTXUSA
A Calabrò
INAF -Osservatorio Astronomico di Roma
via Frascati 3300078Monteporzio CatoneItaly
L Pentericci
INAF -Osservatorio Astronomico di Roma
via Frascati 3300078Monteporzio CatoneItaly
F Cullen
Institute for Astronomy
University of Edinburgh
Royal Observatory
EH9 3HJEdinburghUK
A Saxena
Sub-department of Astrophysics
University of Oxford
Keble RoadOX1 3RHOxfordUK
Department of Physics and Astronomy
University College London
Gower StreetWC1E 6BTLondonUK
R Amorín
Instituto de Investigación Multidisciplinar en Ciencia y Tecnología
Universidad de La Serena
Raúl Bitrán, La Serena1305Chile
A C Carnall
Institute for Astronomy
University of Edinburgh
Royal Observatory
EH9 3HJEdinburghUK
F Fontanot
INAF -Trieste Observatory
via GB Tiepolo 1134143TriesteItaly
J P U Fynbo
Cosmic Dawn Center (DAWN)
Jagtvej 128DK2200Copenhagen NDenmark
Niels Bohr Institute
University of Copenhagen
Blegdamsvej 17DK2100Copenhagen ØDenmark
L Guaita
Departamento de Ciencias Fisicas
Facultad de Ciencias Exactas
Universidad Andres Bello
Fernandez Concha 700
Las Condes
SantiagoChile
N P Hathi
Space Telescope Science Institute
21218BaltimoreMDUSA
P Hibon
European Southern Observatory
Alonso de Córdova 3107Vitacura, Santiago de ChileChile
Z Ji
Department of Astronomy
University of Massachusetts
710 North Pleasant Street01003-9305Amherst, AmherstMAUSA
D J Mcleod
Institute for Astronomy
University of Edinburgh
Royal Observatory
EH9 3HJEdinburghUK
E Pompei
European Southern Observatory
Alonso de Córdova 3107Vitacura, Santiago de ChileChile
G Zamorani
INAF -Astrophysics and Space Science Observatory
Via Piero Gobetti 93/340129BolognaItaly
The VANDELS survey: the ionizing properties of star-forming galaxies at 3 ≤ ≤ 5 using deep rest-frame ultraviolet spectroscopy
MNRAS
000, 2022. Accepted 2023 April 24. Received 2023 April 24; in original form 2022 November 21. Preprint 12 May 2023. Compiled using MNRAS LaTeX style file v3.0.
Key words: cosmology: dark ages, reionization, first stars - galaxies: high-redshift, ISM, stellar content - ISM: dust, extinction - ultraviolet: galaxies
The physical properties of Epoch of Reionization (EoR) galaxies are still poorly constrained by observations. To better understand the ionizing properties of galaxies in the EoR, we investigate deep, rest-frame ultraviolet (UV) spectra of 500 star-forming galaxies at 3 ≤ z ≤ 5 selected from the public ESO-VANDELS spectroscopic survey. The absolute ionizing photon escape fraction (f_esc^abs, i.e., the ratio of leaking against produced ionizing photons) is derived by combining absorption line measurements with estimates of the UV attenuation. The ionizing production efficiency (ξ_ion, i.e., the number of ionizing photons produced per non-ionizing UV luminosity) is calculated by fitting the far-UV (FUV) stellar continuum of the VANDELS galaxies. We find that the f_esc^abs and ξ_ion parameters increase towards low-mass, blue UV-continuum slopes and strong Lyα-emitting galaxies, and both are slightly higher-than-average for the UV-faintest galaxies in the sample. Potential Lyman Continuum Emitters (LCEs, f_esc^abs ≥ 5%) and selected Lyman Alpha Emitters (LAEs, W(Lyα) ≤ −20 Å) show systematically higher ξ_ion (log ξ_ion (Hz/erg) ≈ 25.38, 25.41) than non-LCEs and non-LAEs (log ξ_ion (Hz/erg) ≈ 25.18, 25.14) at similar UV magnitudes. This indicates very young underlying stellar populations (≈ 10 Myr) at relatively low metallicities (≈ 0.2 Z_⊙). The FUV non-ionizing spectra of potential LCEs are characterized by blue UV slopes (β ≤ −2), enhanced Lyα emission (W(Lyα) ≤ −25 Å), strong UV nebular lines (e.g., high C IV 1550 / C III] 1908 ≥ 0.75 ratios), and weak absorption lines (≤ 1 Å). The latter suggests the existence of low gas-column-density channels in the interstellar medium, which enables the escape of ionizing photons. By comparing our VANDELS results against other surveys in the literature, our findings imply that the ionizing budget in the EoR was likely dominated by UV-faint, low-mass and dustless galaxies.
INTRODUCTION
Several data sets (see Goto et al. 2021, and references therein) measuring the redshift (z) evolution of the volume-averaged neutral hydrogen fraction provide evidence for the last major phase change undergone by the Universe, the Cosmic Reionization. Between z = 9 and z = 6 (Planck Collaboration et al. 2016), the number of ionizing photons emitted per unit time (ṅ_ion) overcame the recombination rate (Γ_HI) of hydrogen atoms, ṅ_ion ≥ Γ_HI (Madau et al. 1999), so that the bulk of H I neutral gas within the Intergalactic Medium (IGM) progressively transitioned to an ionized state (see Dayal & Ferrara 2018, for a review).
★ E-mail: [email protected]
Yet, the sources mainly responsible for expelling such a vast amount of ionizing (or Lyman Continuum, LyC) photons to the IGM remain elusive (see Robertson 2022, for a review). Overall consensus exists about the minor contribution of Active Galactic Nuclei (AGN) to the ionizing budget during the Epoch of Reionization (EoR, Hassan et al. 2018;Kulkarni et al. 2019;Dayal et al. 2020), mainly because of their lower number density at early epochs (e.g., Matsuoka et al. 2018, but see Fontanot et al. (2012) and Cristiani et al. (2016) for a different viewpoint). This said, AGNs have played a key role in keeping the IGM ionized after the EoR (see e.g., Becker & Bolton 2013), whereas stars seem to provide a small contribution to the ionizing radiation budget at < 5 (Tanvir et al. 2019). Clearly, assuming that star-forming (SF) galaxies drove Cosmic Reionization shifts the current debate on whether more massive, UV-bright galaxies (Madau & Haardt 2015) or in opposite, low-mass, UV-faint counterparts (Robertson et al. 2013(Robertson et al. , 2015 dominated the ionizing emissivity at the EoR.
On the one hand, the remarkable modeling efforts by Sharma et al. (2016) and Naidu et al. (2020), based on the evolution of the star-formation rate surface-density of galaxies, and the works by Naidu et al. (2022) and Matthee et al. (2022), based on the fraction of Ly emitters (LAEs) over time, support a late-completed and rapid Reionization purely dominated by more massive and moderately luminous sub− ★ UV systems. These results are compatible with the rapid reionization modeled by Mason et al. (2019), in which the ionizing emissivity was constrained from CMB optical depth and Ly forest dark pixel fraction data. On the other hand, semi-empirical models like those in Finkelstein et al. (2019), based on observational constraints on the UV luminosity function during the EoR, and independent cosmological hydrodynamical simulation such as Rosdahl et al. (2022), suggest an early-completed Reionization conducted primarily by low-mass and fainter galaxies (see also Trebitsch et al. 2022). The most recent measurements of the mean free path of ionizing photons by Becker et al. (2021) have shown a much shorter value than previously thought at 5 ≤ ≤ 6, supporting the rapid reionization scenario that, according to recent models, could still be conducted by the faintest galaxies (Cain et al. 2021).
In observations, the problem reduces to solving the equation for the ionizing emissivity of the average galaxy population ( ion ), i.e., the number of ionizing photons emitted per unit time and comoving volume (Robertson 2022):
ṅ_ion = f_esc^abs ξ_ion ρ_UV    (1)
where f_esc^abs stands for the absolute escape fraction of ionizing photons (i.e., the ratio between the number of escaping versus produced LyC photons by massive stars), and ξ_ion is the so-called ionizing photon production efficiency (ionizing photons generated per non-ionizing intrinsic UV luminosity). ρ_UV accounts for the non-ionizing UV luminosity density at the EoR, resulting from the integral of the UV luminosity function (UVLF, the number of galaxies per UV luminosity and comoving volume).
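To make the bookkeeping in Eq. 1 concrete, the short sketch below evaluates ṅ_ion for one set of illustrative numbers (the fiducial f_esc^abs = 5% and log ξ_ion = 25.2 quoted later in this introduction, together with an assumed UV luminosity density; the ρ_UV value is a placeholder of ours, not a measurement from this work).

```python
# Illustrative evaluation of Eq. 1 (placeholder inputs, not results of this paper)
f_esc_abs = 0.05          # absolute LyC escape fraction
log_xi_ion = 25.2         # ionizing photon production efficiency [Hz erg^-1]
rho_uv = 10**26.0         # assumed UV luminosity density [erg s^-1 Hz^-1 Mpc^-3]

n_ion_dot = f_esc_abs * 10**log_xi_ion * rho_uv   # ionizing emissivity
print(f"n_ion_dot = {n_ion_dot:.2e} photons s^-1 Mpc^-3")   # ~8e+49 for these inputs
```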
The UVLF (or UV , equivalently) of galaxies is relatively wellconstrained up to the very high-redshift Universe (Bouwens et al. 2015;Davidzon et al. 2017;Bouwens et al. 2021;Donnan et al. 2023;Finkelstein et al. 2023). At the EoR, some works observe a decrease in the UV luminosity density from = 8 to 10 Harikane et al. 2022), compatible with a fast build-up of the dark matter halo mass function at those redshifts while, in contrast, some others do not find such a suppression at all (McLeod et al. 2016;Livermore et al. 2017). So far, UV-bright galaxies are found to be several orders of magnitude less numerous than the bulk of the UV-faint detected galaxies (Atek et al. 2015), and therefore thought to play a minor role during Reionization (but see Marques-Chaves et al. 2022a), although a possible excess in the number of sources at the bright-end of the UVLF (Rojas-Ruiz et al. 2020), might make flip the argument towards an EoR governed by the "oligarchs".
Our paradigm of the Early Universe is rapidly changing thanks to the James Webb Space Telescope (JWST). The JWST Early Release Science and Observations have just made possible the photometric selection (e.g., Atek et al. 2023;Furtak et al. 2023;Labbé et al. 2023) and spectroscopic characterization (Arellano-Córdova et al. 2022;Brinchmann 2022;Schaerer et al. 2022b;Tacchella et al. 2022;Trussler et al. 2022;Carnall et al. 2023;Curti et al. 2023;Trump et al. 2023) of galaxies at the EoR. Joint efforts combining both photometry and spectroscopy of some of the first JWST programmes (see e.g., Harikane et al. 2023;Isobe et al. 2023;Nakajima et al. 2023) have shown as the best approach so far to study the properties of EoR galaxies in the context of galaxy evolution, with surveys such as GLASS (Castellano et al. 2022a,b;Nanayakkara et al. 2022;Mascia et al. 2023a;Santini et al. 2023), UNCOVER (Bezanson et al. 2022;Weaver et al. 2023), CEERS Topping et al. 2022;Finkelstein et al. 2023;Fujimoto et al. 2023;Tang et al. 2023) or JADES (Cameron et al. 2023;Saxena et al. 2023;Curtis-Lake et al. 2023;Robertson et al. 2023).
The ionizing production efficiency of galaxies ( ion ) is more uncertain. Surveys targeting H emitters (HAEs, see Bouwens et al. 2016;Matthee et al. 2017a;Shivaei et al. 2018;Atek et al. 2022) and LAEs (Harikane et al. 2018;Nakajima et al. 2018a) at intermediate redshifts show, in general, higher production efficiencies for the low-mass and UV-faint galaxies (Prieto-Lyon et al. 2022), although with a huge scatter within which ion spans a wide range of values depending on the galaxy type (log ion (Hz/erg) = 24 − 26). Reassuringly, the overall ion evolution with galaxy properties at these redshifts is in line with the usual formalism by which a log ion (Hz/erg) ≥ 25.2 SFG population with constant SFH (over 100Myr) is able to fully reionize the Universe, assuming a fixed escape fraction of abs esc ≥ 5% (Robertson et al. 2013). Even so, given the stochastic nature of the LyC emission, linked to the predominantly bursty star-formation histories (SFHs) in low-mass SFGs (Muratov et al. 2015), these assumptions might not realistically apply anymore (see discussion in Atek et al. 2022).
Finally, the LyC absolute escape fraction ( abs esc ) is by far the most unknown parameter in Eq. 1 (Faisst 2016). Starting a decade ago, the search for LyC emitters (LCEs), targeting Lyman Break Galaxies (LBGs) at = 0.7−4 through expensive imaging (Vanzella et al. 2010(Vanzella et al. , 2015Mostardi et al. 2015;de Barros et al. 2016;Grazian et al. 2016;Micheva et al. 2017;Japelj et al. 2017;Rutkowski et al. 2017;Grazian et al. 2017;Naidu et al. 2018;Alavi et al. 2020;Bian & Fan 2020;Begley et al. 2022) and spectroscopic campaigns (Steidel et al. 2001;Shapley et al. 2006;Bridge et al. 2010;Marchi et al. 2017;Meštrić et al. 2021;Prichard et al. 2022) remained unsuccessful, where most of the estimates for global escape fraction of the SF galaxy-population relied on stacking and abs esc upper limits. The systematic searches at 3 by the Lyman Continuum Escape Survey (LACES, Fletcher et al. 2019;Nakajima et al. 2020) and other surveys by Saxena et al. (2022a) and Rivera-Thorsen et al. (2022) Vanzella et al. (2018); Saha et al. (2020); Marques-Chaves et al. (2021.
However, our knowledge about LCEs and their physical properties is mainly due to the Keck Lyman Continuum Survey (KLCS, Steidel et al. 2018;Pahl et al. 2021Pahl et al. , 2023 at 3, and the pioneering work by Izotov et al. (2016bIzotov et al. ( ,a, 2018aIzotov et al. ( ,b, 2021Izotov et al. ( , 2022 and the recent Low-Redshift Lyman Continuum Survey (LzLCS, Flury et al. 2022a) at 0.3. Particularly, compactness, high ionization parameters (traced by optical line ratios), high star-formation rate (SFR) surface-densities, strong Ly emission, and low dust-attenuation (traced by the UV slope) seem to characterize the strongest LCEs at low-(see also Wang et al. 2019;Izotov et al. 2020), with ionizing photon production efficiencies analogued to 6 galaxies (Schaerer et al. 2016;Chisholm et al. 2022). Interestingly, and according to the analysis of new JWST observations by Mascia et al. (2023a), these characteristics resemble the properties of galaxies at the EoR (see also Endsley et al. 2022;Schaerer et al. 2022b;Cullen et al. 2023;Lin et al. 2023).
Since the flux at LyC wavelengths will not be accessible at the EoR due to the increase of the IGM opacity (Inoue et al. 2014), indirect tracers of LyC radiation are needed. Thanks to the advent of the LzLCS (Flury et al. 2022a), in which both far-UV (FUV) and optical spectra are available for each galaxy, several abs esc diagnostics have been statistically tested for the first time in Flury et al. (2022b). Among them, those which properly account for the neutral gas and dust column densities as well as for geometrical effects (see Seive et al. 2022), so that LyC radiation escapes only along favoured, cleared sight-lines in the interstellar medium (ISM), remain the most promising proxies. In particular, the peak separation of the Ly line (Verhamme et al. 2017;Gazagnes et al. 2020;Naidu et al. 2022), the depth of the low-ionization state (LIS) UV absorption lines (Reddy et al. 2016b;Chisholm et al. 2018;Gazagnes et al. 2018;Saldana-Lopez et al. 2022) and the Mg doublet ratio (Henry et al. 2018;Chisholm et al. 2020;Xu et al. 2022) seem to closely probe the measured escape fraction of LzLCS galaxies (but see cautionary theoretical work by Mauerhofer et al. 2021;Katz et al. 2022).
In this work, we aim to indirectly study the ionizing properties of high-SFGs and their evolution along with the different galaxy-properties. For that, we make use of a sample of 500 deep, rest-frame ultraviolet spectra at 3 ≤ ≤ 5 drawn from the VAN-DELS survey (McLure et al. 2018b;Pentericci et al. 2018;Garilli et al. 2021). In particular, LyC absolute escape fractions ( abs esc ) are derived by measuring the depth of the absorption lines in combination with the UV attenuation, whilst ionizing production efficiencies ( ion ) are computed based on the best Spectral Energy Distribution (SED) fit to the FUV stellar continuum (based on the work by Chisholm et al. 2019). Our study complements ongoing efforts to understand the properties of LyC emitting galaxies at low (LzLCS, Flury et al. 2022a) and high redshifts (KLCS, Steidel et al. 2018), thereby setting the pathway to interpret the ionizing signatures of EoR-galaxies, whose number of detections has been dramatically boosted thanks to the high-quality performance of the first JWST observations.
The layout of this article is as follows. The VANDELS survey and sample definition are described in Sect. 2. The code for fitting the stellar SED of VANDELS spectra, and the methods for predicting individual ionizing efficiencies and escape fractions are outlined in Sect 3. The main results of this research, looking for correlations between abs esc and ion with the different galaxy properties are summarized in Sect. 4. Our results on the ionizing efficiency of high-galaxies are compared with different estimates in the lit- erature in Sect. 5, and finally the possible redshift evolution of the abs esc × ion product is discussed in Sect. 6, by comparing our values against state-of-the-art low-and high-surveys. We summarize our findings in Sect. 7.
Throughout this paper, a standard flat ΛCDM cosmology is used, with a matter density parameter Ω_M = 0.3, a vacuum energy density parameter Ω_Λ = 0.7, and a Hubble constant of H_0 = 70 km s^{−1} Mpc^{−1}. All magnitudes are in the AB system (Oke & Gunn 1983), and we adopt a solar metallicity value of 12 + log(O/H) = 8.69. All the stellar metallicities are quoted relative to the solar abundance (Z_⊙) from Asplund et al. (2009), which has a composition by mass of Z_⊙ = 0.014. Emission and absorption line equivalent widths (EWs) are given in the rest-frame (unless stated otherwise), with positive (negative) EWs meaning lines seen in absorption (emission).
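The adopted cosmology can be reproduced with astropy; the snippet below is our own illustration (not part of the paper's pipeline) and simply instantiates it and computes the luminosity distance at the median redshift of the sample, z ≈ 3.56.

```python
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# Flat LCDM with Omega_M = 0.3, Omega_Lambda = 0.7 and H0 = 70 km/s/Mpc
cosmo = FlatLambdaCDM(H0=70 * u.km / u.s / u.Mpc, Om0=0.3)

z_med = 3.56                               # median spectroscopic redshift of the sample
d_L = cosmo.luminosity_distance(z_med)     # luminosity distance (a few tens of Gpc)
print(f"D_L(z = {z_med}) = {d_L:.0f}")
```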
SAMPLE
We rely on rest-frame FUV spectroscopy in order to estimate the ionizing properties of high-SFGs. At 3 ≤ ≤ 5, the rest-frame UV spectrum of galaxies (1200 − 2000Å) is accessible from the ground through optical spectroscopy. The criteria by which we build our sample of high-rest-UV spectra (2.1), and the estimation of the main SED parameters (2.2) and other spectroscopic features (2.3) are described in detail in the following sections. Additionally, the survey properties of the two main comparison samples of LCEs in the literature are summarized (2.4).
Deep rest-frame UV spectra: the VANDELS survey
The VANDELS survey (final Data Release 4 in Garilli et al. 2021) is an ESO Public Spectroscopic Survey composed of around 2100 optical, high signal-to-noise ratio (S/N) and medium-resolution (R) spectra of galaxies at redshifts = 1−6.5. The VANDELS footprints are centered on two of the HST Cosmic Assembly Near-Infrared Deep Extragalactic Legacy Survey (CANDELS, Grogin et al. 2011;Koekemoer et al. 2011) fields, in particular the CDFS (Chandra Deep Field South, see Guo et al. 2013) and the UDS (UKIDSS Ultra Deep Survey, see Galametz et al. 2013), but covering a wider area. The primary targets were selected from the parent CDFS and UDS photometric catalogs attending to the quality of their photometric redshifts (see McLure et al. 2018b;Pentericci et al. 2018, for details).
Ultra deep, multi-object optical spectroscopy (at 4800 − 10000Å observed-frame) was conducted with the VIMOS instrument (Le Fèvre et al. 2003), at the Very Large Telescope (VLT), resulting in 1019 and 1068 sources observed in the CDFS and UDS, respectively, over 2087 measured spectroscopic reshifts in total. The final VANDELS catalog reaches a target selection completeness of 40% at AB = 25. VANDELS spectra have an unprecedented average S/N ≥ 7 per resolution element over 80% of the spectra, thanks to exposure times spanning from 20 up to 80 hours on source. The VIMOS -Full-With at Half Maximum-spectral resolution is ≡ /Δ ≈ 600 at 5500Å, with a wavelength dispersion of 2.5Å/pixel. For additional information about the survey design and data reduction, we encourage the reader to check McLure et al. (2018b); Pentericci et al. (2018) and Garilli et al. (2021) papers.
According to the selection criteria, the galaxies in VAN-DELS can be classified into passive/quiescent ( AB ≤ 22.5, 1 ≤ phot < 2.5), bright SFGs ( AB ≤ 25, 2.4 ≤ phot < 5.5), LBGs (25 ≤ AB < 27, AB ≤ 27.5, 3 ≤ phot < 6.5), and a smaller subset of AGN candidates (Garilli et al. 2021). For non-AGN type galaxies, the spectroscopic redshifts are "flagged" as a function of the reliability of the measurement as marginal ( 40%), fair ( 80%) or robust (≥ 95%) reliability, corresponding to flag = 1, 2, 3/4, respectively (see Le Fèvre et al. 2015;Garilli et al. 2021, for more details). Based on a subset of VANDELS galaxies with C ]1908 detection, Llerena et al. (2022) found that the the systemic redshifts are slightly larger than the spectroscopic redshifts, with a mean difference of 0.002.
Our final sample comprises VANDELS-DR4 galaxies under the SFGs and LBGs categories only, whose spectroscopic redshifts fall in the 3 ≤ z_spec ≤ 5 range and have the highest reliability in the redshift measurement (i.e., flag = 3/4). Given the VIMOS instrumental sensitivity, the redshift constraint ensures complete wavelength coverage from 1200 to 1600 Å for all galaxies in the sample (in practice, from Lyα 1216 to C IV 1550). We then compute the mean S/N per pixel over two regions free of absorption features, specifically 1350−1375 Å and 1450−1475 Å in the rest-frame (25 Å each). Then, we select the galaxies with S/N ≥ 2 in the two spectral windows simultaneously (∼50% of the remaining sample). It is worth noticing that the uncertainties of the DR4 spectra were systematically underestimated (see Garilli et al. 2021). For this reason, we applied individual correction factors to the error spectra provided by the VANDELS collaboration (Talia et al., in preparation). The average correction factor is ×1.4.
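The S/N cut described above can be implemented as in the following sketch. This is our own illustration, not the VANDELS selection code: array names are assumptions, and we apply a single error-scaling factor whereas the text uses object-by-object corrections.

```python
import numpy as np

def passes_snr_cut(wave_rest, flux, error, snr_min=2.0,
                   windows=((1350.0, 1375.0), (1450.0, 1475.0)),
                   error_boost=1.4):
    """Mean S/N per pixel in two line-free rest-frame windows; keep if all pass.

    error_boost rescales the (underestimated) DR4 error spectrum; the average
    factor quoted in the text is ~1.4, but in practice it is object-dependent.
    """
    error = error * error_boost
    for lo, hi in windows:
        sel = (wave_rest >= lo) & (wave_rest <= hi) & (error > 0)
        if sel.sum() == 0:
            return False
        if np.nanmean(flux[sel] / error[sel]) < snr_min:
            return False
    return True
```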
In summary, our VANDELS working sample includes 534 galaxies at 3 ≤ z_spec ≤ 5 with a median S/N ≈ 5 in the 1400 Å continuum range, where 297 sources were observed in the CDFS versus 237 in the UDS field. Fig. 1 shows the observed H-band (either HST/F160W or VISTA/H) apparent magnitude distribution as a function of the spectroscopic redshift for the selected sources, together with the entire VANDELS-DR4 sample. The median (16th and 84th percentiles) F160W magnitude and z_spec of the working sample correspond to H = 24.8 +0.8 −0.9 AB and z_spec = 3.56 +0.41 −0.36, respectively.
Fig. 2 (caption): Our working sample is indicated through red filled symbols, and the large blue circles display the SFR running medians at each inter-quartile range of stellar mass. The golden dashed line follows the MS relation by Speagle et al. (2014), for comparison. Our sample falls ∼0.1 dex systematically above the MS at z = 3.5, but it still probes the upper bound of the SFG population at these redshifts.
Photometric properties of the VANDELS galaxies: stellar masses
The wide imaging coverage of VANDELS allow a robust characterization of the SED of every galaxy (Garilli et al. 2021). The physical integrated properties of the VANDELS-DR4 galaxies were obtained by SED fitting using the Bayesian Analysis of Galaxies for Physical Inference and Parameter EStimation (B ) code 1 . B (see Carnall et al. 2018) inputs the spectroscopic redshifts measured by the VANDELS team (Pentericci et al. 2018) and all the available CANDELS (Galametz et al. 2013;Guo et al. 2013) plus ground-based photometry (McLure et al. 2018b) to run a Bayesian algorithm which samples the posterior of the SED parameters. It makes use of the 2016 updated version of the Bruzual & Charlot (2003) models, including the MILES stellar spectral library (Falcón-Barroso et al. 2011) and the stellar evolutionary tracks of Bressan et al. (2012) and Marigo et al. (2013). An exponentially declining SFH is assumed, with a minimum timescale of 10 Myr and a minimum age of 10 Myr. The stellar metallicity was fixed to 0.2 , coinciding with the average stellar metallicity found by Cullen et al. (2019) and Calabrò et al. (2021), probing a similar sample of galaxies in VANDELS. The dust attenuation was modeled using the Salim et al. (2018) prescription, and nebular emission was considered by adopting a constant ionization parameter of log( ) = −3 in the fits.
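The SED-fitting code referred to above is BAGPIPES (Carnall et al. 2018). For orientation, a minimal configuration along the lines described in this section might look like the sketch below. This is our own schematic, not the VANDELS team's actual setup: the dictionary keys follow the public BAGPIPES documentation from memory and should be checked against the current API (in particular the "Salim" dust keyword and its shape parameters), and the prior ranges are placeholders.

```python
# Schematic BAGPIPES-style fit instructions (assumed keys; verify against the docs)
exp = {
    "age": (0.01, 2.0),         # Gyr; lower bound ~10 Myr as in the text
    "tau": (0.01, 10.0),        # Gyr; exponentially declining SFH timescale
    "massformed": (6.0, 12.0),  # log10 of the stellar mass formed
    "metallicity": 0.2,         # fixed to 0.2 Zsun (Cullen et al. 2019)
}
dust = {"type": "Salim", "Av": (0.0, 3.0)}   # Salim et al. (2018) prescription
nebular = {"logU": -3.0}                      # fixed ionization parameter

fit_instructions = {
    "redshift": 3.56,           # spectroscopic redshift of each object (placeholder)
    "exponential": exp,
    "dust": dust,
    "nebular": nebular,
}
```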
For our VANDELS-DR4 working sample, the median stellar mass and SFR are log( ★ /M ) = 9.34 +0.31 −0.39 and 1 B (Carnall et al. 2018) is a state-of-the-art P code for modeling galaxy spectra and fitting spectroscopic and photometric data, for details go to https://bagpipes.readthedocs.io/en/latest/. SFR/(M yr −1 ) = 14.46 +25.30 −8.47 , respectively, with 0.1M and 2.4M yr −1 typical uncertainties. The log SFR − log M ★ distribution for the selected sample lays 0.1 to 0.2 dex above the SF mainsequence (MS) relation of Speagle et al. (2014) at similar redshifts, with a scatter of ±1 dex in SFR, approximately (see McLure et al. 2018b). In Fig. 2, we explicitly show how the SFR running medians at the 25 ℎ , 50 ℎ and 75 ℎ stellar mass inter-percentile ranges fall above the Speagle et al. (2014) MS relation at = 3.5. This means that, at comparable stellar masses (i.e., log( ★ /M ) = 8.5 − 10.5), our sample is probing a slightly higher SFR regime than the bulk of SFGs, although it is still representative of the whole population given the intrinsic scatter of the MS at high-(see the discussion in Cullen et al. 2021). A similar behavior was found in Calabrò et al. (2022). Finally, our sample also suffers from luminosity (or stellar mass) incompleteness i.e., the highest redshifts objects ( 4) are biased towards the brightest AB absolute magnitudes because of the current flux-limiting nature of the VANDELS survey.
Spectral properties of the VANDELS galaxies: Ly equivalent widths
In this work, we make use of the methods and measurements provided by the VANDELS collaboration (Talia et al., in preparation) to compute the Ly equivalent width ( Ly ) of galaxies, either in emission or absorption. The Ly (H 1216) flux and equivalent width were measured by fitting a single-Gaussian profile to the Ly line using the dedicated code 2 . If the S/N of the line reach an input threshold, the script allows a ±1000 kms −1 offset of the line center respect to rest-frame value given by the spectroscopic redshift. The continuum level was defined by the Bruzual & Charlot (2003) best-fit template to the entire spectrum (which gives a first order correction for underlying stellar absorption), and the flux and equivalent width was measured over ±8000 kms −1 each side of the line peak (see Sect. 3.2 for the actual definition of the equivalent width). By convention, the fits report negative equivalent widths when the Ly line appears in emission, whereas a positive value is reported when the line is absorption dominated. Given the limited variety of Ly profiles observed in the VANDELS sample due to a medium ≈ 600 spectral resolution (see Kornei et al. 2010;Cullen et al. 2020, and methods therein), this approach only constitutes a first order estimation of Ly . We also note that, according to Talia et al. (in preparation), for 198 object ( 37% of the sample) a single Gaussian fit did not provide a good fit to the Ly line. Rather, two Gaussian components were needed. However, a more detailed calculation is out of the scope of this paper, and we finally adopt single Gaussian fits as our fiducial estimates of the Ly flux. Doing so, the median Ly equivalent width is Ly (Å) = 4.58 +19.01 −9.43 for our selected sample. According to the definition by Pentericci et al. (2009), 104 out of 534 of galaxies ( 20% of the sample) can be classified as LAEs with Ly ≥ −20Å (although other authors may adopt different criteria, e.g., see Stark et al. 2011;Nakajima et al. 2018a;Kusakabe et al. 2020).
2 is a C++ simple software that can be used to derive spectroscopic redshifts from 1D spectra and measure line properties (fluxes, velocity widths, offsets, etc.). More in https://github.com/cschreib/ slinefit, by Corentin Schreiber.
Comparison samples of LCEs
Asides from our VANDELS sample, primarily composed by SFGs at 3 ≤ ≤ 5, we will consider as benchmarks other samples of confirmed LCEs from the literature (see Sect. 6). The LCEs comparison samples are the following:
The Low-Lyman Continuum Survey, or LzLCS, is a large HST programme (Flury et al. 2022a, PI: Jaskot, HST Project ID: 15626) targeting 66 SFGs at 0.22 ≤ ≤ 0.43, selected from SDSS and GALEX observations. Each galaxy was observed with the low-resolution COS/G140L grating, covering the LyC and the non-ionizing rest-frame FUV continuum (600 − 1450Å), with a spectral resolution of ≈ 1000 at 1100Å. In combination with these 66 galaxies, the LzLCS also includes a compilation of 23 archival sources drawn from the literature (Izotov et al. 2016b(Izotov et al. ,a, 2018aWang et al. 2019;Izotov et al. 2021). Out of 89 galaxies, 50 were detected in the LyC, with a median absolute escape fraction of abs esc = 0.04 +0.16 −0.02 , while the remaining 39 galaxies show strong abs esc upper limits, typically below 1%. The Keck Lyman Continuum Survey, or KLCS, is an extensive Keck/LRIS spectroscopic campaign (Steidel et al. 2018) to obtain deep rest-UV spectra of selected LBGs at 2.8 ≤ ≤ 3.4. The LRIS spectra of KLCS galaxies cover the LyC region and a wide FUV window longwards of the Ly line (880 − 1650Å), at a spectral resolution of ≈ 800 for LRIS-B and ≈ 1400 for LRIS-R. In the most recent HST imaging follow-up by Pahl et al. (2021Pahl et al. ( , 2023, they re-analyze a sample of 124 KLCS galaxies and look for foreground and low-redshift contamination that could potentially lie in projection within the angular extent of each LyC detected galaxy. In total, 13 individually-detected and 107 LyC undetected sources resulted in a sample-averaged escape fraction of abs esc = 0.06 ± 0.01.
METHODS
In this section, we first describe the FiCUS code (Sect. 3.1), a customized Python script to fit the stellar continuum of extragalactic UV spectra. Then, we present the methodology to indirectly infer the ionizing absolute escape fraction of galaxies (f_esc^abs), by using the information provided by the UV absorption lines and the global SED fit to the non-ionizing UV continuum (Sect. 3.2). Finally, we discuss the procedure for stacking individual VANDELS spectra, and the systematic effects included in the measurements due to the instrumental resolution and stacking (Sect. 3.3).
The FiCUS code: FItting the stellar Continuum of UV Spectra
With the purpose of quantifying the stellar continuum properties underlying the VANDELS spectra, we have created F CUS 3 . F CUS is a customized P script that stands for FItting the stellar Continuum of Uv Spectra and, in short, it returns an estimation of the galaxy light-weighted stellar age, metallicity and dust extinction as well as other secondary SED parameters by using a combination of best-fit stellar population templates. This methodology was widely described and previously tested in Chisholm et al. (2019, hereafter C19), and has been used in other papers such as Gazagnes et al. (2018Gazagnes et al. ( , 2020 and Saldana-Lopez et al. (2022, hereafter SL22). We refer the reader to those papers for similar approaches but slightly different applications of the current method.
Stellar population models
As described in SL22, the stellar continuum modeling was achieved by fitting every observed spectrum with a linear combination of multiple bursts of single-age and single-metallicity stellar population models. By default, FiCUS inputs the fully theoretical Starburst99 single-star models without stellar rotation (S99, Leitherer et al. 2011, 2014) using the Geneva evolution models (Meynet et al. 1994), and computed with the WM-Basic method (Pauldrach et al. 2001; Leitherer et al. 2010). The S99 models assume a Kroupa (2001) initial mass function (IMF) with a high-(low-)mass exponent of 2.3 (1.3), and a high-mass cutoff at 100 M_⊙.
Four different metallicities (0.05, 0.2, 0.4 and 1 ) and ten ages for each metallicity (1,2,3,4,5,8,10,15,20 and 40 Myr) were chosen as a representative set of ×40 models for our high-UV spectra. In detail, ages were chosen to densely sample the stellar ages where the stellar continuum features appreciable change (see C19). For example, at older ages than 40 Myr, the FUV stellar continuum does not appreciably change such that a 40 Myr model looks very similar to a 100 Myr population. We caution the reader that the S99 high-resolution models do not densely sample the HR diagram at effective temperatures below 15,000K (Leitherer et al. 2010). The cooler models have artificially bluer continuum slopes, which makes older stellar populations appear slightly bluer than expected for their temperatures (for details, see Chisholm et al. 2022).
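The 4 × 10 grid of single-burst models can be enumerated as below. This is a trivial bookkeeping sketch of ours: the S99 spectra themselves are read from template files whose naming convention we do not know, so no loader is shown.

```python
from itertools import product

ages_myr = [1, 2, 3, 4, 5, 8, 10, 15, 20, 40]    # burst ages (Myr)
metallicities_zsun = [0.05, 0.2, 0.4, 1.0]        # stellar metallicities (Zsun)

# 40 (age, Z) combinations defining the Starburst99 template grid
grid = list(product(ages_myr, metallicities_zsun))
print(len(grid))   # 40
```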
Finally, a nebular continuum was generated by self-consistently processing the stellar population synthesis models through the 17.0 code 4 (Ferland et al. 2017), assuming similar gasphase and stellar metallicities, an ionization parameter of log( ) = −2.5, and a volume hydrogen density of n = 100 cm −3 . The output nebular continua for each stellar population were added to the stellar models. The inclusion of the nebular continuum is only appreciable at the youngest ages of the bursts (≤ 10Myr) at wavelengths ≥ 1200Å, and produces redder stellar models than before, which has a pronounced effect on the fitted B−V of the young stellar populations. C19 tested the effect of different ionization parameters on the fitted stellar ages and metallicities, and found that the goodness of the fits does not significantly change for different log( ) values.
Fitting method
The observed spectra were first manually placed into the rest-frame by multiplying by the corresponding 1/(1 + z) factor. Both the spectra and the models were then normalized by the median flux within a wavelength interval free of stellar and ISM features (1350−1370 Å), and all the fits were performed in the same rest-wavelength range (1200−1925 Å in the case of VANDELS spectra). Finally, the models were convolved by a Gaussian kernel to the instrumental resolution (R_VIMOS ≈ 600).
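Schematically, the preprocessing steps (de-redshifting, normalization over 1350–1370 Å, and degrading the models to the VIMOS resolution) could be written as below. This is our own sketch with assumed variable names; the kernel width simply follows the usual R = λ/Δλ (FWHM) convention and assumes a single mean wavelength dispersion.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def preprocess(wave_obs, flux, z, norm_window=(1350.0, 1370.0)):
    """Shift to rest-frame and normalize by the median flux in a clean window."""
    wave_rest = wave_obs / (1.0 + z)
    sel = (wave_rest >= norm_window[0]) & (wave_rest <= norm_window[1])
    return wave_rest, flux / np.nanmedian(flux[sel])

def degrade_model(wave_model, flux_model, R_instr=600.0, R_model=2500.0):
    """Convolve a high-resolution model down to the instrumental resolution.

    Effective kernel FWHM from 1/R_eff^2 = 1/R_instr^2 - 1/R_model^2,
    evaluated at a reference wavelength.
    """
    lam_ref = np.mean(wave_model)
    fwhm = lam_ref * np.sqrt(1.0 / R_instr**2 - 1.0 / R_model**2)
    sigma_pix = fwhm / 2.355 / np.mean(np.diff(wave_model))
    return gaussian_filter1d(flux_model, sigma_pix)
```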
First, the non-stellar features and the spectral regions that are affected by host-galaxy ISM absorptions and sky subtraction residuals had to be masked out. Bad pixels with zero flux were neither considered in the fit. When ran with F CUS, the UV stellar continuum ( ★ ( )) is fitted with a linear combination of multiple S99 models plus the nebular continuum. 99 single stellar population is usually not able to fully reproduce both the stellar features and the shape of the continuum of extragalactic spectra, such that a combination of instantaneous burst models is actually needed (C19).
Adopting a simple geometry where the dust is located as a uniform foreground screen surrounding the galaxy⁵, this results in:

F★_λ(λ) = 10^{−0.4 E_{B−V} k_λ} Σ_i X_i F^{S99}_{λ,i} ,   with age_i ≡ 1, 2, 3, 4, 5, 8, 10, 15, 20, 40 Myr and Z_i ≡ 0.05, 0.2, 0.4, 1 Z_⊙    (2)
where 10 −0.4 is the UV attenuation term, = B−V , and is given by the adopted dust-attenuation law. The linear coefficients determine the weight of single-stellar population within the fit, and the best fit is chosen through a non-linear 2 minimization algorithm with respect to the observed data ( 6 package, Newville et al. 2016). Errors were derived in a Monte-Carlo way, varying observed pixel fluxes by a Gaussian distribution whose mean is zero and standard deviation is the 1 −error of the flux in the same pixel, and re-fitting the continuum over iterations per spectrum (we chose = 100, enough realizations to sample the posterior continuum so that it approaches "Gaussianity" on each pixel). Fig. 3 shows the F CUS output for one of the galaxies in VAN-DELS (objID:CDFS017345, at = 3.61), with the observed (in black) and fitted stellar continuum (in red). The bottom panel of this figure additionally shows the distribution of the light-fractions at a given burst-age and metallicity for the CDFS017345 best-fit. This galaxy is dominated by a young and low-metallicity population ( 60% of the total light at 3Myr) with 4.80 Myr average light-weighted age and 0.18 Z light-weighted metallicity (see the following section). CDFS017345 is moderately attenuated, with a UV dust-attenuation parameter of B−V 0.12 mag. In Fig. 3, and together with other nebular (gold) and ISM lines (black) usually present in VANDELS spectra, we also highlight (in blue) the main stellar features that helped our algorithm to estimate the age and the metallicity of the stellar population: the N 1240 and C 1550 P-Cygni stellar wind profiles (see also O 1371 and Si 1402). Note that the inclusion of C 1550 requires a careful masking of the strong ISM component of this line, where only the blue wing of the P-Cygni was used in the fit. While N 1240 is mostly sensitive to the age of the burst -with the P-Cygni profile being more prominent at younger ages-, the C 1550 feature, on the other hand, changes accordingly to the metallicity (C19) of the stellar population -whose asymmetric profile is stronger at higher stellar metallicities-. Therefore, these two distinct spectral features partially break the age-★ degeneracy in the FUV, although the method is still affected by age-attenuation degeneracy effects. 5 The effect of a clumpy gas-to-dust geometry has been discussed in Gazagnes et al. (2018) and SL22. Regrettably, the current resolution and S/N of the VANDELS spectra is not enough to disentangle between the two geometries, so we assume the most simple scenario of a uniform screen of dust as a our fiducial case. 6 summary chart with the best-fit SED fitting parameters, i.e., reduced-2 , B−V (in mag., R16), light-weighted stellar age (in Myr) and metallicity (in Z ).
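The χ² minimizer cited above (Newville et al. 2016) is the lmfit package. The sketch below is our own simplified stand-in, not the actual FiCUS implementation: it combines the reddened templates linearly using a non-negative least-squares solver inside a grid over E(B−V), and adds the Monte-Carlo error loop described in the text.

```python
import numpy as np
from scipy.optimize import nnls

def fit_continuum(flux, error, templates, k_lambda,
                  ebmv_grid=np.linspace(0.0, 0.5, 51)):
    """Fit flux ~ 10^(-0.4*E(B-V)*k_lambda) * sum_i X_i * templates[i], with X_i >= 0.

    templates: (n_models, n_pix) normalized S99+nebular models;
    k_lambda:  attenuation curve on the same pixel grid.
    Returns (best E(B-V), coefficients X_i, minimum chi^2).
    """
    best = (None, None, np.inf)
    for ebmv in ebmv_grid:
        atten = 10.0 ** (-0.4 * ebmv * k_lambda)
        design = (templates * atten) / error          # error-weighted design matrix
        coeffs, _ = nnls(design.T, flux / error)      # non-negative X_i
        model = atten * (coeffs @ templates)
        chi2 = np.sum(((flux - model) / error) ** 2)
        if chi2 < best[2]:
            best = (ebmv, coeffs, chi2)
    return best

def monte_carlo(flux, error, templates, k_lambda, n_iter=100,
                rng=np.random.default_rng(0)):
    """Re-fit Gaussian-perturbed spectra to sample the posterior of the fit."""
    return [fit_continuum(flux + rng.normal(0.0, error), error, templates, k_lambda)
            for _ in range(n_iter)]
```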
Results
Given the best-fit SED for the stellar continuum and the best-fit X_{i,Z} coefficients in Eq. 2, FiCUS also provides a handful of other secondary SED parameters, which we describe here.
Stellar age and metallicity
The light-weighted stellar age and metallicity (Z★) of the best-fit stellar population can be easily obtained using the X_{i,Z} weights as:
Age = Σ_{i,Z} X_{i,Z} age_{i,Z},      Z★ = Σ_{i,Z} X_{i,Z} Z_{i,Z},      (3)
where age_{i,Z} and Z_{i,Z} are the age and metallicity of the (i,Z)-th synthetic stellar model. For instance, in the example of Fig. 3 [...] (Fig. 4). Compared to these works, our average metallicity is slightly higher, log(Z★/Z⊙) ≈ −0.63, although still compatible within the spread of the distributions.
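A minimal numerical sketch of Eq. 3, assuming the X_{i,Z} light fractions are normalized to unity; the random Dirichlet weights below are purely illustrative.

```python
import numpy as np

# Hypothetical, normalized light fractions X_{i,Z} (rows: burst ages, columns: metallicities)
ages_myr = np.array([1, 2, 3, 4, 5, 8, 10, 15, 20, 40], dtype=float)
metallicities_zsun = np.array([0.05, 0.2, 0.4, 1.0])
rng = np.random.default_rng(0)
X = rng.dirichlet(np.ones(ages_myr.size * metallicities_zsun.size)).reshape(10, 4)

# Light-weighted age and metallicity (Eq. 3), assuming sum(X) == 1
age_lw = np.sum(X * ages_myr[:, None])
zstar_lw = np.sum(X * metallicities_zsun[None, :])
print(f"<Age> = {age_lw:.2f} Myr   <Z*> = {zstar_lw:.2f} Zsun")
```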
Our shift in log Z★ with respect to previous works must be driven by the use of instantaneous bursts of star formation instead of a continuous SFH in the theoretical stellar models. Our choice of single-burst models is motivated by the fact that mixed-age populations usually do a better job at reproducing age-sensitive tracers (such as N V 1240) of individual FUV galaxy spectra (see Sect. 5.3 in C19). Moreover, by accounting for the dominance of young stellar populations, bursty star-formation histories, which are expected to hold for intermediate-to-low-mass SFGs at high redshift (e.g., Trebitsch et al. 2017), produce significantly more ionizing photons, at higher energies, than continuous star-formation histories.
The effect that the choice of either constant SFRs or instantaneous bursts of star formation has on the stellar age, metallicity and ionizing properties of the mean SFG population at z ≈ 3 will be discussed in a future publication. We refer the reader to Cullen et al. (2019) and Calabrò et al. (2021) for more details on the stellar metallicity of VANDELS galaxies.
Dust-attenuation parameter: E_{B−V}. The amount of dust attenuation is given by the 10^(−0.4 A_λ) term in Eq. 2, where A_λ = E_{B−V} k_λ is defined as the "UV attenuation", and the specific functional form of k_λ is determined by the dust-attenuation law. Therefore, the resulting best-fit values of E_{B−V} will differ depending on the assumed dust law, and this will have important consequences for other related quantities which explicitly depend on E_{B−V}, such as the absolute escape fraction (f_esc^abs), as we will discuss later on (see Sect. 6).
With the goal of testing the influence of the dust-attenuation law on our results, we use two extreme cases of k_λ for SF galaxies (Shivaei et al. 2020): the Reddy et al. (2016a, hereafter R16) and the Small Magellanic Cloud (Prevot et al. 1984; Bouchet et al. 1985; Gordon et al. 2003, hereafter SMC) attenuation curves. As widely discussed in the dedicated sections of SL22, and references therein, the use of R16 is, on the one hand, motivated by the fact that this law is one of the only ones properly defined below 1000Å with a significant number of galaxies. On the other hand, the use of steeper SMC-like curves has appeared to be more suitable for high-z SFGs (Salim et al. 2018), low-metallicity starbursts (Shivaei et al. 2020), and LCEs (Izotov et al. 2016a).
After performing the SED fits with FiCUS using both the R16 and SMC prescriptions for k_λ, we find an average attenuation parameter of E_{B−V} = 0.22 (+0.09/−0.08) mag. for R16 and E_{B−V} = 0.10 (+0.04/−0.03) mag. when using SMC (see Fig. 4). This corresponds to UV attenuations of 1.82 and 2.83 mag. at 1600Å and 912Å using R16 (1.25 and 2.63 mag. for SMC). Although the resulting light-weighted stellar ages and metallicities are similar in both sets of fits (similar light fractions), the UV attenuation term 10^(−0.4 A_λ) is slightly higher for SMC at all wavelengths, meaning that the fitted X_{i,Z} coefficients are slightly lower for SMC than for the R16 law, so that both fits match the observations similarly (the reduced-χ² distributions are similar). Lower X_{i,Z} coefficients for SMC mean that all absolute quantities derived from the intrinsic SEDs will also change accordingly.
UV-continuum slope (at 1500Å): β^1500_spec
The UV spectroscopic continuum slope at 1500Å, also called the β-slope, was computed by fitting a power law of the form F_λ ∝ λ^β to every individual spectrum (Meurer et al. 1999). To do so, we take the average flux density in seven 15Å-wide spectral windows between 1275 and 1825Å (similar to the ones in Calzetti et al. 1994), and fit a linear relation to the log λ − log F_λ values using CURVE_FIT⁷. For consistency, and in order to avoid inhomogeneous wavelength sampling for every source due to the different redshifts, F_λ corresponds to the continuum flux obtained from the best-fit SED model of the galaxy (F★(λ)), which was described in the previous Sect. 3.1. This assumption imprints an intrinsic dependency of the β-slope on the chosen attenuation law, since the latter modifies the shape of the dust-attenuated spectrum through the k_λ term. Thus, the median of the β-slope distribution for the VANDELS sample is β^1500_spec = −1.34 (+0.45/−0.42) for R16 and β^1500_spec = −1.06 (+0.54/−0.51) for SMC. When compared to the slopes obtained in Calabrò et al. (2021) for the same objects (β_1500 ≈ −1.80), our spectroscopic β-slopes are overall redder (less negative) than the former by ≈0.4 (see the distribution in Fig. 4). In Calabrò et al. (2021), the UV slopes were derived by fitting a power law to the available multi-band VANDELS photometry, taking the photometric bands whose bandwidths lie inside the 1230-2750Å rest-frame wavelength range. We investigated the possible reasons for this discrepancy and conclude that it is mainly caused by the use of a wider wavelength range with respect to the spectroscopic slopes. Although more effects will be discussed later in Sect. 6, there is always the possibility that the SED of galaxies in the rest-UV is not a perfect power law (Bouwens et al. 2012), a behavior that was previously observed in the VANDELS galaxies by Calabrò et al. (2021). Differential shifts between UV slopes derived from different methods (either photometry or spectroscopy) have already been reported in the literature (e.g., see Hathi et al. 2016).
⁷ CURVE_FIT is a non-linear least-squares optimization routine included in the SCIPY package for scientific analysis: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html
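A minimal sketch of the β-slope measurement using scipy.optimize.curve_fit on the log λ − log F_λ values; the exact window centres listed here are assumptions, only the 1275-1825Å range and the 15Å width follow the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def beta_slope(wave, flux_model, windows):
    """Fit F_lambda ~ lambda^beta to the mean model fluxes in a set of continuum windows."""
    log_lam, log_flux = [], []
    for lo, hi in windows:
        sel = (wave >= lo) & (wave <= hi)
        log_lam.append(np.log10(wave[sel].mean()))
        log_flux.append(np.log10(flux_model[sel].mean()))
    linear = lambda x, beta, const: beta * x + const
    popt, _ = curve_fit(linear, np.array(log_lam), np.array(log_flux))
    return popt[0]  # beta

# Seven illustrative 15A-wide windows between 1275 and 1825A (exact centres are assumptions)
windows = [(lam, lam + 15.0) for lam in (1275, 1365, 1455, 1545, 1635, 1725, 1810)]
```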
Ionizing photon production efficiency: ξ_ion. The ionizing photon production efficiency is defined as the total number of ionizing photons per unit time produced by a radiation field (Q(H⁰)) normalized by its intrinsic UV luminosity (L_UV). In practice (see C19):
ξ_ion [Hz erg⁻¹] = Q(H⁰) [s⁻¹] / L^1500_UV [erg s⁻¹ Hz⁻¹],      (4)
where L^1500_UV is the intrinsic (dust-free) SED luminosity at 1500Å (in L_ν units) and Q(H⁰) is the number of ionizing photons produced by the best-fit stellar population. Q(H⁰) is calculated as the integral of the intrinsic SED below the Lyman limit (λ < λ(H⁰) = 912Å, or equivalently E > E(H⁰) = 13.6 eV):
Q(H⁰) = ∫_{λ < λ(H⁰)} [λ F★(λ) / (h c)] dλ.
When averaged over the whole galaxy population and integrated over the full range of UV luminosities, ξ_ion can be used to compute the total emissivity of ionizing photons (ṅ_ion) at any redshift (see Eq. 1).
For typical SF galaxies that can be described through a single burst of star formation, S99 templates predict an exponentially declining ξ_ion with age, whose log ξ_ion values span 26−24 Hz/erg for 1−10 Myr stellar populations at any metallicity, dropping dramatically towards ages older than 10 Myr (C19). In contrast, mixed-age models can provide systematically higher ξ_ion for a given mean age than instantaneous-burst models at the same evolutionary stage. Accordingly, our mixed-age fits give a median of log ξ_ion = 25.30 (+0.28/−0.42) Hz/erg using the R16 law. ξ_ion does not strongly depend on the assumed attenuation law because it has been calculated as the ratio between two absolute quantities (Q(H⁰) and L_UV). Since the fitted stellar ages and metallicities are similar for both R16 and SMC (as they are primarily fixed by single spectral features), the shape of the intrinsic SED is also similar for both laws and therefore ξ_ion stays unaltered.
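In code, Eq. 4 amounts to integrating the photon rate of the intrinsic SED below 912Å and dividing by the monochromatic luminosity at 1500Å; the sketch below assumes the SED is provided as L_λ in erg s⁻¹ Å⁻¹.

```python
import numpy as np

H_PLANCK = 6.626e-27      # erg s
C_LIGHT_AA = 2.998e18     # Angstrom s^-1
LYMAN_LIMIT = 912.0       # Angstrom

def log_xi_ion(wave_aa, lum_lambda):
    """log10(xi_ion / [Hz erg^-1]) from an intrinsic (dust-free) SED L_lambda [erg s^-1 A^-1]."""
    ion = wave_aa < LYMAN_LIMIT
    # Q(H0): ionizing photon rate, dividing the SED by the photon energy h*c/lambda
    q_h0 = np.trapz(lum_lambda[ion] * wave_aa[ion] / (H_PLANCK * C_LIGHT_AA), wave_aa[ion])
    # L_nu(1500 A) = L_lambda(1500 A) * lambda^2 / c  [erg s^-1 Hz^-1]
    l1500_nu = np.interp(1500.0, wave_aa, lum_lambda) * 1500.0**2 / C_LIGHT_AA
    return np.log10(q_h0 / l1500_nu)
```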
Intrinsic 900-to-1500Å flux ratio: (F_900/F_1500)_int. The ionizing-to-nonionizing flux ratio, namely at 900-to-1500Å, depends on the physical properties of galaxies such as the mean stellar age, metallicity, IMF and star-formation history. Following C19, S99 single-burst bases set a limit of (F_900/F_1500)_int ≤ 2, exponentially declining with age down to negligible values at ages older than 20 Myr, where F^int_900 ≈ 0. Again, a mixed-age population could in principle increase the flux ratio at intermediate ages with respect to a single burst of star formation. The median of the 900-to-1500Å flux ratio for the VANDELS sample results in (F_900/F_1500)_int = 0.66 (+0.41/−0.32). This quantity does not show any dependency on the attenuation law for the same reasons as ξ_ion.
In photometric campaigns, when searching for LCEs at high-z (e.g., Grazian et al. 2016, 2017), authors usually relate the observed to the intrinsic 900-to-1500Å flux ratio in order to infer the relative LyC escape fraction of galaxies. Usually, these works only have access to the observed but not the intrinsic ratio, and they end up assuming a constant value typically set by stellar-population model predictions (e.g., Steidel et al. 2001). In the same works, a more common way to look at this quantity is indeed the 1500-to-900Å flux ratio, but in F_ν units. Typically assumed values in the literature are (F_1500/F_900)_int = 3−5 (see e.g., Guaita et al. 2016). We obtain a median value of (F_1500/F_900)_int ≈ 4.2 for the VANDELS sample at 3 ≤ z ≤ 5 (Fig. 4), in agreement with the previous studies. Fig. 4 offers a summary of the above-mentioned distributions: Age, Z★, E_{B−V}, β^1500_spec, ξ_ion, and (F_900/F_1500)_int; the values resulting from our SED fits using either the R16 or the SMC law are shown in red and dark-blue, respectively. Error bars cover the 16th to 84th percentiles of each distribution, whose values have been quoted with respect to the median in the current section.
Figure 4. Distributions of Age, Z★, E_{B−V}, β^1500_spec, log ξ_ion, (F_900/F_1500)_int, measured R(LIS) and predicted log f_esc^abs. Results derived using the R16 or SMC attenuation laws are plotted in red and dark-blue, respectively. The error bars on top of each histogram encompass the 16th, 50th and 84th percentiles. The distributions of the mean residual flux of the LIS lines are shown in different colors because, by definition, this quantity does not depend on the attenuation law. Dashed vertical lines indicate log ξ_ion (Hz/erg) = 25.2 and f_esc^abs = 5%.
Predicting ionizing escape fractions using UV absorption lines
The picket-fence model
The observation that neutral hydrogen lines and other strong low-ionization state (LIS) lines do not become black at minimum depth (or maximum optical depth) suggests a partial covering of the stellar continuum sources by the same cold, neutral and low-ionized gas (Heckman et al. 2001, 2011). In this scenario, it is expected that the residual flux of the absorption lines correlates with the fraction of LyC photons that escape the galaxy via uncovered channels. The commonly used 'picket-fence' model (Reddy et al. 2016b; Vasei et al. 2016) connecting the UV absorption features of a galaxy with the escaping ionizing radiation assumes a galaxy described by a patchy, ionization-bounded ISM (Zackrisson et al. 2013), where both the neutral and the low-ionized enriched gas are distributed in high-column-density regions (clumps) surrounding the ionizing radiation field. The fraction of sight-lines which are optically thick to the transition along these dense clouds is usually parametrized by the so-called covering fraction, C_f(λ). Optically thick gas absorbs all of the continuum light at a given velocity whereas optically thin gas transmits all of the continuum. If the dust is homogeneously distributed as a foreground screen on top of the stars, the residual flux of the lines can be simply related to the gas covering fraction as follows:
R_λ = 1 − C_f(λ),      (5)
This simple model also assumes that the lines are described by a single gas component or, in other words, that all velocity components of the gas have the same covering fraction. Additionally, if one accounts for the dust attenuation within the galaxy (E_{B−V}), the escape fraction of ionizing photons (f_esc^{abs, LIS}) can be predicted from the depth of the neutral and other LIS absorption lines using the following formula (Reddy et al. 2016b; Gazagnes et al. 2018; Steidel et al. 2018):
f_esc^{abs, LIS} = 10^(−0.4 k_912 E_{B−V}) × (a × R_λ + b),      (6)
where E_{B−V} is the UV dust-attenuation parameter measured according to the methods described in the previous Sect. 3.1, using a uniform-screen geometry (see Eq. 2), and R_λ is the measured residual flux of the transition. The [a, b] coefficients correspond to the LIS-to-H I lines residual-flux conversion (see Gazagnes et al. 2018, Reddy et al. 2022 and SL22).
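Eq. 6 reduces to a one-line prediction once E_{B−V}, k_912 and the mean residual flux are known. In the sketch below, the default [a, b] values are the Gazagnes et al. (2018) coefficients quoted later in the text, while the k_912 value in the example call is only indicative and depends on the adopted attenuation law.

```python
def fesc_picket_fence(ebv, k912, r_lis, a=0.63, b=0.44):
    """Predicted absolute LyC escape fraction, Eq. 6 (picket fence + uniform dust screen).

    ebv   : UV dust-attenuation parameter E_{B-V} [mag]
    k912  : attenuation curve k_lambda evaluated at 912 A (depends on the adopted law)
    r_lis : measured (mean) residual flux of the LIS lines
    a, b  : LIS-to-HI residual-flux conversion coefficients (Gazagnes et al. 2018)
    """
    return 10 ** (-0.4 * k912 * ebv) * (a * r_lis + b)

# Illustrative call with made-up numbers (not values from this work):
# fesc_picket_fence(ebv=0.12, k912=12.9, r_lis=0.69)  ->  ~0.2
```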
This methodology has been successfully validated against direct measurements of the absolute escape fraction of low-z LCEs in Chisholm et al. (2018) and Gazagnes et al. (2020), and recently, thanks to the advent of the LzLCS, by SL22, where we pointed to this method as a good predictor of the real f_esc^abs of galaxies at any redshift. The depth of the LIS lines has also been applied to predict the escape fraction of high-z galaxy composites in Steidel et al. (2018) and Pahl et al. (2021), and of some individual high-z spectra in Chisholm et al. (2018), with reasonable agreement with the observed f_esc^abs values. However, this approach is not without caveats (see Sect. 7.4 in SL22): the choice of the dust-attenuation law and the assumptions on the gas and dust geometry are some of the limitations of the picket-fence model. Indeed, as suggested by Mauerhofer et al. (2021), a 'picket-fence' gas/dust distribution is shown to be a very simplistic approximation to the real ISM geometry in state-of-the-art galaxy simulations. In principle, all these assumptions and model dependencies can contribute to explaining the scatter seen in the observed R(LIS) − f_esc^abs relation (see SL22), but this model could still be a good approximation for unresolved high-z studies, where the LyC emission coming from a single sight-line usually dominates.
Absorption line measurements
Our goal is to apply Eq. 6 to the VANDELS spectra in order to indirectly estimate their ionizing escape fractions. To do so, we first measured the equivalent widths (W_λ) and residual fluxes (R_λ) for a set of four UV LIS lines, namely Si II 1260, O I+Si II 1302, C II 1334 and Si II 1527, which have simultaneous wavelength coverage in all the spectra at 3 ≤ z ≤ 5. The equivalent width (W_λ) was then computed individually for every absorption line following Trainor et al. (2019):
W_λ = ∫_{Δλ} (1 − F_λ / F★(λ)) dλ,      (7)
where F_λ is the observed spectral flux density and F★(λ) is the modeled stellar continuum. The integration window (Δλ) was defined as ±1000 km s⁻¹ from the wavelength of minimum depth for the line in question. Then, the residual flux was measured as the median over a narrow velocity interval of ±150 km s⁻¹ around the minimum flux of the line, or equivalently:
R_λ = F_λ^min / F★.      (8)
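The two measurements of Eqs. 7 and 8 can be sketched as follows, assuming a spectrum and its fitted continuum sampled on the same rest-frame wavelength grid (in Å):

```python
import numpy as np

C_KMS = 2.998e5  # speed of light [km s^-1]

def line_ew_and_residual(wave, flux, continuum, line_center,
                         ew_half_window_kms=1000.0, rf_half_window_kms=150.0):
    """Equivalent width (Eq. 7) and residual flux (Eq. 8) of one absorption line."""
    norm = flux / continuum
    dv = (wave - line_center) / line_center * C_KMS          # velocity offset [km/s]
    # locate the wavelength of minimum depth near the nominal line centre
    search = np.abs(dv) <= ew_half_window_kms
    idx_min = np.where(search)[0][np.argmin(norm[search])]
    dv_min = (wave - wave[idx_min]) / wave[idx_min] * C_KMS
    # Eq. 7: EW integrated over +-1000 km/s around the minimum-depth wavelength
    in_ew = np.abs(dv_min) <= ew_half_window_kms
    ew = np.trapz(1.0 - norm[in_ew], wave[in_ew])             # [Angstrom]
    # Eq. 8: residual flux = median of F/F* within +-150 km/s of the minimum
    in_rf = np.abs(dv_min) <= rf_half_window_kms
    return ew, np.median(norm[in_rf])
```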
One of the conditions of applicability of the picket-fence model requires that the column density of gas is large enough that the absorption lines are saturated in the curve of growth (i.e., the optically thick limit). In order to test the condition of saturation, we performed an analysis similar to the one in SL22 (see also Trainor et al. 2015; Calabrò et al. 2022), comparing equivalent-width ratios for transitions of the same ion at different wavelengths. Most of the galaxies have a Si II 1260-to-Si II 1527 equivalent-width ratio which is compatible with the optically thick limits (W_1527/W_1260 ≤ 2.55) given by the theoretical curve of growth for these transitions (Draine 2011). We conclude that the ISM conditions are such that Si II is optically thick (saturated), and we assume that C II is also saturated for typical Si/C abundances (see the discussion in Mauerhofer et al. 2021). Calabrò et al. (2022) independently found optically thick Si II (log(N_SiII / cm⁻²) > 12.7) along a sample of VANDELS C III] emitters at similar redshifts, whereas higher-ionization lines like Si IV 1400 were closer to the optically thin limit, as they probe more rarefied gas.
The effect of resolution on the absorption lines. When measuring absorption-line properties from observed spectra, the low spectral resolution (R) tends to make the lines broader and less deep, and the actual residual flux (R_λ) can be overestimated. To account for these effects we performed mock absorption-line simulations. Considering the picket-fence model with a uniform dust screen, we simulated Si II 1260 absorption Voigt profiles assuming Gaussian distributions for the column density of Si II (N_SiII, cm⁻²), the Doppler broadening parameter (b, km s⁻¹) and the line velocity shift (v, km s⁻¹). The simulated spectra were then degraded to the R(VIMOS) ≈ 600 spectral resolution. Finally, the impact of the S/N was folded onto the simulations by assuming a constant S/N = 5, which matches the median of the S/N distribution of the VANDELS spectra. A set of ×100 simulations was run for each of the input covering fractions in the C_f(λ) = 0−1 range, and the median measured residual flux of each distribution was then compared to the theoretical value of 1 − C_f. See App. A for a more comprehensive view of our simulations.
As a result, a theoretical residual flux of (1 − C_f) = 0.1, 0.3 and 0.5 would be observed as R(LIS) = 0.25, 0.4 and 0.55 due to the smearing effect of the VIMOS instrumental resolution (i.e., 60%, 25% and 10% relative error), while a negligible correction is applied at (1 − C_f) ≥ 0.6 because this regime is dominated by the S/N of the spectra. The resulting calibration (Eq. A3) was applied as a correction factor to the individually measured residual fluxes.
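The spirit of these mock simulations can be reproduced with a short sketch: build a saturated picket-fence line profile with a known covering fraction, smear it to the VIMOS resolution, add noise, and re-measure the core median. For brevity the Voigt profiles are replaced here by a saturated Gaussian optical depth, so the numbers are only indicative of the effect, not of the actual Eq. A3 calibration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smeared_residual_flux(cf=0.7, resolution=600, snr=5.0, line_center=1260.42,
                          b_kms=100.0, n_mock=100, seed=1):
    """Median residual flux recovered for a saturated line of covering fraction cf,
    after smearing to resolution R and adding Gaussian noise (compare with 1 - cf)."""
    rng = np.random.default_rng(seed)
    dx = 0.1                                                   # pixel size [A]
    wave = np.arange(1240.0, 1280.0, dx)
    sigma_line = line_center * b_kms / 2.998e5                 # intrinsic width [A]
    tau = 50.0 * np.exp(-0.5 * ((wave - line_center) / sigma_line) ** 2)  # saturated core
    intrinsic = 1.0 - cf * (1.0 - np.exp(-tau))                # picket-fence profile
    lsf_sigma_pix = (line_center / resolution) / 2.355 / dx    # LSF sigma in pixels
    smeared = gaussian_filter1d(intrinsic, lsf_sigma_pix)
    core = np.abs(wave - line_center) < 0.65                   # ~ +-150 km/s at 1260 A
    measured = [np.median((smeared + rng.normal(0.0, 1.0 / snr, wave.size))[core])
                for _ in range(n_mock)]
    return float(np.median(measured))
```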
Indirect f_esc^abs estimations
The mean residual flux of the LIS lines, R(LIS), is calculated by taking the average of the individual Si II 1260, O I+Si II 1302, C II 1334 and Si II 1527 line depths. If these lines are all saturated, their residual fluxes should be very similar and the mean is representative, allowing us to gain S/N on the measurement. For example, the average LIS residual flux of CDFS017345 (Fig. 3) is R(LIS) ≈ 0.69 (0.75 prior to the correction by resolution). Fig. 4 shows the histogram of the average R(LIS) distribution, characterized by R(LIS) = 0.31 (+0.22/−0.23) (0.35 (+0.19/−0.20) without resolution correction). Then, using the dust-attenuation parameter (E_{B−V}) provided by the FiCUS SED fits, we applied Eq. 6 to every single galaxy in our sample. For the [a, b] coefficients in this equation, we chose the calibration by Gazagnes et al. (2018), i.e., [a, b] = [(0.63 ± 0.12), (0.44 ± 0.07)]. We remark that this calibration was obtained over a sample of emission-line galaxies but, as stated in SL22, we do not expect this relation to change with galaxy type. As an example, the LyC escape fraction for CDFS017345 results in f_esc^abs ≈ 16%, adopting the R16 attenuation law.
The predicted f_esc^abs distribution following this method is also presented in Fig. 4. As expected, the inferred f_esc^abs values show a clear dependency on the dust-attenuation law, with the median of the R16 distribution ×1.5 lower than that of the SMC-law f_esc^abs distribution. When using the R16 dust-attenuation law, the median ionizing absolute escape fraction, f_esc^abs, for our VANDELS sample of 534 SFGs at 3 ≤ z ≤ 5 is:
f_esc^abs = 0.02 ± 0.01 (0.03 ± 0.02),
while, when using the SMC attenuation law, it results in f_esc^abs = 0.03 ± 0.02 (0.04 ± 0.03), where the numbers in brackets correspond to the median f_esc^abs prior to any resolution correction on the residual flux of the lines (see Eq. 6). As a comparison, Begley et al. (2022) obtained f_esc^abs = 0.07 ± 0.02 for a similar sample of z ≈ 3.5 VANDELS galaxies with deep VIMOS/U-band observations (see Fig. 4), that is, a factor of ×2 higher than our median value using the SMC law, although compatible within the 1σ uncertainty. Moreover, our average f_esc^abs is in agreement with the extrapolations of the recent Trebitsch et al. (2022) model at z ≈ 4.
Composite spectra
Stacked spectra were built with the goal of increasing the S/N with respect to the individual galaxy data, allowing us to bring out some of the underlying physical correlations between the different parameters in this study. Following [...] (2022), all the individual spectra in the sample were first shifted into the rest frame using the VANDELS spectroscopic redshift and then resampled onto a common wavelength range, specifically 1200−1925Å. According to the median redshift of the sample, i.e., z_spec = 3.56, the resulting spectral binning was chosen to be 2.5Å/(1 + z_spec) = 0.55Å (2.5Å equals the VIMOS wavelength dispersion per resolution element). Before co-adding the spectra, they were normalized to the mean flux in the 1350−1370Å rest-frame interval. The final (normalized) flux at each wavelength was taken as the unweighted median of all the individual flux values after a regular 3σ clipping in order to reject outliers and bad pixels. The uncertainty on the stacked spectrum was calculated via bootstrap resampling of the spectra included in the composite.
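A schematic version of this stacking procedure (rest-frame shift, resampling, normalization, sigma-clipped median and bootstrap errors) is given below; np.interp is used as a simple stand-in for the actual resampling scheme.

```python
import numpy as np

def stack_spectra(spectra, redshifts, dlam=0.55, lam_min=1200.0, lam_max=1925.0,
                  norm_window=(1350.0, 1370.0), clip=3.0, n_boot=100, seed=2):
    """Median-stack rest-frame spectra with 3-sigma clipping and bootstrap errors.

    `spectra` is a list of (wave_obs, flux) pairs; `redshifts` the spec-z of each source.
    """
    rng = np.random.default_rng(seed)
    grid = np.arange(lam_min, lam_max, dlam)
    resampled = []
    for (wave_obs, flux), z in zip(spectra, redshifts):
        f = np.interp(grid, wave_obs / (1.0 + z), flux)        # shift + resample
        norm = np.mean(f[(grid >= norm_window[0]) & (grid <= norm_window[1])])
        resampled.append(f / norm)                             # normalize at 1350-1370 A
    resampled = np.asarray(resampled)

    def clipped_median(arr):
        med, std = np.median(arr, axis=0), np.std(arr, axis=0)
        masked = np.where(np.abs(arr - med) > clip * std, np.nan, arr)
        return np.nanmedian(masked, axis=0)

    stack = clipped_median(resampled)
    n = len(resampled)
    boots = [clipped_median(resampled[rng.integers(0, n, n)]) for _ in range(n_boot)]
    return grid, stack, np.std(boots, axis=0)
```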
Guided by this scheme, we performed stacks in bins of UV magnitude (M_1500), intrinsic UV luminosity (M^int_1500), UV-continuum slope (β^1500_spec) and Lyα equivalent width (W_Lyα). Concretely, we divided the sample according to the 25th, 50th and 75th percentiles (quartiles) of every property, resulting in four subsamples for each quantity, sorted as Q1, Q2, Q3 and Q4. Then, the resulting stacks were processed through the FiCUS code and all the secondary SED parameters described in Sect. 3.1 were obtained. App. B, Table B1 contains the main properties and inferred SED parameters (f_esc^abs and ξ_ion) of the different composites.
The effect of stacking on the absorption lines. Even though the equivalent widths of the H I and metal lines do not change with the gas column density in an optically thick medium, the lines can be slightly broader or narrower depending on the gas thermal (Doppler) and turbulence velocities. More importantly, different galaxies may have different gas-flow velocities, which translates into red- or blueshifted line centers relative to the systemic velocity. Moreover, the use of the spectroscopic instead of the systemic redshift introduces an additional source of uncertainty in the position of minimum depth of the lines (see Llerena et al. 2022). All these effects contribute so that the UV-line residual flux of the resulting galaxy composite can potentially be overestimated.
To correct for this smearing effect, and making use of the simulations described in App. A, we randomly generated a set of N = 100 simulated Si II 1260 line profiles with different intrinsic gas properties (column densities, Doppler broadening, inflow/outflow velocities, etc.), but fixing the covering fraction (i.e., equivalent to one minus the residual flux). We then stacked the line profiles following the same methods and assumptions described in this section, where the effects of instrumental resolution (R(VIMOS) ≈ 600) and a constant S/N = 5 were also incorporated in the mock realizations. After that, we measured the depth of the composite Si II 1260 line profile and compared it to the input value given by the covering fraction. According to our simulations (Eq. A4), an input (1 − C_f) = 0.1, 0.3, 0.5 and 0.8 would require correction factors as large as 70%, 40%, 25% and 10% due to the effect of stacking. Calabrò et al. (2022) showed that, although the bulk ISM velocity is globally in outflow, the average shift of the LIS lines is very close to the systemic velocity (−60 km/s), i.e., within the actual spectral resolution. Therefore, our simulations, which assume a single VIMOS resolution element (±150 km/s) as the standard deviation of the distribution of the velocity shift of the lines, might actually over-predict this effect. For that reason, we prefer not to apply such stacking corrections to our line measurements, but we encourage the reader to check App. A for more details.
It is also worth mentioning that Calabrò et al. (2022) do not find any correlation between the stellar mass or SFR and the velocity shift of the ISM lines. Hence, we do not expect differential corrections on the residual flux when combining galaxies with very different masses or SFRs, a conclusion that can be extrapolated to other galaxy properties (e.g., UV magnitudes).
RESULTS
The following paragraphs describe the main results of this paper. In Sect. 4.1, the global relations of the ionizing escape fractions and production efficiencies with different galaxy properties are shown on an individual-galaxy basis. In Sect. 4.2, the LCE and non-LCE composites are presented, and the differences in their non-ionizing rest-UV spectra are placed in the context of the physical ISM conditions which enable the ionizing radiation to escape. In Sect. 4.3, the global ionizing properties of the LCE versus non-LCE samples are discussed. Lastly, Sect. 4.4 summarizes the properties of previously known LCEs reported in the literature which were included in our sample.
The ionizing escape fraction and production efficiency dependence on observed galaxy properties
Fig. 5 investigates how our predicted ionizing absolute escape fraction (f_esc^abs) and the ionizing photon production efficiency (ξ_ion) depend on galaxy properties. The individual VANDELS measurements are shown in the background together with the results from our stacking analysis on top (large symbols). Systematic differences in f_esc^abs and ξ_ion due to the use of a shallower (R16, filled diamonds) or steeper (SMC, open diamonds) attenuation law are also explored. In general, the escape-fraction values derived using the SMC law shift to slightly but systematically higher escapes (by a factor of ×1.5) compared to the R16 ones.
A Kendall-τ statistic was applied to each pair of variables in order to quantify the strength (τ) and significance (p-value) of the correlation. For a sample of ≈500 objects, we consider correlations to be significant if p ≲ 10⁻³ (3σ) and strong if |τ| ≳ 0.1. Coincidentally, all significant correlations studied in this work turned out to be strong correlations, so that in Fig. 5 we only indicate the τ coefficients for significant correlations (thick-framed panels); otherwise we write p-val. > 10⁻³.
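In practice the test reduces to one call to scipy.stats.kendalltau per pair of quantities, flagging correlations with p ≲ 10⁻³ and |τ| ≳ 0.1:

```python
import numpy as np
from scipy.stats import kendalltau

def correlation_summary(x, y, p_thresh=1e-3, tau_thresh=0.1):
    """Kendall-tau strength and significance for one pair of galaxy properties."""
    ok = np.isfinite(x) & np.isfinite(y)
    tau, pval = kendalltau(x[ok], y[ok])
    return {"tau": tau, "pval": pval,
            "significant": pval < p_thresh, "strong": abs(tau) > tau_thresh}
```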
Observed absolute UV magnitude
The predicted f_esc^abs and the observed UV absolute magnitude (M_1500) for the VANDELS sample at 3 ≤ z ≤ 5 do not show any clear correlation when considering the whole range of UV magnitudes. Tanvir et al. (2019) found no significant correlation of the H I column density with galaxy UV luminosity across a wide range of redshifts. Assuming that the escape fraction is primarily regulated by the same optically thick neutral gas along the line of sight, as suggested by Eq. 6, a lack of correlation between f_esc^abs and M_1500 would therefore be expected, supporting our global trend.
However, fainter (low-mass) galaxies usually host lower gas and dust fractions compared to brighter systems (Trebitsch et al. 2017). Together with their low gravitational potential and often bursty SFHs (Muratov et al. 2015), the expelling of gas and dust in a turbulent ISM (Kakiichi & Gronke 2021) favours the creation of holes through which the LyC photons can freely escape. In this scenario, a higher f_esc^abs is naturally expected for the faintest galaxies (see Sect. 7.2 in SL22).
Even though the correlation is tentative, M_1500 ≈ −20 and fainter VANDELS galaxies tend to have higher escape fractions (Fig. 5). This mild tendency has been reported through the study of galaxy composites at high-z by the KLCS (Steidel et al. 2018; Pahl et al. 2021, 2023) as well as observed at low-z by the LzLCS (Flury et al. 2022a; Chisholm et al. 2022, and SL22). Contrarily, models such as Sharma et al. (2016) predict an increasing escape fraction towards bright UV systems, in disagreement with our picket-fence formulation⁸.
The ionizing production efficiency shows a very smooth (although non-significant) dependence on M_1500, with the stacked measurement at the faintest UV-magnitude bin (M_1500 ≈ −20) being higher than the typically assumed canonical value for cosmic reionization of log ξ_ion (Hz/erg) = 25.2 (Robertson et al. 2013). Our results are in agreement with other studies in the literature, for example with the results by Bouwens et al. (2016) at similar UV magnitudes but, contrary to the former, our M_1500 range is actually not wide enough to show a clear ξ_ion − M_1500 trend. Even so, it has been shown that, at similar redshifts, the ξ_ion evolution with UV luminosity is a very smooth correlation (see Lam et al. 2019; Emami et al. 2020; Prieto-Lyon et al. 2022). A more complete picture of the ξ_ion − M_1500 relationship will be given in Sect. 5.
Figure 5. Scatter plots searching for correlations between the predicted ionizing photon escape fraction (f_esc^abs, first column), the ionizing production efficiency (log ξ_ion, second column) and the product of the two (log f_esc^abs × ξ_ion, third column, see Eq. 1), versus different galaxy properties: observed and intrinsic UV absolute magnitudes (M_1500, M^int_1500), UV-continuum slope at 1500Å from the best-fit SED (β^1500_spec), and Lyα equivalent width (−1 × W_Lyα). The Kendall (τ) correlation coefficients for individual R16 measurements (coloured dots in the background) are shown at the top, but only for significant correlations (thick-framed panels). Results from our stacking analysis are displayed with filled (R16) and open (SMC) thick diamonds. Typical error bars for individual sources are plotted at the bottom part of each panel, and the arrows along the M^int_1500 panels measure the shift in M^int_1500 due to the use of the SMC dust-attenuation law. Grey-shaded regions mark assumed canonical values of f_esc^abs ≥ 5% and log ξ_ion = 25.2−25.3 in classical reionization models (Robertson et al. 2013).
In general, both the f_esc^abs and ξ_ion distributions with M_1500 are characterized by a large scatter (≈1 dex, e.g., 0.1% to 10% in f_esc^abs) and a weak evolution with M_1500, where only the faintest galaxies in the sample have tentatively higher values of f_esc^abs (≥ 5%) and log ξ_ion (≥ 25.2 Hz/erg). Finally, the log f_esc^abs × ξ_ion product preserves the overall evolution of f_esc^abs with the UV magnitude.
Intrinsic (dust-free) UV luminosity
The intrinsic UV absolute magnitude for each galaxy (M^int_1500) was calculated from the best-fit SED by taking the flux at 1500Å prior to attenuation by dust (i.e., the dust-free SED), and computing the AB magnitude via the usual distance-modulus formula.
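The conversion itself is standard AB-magnitude plus distance-modulus arithmetic; a minimal sketch, assuming the dust-free SED is available as a luminosity density L_ν at 1500Å:

```python
import numpy as np

PC_CM = 3.0857e18  # 1 parsec in cm

def m1500_intrinsic(l_nu_1500):
    """Intrinsic UV absolute magnitude (AB) from the dust-free SED luminosity
    L_nu(1500 A) in erg s^-1 Hz^-1, referring the flux to 10 pc."""
    f_nu_10pc = l_nu_1500 / (4.0 * np.pi * (10.0 * PC_CM) ** 2)  # erg s^-1 cm^-2 Hz^-1
    return -2.5 * np.log10(f_nu_10pc) - 48.60

# e.g. L_nu = 1e29 erg/s/Hz  ->  M_1500 ~ -20.9 (illustrative)
```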
On the one hand, f_esc^abs versus M^int_1500 shows one of the strongest correlations of the present study (τ ≈ 0.4), where the less intrinsically luminous galaxies have the highest f_esc^abs. From our f_esc^abs prescription (Eq. 6), this correlation is expected, since the dust attenuation is by definition directly related to the escape fraction of galaxies, where the most UV-bright galaxies are the most attenuated (Finkelstein et al. 2012; Bouwens et al. 2014) and at the same time host substantially larger, more extended gas and dust reservoirs (see e.g., Ma et al. 2020, from the perspective of the FIRE-2 simulation). This behavior was previously reported in Begley et al. (2022), where they demonstrate how intrinsically UV-faint galaxies at z ≈ 3.5 would require statistically higher escape fractions in order to reproduce the observed distribution of the ionizing-to-nonionizing flux ratio.
On the other hand, the ξ_ion versus M^int_1500 distribution appears completely flat, irrespective of the M^int_1500 bin and the dust-attenuation law. According to Eq. 4, the dust-correction factor applied to any ξ_ion measurement would be A_{<912}/A_{1500}. This ratio does not strongly depend on the dust law, so no ξ_ion dependence on the attenuation is expected either. Therefore, the f_esc^abs × ξ_ion product inherits the same dependence on the UV luminosity as f_esc^abs does. When comparing the R16 and SMC stack predictions (see Fig. 5), their f_esc^abs × ξ_ion values show the largest differences at the bright M^int_1500 end, i.e., the largest deviations appear for the more attenuated, redder galaxies.
⁸ This said, the hypothesis proposed in Naidu et al. (2020) of a reionization driven by −20 ≤ M_1500 ≤ −18 galaxies (what they called the 'oligarchs') cannot necessarily be ruled out by our observations, because our galaxies do not extend to M_UV ≥ −19 AB.
UV-continuum slope at 1500Å
Standing out as the strongest correlation of this study (τ ≈ −0.6, see Fig. 5), f_esc^abs inversely scales with the best-fit UV-continuum slope at 1500Å (β^1500_spec), so that the bluest galaxies in the sample emit a significantly larger fraction of ionizing photons than their redder counterparts. The "Dust Only" case (dotted line) follows Eq. 6 assuming there is no gas along the line of sight, so that dust is the only source of attenuation for LyC photons. This scenario, although it provides a physical upper limit to the previous equation, is not expected to hold observationally. Because the dust and cold gas within the ISM are spatially correlated, galaxies which are more dust-attenuated will also have lower residual fluxes in the LIS lines, and they will deviate from the "Dust only" case towards low f_esc^abs values, as we see in Fig. 5. This trend has been directly observed by Flury et al. (2022b) in the LzLCS survey, and also lately investigated by Chisholm et al. (2022). Additionally, a decrease in f_esc^abs with increasing UV colors was shown in Pahl et al. (2021) using KLCS galaxy composites at high-z. Also interestingly, Begley et al. (2022) found that UV-blue galaxies at z ≈ 3.5 would require statistically higher escape fractions in order to reproduce the observed ionizing-to-nonionizing flux ratio distribution.
Regarding our indirect approach to predict f_esc^abs (Eq. 6), a negative correlation between the escape fraction and β^1500_spec must be expected by definition, since the UV slope is inherently linked to the dust attenuation (E_{B−V}), so that redder SEDs usually mean more attenuated galaxies (Meurer et al. 1999). For example, we obtain f_esc^abs ≥ 5% for galaxies whose β^1500_spec ≤ −1.5, decreasing to f_esc^abs ≈ 0.1% at β^1500_spec ≈ −0.5. As stated in Chisholm et al. (2022), this steep relation between the escape fraction and the UV slope is particularly important at the highest redshifts because (1) it can be easily used as an observational proxy to indirectly infer f_esc^abs at the EoR (see Trebitsch et al. 2022), and (2) primordial/fainter galaxies are thought to host less dust than present-day SF galaxies (Finkelstein et al. 2012; Bouwens et al. 2014; Cullen et al. 2023), which naturally explains why higher-redshift galaxies emit more ionizing photons than their lower-redshift counterparts (Finkelstein et al. 2019). As we will discuss in Sect. 6, our f_esc^abs − β^1500_spec results agree with the z ≈ 0.3 relation published by Chisholm et al. (2022), but extrapolated to redder UV slopes and lower escape fractions. This similarity suggests that the f_esc^abs − β^1500_spec relation does not strongly change with redshift. The ξ_ion parameter is also strongly correlated with β^1500_spec, so that ξ_ion rapidly increases as the UV slope decreases, only giving log ξ_ion (Hz/erg) ≥ 25.2 for galaxies whose slopes are β^1500_spec ≤ −1.5. Our results are in agreement with other studies in the literature, for example with the results by Bouwens et al. (2016) and Matthee et al. (2017a), but shifted to redder β_1500 values. For more details on the ξ_ion − β_1500 relationship and a comparison with the literature, we refer to Sect. 5.
Qualitatively, the largest differences between the R16- and SMC-derived f_esc^abs values are found among the redder objects, i.e., the most attenuated galaxies in the sample. However, the R16- and SMC-derived ξ_ion values are similar irrespective of the dust-attenuation law.
Lyα equivalent width
The relation between the properties of the Lyα 1216 line and the escape and production efficiency of ionizing photons is also striking. This can be seen in Fig. 5, where the predicted f_esc^abs monotonically rises as the Lyα equivalent width (W_Lyα) increases. For typically assumed values of W_Lyα ≤ −20Å in LAEs (e.g., Pentericci et al. 2009, but see Stark et al. 2011; Kusakabe et al. 2020), we obtain f_esc^abs ≥ 5%. This result is in concordance with previous results in the literature. As demonstrated by SL22 (but see also Gazagnes et al. 2020; Izotov et al. 2021), the strongest LCEs are usually among the strongest LAEs, i.e., those having the highest Lyα equivalent widths.
With the exception of photon scattering and the effect of gas kinematics, the same mechanisms that regulate the escape of ionizing photons also regulate the leakage of Lyα photons and therefore the shape and strength of the Lyα line (e.g., Henry et al. 2015; Verhamme et al. 2015). Considering the aforementioned picket-fence model with a very simplistic assumption on the geometry of galaxies, these main mechanisms are (1) the neutral gas column density, whose optically thick spatial distribution is parametrized in Eq. 6 via the covering fraction, and (2) the dust-attenuation term (E_{B−V}). In this way the Lyα emission (W_Lyα) and the escape of LyC photons (f_esc^abs) would remain physically related (see Verhamme et al. 2017; Izotov et al. 2020; Flury et al. 2022b; Maji et al. 2022). In fact, studying the afterglow gas of a significant sample of Gamma-Ray Burst (GRB) host galaxies, Vielfaure et al. (2021) found that the column density of such gas was indeed optically thick, so that "the bulk of Lyα photons produced by massive stars in the star-forming region hosting the GRB will be surrounded by these opaque lines of sights".
In order to illustrate the role of dust attenuation and the gas covering fraction in the escape of Lyα and LyC photons, in Fig. 6 we plot the average LIS absorption profile for different stacks in bins of Lyα equivalent width (see also Trainor et al. 2019). This plot shows the combined profile of several ISM LIS lines (Si II 1260, O I+Si II 1302, C II 1334, Si II 1527), color-coded by the inferred UV dust attenuation (E_{B−V}), and the predicted escape fraction of each composite (f_esc^abs) is indicated in the legend. The resulting Lyα profiles can also be seen in the inset, and finally W_Lyα (in Å) is plotted against the equivalent width of the combined LIS absorption profiles (W_LIS, in Å).
As a result, as the dust attenuation increases and the gas covering fraction increases (decreasing residual fluxes), the resulting escape fractions decrease and the Lyα emission weakens, from W_Lyα values typically found in LAEs towards absorption, even damped, Lyα profiles. Relatedly, the relation between W_Lyα and the strength of the LIS lines has been widely studied in individual and stacked rest-UV spectra of LBGs at z = 2−5 (Shapley et al. 2003; Jones et al. 2012; Henry et al. 2015; Trainor et al. 2015; Du et al. 2018; Pahl et al. 2020), and our results agree with the overall picture described in these papers. On the one hand, the observed connection between the LIS equivalent width and the reddening can only be explained if the dust resides within the same clouds as the neutral gas in a clumpy ISM geometry (SL22). On the other hand, the dwindling of the Lyα emission with increasing dust attenuation is a natural consequence of the Mass-Metallicity Relation (MZR, see Maiolino & Mannucci 2019, for a review) and of the empirical relation between the Lyα strength and the stellar mass (Cullen et al. 2020).
A large W_Lyα can also be ascribed to a boost in the production of ionizing photons (Nakajima et al. 2018a). The VANDELS high-z W_Lyα − ξ_ion relation can be found in the bottom-middle panel of Fig. 5, where a significantly higher log ξ_ion (Hz/erg) ≥ 25.2 is found only for the strongest LAEs, with W_Lyα ≈ −30Å. Thanks to JWST data in combination with HST photometry, an increasing log ξ_ion with Lyα strength has also been shown for individual galaxies at the late edge of the EoR (Ning et al. 2022). However, in the works by Cullen et al. (2020) and Reddy et al. (2022), ξ_ion has been demonstrated to be insufficient to solely account for the whole variation in W_Lyα; rather, "the covering fraction of optically-thick H (gas) appears to be the principal factor modulating the escape of Lyα, with most of the Lyα photons in down-the-barrel observations of galaxies escaping through low-column-density or ionized channels in the ISM", as stated in Reddy et al. (2022).
Similarly, our f_esc^abs × ξ_ion − W_Lyα relationship results from the convolution of the f_esc^abs and ξ_ion dependences on the Lyα strength, where the f_esc^abs − W_Lyα behavior plays the major role. The higher ξ_ion values of the objects with the highest Lyα equivalent widths in the sample (LAEs) naturally increase the f_esc^abs × ξ_ion product (Matthee et al. 2022).
Stellar mass
The predicted f_esc^abs strongly scales with the stellar mass (M★) of the VANDELS galaxies, so that low-mass galaxies have statistically higher escape fractions than more massive systems (Fig. 7). For instance, only log M★(M⊙) < 9 VANDELS galaxies show cosmologically relevant escape fractions of f_esc^abs ≥ 5% (Robertson et al. 2013, see the grey-shaded area in Fig. 5), and these are actually ≈1 dex higher than the average escape fraction of the most massive galaxies in the sample (log M★(M⊙) ≥ 10). According to our picket-fence model, a physical f_esc^abs − M★ connection is expected (Eq. 6) because more massive galaxies are intrinsically more attenuated than low-mass systems (see e.g., Finkelstein et al. 2012; McLure et al. 2018a; Fudamoto et al. 2020). As log M★ increases, the dust attenuation increases and the residual flux of the lines decreases accordingly, and both yield progressively decreasing escape fractions (f_esc^abs). This is consistent with the results by Reddy et al. (2022) where, using a joint modeling of the composite FUV and optical spectra of high-z SFGs, they demonstrate how the H I and the gas-enriched covering fraction decrease with the galaxy stellar mass.
Figure 7. Relation between our derived f_esc^abs and the stellar mass (M★, in M⊙). The f_esc^abs − M★ correlation is strong and significant for both the R16 and SMC dust laws, although systematically higher escapes are reported when using steeper, SMC-like attenuation laws. The layout is the same as in Fig. 5.
So far, there is no consensus in simulations on whether more massive galaxies should emit a higher (e.g., Naidu et al. 2020) or lower (Rosdahl et al. 2022) fraction of their produced ionizing photons into the IGM compared to their less massive counterparts, with some of them even suggesting a turnover at intermediate masses (see Ma et al. 2020). In observations, at low-z there does not seem to be any clear relation between these two quantities (Izotov et al. 2021; Flury et al. 2022b). Lately, Pahl et al. (2023) have also shown a negative and significant f_esc^abs − log M★ trend in the 9 ≤ log(M★/M⊙) ≤ 10 stellar-mass range, using z ≈ 3 KLCS stacks. However, at similar redshifts, Begley et al. (2022) did not find any statistical distinction when splitting their sample into lower and higher masses with respect to the median, suggesting that M★ is at best a secondary indicator of the average f_esc^abs. In the 8 ≤ log(M★/M⊙) ≤ 10 mass interval, our stacking analysis with VANDELS supports the former scenario, by which low-mass galaxies have higher escape fractions (similar to Ma et al. 2020; Pahl et al. 2023). In this vein, our picket-fence formalism puts the physical picture suggested by models like Naidu et al. (2020) up against the ropes, since the latter yields a monotonic increase of the escape fraction with stellar mass, a behaviour also disfavoured by other works in the recent literature (see the compilation by Pahl et al. 2023).
Finally, whilst AGNs may not contribute directly to the ionizing photon budget at the EoR (e.g., Hassan et al. 2018), semi-analytical simulations by Seiler et al. (2018) show that they could indirectly influence reionization by clearing out holes in the ionized regions, so that f_esc^abs can be boosted after quasar-wind events. In such a scenario, the mean escape fraction peaks for intermediate-mass galaxies, around log(M★/M⊙) ≈ 8. Regrettably, our sample is not able to probe this hypothesis, since it does not extend to lower stellar masses.
Non-ionizing rest-UV properties of potential LCEs at 3 ≤ z ≤ 5
Having predicted the ionizing escape fraction (f_esc^abs) for every galaxy allows us to split the sample into potential LCEs, with f_esc^abs ≥ 0.05 (64 out of 534 galaxies, i.e., a ≈10% rate of potential LyC emitters), and non-LCEs, with f_esc^abs < 0.05 (470 galaxies, assuming R16). We then build composite spectra for the LCE and non-LCE candidates, following the same method as described in Sect. 3.3. The main properties, fitted SED parameters and UV-line measurements are summarized in App. B, Table B2. Both the LCE and non-LCE stacks are presented in Fig. 8, together with a handful of insets and labels which highlight the differences in their non-ionizing FUV spectra.
UV-continuum slope at 1500Å
The UV slopes of the LCE and non-LCE composites are remarkably different: β^1500_spec = −2.17 ± 0.03 for LCEs versus −1.23 ± 0.01 for non-LCEs (see Fig. 8). As seen in the previous section, the UV slope is by definition related to f_esc^abs in the sense that it constitutes a proxy for dust attenuation at UV wavelengths, therefore favouring a picture in which bluer galaxies with lower levels of dust attenuation (E_{B−V}) display higher values of the escape fraction (by construction, see Eq. 6). The different levels of dust attenuation in LCEs and non-LCEs will partially influence the strength of other nebular emission lines in their UV spectra.
The intrinsically dustier nature of non-LCEs, as opposed to the LCE population, has been explicitly reported in SL22 and further investigated in Chisholm et al. (2022) using LzLCS data at z ≈ 0.3. Also recently, Pahl et al. (2023) showed a monotonic decrease of f_esc^abs with E_{B−V} from FUV SED fitting of KLCS composite spectra. Likewise, Pahl et al. (2021) and Begley et al. (2022) observationally demonstrated a decrease in f_esc^abs with increasing UV colors at z ≈ 3, using HST (KLCS) and ground-based (VANDELS) LyC imaging, respectively. All these different LyC data sets at low and high redshift will be put together in Sect. 6.
Lyα emission
LCEs clearly show stronger Lyα emission than non-LCEs (Fig. 8): W_Lyα = −29.71 ± 2.46Å against −3.62 ± 0.41Å for the LCE and non-LCE stacks, respectively. Worth mentioning is that the W_Lyα of the LCE composite is compatible with the typical definition of LAEs in the literature (i.e., W_Lyα ≤ −20Å, see Pentericci et al. 2009), so that the LCE composite is mostly composed of the Lyα-emitting galaxies in the VANDELS sample. The non-LCE composite, however, shows little Lyα emission compared to LCEs, with a damped Lyα red wing extending up to 1240Å. This suggests a higher column density of H I gas beneath the bulk of the non-LCE population compared to the LCEs, an unequivocally necessary condition for preventing the leakage of Lyα and LyC photons (see Henry et al. 2015). This is also compatible with the strong decrease of the Lyα equivalent width (by a factor of ≈9) between LCEs and non-LCEs, which is stronger than the decrease of ξ_ion (a factor of ≈4), and which is naturally explained by the impact of a higher column density and a higher dust content on the Lyα escape fraction (see e.g., Atek et al. 2014).
Observational evidence for an increase in W_Lyα with increasing f_esc^abs has been reported by a few recent studies: by Flury et al. (2022b), through FUV spectroscopy of LzLCS sources at z ≈ 0.3; by Fletcher et al. (2019) and Pahl et al. (2021, 2023), through direct LyC observations at z ≈ 3; and by Begley et al. (2022), through the statistical measurement of the average f_esc^abs at z ≈ 3.5 (VANDELS).
Other nebular emission lines
In Fig. 8, the C IV 1550, He II 1640 and C III]1908 nebular line profiles are displayed for the LCE (in blue) and non-LCE composites (in red). The C IV 1550 and He II 1640 lines of LCEs show slightly stronger emission and narrower profiles, while the equivalent width of the C III]1908 line is remarkably higher in the LCE stacked spectrum compared to the non-LCE one. We obtain W_HeII = −1.11 ± 0.25Å (−0.81 ± 0.06Å) and W_CIII] = −2.15 ± 0.67Å (−0.73 ± 0.13Å) for LCEs (non-LCEs).
Similarly, Naidu et al. (2022) stacked the UV spectra of potential LyC-emitting candidates according to their Lyα line properties, showing a very different He II 1640 profile with respect to the non-emitting candidates (probably attributable to sample selection). Contrarily, in the recent work by Marques-Chaves et al. (2022b), based on direct LyC measurements of low-z LzLCS galaxies, the authors do not find any significant correlation between the LyC escape fraction and the spectral hardness, where the He II 1640 intensity is fundamentally driven by changes in the metallicity rather than by f_esc^abs. Therefore, the slightly higher observed equivalent widths of He II 1640 and O III]1666 in the LCE composite (see Llerena et al. 2022) may indicate a larger difference in gas-phase metallicity between the LCE and non-LCE stacks than the difference in the stellar metallicities actually derived from our stellar-population modeling (see Sect. 4.3).
Regarding the UV carbon nebular lines, and based on the work by Schaerer et al. (2022a), Saxena et al. (2022b) and Mascia et al. (2023b), we compute the C IV 1550 over C III]1908 flux ratio, and we obtain C IV/C III] = 0.79 ± 0.21 for LCEs and C IV/C III] = 0.42 ± 0.20 for non-LCEs (the C IV 1550 doublet is blended at the current resolution). Schaerer et al. (2022a) empirically demonstrated that both the C IV 1550 and C III]1908 lines tend to be stronger in the LzLCS galaxies with the highest f_esc^abs, and proposed a C IV/C III] ≥ 0.75 threshold for significant LyC leakage which, once again, is in agreement with our stacking analysis. As we will see in the next section, a combined effect of escaping ionizing radiation (a favourable neutral gas and dust geometry) with a higher ξ_ion parameter is responsible for enhancing the emission of high-ionization-state lines such as C IV 1550 and C III]1908 in LCEs.
Absorption lines and ISM properties
A wealth of stellar and ISM absorption lines (including Fe II 1608, Al II 1670 and the Al III 1858 doublet) are detected in our VANDELS composite spectra (Fig. 8). The N V 1240 stellar-wind line in LCEs shows a characteristic P-Cygni profile that can be fully reproduced by stellar templates (excluding potential contamination by AGN), and it indicates the presence of a young stellar population (≤ 5 Myr). Conversely, N V 1240 is suppressed by the absorbing part of the damped Lyα red wing in non-LCEs, indicating a high H I column density. Moving to redder wavelengths, the Si IV 1393,1402 high-ionization-state absorption lines, which have both a stellar and an ISM origin, show the same equivalent width in the LCE and non-LCE composites, suggesting a homogeneous distribution of the diffuse high-ionized species in the ISM of LBGs.
The remaining set of LIS absorption lines in LCEs (Si II 1260, O I+Si II 1302, C II 1334, Si II 1527, Fe II 1608, Al II 1670 and the Al III 1858 doublet) clearly shows lower equivalent widths than in the non-LCE stack. For example, for the Si II 1260 and C II 1334 lines in Fig. 8, W_SiII = 0.99 ± 0.13Å (1.82 ± 0.05Å) for LCEs (non-LCEs), while W_CII = 0.64 ± 0.13Å (1.74 ± 0.04Å) in the LCE (non-LCE) composite. Following the picket-fence model, the fact that strong LCEs have significantly weaker absorption lines, i.e., lower equivalent widths with higher residual fluxes, can be attributed to a lower covering fraction of the spatially co-existing neutral and low-ionized gas, so that the photons of a given transition escape through low column-density channels in the ISM, just as LyC photons do (Gazagnes et al. 2018).
The connection between the LIS absorption lines and the strength of the Lyα line is also observed in our galaxy composites, and it can be explained by the picket-fence formulation, in which the gas covering fraction along the line of sight primarily governs the escape of the absorbed LIS, Lyα and LyC photons. This behavior is well summarized in Fig. 9, where W_Lyα is plotted against the measured (averaged) residual flux of the LIS lines, R(LIS), and the points are color-coded by E_{B−V}.
As the line-of-sight covering fraction of the gas decreases and the dust attenuation decreases (traced by higher R(LIS) and lower E_{B−V} values), the Lyα equivalent width dramatically increases towards values typically found in LAEs. This being the case, the strongest LAEs, which have been previously suggested in this work to be a proxy for the bulk of the LCE population, fall in the top-right part of the plot.
This trend matches the results by Steidel et al. (2018) in the framework of the KLCS, and echoes the main conclusions by Gazagnes et al. (2020) using individual LyC-detected galaxies at low-z. In the same Fig. 9, the median results for LCEs and non-LCEs in the LzLCS (Flury et al. 2022a) are indicated by white open hexagons (see also SL22). As we can see, the LzLCS results clearly agree with our VANDELS stacks, which reassures the use of the picket-fence approach for describing the ISM of LCEs. Additionally, the results by Jones et al. (2013) for individual LAEs at z = 2−4 are shown on top of the VANDELS results, following the general trend.
Ionizing properties of potential LCEs at 3 ≤ z ≤ 5
The ionizing properties of LCEs and non-LCEs can be accessed by extrapolating the non-ionizing best-fit SED to ionizing wavelengths. In particular, it is of most interest to check whether potential differences exist in the ionizing-to-nonionizing flux ratio, (F_900/F_1500)_int, and in the production efficiency, ξ_ion.
First, we input the LCE and non-LCE stacked spectra into the FiCUS code in order to unveil the average properties of the underlying stellar population of each galaxy sample. Then, we use the best-fit stellar continua to define different galaxy observables (see Sect. 3.1). As we can see, the fit to LCEs provides a lower stellar age (13 ± 3 Myr) than for non-LCEs (19 ± 1 Myr), tentatively suggesting younger stellar populations for LCEs. On the other hand, the stellar metallicities are very similar for both (Z★ ≈ 0.2 Z⊙), and compatible with the median of the total sample. These values can be seen in Fig. 10, where the best-fit FiCUS SEDs are superimposed on the LCE and non-LCE spectra.
Figure 11. The ionizing continuum of potential LCEs at 3 ≤ z ≤ 5. Left: a comparison between the dust-attenuated SED for LCEs (blue) versus non-LCEs (orange) down to ionizing wavelengths, normalized at 1360Å. The inset shows the intrinsic (i.e., dust-free) ionizing spectrum of LCEs and non-LCEs (in blue and red, respectively). As written in the legend, the intrinsic 900-to-1500Å flux ratio (in F_ν units) and the ionizing photon production efficiency (ξ_ion) are higher for LCEs. The effect of dust attenuation and neutral-hydrogen absorption on both ionizing spectra is sketched through the grey and black arrows, where the blue and orange points represent the corresponding observed LyC flux once both absorptions have been accounted for (a constant H I cross-section with wavelength has been considered). Right: the AB-magnitude distribution of the predicted observed LyC flux for the LCE (blue) and non-LCE (orange) samples, once multiplied by the average IGM transmission at z = 4 of T_IGM = 0.3.
The inferred dust-attenuation parameter is remarkably dissimilar for LCEs and non-LCEs, being E_{B−V} ≈ 0.07 and 0.23 mag., respectively. This is by construction, since the parent sample has been split based on the f_esc^abs of individual VANDELS galaxies (Eq. 6), which is explicitly related to E_{B−V}. It reflects the expected less dusty nature of LCEs compared to the bulk of the LBG population, as explained in the previous section.
The intrinsic (i.e., dust-free) ionizing spectra of the resulting best-fit SEDs for LCEs and non-LCEs are presented in the inset of Fig. 11 (left) through blue and red solid lines, correspondingly. The slight decrease in age and metallicity makes the intrinsic ionizing-to-nonionizing flux ratio (900-to-1500Å, see C19) grow to (F_900/F_1500)_int = 0.64 in LCEs from 0.47 in non-LCEs. A different (intrinsic) 900-to-1500Å ratio due to a decrease in age also fosters a rise in the ionizing efficiency, to log ξ_ion (Hz/erg) = 25.38 for the LCE composite, while it is 25.18 for the non-LCE one. In combination with the covering fraction of neutral gas, the ξ_ion parameter is partially responsible for shaping the strength of high-ionization-state lines such as C IV 1550 and C III]1908, and of other nebular resonant lines such as Lyα.
Ultimately, we infer what the observed AB magnitude at LyC wavelengths would look like for the LCE and non-LCE populations, as guidance for upcoming surveys targeting LyC-emitting galaxies at similar FUV magnitudes. According to the fundamental definition of the absolute ionizing escape fraction of galaxies (see Izotov et al. 2016a, SL22), the observed flux close to the Lyman edge (F_900) can be simply predicted as F_900 = f_esc^abs × F^int_900, where F^int_900 is the intrinsic LyC flux or, in other words, the ionizing flux prior to any gas or dust absorption (in F_ν units). Since we have access to the intrinsic ionizing SED through the FiCUS fits, and f_esc^abs has been predicted following Eq. 6, we compute F_900 for every galaxy in the VANDELS sample. We then derive the apparent AB magnitude from the F_900 synthetic flux measurements (m_LyC, AB). The resulting distribution can be seen in Fig. 11 (right), where the blue (orange) histogram represents the LyC AB-magnitude distribution for LCEs (non-LCEs), once we manually apply a T_IGM = 0.3 mean IGM transmission factor, i.e., the mean IGM transmission at z = 4 according to Inoue et al. (2014).
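The predicted observable then follows from multiplying the intrinsic 900Å flux by the predicted escape fraction and the mean IGM transmission, and converting to an AB magnitude; flux densities in erg s⁻¹ cm⁻² Hz⁻¹ are assumed:

```python
import numpy as np

def lyc_ab_magnitude(f900_int_nu, fesc_abs, t_igm=0.3):
    """Predicted observed AB magnitude near the Lyman edge: F900 = fesc * F900_int * T_IGM.

    f900_int_nu : intrinsic LyC flux density at ~900 A [erg s^-1 cm^-2 Hz^-1]
    fesc_abs    : predicted absolute escape fraction (Eq. 6)
    t_igm       : mean IGM transmission (0.3 at z ~ 4, Inoue et al. 2014)
    """
    return -2.5 * np.log10(fesc_abs * f900_int_nu * t_igm) - 48.60
```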
The consecutive effects of neutral H I gas absorption and dust attenuation on the LCE and non-LCE ionizing spectra are represented in Fig. 11 (left) through the black and grey arrows (assuming a constant H I cross-section with wavelength), whilst the blue and orange open circles denote the LyC flux measured at 870−910 Å. After applying a T_IGM = 0.3 factor and translating into AB magnitudes, these two points would equal the median of the individual m_LyC distribution shown in the right panel of the same figure.
Clearly, the combined effects of a lower ionizing-to-nonionizing flux ratio plus lower escape fractions for non-LCEs make their median output LyC flux drop by about 2 mag with respect to a hypothetical LCE population drawn from the same LBG sample. Therefore, a dedicated survey which attempts to detect the bulk of LCEs at 3 ≤ z ≤ 5 would require a sensitivity such that it reaches, at least, m_LyC ≈ 31 AB at LyC wavelengths.
Using very deep VIMOS/U-band imaging of the CDFS field (1 limiting magnitude of 30.4 AB), Begley et al. (2022) estimated 10 to 15 LyC detections at 2 for sources whose redshift is 3.35 ≤ ≤ 3.95, assuming an average abs esc ≈ 0.07. Restricting to the CDFS only, we found 7 sources with predicted LyC ≤ 30 AB in the same redshift range. The differences can be attributed to our lower average escape fraction compared to the former work. This said, only two robust LyC emitters have been detected so far in the CDFS field (see following section). The stochasticity of the IGM opacity might reduce our expectations.
Confirmed LCEs in the VANDELS sample
In Fig. 9, we highlighted two LyC-emitting galaxies included in our VANDELS sample which have already been published in the literature, in particular within the CDFS field (see Begley et al. 2022): CDFS012448 (Saxena et al. 2022a) and Ion1 (Ji et al. 2020).
While both galaxies share a similar SED dust-attenuation (E(B−V) ≈ 0.2 mag.), we measure a much lower residual flux in CDFS012448 (R(LIS) ≈ 0.4) than in Ion1 (≈ 0.55), yielding a predicted escape fraction of f_esc^abs ≈ 3% versus 6%. It is worth stressing that our inferred f_esc^abs value for Ion1 equals the value reported by Ji et al. (2020). In the case of CDFS012448, our f_esc^abs is remarkably different from the one quoted in Saxena et al. (2022a) of ≈ 20%, possibly because of the different approaches between papers. Our method was able to identify both galaxies as potential LCEs.
Additionally, these galaxies exhibit very different Lyα properties: while CDFS012448 shows a relatively strong Lyα line in emission (W_Lyα ≈ −20 Å), the Ion1 Lyα profile appears in absorption (W_Lyα ≈ 1 Å). In our VANDELS sample, 10% of the LCEs (f_esc^abs ≥ 5%) appear with Lyα in absorption. This manifests, once again, the variety of Lyα profiles that can be found among the LCE galaxy population (Flury et al. 2022b; Naidu et al. 2022) where, given the fact that Lyα can be resonantly scattered whereas LyC cannot, both Lyα and the ionizing emission do not necessarily come from the same spatial location within the host galaxy (e.g., Vanzella et al. 2012). A low IGM transmission could also, in principle, have suppressed most of the Lyα photons of Ion1.
THE PRODUCTION EFFICIENCY OF IONIZING PHOTONS (ξ_ion) IN HIGH-REDSHIFT GALAXIES
Here, we investigate the evolution of ξ_ion as a function of the UV absolute magnitude and the UV slope (see Bouwens et al. 2016; Chisholm et al. 2022), quantities which will be easily measured for a significant fraction of galaxies at the EoR thanks to upcoming ground-based (ELT, GMT, TMT, ...) and ongoing space-based facilities (JWST). Fig. 12 shows the ξ_ion parameter in stacks of absolute UV magnitude for the VANDELS sample. As introduced in Sect. 4 and regardless of the attenuation law (see inset), the ionizing production efficiency monotonically, smoothly increases towards fainter UV magnitudes, so that log ξ_ion ≥ 25.2 Hz/erg (Robertson et al. 2013) at M_1500 ≳ −20, i.e., the faint end of our sample. Stacked results from other samples in the literature are also included in Fig. 12. The main comparison samples for our case of study are the MOSDEF galaxies from Shivaei et al. (2018) at z = 2−3, the extensive HST+IRAC searches by Bouwens et al. (2016) at z = 4−5, and the HAEs sample of Matthee et al. (2017a) at z = 2−3. Our results are qualitatively in agreement with Bouwens et al. (2016) and Matthee et al. (2017a), both tracing photometrically-selected HAEs at M_1500 similar to ours. Nevertheless, VANDELS samples a narrower range in M_1500, so that the overall trend does not appear as clear as in the former works. At the faintest UV magnitude bin, our ξ_ion − M_1500 relationship disagrees with the results by Shivaei et al. (2018), who report no log ξ_ion evolution with M_1500, a result probably induced by the different dust-attenuation corrections between studies (Balmer decrements for MOSDEF versus global SED for the rest, see the discussion in Matthee et al. 2017a).
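The M_1500 stacks discussed in this section amount to grouping galaxies into UV-magnitude bins and quoting a typical ξ_ion per bin. The snippet below is a schematic, catalogue-level version of that binning (quantile bins, median per bin); the results in the paper come from stacking the spectra themselves before re-fitting, which this sketch does not reproduce, and the array names are placeholders.

```python
import numpy as np

def binned_median(m1500, log_xi, n_bins=4):
    """Median log(xi_ion) in quantile bins of M_1500 (schematic, not spectral stacking)."""
    edges = np.quantile(m1500, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.digitize(m1500, edges[1:-1])       # bin index 0 .. n_bins-1 per galaxy
    centers = np.array([np.median(m1500[idx == k]) for k in range(n_bins)])
    medians = np.array([np.median(log_xi[idx == k]) for k in range(n_bins)])
    return centers, medians
```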
Due to their lower metallicities and the burstiness of their SFHs, fainter (low-mass) galaxies are expected to have high ionizing production efficiencies (Ma et al. 2020). If the LyC escape fraction and production efficiency were intertwined (see below), the fact that fainter galaxies produce more ionizing photons per unit UV luminosity which, at the same time, would escape more easily than in higher-mass counterparts, supports an early and slow reionization scenario in which the ionizing budget would be dominated by the same low-mass, UV-faint galaxies (see Finkelstein et al. 2019; Trebitsch et al. 2022), thanks to their higher number density at the EoR (Bouwens et al. 2015, 2021).
Recently, Prieto-Lyon et al. (2022) used deep HST and JWST multiband photometry to estimate the Hα flux and thus the ionizing production efficiency, extending to very faint UV magnitudes (−23 ≤ M_1500 ≤ −15.5) and spanning a wide range in redshift (z = 3−7). Their results demonstrate that, although the general trend of increasing ξ_ion with increasing UV magnitude is very smooth, it holds at any M_1500 (see also Lam et al. 2019; Emami et al. 2020). Additionally, our thorough compilation of ξ_ion measurements (Fig. 12) illustrates the scatter in this relation (≈ 1 dex) and its dependence on the galaxy type, the physical properties of the underlying galaxy population and the methodology itself (with the dust corrections among the different samples playing a critical role in the scatter). While normal high-z SFGs (Nakajima et al. 2018b; Lam et al. 2019) have modest production efficiencies, high-z LAEs (Matthee et al. 2017b; Harikane et al. 2018; Nakajima et al. 2018a) and C III] emitters (Nakajima et al. 2018b) show much higher log ξ_ion values at similar UV magnitudes, since overall these galaxies need stronger ionizing spectra to produce such high-excitation lines (see e.g., Reddy et al. 2018).
Intending to qualitatively illustrate this behavior (Fig. 13), we search for meaningful differences in ξ_ion between the bulk of the VANDELS LBG population and other sources like LCEs or LAEs, which would potentially boost ξ_ion above the canonical value. This is done by splitting the sample into LAEs and non-LAEs, where the sources whose W_Lyα > −20 Å were considered as LAEs, following the definition by Pentericci et al. (2009; but see Stark et al. 2011 and Kusakabe et al. 2020). A similar analysis as for the LCEs/non-LCEs composites was then performed for the LAEs/non-LAEs composites, and the results are described in the following lines.
Of particular interest is to search for ξ_ion differences within different galaxy subsamples. In Fig. 13 (top), the VANDELS sample is color-coded by potential LCEs (f_esc^abs ≥ 0.05, in blue) and non-LCEs (f_esc^abs < 0.05, in red), where approximately 60% of the LCE sample (39 out of 64 LCE candidates) has log ξ_ion ≥ 25.2 Hz/erg, and these galaxies uniformly populate the whole range in UV magnitude. As said in Sect. 4.3, the resulting ξ_ion for LCEs and non-LCEs are log ξ_ion (Hz/erg) ≈ 25.38 and 25.18, respectively. Compared with other values in the literature, our estimated ionizing efficiency for LCEs is consistent with the average of the LCE sample of the LzLCS. The fact that our results at 3 ≤ z ≤ 5 agree with the low-z LCE sample supports the argument by which the LzLCS galaxies might be analogs of higher-redshift reionizers (Flury et al. 2022b). This has been recently demonstrated by Mascia et al. (2023a) through JWST spectroscopy of 29 moderately faint galaxies at 5 ≤ z ≤ 8, whose optical line ratios, UV slopes,
compactness, masses and predicted f_esc^abs were compatible with the strongest LCEs in the LzLCS sample (see also Lin et al. 2023).
(Fig. 12 legend: Prieto-Lyon+23, HST+JWST galaxies, z = 3−7; LzLCS LCEs, z = 0.2−0.4; Lam+19 GREATS galaxies, z = 4−5; Maseda+20 MUSE LAEs, z = 4−5; Harikane+18 LAEs, z = 5−7; Nakajima+18b LAEs, z = 3−4; Nakajima+18a VUDS SFGs and CIII] emitters, z = 2−4; Shivaei+18 MOSDEF galaxies, z = 2−3; Matthee+17b LAEs, z = 6−7; Matthee+17a HAEs, z = 2−3; Bouwens+16 HST+IRAC galaxies, z = 4−5; Nakajima+16.)
Similarly, in Fig. 13 (bottom) we explore the ξ_ion − M_1500 relationship for selected LAEs (W_Lyα > −20 Å, in green) and non-LAEs (W_Lyα ≤ −20 Å, in light-green). Although the M_1500 distributions of both galaxy types span equally along the x-axis of this figure, around 70% of the LAE sample (75 out of 104 identified LAEs) falls above the canonical log ξ_ion value on the y-axis. Our ξ_ion estimate for LAEs is in perfect agreement with the results by Harikane et al. (2018) at z = 5−6, and it matches surprisingly well the global trend found by several high-z LAE surveys at 3 ≤ z ≤ 7 (Nakajima et al. 2018a; Harikane et al. 2018; Matthee et al. 2017b, see also Ning et al. 2022). Similar to Lyα, the ionizing efficiency also shapes the emission of other nebular lines, so that, for instance, higher-than-average ξ_ion values have been found among extreme high-z [O III] emitters and local galaxy samples with high Balmer-line equivalent widths.
The escape of Lyα photons and the production of strong UV nebular emission usually require special ISM and radiation field conditions, such as low metallicity, low dust and neutral gas content, and a vast production of (high-energy) ionizing photons. In Fig. 13 one can visualize a boost in the ionizing production efficiency (ξ_ion) for high-z LAE samples. At the same time, a rise in the ionizing photon flux may reduce the covering fraction or column density of neutral hydrogen, which may ease the escape of LyC and Lyα photons (see Erb et al. 2014). Recently, Flury et al. (2022b) and Schaerer et al. (2022a) statistically demonstrated that the detection rate of LCEs is enhanced among the LAE and C IV emitter galaxy populations (see also Saxena et al. 2022b; Mascia et al. 2023b), in line with our results.
Moving forward with the discussion, Fig. 14 shows ξ_ion as a function of the UV-continuum slope for the VANDELS sample when using either the R16 (left) or the SMC (right) attenuation law. Irrespective of the dust law, the ionizing efficiency rapidly decreases with the slope of the continuum, and the trend is steeper for the SMC law. Focusing on the stacked measurements, the canonical limit in ξ_ion (log ξ_ion ≥ 25.2 Hz/erg) is only achieved for galaxies whose slopes are bluer than β_1500 ≤ −1.5. The ξ_ion − β_1500 relation of a dustless, single-burst stellar population at increasing age is not able to fully reproduce the whole range of β_1500 values, so that it would require "some" dust reddening to account for redder slopes (see the horizontal arrow in Fig. 14, indicating the change in UV slope after adding 1 mag. of reddening). The nebular continuum would also contribute to increase the spectral slope of the youngest bursts of star formation.
Compared to the already reported relations by Bouwens et al. (2016) at z = 4−5 and by Matthee et al. (2017a) at z = 2−3, our results provide a similar slope but are systematically shifted to redder β_1500^spec. Several effects can contribute to this offset, but we mostly attribute it to the use of broad-band photometry to compute the UV slopes instead of direct spectral measurements, and to the different wavelength range probed by multi-band photometry, usually broader than with spectroscopy. In contrast, the flattening of the ξ_ion − β_1500 relation proposed by Shivaei et al. (2018) at β_1500 > −1.6 is not consistent with our findings. Izotov et al. (2017) reported a similar general trend of decreasing ξ_ion with increasing UV slope for local, compact SFGs, but shifted to lower ξ_ion than ours. The new results by Prieto-Lyon et al. (2022, teal-dashed line in the plot) using combined HST+JWST photometry are broadly consistent with our estimations.
In summary, our results provide observational evidence that moderately UV-faint, low-mass and less dusty (un-obscured, blue UV colors) high-redshift galaxies are most likely able to drive the Cosmic Reionization at 6 ≤ z ≤ 9 (Finkelstein et al. 2019; Chisholm et al. 2022; Trebitsch et al. 2022; Lin et al. 2023). According to recent JWST observations (e.g., see Endsley et al. 2022; Cullen et al. 2023; Mascia et al. 2023a), these properties seem to be common among the galaxy population at the reionization epoch.
These UV-faint galaxies would produce higher amounts of ionizing photons (through young bursts of star formation, possibly at low metallicities) and allow these photons to escape more easily due to their favoured ISM physical conditions (dustless, gas-less holes in the ISM, see Gazagnes et al. 2020, SL22). Due to their high number density at earlier epochs (M★_UV ≈ −21 at z = 6, see Bouwens et al. 2021), they might become the major contributors to the ionizing budget at the EoR (see Eq. 1).
THE IONIZING PROPERTIES OF GALAXIES: A DETAILED COMPARISON BETWEEN LOW- AND HIGH-REDSHIFT STUDIES
We now compare the ionizing properties of SFGs at 3 ≤ z ≤ 5 with those estimated at other redshifts and examine if and how they could evolve into the EoR. To do so, we place our f_esc^abs and ξ_ion trends with observed galaxy properties alongside other spectroscopic surveys of LCEs in the literature. On the one hand, we use the LzLCS (Flury et al. 2022a), probing emission line galaxies at z ≈ 0.3, as our "EoR-analogue sample" of galaxies. The more extreme nature of the LzLCS galaxies, characterized by high ionization parameters, high SFR surface-densities and strong nebular emission lines, makes it the perfect sample to compare with, since EoR galaxies are expected (Schaerer et al. 2016; Boyett et al. 2022) and seem (e.g., Endsley et al. 2022; Schaerer et al. 2022b; Mascia et al. 2023a; Lin et al. 2023) to hold these properties. At z ≈ 3, the KLCS (Steidel et al. 2018; Pahl et al. 2021, 2023) is used for comparison, because it targets LBGs at similar UV magnitudes (and other properties) to the VANDELS galaxies. For both the LzLCS and the KLCS, direct LyC measurements are available (for the LzLCS galaxies, the f_esc^abs and ξ_ion values were obtained by running our FiCUS code, as in Flury et al. 2022a and SL22; f_esc^abs was calculated as the ratio between the intrinsic ionizing flux predicted by FiCUS and the observed flux at LyC wavelengths). Besides drawing conclusions on the possible redshift evolution of the ionizing properties of galaxies, we will discuss the main systematic uncertainties and caveats of our methodology.
In Fig. 15, the relations of f_esc^abs, ξ_ion and the f_esc^abs × ξ_ion product with the UV magnitude, UV-continuum slope and Lyα strength are compared for the three samples: VANDELS (at z ≈ 4, SMC), KLCS (at z ≈ 3) and LzLCS (at z ≈ 0.3). Particularly important is how the f_esc^abs × ξ_ion product compares between the different surveys (third column of the figure).
Interestingly, the properties of the LzLCS and the VANDELS samples show overall fairly consistent trends, despite the redshift range difference and the different methods (LzLCS represents direct f_esc^abs measurements, while the VANDELS predictions are based on absorption lines). The samples are also quite complementary. Extending over a wide range of UV magnitudes (−22 ≤ M_1500 ≤ −18) and stellar masses (10^8 M⊙ ≤ M★ ≤ 10^10 M⊙), the escape fraction of LzLCS galaxies tentatively decreases with UV brightness and stellar mass. At variance with this, VANDELS galaxies span a much narrower range in UV magnitude, and the sample is unable to reveal a clear f_esc^abs correlation with M_1500. Again, the ionizing efficiencies (ξ_ion) of the low- and high-z samples are quite comparable, except for the lack of some low-ξ_ion objects and the presence of stronger Lyα emitters in the LzLCS.
LzLCS galaxies show systematically bluer UV slopes (−2.5 ≤ β_1500 ≤ −1) than VANDELS galaxies, which translates into much higher f_esc^abs values for the bluest galaxies in the LzLCS (up to f_esc^abs ≈ 80%) than for any of the galaxies within the VANDELS survey. However, both samples show a significant decrease in their ionizing properties (f_esc^abs and ξ_ion) with the UV-continuum slope. Strikingly, our VANDELS results follow the same slope provided by the fit to the LzLCS data (grey lines in Fig. 15, see Chisholm et al. 2022), but extrapolated to (≥ 0.5) redder UV slopes and lower values of f_esc^abs and ξ_ion. This is interesting as it supports the applicability of our method for inferring the escape fraction of galaxies (see Chisholm et al. 2018, SL22), since the f_esc^abs quoted for the LzLCS is actually the observed escape fraction from measurements of the LyC flux. However, it is worth mentioning the shift in the intercept between the three surveys, with VANDELS and KLCS giving effectively higher f_esc^abs × ξ_ion values at a given β_1500 than the extrapolation by Chisholm et al. (2022). This is interesting by itself since (1) it might hint at a possible redshift evolution of the f_esc^abs × ξ_ion − β_1500 relation between z ≈ 0.3 (LzLCS) and z ≈ 3 (KLCS, VANDELS) or (2), if the high-z surveys probe intrinsically lower UV slopes, they may get a better handle on the actual f_esc^abs × ξ_ion − β_1500 trend in redder galaxies. In any case, the VANDELS f_esc^abs and f_esc^abs × ξ_ion relations with β_1500 are compatible with Chisholm et al. (2022) within 1σ, so none of these hypotheses can be confirmed nor ruled out.
Finally, due to the more extreme nature of the LzLCS objects, their f_esc^abs − W_Lyα and ξ_ion − W_Lyα relationships are shifted to higher values of the Lyα equivalent width with respect to the VANDELS spectra. The intrinsically higher photon production efficiencies (ξ_ion) in LzLCS galaxies boost the strength of the Lyα emission. Regarding the f_esc^abs × ξ_ion correlation with W_Lyα for the LzLCS, only LCEs (yellow circles, while downward triangles represent non-LCEs) provide log f_esc^abs × ξ_ion (Hz/erg) ≥ 24.2 (see Robertson et al. 2013). Even though our VANDELS survey populates a similar parameter space in terms of M_1500, β_1500 and W_Lyα to the KLCS, the VANDELS f_esc^abs values fall systematically below the KLCS points, the latter acting like an upper envelope in all the correlations. In order to understand these differences, we now carefully list the main limitations and caveats of our "picket-fence" modeling. There are two main assumptions on the applicability of Eq. 6: (1) the dust-attenuation law and (2) the conversion between the metals and H I covering fractions (see the full dedicated section in SL22).
Figure 15. The absolute photon escape fraction (f_esc^abs), the ionizing production efficiency (log ξ_ion) and the product of the two (log f_esc^abs × ξ_ion, see Eq. 1) for different low- and high-z samples. Our predictions for the VANDELS galaxies at 3 ≤ z ≤ 5 are shown through dark-blue points in the background (SMC law), and the running locus of the stacking results is indicated via the thick blue line. For comparison, the red open symbols represent the latest KLCS (Steidel et al. 2018) composite results at z ∼ 3 by Pahl et al. (2021), and the yellow circles show individual measurements at z ∼ 0.3 from the LzLCS (Flury et al. 2022a, triangles are 1σ upper limits). The dashed grey lines draw the fit relations (median ±1σ error) to the LzLCS points. Grey-shaded regions mark the canonical values of f_esc^abs ≥ 5% and log ξ_ion = 25.2−25.3 in classical Reionization models (Robertson et al. 2013).
First, the preference for a shallower (R16) or a steeper (SMC) attenuation curve can be tested by comparing our SED-derived attenuation against independent measurements of the UV attenuation of galaxies. For example, one can translate IRX-excess measurements of high-z galaxies into UV attenuation at 1600 Å (A_1600, in mag.) by using simple energy-balance arguments (see Schaerer et al. 2013). In Fig. 16 (see Cullen et al. 2018, for details), our VANDELS A_1600 versus stellar mass results - using both the R16 (in red) and SMC (in blue) laws - are compared against the estimations by McLure et al. (2018a) at z = 2−3 and Fudamoto et al. (2020) at z = 3−4 from IRX-excess measurements (solid and dashed green lines, respectively); the VANDELS stacked measurements are represented through thick red and thick blue circles. If the more recent analysis of Fudamoto et al. (2020) is adopted, the SMC law, and hence higher f_esc^abs, are preferred. However, the McLure et al. (2018a) results at similar redshift would indicate an intermediate attenuation between R16 and SMC. The origin of these differences is difficult to understand and we currently have no way to distinguish which attenuation law is more appropriate. As discussed in Sect. 4, assuming a most favourable attenuation law like the SMC, the f_esc^abs would be ×1.5 higher compared to the values derived using R16.
Even if the attenuation law were known, f_esc^abs may still differ substantially depending on the way the UV dust-attenuation (E(B−V)) is measured. The dust-attenuation parameter at UV wavelengths is mainly constrained by the slope of the UV continuum (e.g., β_1500, see Chisholm et al. 2022), by comparing the observed β_1500 against predictions of the intrinsic β_1500^int from SED fitting, either from the spectra or from multi-band photometry. In our case, FiCUS is able to estimate E(B−V) based on the spectral shape of the UV continuum. However, the slope of the current spectral observations might be subject to uncertainty due to different factors (see Garilli et al. 2021): instrumental calibrations, reduction pipeline issues, flux losses because of the atmospheric dispersion, and sky-subtraction residuals. An alternative solution is to use an independent determination of the UV slope. With this purpose, we use the β_1500 measurements from Calabrò et al. (2021), taking all the photometric bands whose bandwidths lie entirely inside the 1230−2750 Å rest-frame wavelength range, and the relation given in Chisholm et al. (2022) (their Eq. 8, assuming β_1500^int = −2.5) to convert this β_1500 into E(B−V). Plugging these new dust-attenuation values into Eq. 6 produces higher estimates of f_esc^abs. The systematic deviation between the spectroscopic (FiCUS) and photometric (Calabrò et al. 2021) UV slopes, of 0.5 on average, corresponds to a 1 mag difference in UV attenuation at LyC wavelengths (A_912), which leads to a ×2.5 higher photon escape fraction. As sketched in Fig. 17, the new f_esc^abs values (pink pentagons) are in closer agreement with the measurements by Pahl et al. (2021) and Begley et al. (2022), and fully consistent with the physical scenario in which the escape fraction of galaxies decreases towards brighter systems.
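The ×2.5 factor quoted just above is simply the change in dust transmission implied by 1 extra magnitude of attenuation at 912 Å; the one-line check below takes the 0.5 slope offset and the corresponding 1 mag step from the text, while the β-to-E(B−V) conversion itself (Eq. 8 of Chisholm et al. 2022) is not reproduced here.

```python
# Change in dust transmission at LyC wavelengths for a given change in A_912 (mag):
delta_A912 = 1.0                      # mag, as quoted in the text for a ~0.5 offset in beta
boost = 10 ** (0.4 * delta_A912)      # multiplicative change in the escaping LyC fraction
print(round(boost, 2))                # ~2.51, i.e. the "x2.5" factor in the text
```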
Second, in Fig. 18 we study the influence that the adopted conversion between the metals and H covering fraction has on the predicted escape fractions. For comparison, we compute the
predicted f_esc^abs values for the KLCS sample adopting the "Screen Model" (see Table 9 in Steidel et al. 2018), which is identical to our default "picket-fence" geometry assuming a uniform slab of dust. We are not able to reconcile our escape fractions with the KLCS points, with our predictions lying a factor of ×2 below them. However, if one assumes a 1:1 metals-to-H I covering fraction correspondence so that C_f(LIS) = C_f(HI), instead of the usual linear regression by which C_f(HI) > C_f(LIS) (see Gazagnes et al. 2018 and SL22), the resulting f_esc^abs would agree with the estimate of the average f_esc^abs by Begley et al. (2022, grey shaded area), which is then also much more compatible with the KLCS.
Figure 17. A representation of the potential systematic bias in f_esc^abs due to the different values of the UV dust-attenuation that can be derived from dissimilar measurements of the β-slope. The blue diamonds show our default escape fractions from FiCUS, i.e., E(B−V) derived from spectroscopic measurements of the UV continuum slope (β_spec). Conversely, the pink pentagons display the f_esc^abs values when E(B−V) is inferred from the β-slope measurements by Calabrò et al. (2021), using photometry only (β_phot). The average f_esc^abs = 0.07 ± 0.02 reported by Begley et al. (2022) at z ≈ 3.5 is indicated through the grey shaded area, and the results by Pahl et al. (2021) are plotted through black squares. The effect of spectral resolution, due to changes in the depths of the absorption lines, is represented with the dashed line.
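To make the covering-fraction dependence discussed above concrete, the sketch below evaluates a commonly used picket-fence expression for the escape fraction, f_esc ≈ 10^(−0.4 A_912) × (1 − C_f(HI)). This is our reading of the uniform-dust-screen geometry and may differ in detail from the paper's Eq. 6; the numerical values and the commented-out regression are placeholders, and the actual C_f(LIS)-to-C_f(HI) conversion coefficients of Gazagnes et al. (2018) are not reproduced here.

```python
def fesc_picket_fence(a912_mag, cf_hi):
    """Uniform-dust-screen picket-fence escape fraction:
    dust-free, H I-free sight-lines (1 - Cf) attenuated by dust at 912 A.
    Schematic form only; consult the paper's Eq. 6 for the exact expression."""
    return 10 ** (-0.4 * a912_mag) * (1.0 - cf_hi)

# Two choices for converting the metal-line covering fraction into Cf(HI):
cf_lis = 0.6                          # illustrative value, not a measurement
cf_hi_one_to_one = cf_lis             # Cf(HI) = Cf(LIS), as tested against KLCS in Fig. 18
# cf_hi_regression = a * cf_lis + b   # default linear conversion (Gazagnes et al. 2018);
#                                     # coefficients intentionally not reproduced here
print(fesc_picket_fence(a912_mag=1.5, cf_hi=cf_hi_one_to_one))   # ~0.10
```

Because the regression gives C_f(HI) > C_f(LIS), it always returns a lower f_esc^abs than the 1:1 case, which is the direction of the KLCS discrepancy discussed above.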
Finally, the resolution correction function applied to the residual flux of the LIS lines (App. A) might also affect our escape fraction estimates. In Fig. 17 and Fig. 18, we explicitly show the effect of our custom correction on the inferred f_esc^abs (dashed blue line). In the more conservative case in which the spectral resolution is entirely neglected in Eq. 6, f_esc^abs would increase by a factor of ×1−1.5 at most: this is by far the effect with the least influence on the average f_esc^abs among the ones considered in this manuscript.
In conclusion, different effects can potentially augment our average f_esc^abs. Among them, the determination of the UV dust-attenuation has the strongest impact on the escape fraction estimates. In App. C (Fig. C1), we plot the f_esc^abs values resulting from plugging into Eq. 6 the dust-attenuation values derived from the β_1500 measurements of Calabrò et al. (2021). As we can see, all the VANDELS, KLCS and LzLCS f_esc^abs correlations with physical quantities now converge to the same global trends, and they share a common behaviour by which f_esc^abs increases towards fainter UV magnitudes, bluer UV slopes and stronger Lyα emission. In view of these results, the use of β_1500 from photometry as a proxy for the UV dust attenuation, in combination with the depth of the absorption lines as a tracer of H I-empty channels in the ISM of galaxies (Eq. 6), seems to be a robust way of predicting the escape fraction of galaxies at any redshift. Independently, there is also the possibility of (1) adopting a very steep dust-attenuation law, whose extrapolation towards LyC wavelengths would be even higher than when using an SMC-like law, and (2) using a favourable metals-to-H I covering fraction conversion, in which the metals would spatially trace the neutral gas more closely than with the currently assumed relation.
The remaining question is whether a universal relation between the ionizing properties of galaxies (f_esc^abs and ξ_ion) and their physical properties (M_1500, β_1500, W_Lyα, etc.) exists, in which the different surveys would populate different regions of the hypothetical universal trend, or whether, on the contrary, the observed differences between surveys (VANDELS, KLCS, LzLCS) indicate an evolution of these properties with redshift. Still, as far as our sample is concerned, the dependence of f_esc^abs × ξ_ion on galaxy properties is mainly due to escape fraction variations, so that the redshift evolution of this product will ultimately follow the changes in the ISM geometry, gas and dust composition across cosmic time.
In any case, and regardless of the caveats that we have just described, the LzLCS, KLCS and VANDELS results suggest a scenario in which moderately UV-faint, low-mass and dustless galaxies most likely dominate the SFG ionizing photon budget at any epoch. The stellar populations of EoR galaxies would be characterized by strong radiation fields, with high ionizing production efficiencies (boosting the nebular emission), and their ISM conditions would be such that they facilitate the escape of a copious amount of LyC photons into the IGM. This can be speculatively attributed to the stellar mass build-up and changes in the gas and dust properties of galaxies between the Cosmic Noon (z ≈ 3) and the EoR (6 ≤ z ≤ 9).
Figure 18. A representation of the potential systematic bias in f_esc^abs (blue diamonds) due to the conversion between the metals and neutral gas covering fractions (the blue dotted line corresponds to the C_f(HI) = C_f(LIS) case; the dashed line, as in Fig. 17, shows the case with no resolution correction). For comparison, we also show the predicted f_esc^abs values for the KLCS stacks (red symbols), obtained by applying our methodology to the published covering fractions and UV attenuations from Steidel et al. (2018). The layout is the same as in Fig. 17.
SUMMARY AND CONCLUSIONS
In this work, we fully exploit and highlight the ability of absorption-line and rest-UV spectroscopy alone to decipher the ionizing properties of high-z SFGs. Our novel approach makes use of deep, rest-frame UV spectra from the VANDELS survey at 3 ≤ z ≤ 5 to compute the absolute ionizing photon escape fraction (f_esc^abs) and the ionizing photon production efficiency (ξ_ion) of galaxies. f_esc^abs has been derived by combining absorption line measurements with estimates of the UV attenuation (see Saldana-Lopez et al. 2022), while the ξ_ion parameter was computed by fitting the FUV stellar continuum of the VANDELS galaxies (following Chisholm et al. 2019).
In particular, we have searched for correlations between abs esc and ion along with different galaxy properties (UV magnitude, UV slope, Ly strength, etc.), and we thoroughly compared with independent literature estimates. We found that:
• The predicted abs esc monotonically decreases with the stellar mass, the UV-continuum slope and the Ly equivalent width of the VANDELS galaxies (Fig. 5, 6, 7). We find a non-significant correlation between abs esc and the UV magnitude, although the faintest galaxies tentatively have higher escape fractions.
• The estimated ξ_ion statistically increases towards blue UV-continuum slopes and strongly Lyα-emitting galaxies (Fig. 12, 14), and it smoothly rises beyond the canonical value towards the UV-faintest galaxies in the sample.
• Potential Lyman Continuum Emitters (LCEs) and selected Lyman Alpha Emitters (LAEs) show systematically higher ion (log ion (Hz/erg) ≈ 25.38, 25.41, respectively) than non-LCEs and non-LAEs at similar UV magnitudes (log ion (Hz/erg) ≈ 25.18, 25.14), and our ion values are in agreement with other LAEs surveys in the literature (Fig. 13).
Additionally, we constructed the average composite FUV spectrum of LCEs at 3 ≤ z ≤ 5 (Fig. 8, 10, 11) by stacking potential individual emitters in the VANDELS survey (selected based on our predicted f_esc^abs ≥ 5%), and explained their non-ionizing spectral properties in the framework of the ISM conditions which enable ionizing photons to escape. Our results show that the FUV spectra of typical high-z LCEs would be characterized by:
• Blue UV slopes (β_1500^spec ≈ −2) compared to non-LCEs, which directly translates into low UV attenuation (E(B−V) ≈ 0.1 mag.) and therefore a low column density of dust along the line-of-sight.
• Enhanced Lyα emission (W_Lyα ≈ −25 Å) and strong UV nebular lines in contrast to the non-LCE population, particularly high C III]1908/C IV1550 ratios (≈ 0.75). Together with the intrinsically higher ξ_ion parameter for LCEs, this indicates very young underlying stellar populations (≈ 10 Myr) at relatively low metallicities (≈ 0.2 Z⊙).
• Weak (∼ 1 Å) ISM LIS absorption lines (e.g., Si II 1260, C II 1334), while the HIS absorption lines (e.g., Si IV 1400) are of similar strength to those of the bulk of non-LCEs. This, together with the low UV attenuation for LCEs, suggests the presence of dustless, low gas column-density channels in the ISM which favour the escape of ionizing photons.
Finally, we have compared our findings with other LyC surveys in the literature (Fig. 15), concretely the Keck Lyman Continuum Survey (or KLCS, Steidel et al. 2018; Pahl et al. 2021, 2023), targeting z ≈ 3 LBGs at similar UV magnitudes to VANDELS, and the Low-Redshift Lyman Continuum Survey (or LzLCS, Flury et al. 2022a,b), which targeted emission line galaxies at z ≈ 0.3. VANDELS and LzLCS show overall fairly consistent trends, with LzLCS shifted to fainter UV magnitudes, bluer UV slopes and stronger Lyα emission, and therefore their f_esc^abs and ξ_ion are enhanced with respect to the VANDELS galaxies. The escape fractions of KLCS galaxies fall above our estimates at all UV magnitudes. We mainly ascribe this discrepancy to the way the amount of UV attenuation is measured, and we propose to use β_1500 measurements from photometry as an independent proxy of the UV dust attenuation of galaxies (Fig. 17). The dust-attenuation law (Fig. 16) and the metals-to-neutral-gas covering fraction conversion (Fig. 18) constitute additional sources of uncertainty for the escape fraction.
Our joint analysis of the VANDELS, LzLCS and KLCS results sheds light on the fact that UV-faint, low-mass and dustless galaxies likely dominated the ionizing budget during the EoR. Their stellar populations would most likely be characterized by strong radiation fields, with high ionizing production efficiencies, and their ISM conditions would be such that they favour the escape of LyC photons. Recent JWST observations (e.g., Endsley et al. 2022; Schaerer et al. 2022b; Cullen et al. 2023; Lin et al. 2023; Mascia et al. 2023a) have revealed that blue UV slopes and strong emission lines also characterize the less massive and moderately faint galaxies at the EoR, supporting our results.
An increasing number of high-quality FUV spectra at z > 6 will be available in the mid-future thanks to upcoming ground-based (ELT, GMT, TMT, ...) and currently ongoing space-based facilities (JWST). This will give us the first insights into the properties of galaxies during the EoR. Nevertheless, according to our work, additional efforts are still needed in order to correctly decipher the physics underlying such vast data-sets. The two main questions that we think are urgent and clearly needed for future studies are: (1) if unique, what is the dust-attenuation law governing low-mass, low-metallicity galaxies? And (2) do metals trace the same spatial location as the neutral gas within the ISM of these galaxies?
Figure C1. The ionizing photon escape fraction (f_esc^abs) versus the UV magnitude (M_1500), the UV slope at 1500 Å (β_1500) and the Lyα equivalent width (W_Lyα). Blue dots represent the predicted f_esc^abs values for VANDELS galaxies (Eq. 6, using SMC) when the dust-attenuation (E(B−V)) is inferred from the β_1500 measurements by Calabrò et al. (2021). The layout is the same as in Fig. 15.
have provided the first statistical correlations between individual escape fraction measurements and diverse galaxy properties. Other remarkable individual detections are Vanzella et al. (2012, 2015); de Barros et al. (2016); Shapley et al.(2016);
Figure 2. Star-forming Main Sequence (MS) diagram, i.e., log SFR − log M★, for the VANDELS-DR4 (gray open circles).
Figure 3. Top: FiCUS SED fitting results for the CDFS017345 VANDELS galaxy. The observed and error spectra are displayed in black and light-green. The best-fit stellar continuum is overplotted in red, while the spectral regions masked during the fit are shown in gray. The spectrum, shown in flux-density units, has been normalized over 1350−1370 Å. The most prominent stellar features, nebular and ISM absorption lines are indicated with blue, gold and black vertical lines at the top part of the figure. Bottom left: light-fractions as a function of age and metallicity for the best-fit composite stellar population model. Bottom right:
Figure 4. Histograms showing the distribution of diverse secondary products resulting from our FiCUS SED fitting to the VANDELS spectra. From top-left to bottom-right: light-weighted stellar age, log ★, E(B−V), 1500
Following Cullen et al. (2019); Calabrò et al. (2021); Llerena et al. (
Figure 5
5Figure 5. Scatter plots searching for correlations between the predicted ionizing photon escape fraction ( abs esc , first column), the ionizing production efficiency (log ion , second column) and the product of the two (log abs esc × ion , third column, see Eq. 1), versus different galaxy properties: observed and intrinsic UV absolute magnitudes ( 1500 , int 1500 ), UV-continuum slope at 1500Å from the best-fit SED ( 1500 spec ), and Ly equivalent width (−1 × Ly ). The Kendall ( ) correlation coefficients for individual R16 measurements (coloured dots in the background) are shown at the top but only for significant correlations (thick-framed panels). Results from our stacking analysis are displayed with filled (R16) and open (SMC) thick diamonds. Typical error bars for individual sources are plotted at the bottom part of each panel, and the arrows along the int 1500 panels measure the shift in int 1500 due to the use of the SMC dust-attenuation law. Grey-shaded regions mark assumed canonical values of abs esc ≥ 5% and log ion = 25.2 − 25.3 in classical Reionization models (Robertson et al. 2013).
Figure 6 .
6Combined low-ionization-state (LIS) absorption profile for the stacks in bins of Ly equivalent width ( Ly , see values in App. B), normalized by the local continuum and constructed by averaging the profiles of individual LIS absorption lines (Si 1260, O +Si 1302, C 1334, Si 1527). The composites are color coded by the dust-attenuation parameter ( B−V ) derived from the F CUS SED fits, and the predicted escape fraction for each composite ( abs esc ) is indicated in the legend. The insets show the resulting continuum-normalized Ly profile for the stacked spectra (left), and the evolution of the Ly equivalent width as function of the LIS equivalent width in absorption (right).
), even when spanning a wide range of stellar masses, a gradual increase in abs esc towards lower masses have been recently reported at high-in Fletcher et al. (2019); Saxena et al. (2022a), for individual detections.
Figure 8 .
8Composite spectra (main plot) for LCEs (blue) and non-LCEs samples (orange) normalized at 1360Å (grey band). The position of the main stellar-wind lines (N 1240, O 1371, Si 1400 doublet, C 1550), nebular lines (Ly 1216, He 1640, O ]1666, C ]1908) and ISM lines (Si 1260, O +Si 1302, C 1334, Si 1527, Fe 1608, Al 1670 and Al 1858 doublet) are marked in dark-blue, gold and black labels, respectively. The Ly (in Å) and 1500 spec values are also indicated in the inset. The top panel show the number of objects included in each composite as a function of wavelength, and the bottom panel zooms in some of the lines, highlighting fundamental differences in their non-ionizing spectra (equivalent width for each line are listed, see text).
Figure 9 .Figure 10 .
910Several stellar-wind (N 1240, Si 1400, C 1550) and other ISM UV absorption lines (Si 1260, O +Si 1302, C 1334, Si 1527, The relation between the Ly equivalent width (−1 × Ly , in Å) and the measured mean residual flux of the LIS lines (R(LIS)). The points are color-coded by the dust-attenuation parameter ( B−V , in mag.). Blue and red circles indicate the position of the LCEs and non-LCEs composite spectra, once corrected by the effect of resolution on the depth of the lines (see App. A). For comparison, the empty hexagons represent the measurements of LCEs and non-LCEs stacks from the LzLCS (Flury et al. 2022a, = 0.2 − 0.4), and the black crosses show the individual measurements by Jones et al. (2013, = 2 − 4). Arrows point to already confirmed LCEs in our sample: Ion1 byVanzella et al. (2012) and CDFS012448 bySaxena et al. (2022a). Main F CUS SED fitting results for the LCEs (top) and non-LCEs (bottom) composite spectra: B−V , stellar age and metallicity (see legend). The best-fit SED models for LCEs (blue) and non-LCEs (orange) are plotted on top of each composite (in grey), which is normalized at 1360Å. The main stellar, nebular and ISM lines are labeled through dark-blue, gold and black labels, as inFig. 8.
Figure 12. log ξ_ion as a function of the observed absolute UV magnitude. The red thick symbols show the results of our 3 ≤ z ≤ 5 VANDELS composite spectra in bins of UV magnitude when using the R16 attenuation law (SMC dust-law results are shown in the inset). Stacked results from other literature samples at various redshifts are represented via open white symbols (see legend), in particular for low-z LCEs (Flury et al. 2022a), high-z LAEs (Maseda et al. 2020; Nakajima et al. 2018a; Harikane et al. 2018; Matthee et al. 2017b), and normal high-z SFGs (Nakajima et al. 2018b; Lam et al. 2019). Our main comparison sources (studies targeting LBGs at UV magnitudes similar to the VANDELS galaxies) are the MOSDEF galaxies at z = 2−3 (Shivaei et al. 2018, orange squares), the extensive HST+IRAC campaigns at z = 4−5 by Bouwens et al. (2016, blue diamonds), and the HAEs sample at z = 2−3 by Matthee et al. (2017a, grey crosses). The individual LAE and C IV emitter measurements from Nakajima et al. (2016) at z = 3−4 and Stark et al. (2015) at z = 7 are plotted as coloured circles. The teal dashed line follows the extrapolation to brighter magnitudes of the recent HST+JWST results at z = 3−7 (Prieto-Lyon et al. 2022). The grey-shaded area marks the canonical log ξ_ion = 25.2−25.3 value given by a simple stellar population at constant SFR over 100 Myr (Robertson et al. 2013). For reference, M★_UV ≈ −21 at z = 6 (Bouwens et al. 2021). Overall, log ξ_ion monotonically increases with M_1500.
Figure 13. log ξ_ion versus M_1500, color-coded for different galaxy samples. Top: potential LCEs versus non-LCEs (in blue and red, respectively). Bottom: selected LAEs and non-LAEs (in green and light-green). The corresponding measurements for the LCEs/non-LCEs and LAEs/non-LAEs composite spectra are represented through thick filled symbols, and the thick black line represents our stacked measurements in bins of UV magnitude. Stacked results from other literature samples are represented via open white symbols, in particular for low-z LCEs (Flury et al. 2022a) and high-z LAEs (Matthee et al. 2017b; Harikane et al. 2018; Nakajima et al. 2018a, see Fig. 12 for symbols). The grey-shaded area marks the canonical log ξ_ion = 25.2−25.3 value given by a simple stellar population at constant SFR over 100 Myr (Robertson et al. 2013), and the number of LCEs and LAEs above and below this limit is indicated on the right side of the plot. In conclusion, LCEs and LAEs have systematically higher ξ_ion than the bulk of the LBG population (see also Reddy et al. 2022).
Figure 14. log ξ_ion as a function of the UV-continuum slope at 1500 Å (β_1500). The red (blue) dots show the results for our 3 ≤ z ≤ 5 VANDELS individual spectra assuming the R16 (SMC) dust-attenuation law. Thick symbols represent the results from our composites in bins of β_1500. Orange squares show the MOSDEF results at z = 2−3 by Shivaei et al. (2018), while blue diamonds and grey crosses correspond to the measurements in Bouwens et al. (2016) at z = 4−5 and the HAEs of Matthee et al. (2017a) at z = 2−3, respectively. Open symbols were taken from Lam et al. (2019), and the teal dashed line corresponds to the results by Prieto-Lyon et al. (2022) using HST+JWST data at z = 3−7. The grey-shaded area marks the canonical log ξ_ion = 25.2−25.3 value given by a simple stellar population at constant SFR over 100 Myr (Robertson et al. 2013). log ξ_ion decreases with the UV slope almost unanimously for all samples.
Figure 16. The UV attenuation at 1600 Å (A_1600, in mag.) as a function of the stellar mass (M★, in M⊙) for individual VANDELS galaxies (dots in the background) and our stacked measurements (big circles). The solid and dashed green thick lines represent independent estimates by McLure et al. (2018a) and Fudamoto et al. (2020) from IRX-excess measurements at z = 2−3 and z = 3−4, respectively. Red symbols assume the R16 dust-attenuation law, while the blue symbols assume an SMC attenuation curve.
Figure A1 .Figure A2 .
A1A2The impact of resolution and stacking on the depth of the absorption lines. Left: the colored lines represent a set of ×100 mock profiles of the SiII1260 line with an input covering fraction of = 0.85 (equivalent to SiII = 1 − = 0.15, delimited by the horizontal black line), and where the Si gas column density, SiII (cm −2 ), the Doppler broadening parameter, (kms −1 ), and the gas velocity, (kms −1 ), were varied following Gaussian distributions (see text). Right: distribution of individual SiII measurements for all the simulations once the lines have been degraded to a spectral resolution of (VIMOS) = 600 (red line), including a constant Gaussian noise of median S/N = 5. The blue line indicates the resulting residual flux after co-adding all the spectra (stacking). Calibration between the theoretical residual flux of the SiII1260 line and the actual measured flux when accounting for (1, in red) the effect of a finite instrumental resolution and S/N ( (VIMOS) ≈ 600, S/N = 5), and (2, in blue) the joint effect of resolution, S/N plus stacking a set of = 100 simulated lines (seeFig. A1). Dashed-black line shows the 1:1 relation.
Figure 1. H-band AB magnitude versus spectroscopic redshift (z_spec) for the VANDELS-DR4 (gray open circles, 2087 galaxies). Our working sample of 534 galaxies is highlighted with red filled symbols, and the vertical dashed line indicates the minimum redshift (z ≥ 3.34) at which LyC imaging is available through the VIMOS/U-band. The top-right and bottom histograms show the H_AB and z_spec distributions of both the original VANDELS-DR4 and working samples (shaded gray and red, respectively). Errorbars comprise the 16th, 50th and 84th percentiles of the distributions.
The FiCUS code is publicly available and can be cloned from the author's GitHub repository: https://github.com/asalda/FiCUS.git.
https://trac.nublado.org/
NIST stands for National Institute of Standards and Technology: https://physics.nist.gov/PhysRefData/ASD/lines_form.html.
ACKNOWLEDGEMENTSThe authors thank the anonymous referee for providing useful comments, which have certainly improved the quality of this paper. The authors also thank Hakim Atek, Jorytt Matthee and Irene Shivaei for kindly providing tabulated data fromAtek et al. (2022),Matthee et al. (2017a)andShivaei et al. (2018). ASL and DS acknowledge support from Swiss National Science Foundation. The Cosmic Dawn Center is funded by the Danish National Research Foundation under grant No.140. RA acknowledges support from ANID Fondecyt Regular 1202007. JPUF is supported by the DFF grant 1026-00066.DATA AVAILABILITYThe VANDELS Data Release 4 (DR4) is now publicly available and can be accessed using the VANDELS database at http:// vandels.inaf.it/dr4.html, or through the ESO archives. Any secondary product and/or data underlying this article will be shared on reasonable request to the corresponding author.APPENDIX A: THE IMPACT OF RESOLUTION AND STACKING ON THE DEPTH OF THE ABSORPTION LINESHere we provide two calibrations to correct the observed line-depths measurements when (1) inherently degraded by the instrumental resolution and (2) artificially smoothed during usual stacking procedures. As we discussed in the main text, the spectrograph resolution makes the absorption lines wider but less deep, conserving the flux. Together with the noise, sensitivity and other possible instrumental aberrations, this can lead to a systematically overestimated measurement of the residual flux.To account for these effects, we simulate Si 1260 line intensities ( ) assuming a foreground dust-screen geometry of the picketfence model, describing the line through a single gas component as(Draine 2011where ( ) represents the covering fraction of the line in question and the = product is usually known as the optical depth of the line. The line cross-section, , which shapes the absorption profile, is given by the Voigt function so that:where oscillator strengths and Einstein coefficients A are taken from the NIST database 10 . For the Voigt function, Voigt( ,A ,b), we use the numerical approximation described inSmith et al. (2015), and assume the following Gaussian distributions (N ( , 2 )) of the input parameters: − ∼ N (16, 1) cm −2 − ∼ N (125, 25) km s −1 − ∼ N (0, 150) km s −1 , = 0 + / where represents the median and 2 the standard deviation of the normal distribution. The gas column density distribution ( , cm −2 ) is chosen so that the Si 1260 equivalent width of the line falls within the optically thick limits of the curve-of-growth or, in other words, the line is always saturated. The line velocity shift ( , km s −1 ) randomly changes to account for inflows or outflowing gas so that the line center is blue-or red-shifted accordingly, i.e., = 0 + / , being 0 = 1260.42Å for the Si 1260 line. Finally, the gas thermal (or Doppler) broadening ( , km s −1 ) varies around typical values for normal star-forming galaxies(Steidel et al. 2018). The input covering fraction is fixed for every set of simulations to ( ) = 0.3, 0.5, 0.65, 0.75, 0.85 and 1, respectively.After performing a set of ×100 ideal Voigt profiles for each covering fraction, we introduce the effect of resolution by convolving the simulated spectra with a Gaussian kernel whose FWHM matches the instrumental resolution (a.k.a., (VIMOS) ≈ 600). A constant S/N value (S/N = 5, the median of the VANDELS sample) is then added to every single spectra. 
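The Monte Carlo just described can be sketched in a few lines of Python. The version below is simplified relative to App. A: it uses a pure-Doppler (Gaussian) optical-depth profile with a large, always-saturated central optical depth instead of full Voigt profiles normalized by column density and oscillator strength, but it keeps the picket-fence form, the velocity and b-parameter scatter, the degradation to R ≈ 600 and the constant S/N = 5 noise. It only qualitatively reproduces the overestimate of the residual flux discussed in the appendix; numbers are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)
c_kms = 2.998e5
lam0 = 1260.42                       # Si II 1260 rest wavelength [A]
R_vimos = 600                        # VANDELS/VIMOS spectral resolution
wave = np.arange(1240.0, 1280.0, 0.05)

def mock_line(cf, tau0, b_kms, v_kms):
    """Picket-fence profile I/I_cont = (1 - Cf) + Cf * exp(-tau(lambda)),
    with a Gaussian (Doppler-only) optical-depth profile instead of a Voigt one."""
    lam_c = lam0 * (1.0 + v_kms / c_kms)
    dlam = lam0 * b_kms / c_kms                    # Doppler width [A]
    tau = tau0 * np.exp(-((wave - lam_c) / dlam) ** 2)
    return (1.0 - cf) + cf * np.exp(-tau)

def observe(profile, snr=5.0):
    """Degrade to the instrumental resolution and add constant Gaussian noise."""
    fwhm = lam0 / R_vimos                          # instrumental FWHM [A]
    sigma_pix = fwhm / 2.355 / (wave[1] - wave[0])
    return gaussian_filter1d(profile, sigma_pix) + rng.normal(0.0, 1.0 / snr, wave.size)

def simulate_once(cf):
    v = rng.normal(0.0, 150.0)
    prof = mock_line(cf, tau0=10 ** rng.normal(2.0, 0.5),   # always saturated
                     b_kms=rng.normal(125.0, 25.0), v_kms=v)
    obs = observe(prof)
    lam_c = lam0 * (1.0 + v / c_kms)
    return obs[np.argmin(np.abs(wave - lam_c))]    # residual flux at the (known) line centre

cf_in = 0.85
measured = [simulate_once(cf_in) for _ in range(100)]
print("true 1-Cf =", 1 - cf_in, "| median measured =", round(float(np.median(measured)), 2))
```

The median measured residual flux comes out well above the true 1 − C_f = 0.15, illustrating why calibrations such as Eqs. A3 and A4 are needed before interpreting observed line depths as covering fractions.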
At fixed ( ), the median measured (observed) residual flux of each mock distribution is compared to the theoretical line residual flux of = 1 − ( ) according to the picket-fence configuration, producing the desired calibration.Fig. A1presents the resulting line profiles for a set of ×100 simulations of the Si 1260 line, all of them with an input covering fraction of ( ) = 0.85 ( = 0.15). The median of the distribution of individually measured values is 0.3 (red line), showing that the observed line depth can statistically differ as much as 40% with respect to the real value.In the last step, we proceed to stack all the spectra contained on each simulation pack at fixed ( ), following the same methods described in the text. The co-added spectra for the ( ) = 0.85 simulation is also shown inFig. A1(blue line), with a 65% relative difference in residual flux with respect to the real input value.To sum up, the resulting calibrations -fitting a linear polynomial to the measured versus theoretical values-are:andSiII[stacking] = 0.73 × (1 − (SiII)) + 0.29,when accounting for instrumental resolution plus S/N (Eq. A3), and spectral stacking (Eq. A4), respectively. Both relations are plotted inFig. A2through the dashed red and dashed blue lines. These calibrations were applied to individual residual flux measurements as well as when measuring the residual flux over composite spectra.APPENDIX B: STACK MEASUREMENTS AND FICUS SED RESULTSAs stated in Sect. 3.3, the VANDELS sample was divided according to the 25 ℎ , 50 ℎ and 75 ℎ ditribution percentiles (quartiles) of the different targeted quantities, resulting in four sub-samples for each quantity which were quoted as Q1, Q2, Q3 and Q4. Then, stacked spectra in bins of UV magnitude ( 1500 ), UV intrinsic luminosity ( int 1500 ), UV continuum slope ( 1500 spec ) and Ly equivalent width ( Ly ) were built following the methodology described in the same section.Table B1includes the ionizing escape fractions ( abs esc ) and production efficiencies ( ion ) derived for every composite spectra. It also contains the same results when using either the R16 or SMC dust-attenuation laws, for comparison.Table B2summarizes the outputs of our F CUS SED fits and the main UV spectral measurements for the LCEs, non-LCEs, LAEs and non-LAEs stacks.APPENDIX C: ALTERNATIVE ESCAPE FRACTION ESTIMATESHere we offer complementary predictions for the ionizing photon escape fraction. abs esc is derived by combining absorption line measurements with estimates of the UV attenuation, in a similar manner as in the main text (Eq. 6). However, the dust-attenuation parameter ( B−V ) has now been estimated from the UV slope measurements ofCalabrò et al. (2021), by fitting a power law to the VANDELS photometry, requiring the whole bandwidths to reside between 1230 and 2750Å rest-frame. Eq. 8 inChisholm et al. (2022)was then used to convert 1500 to B−V assuming an SMC attenuation law. VAN-DELS, KLCS and LzLCS abs esc correlations with physical quantities converge to the same global trends (seeFig. C1), with abs esc increasing towards fainter UV magnitudes, bluer UV slopes and stronger Ly emission. We refer to Sect. 6 for more details. This paper has been typeset from a T E X/L A T E X file prepared by the author. Notes. Column 1: galaxy properties. Column 2: physical units. Columns 3 and 4: list of measurements for the LCEs and non-LCEs stacks. a All tabulated results use the R16 dust-attenuation law. b Measured residual fluxes not corrected by the spectral resolution (see App. A).
. A Alavi, 10.3847/1538-4357/abbd43ApJ. 90459Alavi A., et al., 2020, ApJ, 904, 59
. K Z Arellano-Córdova, 10.3847/2041-8213/ac9ab2ApJ. 23Arellano-Córdova K. Z., et al., 2022, ApJ, 940, L23
. M Asplund, N Grevesse, A J Sauval, P Scott, 10.1146/annurev.astro.46.060407.145222ARA&A. 47481Asplund M., Grevesse N., Sauval A. J., Scott P., 2009, ARA&A, 47, 481
. H Atek, D Kunth, D Schaerer, J M Mas-Hesse, M Hayes, G Östlin, J.-P Kneib, 10.1051/0004-6361/201321519A&A. 56189Atek H., Kunth D., Schaerer D., Mas-Hesse J. M., Hayes M., Östlin G., Kneib J.-P., 2014, A&A, 561, A89
. H Atek, 10.1088/0004-637X/800/1/18ApJ. 80018Atek H., et al., 2015, ApJ, 800, 18
. H Atek, L J Furtak, P Oesch, P Van Dokkum, N Reddy, T Contini, G Illingworth, S Wilkins, 10.1093/mnras/stac360MNRAS. 5114464Atek H., Furtak L. J., Oesch P., van Dokkum P., Reddy N., Contini T., Illingworth G., Wilkins S., 2022, MNRAS, 511, 4464
. H Atek, 10.1093/mnras/stac3144MNRAS. 5191201Atek H., et al., 2023, MNRAS, 519, 1201
. G D Becker, J S Bolton, 10.1093/mnras/stt1610MNRAS. 4361023Becker G. D., Bolton J. S., 2013, MNRAS, 436, 1023
. G D Becker, A D'aloisio, H M Christenson, Y Zhu, G Worseck, J S Bolton, 10.1093/mnras/stab2696MNRAS. 5081853Becker G. D., D'Aloisio A., Christenson H. M., Zhu Y., Worseck G., Bolton J. S., 2021, MNRAS, 508, 1853
. R Begley, 10.1093/mnras/stac1067MNRAS. 5133510Begley R., et al., 2022, MNRAS, 513, 3510
. R Bezanson, 10.48550/arXiv.2212.04026arXiv:2212.04026arXiv e-printsBezanson R., et al., 2022, arXiv e-prints, p. arXiv:2212.04026
. F Bian, X Fan, 10.1093/mnrasl/slaa007MNRAS. 49365Bian F., Fan X., 2020, MNRAS, 493, L65
. P Bouchet, J Lequeux, E Maurice, L Prevot, M L Prevot-Burnichon, A&A. 149330Bouchet P., Lequeux J., Maurice E., Prevot L., Prevot-Burnichon M. L., 1985, A&A, 149, 330
. R J Bouwens, 10.1088/0004-637X/754/2/83ApJ. 75483Bouwens R. J., et al., 2012, ApJ, 754, 83
. R J Bouwens, 10.1088/0004-637X/793/2/115ApJ. 793115Bouwens R. J., et al., 2014, ApJ, 793, 115
. R J Bouwens, 10.1088/0004-637X/803/1/34ApJ. 80334Bouwens R. J., et al., 2015, ApJ, 803, 34
. R J Bouwens, R Smit, I Labbé, M Franx, J Caruana, P Oesch, M Stefanon, N Rasappu, 10.3847/0004-637X/831/2/176ApJ. 831176Bouwens R. J., Smit R., Labbé I., Franx M., Caruana J., Oesch P., Stefanon M., Rasappu N., 2016, ApJ, 831, 176
. R J Bouwens, 10.3847/1538-3881/abf83eAJ. 16247Bouwens R. J., et al., 2021, AJ, 162, 47
. K N K Boyett, D P Stark, A J Bunker, M Tang, M V Maseda, 10.1093/mnras/stac1109MNRAS. 5134451Boyett K. N. K., Stark D. P., Bunker A. J., Tang M., Maseda M. V., 2022, MNRAS, 513, 4451
. A Bressan, P Marigo, L Girardi, B Salasnich, C Dal Cero, S Rubele, A Nanni, 10.1111/j.1365-2966.2012.21948.xMNRAS. 427127Bressan A., Marigo P., Girardi L., Salasnich B., Dal Cero C., Rubele S., Nanni A., 2012, MNRAS, 427, 127
. C R Bridge, 10.1088/0004-637X/720/1/465ApJ. 720465Bridge C. R., et al., 2010, ApJ, 720, 465
. J Brinchmann, 10.48550/arXiv.2208.07467arXiv:2208.07467Brinchmann J., 2022, arXiv e-prints, p. arXiv:2208.07467
. G Bruzual, S Charlot, 10.1046/j.1365-8711.2003.06897.xMNRAS. 3441000Bruzual G., Charlot S., 2003, MNRAS, 344, 1000
. C Cain, A D'aloisio, N Gangolli, G D Becker, 10.3847/2041-8213/ac1aceApJ. 91737Cain C., D'Aloisio A., Gangolli N., Becker G. D., 2021, ApJ, 917, L37
. A Calabrò, 10.1051/0004-6361/202039244A&A. 64639Calabrò A., et al., 2021, A&A, 646, A39
. A Calabrò, 10.1051/0004-6361/202244364A&A. 667117Calabrò A., et al., 2022, A&A, 667, A117
. D Calzetti, A L Kinney, T Storchi-Bergmann, 10.1086/174346ApJ. 429582Calzetti D., Kinney A. L., Storchi-Bergmann T., 1994, ApJ, 429, 582
. A J Cameron, 10.48550/arXiv.2302.04298arXiv:2302.04298arXiv e-printsCameron A. J., et al., 2023, arXiv e-prints, p. arXiv:2302.04298
. A C Carnall, R J Mclure, J S Dunlop, R Davé, 10.1093/mnras/sty2169MNRAS. 4804379Carnall A. C., McLure R. J., Dunlop J. S., Davé R., 2018, MNRAS, 480, 4379
. A C Carnall, 10.1093/mnrasl/slac136MNRAS. 51845Carnall A. C., et al., 2023, MNRAS, 518, L45
. M Castellano, 10.48550/arXiv.2212.06666arXiv:2212.06666arXiv e-printsCastellano M., et al., 2022a, arXiv e-prints, p. arXiv:2212.06666
. M Castellano, 10.3847/2041-8213/ac94d0ApJ. 93815Castellano M., et al., 2022b, ApJ, 938, L15
. J Chisholm, 10.1051/0004-6361/201832758A&A. 61630Chisholm J., et al., 2018, A&A, 616, A30
. J Chisholm, J R Rigby, M Bayliss, D A Berg, H Dahle, M Gladders, K Sharon, 10.3847/1538-4357/ab3104ApJ. 882182Chisholm J., Rigby J. R., Bayliss M., Berg D. A., Dahle H., Gladders M., Sharon K., 2019, ApJ, 882, 182
. J Chisholm, J X Prochaska, D Schaerer, S Gazagnes, A Henry, 10.1093/mnras/staa2470MNRAS. 4982554Chisholm J., Prochaska J. X., Schaerer D., Gazagnes S., Henry A., 2020, MNRAS, 498, 2554
. J Chisholm, 10.1093/mnras/stac2874MNRAS. 5175104Chisholm J., et al., 2022, MNRAS, 517, 5104
. S Cristiani, L M Serrano, F Fontanot, E Vanzella, P Monaco, 10.1093/mnras/stw1810MNRAS. 4622478Cristiani S., Serrano L. M., Fontanot F., Vanzella E., Monaco P., 2016, MNRAS, 462, 2478
. F Cullen, 10.1093/mnras/sty469MNRAS. 4763218Cullen F., et al., 2018, MNRAS, 476, 3218
. F Cullen, 10.1093/mnras/stz1402MNRAS. 4872038Cullen F., et al., 2019, MNRAS, 487, 2038
. F Cullen, 10.1093/mnras/staa1260MNRAS. 4951501Cullen F., et al., 2020, MNRAS, 495, 1501
. F Cullen, 10.1093/mnras/stab1340MNRAS. 505903Cullen F., et al., 2021, MNRAS, 505, 903
. F Cullen, 10.1093/mnras/stad073MNRAS. 52014Cullen F., et al., 2023, MNRAS, 520, 14
. M Curti, 10.1093/mnras/stac2737MNRAS. 518425Curti M., et al., 2023, MNRAS, 518, 425
. E Curtis-Lake, 10.1051/0004-6361/201730419Nature Astronomy. 70A&ACurtis-Lake E., et al., 2023, Nature Astronomy, Davidzon I., et al., 2017, A&A, 605, A70
. P Dayal, A Ferrara, 10.1016/j.physrep.2018.10.002Phys. Rep. 7801Dayal P., Ferrara A., 2018, Phys. Rep., 780, 1
. P Dayal, 10.1093/mnras/staa1138MNRAS. 4953065Dayal P., et al., 2020, MNRAS, 495, 3065
. C T Donnan, 10.1093/mnras/stac3472MNRAS. 5186011Donnan C. T., et al., 2023, MNRAS, 518, 6011
. B T Draine, 10.3847/1538-4357/aabfcfPhysics of the Interstellar and Intergalactic Medium. Princeton Series in Astrophysics Du X., et al. 86075ApJDraine B. T., 2011, Physics of the Interstellar and Intergalactic Medium. Princeton Series in Astrophysics Du X., et al., 2018, ApJ, 860, 75
. N Emami, B Siana, A Alavi, T Gburek, W R Freeman, J Richard, D R Weisz, D P Stark, 10.3847/1538-4357/ab8f97ApJ. 895116Emami N., Siana B., Alavi A., Gburek T., Freeman W. R., Richard J., Weisz D. R., Stark D. P., 2020, ApJ, 895, 116
. R Endsley, D P Stark, L Whitler, M W Topping, Z Chen, A Plat, J Chisholm, S Charlot, arXiv:2208.14999Endsley R., Stark D. P., Whitler L., Topping M. W., Chen Z., Plat A., Chisholm J., Charlot S., 2022, arXiv e-prints, p. arXiv:2208.14999
. D K Erb, 10.1088/0004-637X/795/1/33ApJ. 79533Erb D. K., et al., 2014, ApJ, 795, 33
. A L Faisst, 10.3847/0004-637X/829/2/99ApJ. 82999Faisst A. L., 2016, ApJ, 829, 99
. J Falcón-Barroso, P Sánchez-Blázquez, A Vazdekis, E Ricciardelli, N Cardiel, A J Cenarro, J Gorgas, R F Peletier, 10.1051/0004-6361/201116842A&A. 53295Falcón-Barroso J., Sánchez-Blázquez P., Vazdekis A., Ricciardelli E., Cardiel N., Cenarro A. J., Gorgas J., Peletier R. F., 2011, A&A, 532, A95
. G J Ferland, Rev. Mex. Astron. Astrofis. 53385Ferland G. J., et al., 2017, Rev. Mex. Astron. Astrofis., 53, 385
. S L Finkelstein, 10.1088/0004-637X/756/2/164ApJ. 756164Finkelstein S. L., et al., 2012, ApJ, 756, 164
. S L Finkelstein, 10.3847/1538-4357/ab1ea8ApJ. 87936Finkelstein S. L., et al., 2019, ApJ, 879, 36
. S L Finkelstein, 10.3847/2041-8213/acade4ApJ. 94613Finkelstein S. L., et al., 2023, ApJ, 946, L13
. T J Fletcher, M Tang, B E Robertson, K Nakajima, R S Ellis, D P Stark, A Inoue, 10.3847/1538-4357/ab2045ApJ. 87887Fletcher T. J., Tang M., Robertson B. E., Nakajima K., Ellis R. S., Stark D. P., Inoue A., 2019, ApJ, 878, 87
. S R Flury, 10.3847/1538-4365/ac5331ApJS. 2601Flury S. R., et al., 2022a, ApJS, 260, 1
. S R Flury, 10.3847/1538-4357/ac61e4ApJ. 930126Flury S. R., et al., 2022b, ApJ, 930, 126
. F Fontanot, S Cristiani, E Vanzella, 10.1111/j.1365-2966.2012.21594.xMNRAS. 4251413Fontanot F., Cristiani S., Vanzella E., 2012, MNRAS, 425, 1413
. Y Fudamoto, 10.1093/mnras/stz3248MNRAS. 4914724Fudamoto Y., et al., 2020, MNRAS, 491, 4724
. S Fujimoto, 10.48550/arXiv.2301.09482arXiv:2301.09482arXiv e-printsFujimoto S., et al., 2023, arXiv e-prints, p. arXiv:2301.09482
. L J Furtak, M Shuntov, H Atek, A Zitrin, J Richard, M D Lehnert, J Chevallard, 10.1093/mnras/stac3717MNRAS. 5193064Furtak L. J., Shuntov M., Atek H., Zitrin A., Richard J., Lehnert M. D., Chevallard J., 2023, MNRAS, 519, 3064
. A Galametz, 10.1088/0067-0049/206/2/10ApJS. 20610Galametz A., et al., 2013, ApJS, 206, 10
. B Garilli, 10.1051/0004-6361/202040059A&A. 647150Garilli B., et al., 2021, A&A, 647, A150
. S Gazagnes, J Chisholm, D Schaerer, A Verhamme, J R Rigby, M Bayliss, 10.1051/0004-6361/201832759A&A. 61629Gazagnes S., Chisholm J., Schaerer D., Verhamme A., Rigby J. R., Bayliss M., 2018, A&A, 616, A29
. S Gazagnes, J Chisholm, D Schaerer, A Verhamme, Y Izotov, 10.1051/0004-6361/202038096A&A. 63985Gazagnes S., Chisholm J., Schaerer D., Verhamme A., Izotov Y., 2020, A&A, 639, A85
. K D Gordon, G C Clayton, K A Misselt, A U Landolt, M J Wolff, 10.1086/376774ApJ. 594279Gordon K. D., Clayton G. C., Misselt K. A., Landolt A. U., Wolff M. J., 2003, ApJ, 594, 279
. H Goto, 10.3847/1538-4357/ac308bApJ. 923229Goto H., et al., 2021, ApJ, 923, 229
. A Grazian, 10.1051/0004-6361/201526396A&A. 58548Grazian A., et al., 2016, A&A, 585, A48
. A Grazian, 10.1051/0004-6361/201730447A&A. 60218Grazian A., et al., 2017, A&A, 602, A18
. N A Grogin, 10.1088/0067-0049/197/2/35ApJS. 19735Grogin N. A., et al., 2011, ApJS, 197, 35
. L Guaita, 10.1051/0004-6361/201527597A&A. 587133Guaita L., et al., 2016, A&A, 587, A133
. Y Guo, 10.1088/0067-0049/207/2/24ApJS. 20724Guo Y., et al., 2013, ApJS, 207, 24
. Y Harikane, 10.3847/1538-4357/aabd80ApJ. 85984Harikane Y., et al., 2018, ApJ, 859, 84
. Y Harikane, 10.3847/1538-4357/ac53a9ApJ. 9291Harikane Y., et al., 2022, ApJ, 929, 1
. Y Harikane, 10.3847/1538-4365/acaaa9ApJS. 2655Harikane Y., et al., 2023, ApJS, 265, 5
. S Hassan, R Davé, S Mitra, K Finlator, B Ciardi, M G Santos, 10.1093/mnras/stx2194MNRAS. 473227Hassan S., Davé R., Mitra S., Finlator K., Ciardi B., Santos M. G., 2018, MNRAS, 473, 227
. N P Hathi, 10.1051/0004-6361/201526012A&A. 58826Hathi N. P., et al., 2016, A&A, 588, A26
. T M Heckman, K R Sembach, G R Meurer, C Leitherer, D Calzetti, C L Martin, 10.1086/322475ApJ. 55856Heckman T. M., Sembach K. R., Meurer G. R., Leitherer C., Calzetti D., Martin C. L., 2001, ApJ, 558, 56
. T M Heckman, 10.1088/0004-637X/730/1/5ApJ. 7305Heckman T. M., et al., 2011, ApJ, 730, 5
. A Henry, C Scarlata, C L Martin, D Erb, 10.1088/0004-637X/809/1/19ApJ. 80919Henry A., Scarlata C., Martin C. L., Erb D., 2015, ApJ, 809, 19
. A Henry, D A Berg, C Scarlata, A Verhamme, D Erb, 10.3847/1538-4357/aab099ApJ. 85596Henry A., Berg D. A., Scarlata C., Verhamme A., Erb D., 2018, ApJ, 855, 96
. A K Inoue, I Shimizu, I Iwata, M Tanaka, 10.1093/mnras/stu936MNRAS. 4421805Inoue A. K., Shimizu I., Iwata I., Tanaka M., 2014, MNRAS, 442, 1805
. Y Isobe, M Ouchi, K Nakajima, Y Harikane, Y Ono, Y Xu, Y Zhang, H Umeda, 10.48550/arXiv.2301.06811arXiv:2301.06811Isobe Y., Ouchi M., Nakajima K., Harikane Y., Ono Y., Xu Y., Zhang Y., Umeda H., 2023, arXiv e-prints, p. arXiv:2301.06811
. Y I Izotov, D Schaerer, T X Thuan, G Worseck, N G Guseva, I Orlitová, A Verhamme, 10.1093/mnras/stw1205MNRAS. 4613683Izotov Y. I., Schaerer D., Thuan T. X., Worseck G., Guseva N. G., Orlitová I., Verhamme A., 2016a, MNRAS, 461, 3683
. Y I Izotov, I Orlitová, D Schaerer, T X Thuan, A Verhamme, N G Guseva, G Worseck, 10.1038/nature16456Nature. 529178Izotov Y. I., Orlitová I., Schaerer D., Thuan T. X., Verhamme A., Guseva N. G., Worseck G., 2016b, Nature, 529, 178
. Y I Izotov, N G Guseva, K J Fricke, C Henkel, D Schaerer, 10.1093/mnras/stx347MNRAS. 4674118Izotov Y. I., Guseva N. G., Fricke K. J., Henkel C., Schaerer D., 2017, MNRAS, 467, 4118
. Y I Izotov, D Schaerer, G Worseck, N G Guseva, T X Thuan, A Verhamme, I Orlitová, K J Fricke, 10.1093/mnras/stx3115MNRAS. 4744514Izotov Y. I., Schaerer D., Worseck G., Guseva N. G., Thuan T. X., Verhamme A., Orlitová I., Fricke K. J., 2018a, MNRAS, 474, 4514
. Y I Izotov, G Worseck, D Schaerer, N G Guseva, T X Thuan, Fricke Verhamme, A Orlitová, I , 10.1093/mnras/sty1378MNRAS. 4784851Izotov Y. I., Worseck G., Schaerer D., Guseva N. G., Thuan T. X., Fricke Verhamme A., Orlitová I., 2018b, MNRAS, 478, 4851
. Y I Izotov, D Schaerer, G Worseck, A Verhamme, N G Guseva, T X Thuan, I Orlitová, K J Fricke, 10.1093/mnras/stz3041MNRAS. 491468Izotov Y. I., Schaerer D., Worseck G., Verhamme A., Guseva N. G., Thuan T. X., Orlitová I., Fricke K. J., 2020, MNRAS, 491, 468
. Y I Izotov, G Worseck, D Schaerer, N G Guseva, J Chisholm, T X Thuan, K J Fricke, A Verhamme, 10.1093/mnras/stab612MNRAS. 5031734Izotov Y. I., Worseck G., Schaerer D., Guseva N. G., Chisholm J., Thuan T. X., Fricke K. J., Verhamme A., 2021, MNRAS, 503, 1734
. Y I Izotov, J Chisholm, G Worseck, N G Guseva, D Schaerer, J X Prochaska, 10.1093/mnras/stac1899MNRAS. 5152864Izotov Y. I., Chisholm J., Worseck G., Guseva N. G., Schaerer D., Prochaska J. X., 2022, MNRAS, 515, 2864
. J Japelj, 10.1093/mnras/stx477MNRAS. 468389Japelj J., et al., 2017, MNRAS, 468, 389
. Ji Z , 10.3847/1538-4357/ab5fdcApJ. 888109Ji Z., et al., 2020, ApJ, 888, 109
. T Jones, D P Stark, R S Ellis, 10.1088/0004-637X/751/1/51ApJ. 75151Jones T., Stark D. P., Ellis R. S., 2012, ApJ, 751, 51
. T A Jones, R S Ellis, M A Schenker, D P Stark, 10.1088/0004-637X/779/1/52ApJ. 77952Jones T. A., Ellis R. S., Schenker M. A., Stark D. P., 2013, ApJ, 779, 52
. K Kakiichi, M Gronke, 10.3847/1538-4357/abc2d9ApJ. 90830Kakiichi K., Gronke M., 2021, ApJ, 908, 30
. H Katz, 10.1093/mnras/stac1437MNRAS. 5154265Katz H., et al., 2022, MNRAS, 515, 4265
. A M Koekemoer, 10.1088/0067-0049/197/2/36ApJS. 19736Koekemoer A. M., et al., 2011, ApJS, 197, 36
. K A Kornei, A E Shapley, D K Erb, C C Steidel, N A Reddy, M Pettini, M Bogosavljević, 10.1088/0004-637X/711/2/693ApJ. 711693Kornei K. A., Shapley A. E., Erb D. K., Steidel C. C., Reddy N. A., Pettini M., Bogosavljević M., 2010, ApJ, 711, 693
. P Kroupa, 10.1046/j.1365-8711.2001.04022.xMNRAS. 322231Kroupa P., 2001, MNRAS, 322, 231
. G Kulkarni, G Worseck, J F Hennawi, 10.1093/mnras/stz1493MNRAS. 4881035Kulkarni G., Worseck G., Hennawi J. F., 2019, MNRAS, 488, 1035
. H Kusakabe, 10.1051/0004-6361/201937340A&A. 63812Kusakabe H., et al., 2020, A&A, 638, A12
. I Labbé, 10.1038/s41586-023-05786-2Nature. 616266Labbé I., et al., 2023, Nature, 616, 266
. D Lam, 10.1051/0004-6361/201935227A&A. 627164Lam D., et al., 2019, A&A, 627, A164
Instrument Design and Performance for Optical/Infrared Ground-based Telescopes. Le Fèvre, O , 10.1117/12.460959Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series. Iye M., Moorwood A. F. M.4841Le Fèvre O., et al., 2003, in Iye M., Moorwood A. F. M., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 4841, Instrument Design and Performance for Optical/Infrared Ground-based Telescopes. pp 1670-1681, doi:10.1117/12.460959
. Le Fèvre, O , 10.1051/0004-6361/201423829A&A. 57679Le Fèvre O., et al., 2015, A&A, 576, A79
. C Leitherer, P A Ortiz Otálvaro, F Bresolin, R.-P Kudritzki, B Lo Faro, A W A Pauldrach, M Pettini, S A Rix, 10.1088/0067-0049/189/2/309ApJS. 189309Leitherer C., Ortiz Otálvaro P. A., Bresolin F., Kudritzki R.-P., Lo Faro B., Pauldrach A. W. A., Pettini M., Rix S. A., 2010, ApJS, 189, 309
C Leitherer, ascl:1104.003Starburst99: Synthesis Models for Galaxies with Active Star Formation. Leitherer C., et al., 2011, Starburst99: Synthesis Models for Galaxies with Active Star Formation (ascl:1104.003)
. C Leitherer, S Ekström, G Meynet, D Schaerer, K B Agienko, E M Levesque, 10.1088/0067-0049/212/1/14ApJS. 21214Leitherer C., Ekström S., Meynet G., Schaerer D., Agienko K. B., Levesque E. M., 2014, ApJS, 212, 14
. Y.-H Lin, 10.48550/arXiv.2303.04572arXiv:2303.04572arXiv e-printsLin Y.-H., et al., 2023, arXiv e-prints, p. arXiv:2303.04572
. R C Livermore, S L Finkelstein, J Lotz, 10.3847/1538-4357/835/2/113ApJ. 835113Livermore R. C., Finkelstein S. L., Lotz J. M., 2017, ApJ, 835, 113
. M Llerena, 10.1051/0004-6361/202141651A&A. 65916Llerena M., et al., 2022, A&A, 659, A16
. X Ma, E Quataert, A Wetzel, P F Hopkins, C.-A Faucher-Giguère, D Kereš, 10.1093/mnras/staa2404MNRAS. 498Ma X., Quataert E., Wetzel A., Hopkins P. F., Faucher-Giguère C.-A., Kereš D., 2020, MNRAS, 498, 2001
. P Madau, F Haardt, 10.1088/2041-8205/813/1/L8ApJ. 8138Madau P., Haardt F., 2015, ApJ, 813, L8
. P Madau, F Haardt, M J Rees, 10.1086/306975ApJ. 514648Madau P., Haardt F., Rees M. J., 1999, ApJ, 514, 648
. R Maiolino, F Mannucci, 10.1007/s00159-018-0112-2A&ARv. 273Maiolino R., Mannucci F., 2019, A&ARv, 27, 3
. M Maji, 10.1051/0004-6361/202142740A&A. 66366Maji M., et al., 2022, A&A, 663, A66
. F Marchi, 10.1051/0004-6361/201630054A&A. 60173Marchi F., et al., 2017, A&A, 601, A73
. P Marigo, A Bressan, A Nanni, L Girardi, M L Pumo, 10.1093/mnras/stt1034MNRAS. 434488Marigo P., Bressan A., Nanni A., Girardi L., Pumo M. L., 2013, MNRAS, 434, 488
. R Marques-Chaves, D Schaerer, J Álvarez-Márquez, L Colina, M Dessauges-Zavadsky, I Pérez-Fournon, A Saldana-Lopez, A Verhamme, 10.1093/mnras/stab2187MNRAS. 507524Marques-Chaves R., Schaerer D., Álvarez-Márquez J., Colina L., Dessauges- Zavadsky M., Pérez-Fournon I., Saldana-Lopez A., Verhamme A., 2021, MNRAS, 507, 524
. R Marques-Chaves, 10.1093/mnras/stac2893MNRAS. 5172972Marques-Chaves R., et al., 2022a, MNRAS, 517, 2972
. R Marques-Chaves, 10.1051/0004-6361/202243598A&A. 6631Marques-Chaves R., et al., 2022b, A&A, 663, L1
. S Mascia, 10.48550/arXiv.2301.02816arXiv:2301.02816arXiv e-printsMascia S., et al., 2023a, arXiv e-prints, p. arXiv:2301.02816
. S Mascia, 10.48550/arXiv.2301.09328arXiv:2301.09328arXiv e-printsMascia S., et al., 2023b, arXiv e-prints, p. arXiv:2301.09328
. M V Maseda, 10.1093/mnras/staa622MNRAS. 4935120Maseda M. V., et al., 2020, MNRAS, 493, 5120
. C A Mason, R P Naidu, S Tacchella, J Leja, 10.1093/mnras/stz2291MNRAS. 4892669Mason C. A., Naidu R. P., Tacchella S., Leja J., 2019, MNRAS, 489, 2669
. Y Matsuoka, 10.3847/1538-4357/aaee7aApJ. 869150Matsuoka Y., et al., 2018, ApJ, 869, 150
. J Matthee, D Sobral, P Best, A A Khostovan, I Oteo, R Bouwens, H Röttgering, 10.1093/mnras/stw2973MNRAS. 4653637Matthee J., Sobral D., Best P., Khostovan A. A., Oteo I., Bouwens R., Röttgering H., 2017a, MNRAS, 465, 3637
. J Matthee, D Sobral, B Darvish, S Santos, B Mobasher, A Paulino-Afonso, H Röttgering, L Alegre, 10.1093/mnras/stx2061MNRAS. 472772Matthee J., Sobral D., Darvish B., Santos S., Mobasher B., Paulino-Afonso A., Röttgering H., Alegre L., 2017b, MNRAS, 472, 772
. J Matthee, 10.1093/mnras/stac801MNRAS. 5125960Matthee J., et al., 2022, MNRAS, 512, 5960
. V Mauerhofer, A Verhamme, J Blaizot, T Garel, T Kimm, L Michel-Dansac, J Rosdahl, 10.1051/0004-6361/202039449A&A. 64680Mauerhofer V., Verhamme A., Blaizot J., Garel T., Kimm T., Michel-Dansac L., Rosdahl J., 2021, A&A, 646, A80
. D J Mcleod, R J Mclure, J S Dunlop, 10.1093/mnras/stw904MNRAS. 4593812McLeod D. J., McLure R. J., Dunlop J. S., 2016, MNRAS, 459, 3812
. R J Mclure, 10.1093/mnras/sty522MNRAS. 4763991McLure R. J., et al., 2018a, MNRAS, 476, 3991
. R J Mclure, 10.1093/mnras/sty1213MNRAS. 47925McLure R. J., et al., 2018b, MNRAS, 479, 25
. G R Meurer, T M Heckman, D Calzetti, 10.1086/307523ApJ. 52164Meurer G. R., Heckman T. M., Calzetti D., 1999, ApJ, 521, 64
. U Meštrić, E V Ryan-Weber, J Cooke, R Bassett, L J Prichard, M Rafelski, 10.1093/mnras/stab2615MNRAS. 5084443Meštrić U., Ryan-Weber E. V., Cooke J., Bassett R., Prichard L. J., Rafelski M., 2021, MNRAS, 508, 4443
. G Meynet, A Maeder, G Schaller, D Schaerer, C Charbonnel, A&AS. 10397Meynet G., Maeder A., Schaller G., Schaerer D., Charbonnel C., 1994, A&AS, 103, 97
. G Micheva, I Iwata, A K Inoue, Y Matsuda, T Yamada, T Hayashino, 10.1093/mnras/stw2700MNRAS. 465316Micheva G., Iwata I., Inoue A. K., Matsuda Y., Yamada T., Hayashino T., 2017, MNRAS, 465, 316
. R E Mostardi, A E Shapley, C C Steidel, R F Trainor, N A Reddy, B Siana, 10.1088/0004-637X/810/2/107ApJ. 810107Mostardi R. E., Shapley A. E., Steidel C. C., Trainor R. F., Reddy N. A., Siana B., 2015, ApJ, 810, 107
. A L Muratov, D Kereš, C.-A Faucher-Giguère, P F Hopkins, E Quataert, N Murray, 10.1093/mnras/stv2126MNRAS. 4542691Muratov A. L., Kereš D., Faucher-Giguère C.-A., Hopkins P. F., Quataert E., Murray N., 2015, MNRAS, 454, 2691
. R P Naidu, B Forrest, P A Oesch, K.-V H Tran, B P Holden, 10.1093/mnras/sty961MNRAS. 478791Naidu R. P., Forrest B., Oesch P. A., Tran K.-V. H., Holden B. P., 2018, MNRAS, 478, 791
. R P Naidu, S Tacchella, C A Mason, S Bose, P A Oesch, C Conroy, 10.3847/1538-4357/ab7cc9ApJ. 892109Naidu R. P., Tacchella S., Mason C. A., Bose S., Oesch P. A., Conroy C., 2020, ApJ, 892, 109
. R P Naidu, 10.1093/mnras/stab3601MNRAS. 5104582Naidu R. P., et al., 2022, MNRAS, 510, 4582
. K Nakajima, R S Ellis, I Iwata, A K Inoue, H Kusakabe, M Ouchi, B E Robertson, 10.3847/2041-8205/831/1/L9ApJ. 8319Nakajima K., Ellis R. S., Iwata I., Inoue A. K., Kusakabe H., Ouchi M., Robertson B. E., 2016, ApJ, 831, L9
. K Nakajima, T Fletcher, R S Ellis, B E Robertson, I Iwata, 10.1093/mnras/sty750MNRAS. 4772098Nakajima K., Fletcher T., Ellis R. S., Robertson B. E., Iwata I., 2018a, MNRAS, 477, 2098
. K Nakajima, 10.1051/0004-6361/201731935A&A. 61294Nakajima K., et al., 2018b, A&A, 612, A94
. K Nakajima, R S Ellis, B E Robertson, M Tang, D P Stark, 10.3847/1538-4357/ab6604ApJ. 889161Nakajima K., Ellis R. S., Robertson B. E., Tang M., Stark D. P., 2020, ApJ, 889, 161
. K Nakajima, M Ouchi, Y Isobe, Y Harikane, Y Zhang, Y Ono, H Umeda, M Oguri, 10.48550/arXiv.2301.12825arXiv:2301.12825Nakajima K., Ouchi M., Isobe Y., Harikane Y., Zhang Y., Ono Y., Umeda H., Oguri M., 2023, arXiv e-prints, p. arXiv:2301.12825
. T Nanayakkara, arXiv:2207.13860arXiv e-printsNanayakkara T., et al., 2022, arXiv e-prints, p. arXiv:2207.13860
M Newville, T Stensitzki, D B Allen, M Rawlik, A Ingargiola, A Nelson, 1606.014Lmfit: Non-Linear Least-Square Minimization and Curve-Fitting for Python. Newville M., Stensitzki T., Allen D. B., Rawlik M., Ingargiola A., Nelson A., 2016, Lmfit: Non-Linear Least-Square Minimization and Curve-Fitting for Python (ascl:1606.014)
. Y Ning, Z Cai, L Jiang, X Lin, S Fu, D Spinoso, arXiv:2211.136202022Ning Y., Cai Z., Jiang L., Lin X., Fu S., Spinoso D., 2022, arXiv e-prints, p. arXiv:2211.13620
. P A Oesch, R J Bouwens, G D Illingworth, I Labbé, M Stefanon, 10.3847/1538-4357/aab03fApJ. 855105Oesch P. A., Bouwens R. J., Illingworth G. D., Labbé I., Stefanon M., 2018, ApJ, 855, 105
. J B Oke, J E Gunn, 10.1086/160817ApJ. 266713Oke J. B., Gunn J. E., 1983, ApJ, 266, 713
. A J Pahl, A Shapley, A L Faisst, P L Capak, X Du, N A Reddy, P Laursen, M W Topping, 10.1093/mnras/staa355MNRAS. 4933194Pahl A. J., Shapley A., Faisst A. L., Capak P. L., Du X., Reddy N. A., Laursen P., Topping M. W., 2020, MNRAS, 493, 3194
. A J Pahl, A Shapley, C C Steidel, Y Chen, N A Reddy, 10.1093/mnras/stab1374MNRAS. 5052447Pahl A. J., Shapley A., Steidel C. C., Chen Y., Reddy N. A., 2021, MNRAS, 505, 2447
. A J Pahl, A Shapley, C C Steidel, N A Reddy, Y Chen, G C Rudie, A L Strom, 10.1093/mnras/stad774MNRAS. 5213247Pahl A. J., Shapley A., Steidel C. C., Reddy N. A., Chen Y., Rudie G. C., Strom A. L., 2023, MNRAS, 521, 3247
. A W A Pauldrach, T L Hoffmann, M Lennon, 10.1051/0004-6361:20010805A&A. 375161Pauldrach A. W. A., Hoffmann T. L., Lennon M., 2001, A&A, 375, 161
. L Pentericci, A Grazian, A Fontana, M Castellano, E Giallongo, S Salimbeni, P Santini, 10.1051/0004-6361:200810722A&A. 494553Pentericci L., Grazian A., Fontana A., Castellano M., Giallongo E., Salim- beni S., Santini P., 2009, A&A, 494, 553
. L Pentericci, 10.1051/0004-6361/201833047A&A. 616174Pentericci L., et al., 2018, A&A, 616, A174
. 10.1051/0004-6361/201628897A&A. 596108Planck Collaboration et al., 2016, A&A, 596, A108
. M L Prevot, J Lequeux, E Maurice, L Prevot, B Rocca-Volmerange, A&A. 132389Prevot M. L., Lequeux J., Maurice E., Prevot L., Rocca-Volmerange B., 1984, A&A, 132, 389
. L J Prichard, 10.3847/1538-4357/ac3004ApJ. 92414Prichard L. J., et al., 2022, ApJ, 924, 14
. G Prieto-Lyon, arXiv:2211.12548arXiv e-printsPrieto-Lyon G., et al., 2022, arXiv e-prints, p. arXiv:2211.12548
. N A Reddy, C C Steidel, M Pettini, M Bogosavljević, 10.3847/0004-637X/828/2/107ApJ. 828107Reddy N. A., Steidel C. C., Pettini M., Bogosavljević M., 2016a, ApJ, 828, 107
. N A Reddy, C C Steidel, M Pettini, M Bogosavljević, A E Shapley, 10.3847/0004-637X/828/2/108ApJ. 828108Reddy N. A., Steidel C. C., Pettini M., Bogosavljević M., Shapley A. E., 2016b, ApJ, 828, 108
. N A Reddy, 10.3847/1538-4357/aaed1eApJ. 86992Reddy N. A., et al., 2018, ApJ, 869, 92
. N A Reddy, 10.3847/1538-4357/ac3b4cApJ. 92631Reddy N. A., et al., 2022, ApJ, 926, 31
. T E Rivera-Thorsen, M Hayes, J Melinder, 10.1051/0004-6361/202243678A&A. 666145Rivera-Thorsen T. E., Hayes M., Melinder J., 2022, A&A, 666, A145
. B E Robertson, 10.1146/annurev-astro-120221-044656ARA&A. 60121Robertson B. E., 2022, ARA&A, 60, 121
. B E Robertson, 10.1088/0004-637X/768/1/71ApJ. 76871Robertson B. E., et al., 2013, ApJ, 768, 71
. B E Robertson, R S Ellis, S R Furlanetto, J S Dunlop, 10.1088/2041-8205/802/2/L19ApJ. 80219Robertson B. E., Ellis R. S., Furlanetto S. R., Dunlop J. S., 2015, ApJ, 802, L19
. B E Robertson, 10.3847/1538-4357/ab7659Nature Astronomy. 891146ApJRobertson B. E., et al., 2023, Nature Astronomy, Rojas-Ruiz S., Finkelstein S. L., Bagley M. B., Stevans M., Finkelstein K. D., Larson R., Mechtley M., Diekmann J., 2020, ApJ, 891, 146
. J Rosdahl, 10.1093/mnras/stac1942MNRAS. 5152386Rosdahl J., et al., 2022, MNRAS, 515, 2386
. M J Rutkowski, 10.3847/2041-8213/aa733bApJ. 84127Rutkowski M. J., et al., 2017, ApJ, 841, L27
. K Saha, 10.1038/s41550-020-1173-5Nature Astronomy. 41185Saha K., et al., 2020, Nature Astronomy, 4, 1185
. A Saldana-Lopez, 10.1051/0004-6361/202141864A&A. 66359Saldana-Lopez A., et al., 2022, A&A, 663, A59
. S Salim, M Boquien, J C Lee, 10.3847/1538-4357/aabf3cApJ. 85911Salim S., Boquien M., Lee J. C., 2018, ApJ, 859, 11
. P Santini, 10.3847/2041-8213/ac9586ApJ. 94227Santini P., et al., 2023, ApJ, 942, L27
. A Saxena, 10.1093/mnras/stab3728MNRAS. 511120Saxena A., et al., 2022a, MNRAS, 511, 120
. A Saxena, 10.1093/mnras/stac2742MNRAS. 5171098Saxena A., et al., 2022b, MNRAS, 517, 1098
. A Saxena, 10.48550/arXiv.2302.12805arXiv:2302.12805arXiv e-printsSaxena A., et al., 2023, arXiv e-prints, p. arXiv:2302.12805
. D Schaerer, S De Barros, P Sklias, 10.1051/0004-6361/201220002A&A. 5494Schaerer D., de Barros S., Sklias P., 2013, A&A, 549, A4
. D Schaerer, Y I Izotov, A Verhamme, I Orlitová, T X Thuan, G Worseck, N G Guseva, 10.1051/0004-6361/201628943A&A. 5918Schaerer D., Izotov Y. I., Verhamme A., Orlitová I., Thuan T. X., Worseck G., Guseva N. G., 2016, A&A, 591, L8
. D Schaerer, 10.1051/0004-6361/202243149A&A. 65811Schaerer D., et al., 2022a, A&A, 658, L11
. D Schaerer, R Marques-Chaves, L Barrufet, P Oesch, Y I Izotov, R Naidu, N G Guseva, G Brammer, 10.1051/0004-6361/202244556A&A. 6654Schaerer D., Marques-Chaves R., Barrufet L., Oesch P., Izotov Y. I., Naidu R., Guseva N. G., Brammer G., 2022b, A&A, 665, L4
. J Seiler, A Hutter, M Sinha, D Croton, 10.1093/mnrasl/sly122MNRAS. 48033Seiler J., Hutter A., Sinha M., Croton D., 2018, MNRAS, 480, L33
. T Seive, J Chisholm, F Leclercq, G Zeimann, 10.1093/mnras/stac2180MNRAS. 5155556Seive T., Chisholm J., Leclercq F., Zeimann G., 2022, MNRAS, 515, 5556
. A E Shapley, C C Steidel, M Pettini, K L Adelberger, 10.1086/373922ApJ. 58865Shapley A. E., Steidel C. C., Pettini M., Adelberger K. L., 2003, ApJ, 588, 65
. A E Shapley, C C Steidel, M Pettini, K L Adelberger, D K Erb, 10.1086/507511ApJ. 651688Shapley A. E., Steidel C. C., Pettini M., Adelberger K. L., Erb D. K., 2006, ApJ, 651, 688
. A E Shapley, C C Steidel, A L Strom, M Bogosavljević, N A Reddy, B Siana, R E Mostardi, G C Rudie, 10.3847/2041-8205/826/2/L24ApJ. 82624Shapley A. E., Steidel C. C., Strom A. L., Bogosavljević M., Reddy N. A., Siana B., Mostardi R. E., Rudie G. C., 2016, ApJ, 826, L24
. M Sharma, T Theuns, C Frenk, R Bower, R Crain, M Schaller, J Schaye, 10.1093/mnrasl/slw021MNRAS. 45894Sharma M., Theuns T., Frenk C., Bower R., Crain R., Schaller M., Schaye J., 2016, MNRAS, 458, L94
. I Shivaei, 10.3847/1538-4357/aaad62ApJ. 85542Shivaei I., et al., 2018, ApJ, 855, 42
. I Shivaei, 10.3847/1538-4357/aba35eApJ. 899117Shivaei I., et al., 2020, ApJ, 899, 117
. A Smith, C Safranek-Shrader, V Bromm, M Milosavljević, 10.1093/mnras/stv5654494336MN-RASSmith A., Safranek-Shrader C., Bromm V., Milosavljević M., 2015, MN- RAS, 449, 4336
. J S Speagle, C L Steinhardt, P L Capak, J D Silverman, 10.1088/0067-0049/214/2/15ApJS. 21415Speagle J. S., Steinhardt C. L., Capak P. L., Silverman J. D., 2014, ApJS, 214, 15
. D P Stark, R S Ellis, M Ouchi, 10.1088/2041-8205/728/1/L2ApJ. 7282Stark D. P., Ellis R. S., Ouchi M., 2011, ApJ, 728, L2
. D P Stark, 10.1093/mnras/stv1907MNRAS. 4541393Stark D. P., et al., 2015, MNRAS, 454, 1393
. C C Steidel, M Pettini, K L Adelberger, 10.1086/318323ApJ. 546665Steidel C. C., Pettini M., Adelberger K. L., 2001, ApJ, 546, 665
. C C Steidel, M Bogosavljević, A E Shapley, N A Reddy, G C Rudie, M Pettini, R F Trainor, A L Strom, 10.3847/1538-4357/aaed28ApJ. 869123Steidel C. C., Bogosavljević M., Shapley A. E., Reddy N. A., Rudie G. C., Pettini M., Trainor R. F., Strom A. L., 2018, ApJ, 869, 123
. S Tacchella, 10.48550/arXiv.2208.03281arXiv:2208.03281arXiv e-printsTacchella S., et al., 2022, arXiv e-prints, p. arXiv:2208.03281
. M Tang, D P Stark, J Chevallard, S Charlot, 10.1093/mnras/stz2236MNRAS. 4892572Tang M., Stark D. P., Chevallard J., Charlot S., 2019, MNRAS, 489, 2572
. M Tang, 10.48550/arXiv.2301.07072arXiv:2301.07072arXiv e-printsTang M., et al., 2023, arXiv e-prints, p. arXiv:2301.07072
. N R Tanvir, 10.1093/mnras/sty3460MNRAS. 4835380Tanvir N. R., et al., 2019, MNRAS, 483, 5380
. M W Topping, D P Stark, R Endsley, A Plat, L Whitler, Z Chen, S Charlot, 10.3847/1538-4357/aca522ApJ. 941153Topping M. W., Stark D. P., Endsley R., Plat A., Whitler L., Chen Z., Charlot S., 2022, ApJ, 941, 153
. R F Trainor, C C Steidel, A L Strom, G C Rudie, 10.1088/0004-637X/809/1/89ApJ. 80989Trainor R. F., Steidel C. C., Strom A. L., Rudie G. C., 2015, ApJ, 809, 89
. R F Trainor, A L Strom, C C Steidel, G C Rudie, Y Chen, R L Theios, 10.3847/1538-4357/ab4993ApJ. 88785Trainor R. F., Strom A. L., Steidel C. C., Rudie G. C., Chen Y., Theios R. L., 2019, ApJ, 887, 85
. M Trebitsch, J Blaizot, J Rosdahl, J Devriendt, A Slyz, 10.1093/mnras/stx1060MNRAS. 470224Trebitsch M., Blaizot J., Rosdahl J., Devriendt J., Slyz A., 2017, MNRAS, 470, 224
. M Trebitsch, arXiv:2212.06177arXiv e-printsTrebitsch M., et al., 2022, arXiv e-prints, p. arXiv:2212.06177
. J R Trump, 10.3847/1538-4357/acba8aApJ. 94535Trump J. R., et al., 2023, ApJ, 945, 35
. J A A Trussler, arXiv:2207.14265arXiv e-printsTrussler J. A. A., et al., 2022, arXiv e-prints, p. arXiv:2207.14265
. E Vanzella, 10.1088/0004-637X/725/1/1011ApJ. 7251011Vanzella E., et al., 2010, ApJ, 725, 1011
. E Vanzella, 10.1088/0004-637X/751/1/70ApJ. 75170Vanzella E., et al., 2012, ApJ, 751, 70
. E Vanzella, 10.1051/0004-6361/201525651A&A. 576116Vanzella E., et al., 2015, A&A, 576, A116
. E Vanzella, 10.1093/mnrasl/sly023MNRAS. 47615Vanzella E., et al., 2018, MNRAS, 476, L15
. K Vasei, 10.3847/0004-637X/831/1/38ApJ. 83138Vasei K., et al., 2016, ApJ, 831, 38
. A Verhamme, I Orlitová, D Schaerer, M Hayes, 10.1051/0004-6361/201423978A&A. 5787Verhamme A., Orlitová I., Schaerer D., Hayes M., 2015, A&A, 578, A7
. A Verhamme, I Orlitová, D Schaerer, Y Izotov, G Worseck, T X Thuan, N Guseva, 10.1051/0004-6361/201629264A&A. 59713Verhamme A., Orlitová I., Schaerer D., Izotov Y., Worseck G., Thuan T. X., Guseva N., 2017, A&A, 597, A13
. J B Vielfaure, 10.1051/0004-6361/202140355A&A. 65383Vielfaure J. B., et al., 2021, A&A, 653, A83
. B Wang, T M Heckman, C Leitherer, R Alexandroff, S Borthakur, R A Overzier, 10.3847/1538-4357/ab418fApJ. 88557Wang B., Heckman T. M., Leitherer C., Alexandroff R., Borthakur S., Overzier R. A., 2019, ApJ, 885, 57
. J R Weaver, 10.48550/arXiv.2301.02671arXiv:2301.02671arXiv e-printsWeaver J. R., et al., 2023, arXiv e-prints, p. arXiv:2301.02671
. X Xu, 10.3847/1538-4357/ac7225ApJ. 933202Xu X., et al., 2022, ApJ, 933, 202
. E Zackrisson, A K Inoue, H Jensen, 10.1088/0004-637X/777/1/39ApJ. 77739Zackrisson E., Inoue A. K., Jensen H., 2013, ApJ, 777, 39
. S De Barros, 10.1051/0004-6361/201527046A&A. 58551de Barros S., et al., 2016, A&A, 585, A51
Table B1. Inferred ionizing absolute escape fractions (f_esc^abs) and production efficiencies (ξ_ion) for the VANDELS composites in this work.
Composite ID | Bin range | Median | N_gal | f_esc^abs (R16, %) | ξ_ion (R16, Hz/erg) | f_esc^abs (SMC, %) | ξ_ion (SMC, Hz/erg)
Notes. Column 1: composite identifier. Column 2: interquartile range for each magnitude bin (units are in the spanning titles). Column 3: median of the interquartile range. Column 4: number of objects included in the stacks. Columns 5 and 6: estimated escape fractions (in %) and production efficiencies (in Hz/erg) when using the R16 dust-attenuation law. Columns 7 and 8: estimated escape fractions (in %) and production efficiencies (in Hz/erg) when using the SMC dust-attenuation law.
Table B2. Summary chart containing the main spectral measurements and FiCUS SED fitting results for LCEs, non-LCEs, LAEs and non-LAEs composites.
| [
"https://github.com/cschreib/",
"https://github.com/asalda/FiCUS.git.MNRAS"
]
|
[
"The new interaction suggested by the anomalous 8 Be transition sets a rigorous constraint on the mass range of dark matter",
"The new interaction suggested by the anomalous 8 Be transition sets a rigorous constraint on the mass range of dark matter"
]
| [
"Lian-Bao Jia \nSchool of Science\nSouthwest University of Science and Technology\n621010MianyangChina\n",
"Xue-Qian Li \nSchool of Physics\nNankai University\n300071TianjinChina\n"
]
| [
"School of Science\nSouthwest University of Science and Technology\n621010MianyangChina",
"School of Physics\nNankai University\n300071TianjinChina"
]
| []
| WIMPs are considered one of the favored dark matter (DM) candidates, but as the upper bounds on the interactions between DM and standard model (SM) particles obtained by upgraded DM direct-detection facilities become ever lower, researchers have turned their attention to less massive DM candidates, i.e. light dark matter at the MeV scale. The recently measured anomalous transition in 8 Be suggests the existence of a vectorial boson which may mediate the interaction between DM and SM particles. Based on this scenario, we combine the relevant cosmological data to constrain the mass range of DM and find a model parameter space in which the requirements are satisfied: a range of 10.4 ≲ m_φ ≲ 16.7 MeV is demanded for scalar DM, and 13.6 ≲ m_V ≲ 16.7 MeV for vectorial DM. The possibility of directly detecting such light DM particles via DM-electron scattering is then briefly studied in this framework. | 10.1140/epjc/s10052-016-4561-3 | [
"https://arxiv.org/pdf/1608.05443v2.pdf"
]
| 119,191,582 | 1608.05443 | 6a1eba3f9840cb17a8cc9945a0f1a4f59f09a4e8 |
The new interaction suggested by the anomalous 8 Be transition sets a rigorous constraint on the mass range of dark matter
8 Dec 2016
Lian-Bao Jia
School of Science
Southwest University of Science and Technology
621010MianyangChina
Xue-Qian Li
School of Physics
Nankai University
300071TianjinChina
The new interaction suggested by the anomalous 8 Be transition sets a rigorous constraint on the mass range of dark matter
8 Dec 2016* Electronic address:
WIMPs are considered one of the favored dark matter (DM) candidates, but as the upper bounds on the interactions between DM and standard model (SM) particles obtained by upgraded DM direct-detection facilities become ever lower, researchers have turned their attention to less massive DM candidates, i.e. light dark matter at the MeV scale. The recently measured anomalous transition in 8 Be suggests the existence of a vectorial boson which may mediate the interaction between DM and SM particles. Based on this scenario, we combine the relevant cosmological data to constrain the mass range of DM and find a model parameter space in which the requirements are satisfied: a range of 10.4 ≲ m_φ ≲ 16.7 MeV is demanded for scalar DM, and 13.6 ≲ m_V ≲ 16.7 MeV for vectorial DM. The possibility of directly detecting such light DM particles via DM-electron scattering is then briefly studied in this framework.
I. INTRODUCTION
For the time being, we still lack solid knowledge of dark matter (DM). One of the preferred DM candidates is the weakly interacting massive particle (WIMP), with a mass at the GeV-TeV scale. Recent DM direct detection experiments [1][2][3][4][5] set stringent constraints on the cross section of DM-target nucleus scattering for GeV-TeV scale DM, and the upper bound of the detection cross section will approach the neutrino limit in the next decade(s). On one hand, the existence of DM is firmly supported by astronomical observations; on the other hand, DM particles have not been detected by any of these sophisticated experiments. One may therefore ask whether our conjecture about the potential mass range of DM is astray, allowing DM to evade present direct detections: could the DM particles be much less massive, in the sub-GeV range, e.g. at the MeV scale (see Refs. [6][7][8][9][10][11][12][13] for some earlier work)? In this scenario, the interactions of light DM particles impart only small recoil energies to the nucleus, which are not observable in available direct detection experiments. In this work, we focus on MeV-scale light DM.
The DM issue involves two aspects: one is the identity of DM, i.e. what DM is (or are), and the other is how DM particles interact among themselves and with SM particles. It is generally believed that the interactions related to the DM sector must be of a new type (or new types) beyond the standard model (BSM). In this work, for the first question we do not a priori assume the identity of DM, but let experimental data decide; for the second question, we look for a new BSM interaction which may offer an interpretation of the present observation. The recent 8 Be experiment has revealed, at 6.8σ, an anomalous transition between an excited state 8 Be * and the ground state 8 Be [14]. The authors [14,15] argued that this anomaly may be due to unknown nuclear reactions, but a more preferable possibility is that it is caused by the emission of a vectorial boson X in 8 Be * → 8 Be + X, which instantly decays into an e+e− pair. The new boson X may be the mediator we are looking for between DM and SM particles, and this possibility is investigated in this paper. A fitted value of the X mass is 16.70 ± 0.35(stat) ± 0.5(sys) MeV [14], and in this work we adopt the central mass m_X ≃ 16.7 MeV in calculations. The interactions of the vector boson X with quarks and leptons via a BSM scheme have been discussed in the literature [15][16][17]. In this work, the vector boson X discussed in Ref. [15] is of our concern.
For the scattering between possible scalar, vectorial, or fermionic DM and a target nucleus, the spin-independent interaction induced by exchanging the vector boson X is dominant (see e.g. Ref. [18]). The vector boson X couples to the electron and the u, d quarks, and X may also couple to the second and/or third generation SM charged leptons and up-type/down-type quarks, with equal couplings to fermions of the same type (see, e.g., Ref. [16] for more discussion). For thermally frozen-out DM with such couplings, DM masses down to 0.5 GeV have been excluded by the CRESST-II experiment [1]. Thus, X-mediated sub-GeV DM deserves more attention.
Here we focus on MeV-scale DM. The energy released by DM annihilation can modify the cosmic microwave background (CMB), and the recent CMB measurement by the Planck satellite [19] sets a stringent bound on the s-wave annihilation of MeV-scale DM [19,20]. For MeV DM with the vector-form interaction induced by X, the annihilation of a fermionic DM pair is s-wave dominant and is therefore inconsistent with the CMB observation. Thus, the possibility of DM being fermionic is disfavored. By contrast, p-wave annihilations of scalar and/or vector DM candidates at freeze out are tolerated by the CMB result. We therefore concentrate on the case of scalar and vector DM, and the corresponding model parameter space will be derived.
For a DM mass in the range of a few MeV to teens of MeV, big bang nucleosynthesis (BBN) and the effective number of relativistic neutrinos N_eff at recombination may be altered by the energy released from dark sector annihilations. The corresponding observational results will therefore be taken into account to set a lower bound on the DM mass.
Since the recoil of the target nucleus is small, DM-nucleus scattering is not a sensitive probe for DM in the MeV region; direct detection must therefore turn to DM-electron scattering, which can be employed for light DM hunting and has been investigated in Refs. [21][22][23]. In this work the search for DM via its scattering with electrons will be discussed for the model of concern. This work is organized as follows. After this introduction, we present the concrete forms of the interactions between SM and DM mediated by the new boson X, and estimate the DM p-wave annihilation rate. Next we take into account the constraints from BBN and the CMB to set the mass range of DM, and numerically evaluate the DM-X coupling for the DM mass range of concern. Then we analyze the possibility of detecting MeV DM via DM-electron scattering. The last section is devoted to a brief conclusion and discussion.
II. INTERACTIONS BETWEEN SM AND DM
Based on the model in which the new vector boson X mediates the interaction between SM particles and scalar/vectorial DM, we will analyze the relevant issues. The couplings of X with SM particles have been discussed in Ref. [15]. The effective X-DM coupling can then be set in terms of the DM annihilation cross section at DM thermal freeze out.
A. The couplings
We suppose that X mediates a BSM interaction in which the new charge of the DM-X coupling is e_D. The SM fermions are also equipped with a new charge coupling them to X, parameterized as eε_f (in units of e), where ε_f depends on the fermion flavor concerned. Let us first formulate the scattering amplitude between scalar DM and SM particles caused by the new interaction with X as the mediator. The new effective interaction takes the form

\mathcal{L}^i_S = -e_D X_\mu J^\mu_{\rm DM} + e_D^2 X_\mu X^\mu \phi^*\phi - e\varepsilon_f X_\mu J^\mu_{\rm SM} \, , \qquad (1)

where φ is the scalar DM field. J^μ_DM and J^μ_SM are the currents of the scalar DM and the SM fermions, respectively, with

J^\mu_{\rm DM} = i\left[\phi^*(\partial^\mu \phi) - (\partial^\mu \phi^*)\phi\right] \quad ({\rm scalar\ DM})\, , \qquad (2)

J^\mu_{\rm SM} = \sum_f \bar f \gamma^\mu f \quad ({\rm SM\ fermions})\, . \qquad (3)

To explain the 8 Be anomalous transition, the ε_f values of the first-generation fermions were derived and presented in Ref. [15] as

\varepsilon_u \approx \pm 3.7\times 10^{-3}\, , \quad \varepsilon_d \approx \mp 7.4\times 10^{-3}\, , \quad 2\times 10^{-4} \lesssim |\varepsilon_e| \lesssim 1.4\times 10^{-3}\, , \quad |\varepsilon_\nu \varepsilon_e| \lesssim 7\times 10^{-5}\, . \qquad (4)
Moreover, if the vector boson X couples to the muon with |ε µ | ≈ |ε e |, the discrepancy between theory and experiment in muon g − 2 can be moderated [15].
FIG. 1: The vertices of V V*X and V V*XX.
For the vectorial DM field V, the V-X vertices are shown in Fig. 1. The V V*X vertex is

-i e_D \left[ g_{\mu\nu}(k_2 - k_1)_\sigma + g_{\nu\sigma}(k_3 - k_2)_\mu + g_{\sigma\mu}(k_1 - k_3)_\nu \right] ,

and the V V*XX vertex is

i e_D^2 \left( g_{\mu\rho} g_{\nu\sigma} + g_{\mu\sigma} g_{\nu\rho} - 2 g_{\mu\nu} g_{\rho\sigma} \right) .
The couplings of X in the SM sector are the same as in the scalar DM case. For scalar (vectorial) DM, the annihilation φφ* → X → f f̄ (V V* → X → f f̄) is a p-wave process. When the scalar (vectorial) DM mass m_φ (m_V) is above the X boson mass m_X, the annihilation channel φφ* → XX (V V* → XX) opens up, as shown in Fig. 2. However, the analyses of Refs. [19,20] indicate that the CMB measurement sets a stringent constraint on the s-wave annihilation of MeV-scale DM. For the DM annihilation channels e+e− and 4e, the upper bounds from the CMB on the s-wave annihilation cross sections are as follows: for DM with a mass of 5 MeV, the cross sections must lie below about 2.7 × 10^{-30} and 4.3 × 10^{-30} cm³/s for e+e− and 4e, respectively; for DM with a mass of 500 MeV, they must lie below about 4.2 × 10^{-28} and 3.5 × 10^{-28} cm³/s for e+e− and 4e, respectively. For MeV-scale DM, these constraints are much below the annihilation cross section required at thermal freeze out, and some tuning would be needed if s-wave DM annihilation existed. Thus, for thermally frozen-out DM, the constraint m_φ (m_V) < m_X is mandatory in order to avoid s-wave annihilation via φφ* → XX (V V* → XX), i.e. this channel must be kinematically closed. In addition, as indicated by the 8 Be anomalous transition, the X boson predominantly decays into e+e−, which implies that it cannot decay directly into DM; otherwise its decays would be dominated by X → φφ* (V V*). Thus we must demand another constraint, m_φ (m_V) > m_X/2. Therefore, the allowed DM mass range is m_X/2 < m_φ (m_V) < m_X, and p-wave annihilation dominates at DM freeze out.

1. Scalar DM

Let us first consider the scalar DM. In the mass range m_X/2 < m_φ < m_X, the s-channel annihilation φφ* → X → f f̄ dominates at DM freeze out, as shown in Fig. 3 (a). In the rest frame of one initial DM particle, the scalar DM annihilation cross section can be written as

\sigma_{\rm ann} v_r = \frac{1}{2}\, \frac{e_D^2 e^2 \varepsilon_f^2}{s - 2m_\phi^2}\, \frac{\beta_f}{8\pi}\, \frac{(s - 4m_\phi^2)\left[s - (s - 4m_f^2)/3\right]}{(s - m_X^2)^2 + m_X^2 \Gamma_X^2}\, , \qquad (5)
where v_r is the relative velocity of the two DM particles, and the factor 1/2 is due to the requirement of a φφ* pair in the annihilation. s is the total invariant mass squared, Γ_X is the decay width of X, and m_f is the mass of the final-state fermions. The phase space factor β_f is

\beta_f = \sqrt{1 - \frac{4 m_f^2}{s}}\, . \qquad (6)
Parameterizing Eq. (5) in the form

\sigma_{\rm ann} v_r = a + b v_r^2 + O(v_r^4)\, , \qquad (7)

with s = 4m_\phi^2 + m_\phi^2 v_r^2 + O(v_r^4),
we can obtain the result
a = 0\, , \qquad b = \frac{e_D^2 e^2 \varepsilon_f^2\, \beta_f}{8\pi}\, \frac{m_\phi^2 - (m_\phi^2 - m_f^2)/3}{(4m_\phi^2 - m_X^2)^2 + m_X^2 \Gamma_X^2}\, . \qquad (8)
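The step from Eq. (5) to Eq. (8) can be checked symbolically. The short sketch below is illustrative only and not the authors' code; the symbol C is shorthand for the product e_D^2 e^2 ε_f^2, and the expansion is carried out with sympy.

```python
# Illustrative cross-check of the p-wave expansion: expand Eq. (5) in v_r and
# compare the v_r^2 coefficient with the b of Eq. (8). C stands for e_D^2 e^2 eps_f^2.
import sympy as sp

vr, mphi, mf, mX, GX, C, s = sp.symbols('v_r m_phi m_f m_X Gamma_X C s', positive=True)
beta_f = sp.sqrt(1 - 4*mf**2/s)                        # Eq. (6)
sigma_v = (sp.Rational(1, 2) * C / (s - 2*mphi**2) * beta_f / (8*sp.pi)
           * (s - 4*mphi**2) * (s - (s - 4*mf**2)/3)
           / ((s - mX**2)**2 + mX**2*GX**2))            # Eq. (5)
sigma_v = sigma_v.subs(s, 4*mphi**2 + mphi**2*vr**2)    # s = 4 m_phi^2 + m_phi^2 v_r^2
series = sp.series(sigma_v, vr, 0, 3).removeO()
print(sp.simplify(series / vr**2))                      # expected: the b coefficient of Eq. (8)
```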
With this parameterization, the thermally averaged annihilation cross section at temperature T is [24,25] ⟨σ_ann v_r⟩ ≈ 6b/x, with x = m_φ/T. At the DM thermal freeze-out temperature T_f, the parameter x_f = m_φ/T_f is [26,27]

x_f \simeq \ln\!\left[\frac{0.038\, c(c+2)\, g\, m_\phi\, m_{\rm Pl}\, (6b/x_f)}{\sqrt{g_* x_f}}\right] , \qquad (9)
where c is a parameter of O(1), and we take c = 1/2 for numerical computations. g is the number of internal degrees of freedom of the DM, and m_Pl = 1.22 × 10^19 GeV is the Planck mass. g_* is the total effective number of relativistic degrees of freedom at the temperature T_f, and we adopt the data given in Ref. [28]. The relic density of DM is [26,27]
\Omega_{\rm DM} h^2 \simeq \frac{1.07\times 10^{9}\, x_f}{\sqrt{g_*}\, m_{\rm Pl}({\rm GeV})\, (3b/x_f)}\, , \qquad (10)
where h is the Hubble parameter (in units of 100 km/(s·Mpc)).
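A minimal numerical sketch of this freeze-out chain, Eqs. (8)-(10), is given below. It is illustrative and not the authors' code: the values of ε_e, the product e_D^2 ε_e^2, g_*, and the 13 MeV benchmark mass are assumptions chosen only to demonstrate the workflow, and natural units (GeV) are used throughout.

```python
# Illustrative freeze-out sketch for scalar DM (Eqs. (8)-(10)); all inputs below
# are assumed benchmark values, not results quoted in the text.
import math

m_X    = 0.0167                  # X boson mass [GeV]
m_e    = 0.000511                # electron mass [GeV]
m_Pl   = 1.22e19                 # Planck mass [GeV]
e2     = 4.0 * math.pi / 137.036 # e^2 = 4 pi alpha
g_dm   = 2                       # degrees of freedom of the scalar DM
g_star = 10.75                   # assumed g_* around T_f (illustrative)
c      = 0.5                     # the O(1) parameter, c = 1/2 as in the text

def gamma_X(eps_e):
    """Eq. (13): width of X -> e+ e-."""
    return (e2 * eps_e**2 * (m_X**2 + 2 * m_e**2) / (12 * math.pi * m_X)
            * math.sqrt(1 - 4 * m_e**2 / m_X**2))

def b_scalar(m_phi, eD2eps2, Gamma):
    """Eq. (8): p-wave coefficient b for phi phi* -> X -> e+ e- (a = 0)."""
    beta_f = math.sqrt(1 - m_e**2 / m_phi**2)          # beta_f at s = 4 m_phi^2
    num = m_phi**2 - (m_phi**2 - m_e**2) / 3.0
    den = (4 * m_phi**2 - m_X**2)**2 + m_X**2 * Gamma**2
    return eD2eps2 * e2 * beta_f / (8 * math.pi) * num / den

def solve_xf(m_phi, b, x0=20.0, n_iter=50):
    """Iterate Eq. (9) for x_f = m_phi / T_f."""
    x = x0
    for _ in range(n_iter):
        arg = (0.038 * c * (c + 2) * g_dm * m_phi * m_Pl * (6 * b / x)
               / math.sqrt(g_star * x))
        x = math.log(arg)
    return x

def omega_h2(m_phi, b, x_f):
    """Eq. (10): relic abundance of a pure p-wave annihilator."""
    return 1.07e9 * x_f / (math.sqrt(g_star) * m_Pl * (3 * b / x_f))

eps_e, eD2eps2 = 1.0e-3, 1.0e-8   # assumed values, only for illustration
m_phi = 0.013                     # 13 MeV benchmark inside the range of Eq. (19)
b = b_scalar(m_phi, eD2eps2, gamma_X(eps_e))
x_f = solve_xf(m_phi, b)
print(x_f, omega_h2(m_phi, b, x_f))
```

The coupling used here is only a placeholder; Sec. III B below describes how e_D^2 ε_e^2 is actually fixed by the measured relic density.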
2. Vectorial DM
Now consider the vectorial DM. In the mass range m X /2 < m V < m X , the annihilation V V * → ff is overwhelming at DM freeze out, as shown in Fig. 3 (b). In one initial particle rest frame, the vectorial DM annihilation cross section is
\sigma_{\rm ann} v_r = \frac{1}{2}\, \frac{e_D^2 e^2 \varepsilon_f^2}{s - 2m_V^2}\, \frac{\beta_f}{144\pi}\, \frac{(s - 4m_V^2)(s + 2m_f^2)}{(s - m_X^2)^2 + m_X^2 \Gamma_X^2}\left[4 + \frac{7s}{m_V^2} + \frac{s^2}{6m_V^4}\right] . \qquad (11)
Again parameterizing Eq. (11) in the form σ_ann v_r = a + b v_r² + O(v_r⁴), with s = 4m_V² + m_V² v_r² + O(v_r⁴), we have

a = 0\, , \qquad b = \frac{e_D^2 e^2 \varepsilon_f^2}{108\pi}\, \frac{13\, \beta_f\, (2m_V^2 + m_f^2)}{(4m_V^2 - m_X^2)^2 + m_X^2 \Gamma_X^2}\, . \qquad (12)
The thermally averaged annihilation cross section and the relic density of vectorial DM are similar to those derived for scalar DM, with the corresponding input parameters substituted.
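For completeness, a sketch of the corresponding p-wave coefficient of Eq. (12) is shown below; it can be substituted for the scalar coefficient in the x_f and relic-density routines sketched in the previous subsection, with the same caveats about assumed inputs.

```python
# Illustrative companion to the scalar-DM sketch: Eq. (12) for vectorial DM.
import math

def b_vector(m_V, eD2eps2, Gamma, m_X=0.0167, m_f=0.000511,
             e2=4.0 * math.pi / 137.036):
    """Eq. (12): p-wave coefficient b for V V* -> X -> f fbar (a = 0)."""
    beta_f = math.sqrt(1 - m_f**2 / m_V**2)            # beta_f at s = 4 m_V^2
    num = 13.0 * beta_f * (2 * m_V**2 + m_f**2)
    den = (4 * m_V**2 - m_X**2)**2 + m_X**2 * Gamma**2
    return eD2eps2 * e2 / (108.0 * math.pi) * num / den
```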
III. ANALYSIS ON X-DM COUPLING
The energy released by thermal MeV DM annihilation in the early universe can alter the BBN results and the effective number of relativistic neutrinos N_eff at recombination. Even though the effects are not dramatic, they can still be employed to constrain the lower bound on the DM mass. Once the DM mass range is set, we will calculate the X-DM coupling by means of the thermal freeze-out annihilation cross section of DM.
A. DM mass with constraints of N_eff
In the case m_X/2 < m_φ (m_V) < m_X, the main annihilation product of DM is e+e−. The DM annihilation can heat the electron-photon plasma before freeze out in the early universe. If this happens after the neutrinos have decoupled from the hot bath, the ratio of the neutrino temperature to the photon temperature is lowered, which causes a reduction of the effective number of neutrino degrees of freedom [12,29]. The abundances of light elements stemming from primordial nucleosynthesis and the CMB power spectra at the recombination epoch would also be affected. For electron neutrinos, a typical decoupling temperature is T_d ∼ 2.3 MeV [30]. The value of x_f for thermally frozen-out DM is x_f ∼ 20. Thus, for the DM of concern, DM freeze out occurs after neutrino decoupling, so the effects of DM annihilation need to be taken into account. For the new boson X, the decay width is
\Gamma_X \simeq \frac{e^2 \varepsilon_e^2 (m_X^2 + 2m_e^2)}{12\pi m_X}\sqrt{1 - \frac{4m_e^2}{m_X^2}}\, . \qquad (13)
With the mass m X ≫ T d and X's lifetime much less than 1 second, the contribution from X's entropy to the BBN is negligible.
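This statement is easy to check numerically; the illustrative sketch below evaluates Eq. (13) at the two boundary values of |ε_e| quoted in Eq. (4) and converts the width into a lifetime.

```python
# Illustrative check that the X lifetime is far below 1 s for the allowed eps_e.
import math

m_X, m_e = 0.0167, 0.000511          # GeV
e2   = 4.0 * math.pi / 137.036       # e^2 = 4 pi alpha
hbar = 6.582e-25                     # GeV * s

for eps_e in (2e-4, 1.4e-3):         # boundary values of |eps_e| from Eq. (4)
    Gamma = (e2 * eps_e**2 * (m_X**2 + 2 * m_e**2) / (12 * math.pi * m_X)
             * math.sqrt(1 - 4 * m_e**2 / m_X**2))     # Eq. (13)
    print(f"eps_e = {eps_e:.1e}: Gamma_X = {Gamma:.2e} GeV, tau = {hbar / Gamma:.2e} s")
```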
Here we focus on the constraints from the primordial abundances of the light elements 4He and deuterium, denoted by Y_p and y_DP, respectively. The abundances of 4He and deuterium are related to the baryon density ω_b ≡ Ω_b h² and the effective number of relativistic neutrinos N_eff (or, equivalently, the difference ΔN_eff ≡ N_eff − 3.046, where N_eff = 3.046 is the standard cosmological prediction [31,32]). The abundances predicted by BBN are parameterized as Y_p(ω_b, ΔN_eff) and y_DP(ω_b, ΔN_eff), and the corresponding Taylor-expanded forms can be obtained with the PArthENoPE code [33]. If the value ω_b = 0.02226^{+0.00040}_{−0.00039} is adopted with the bounds of Planck TT+lowP+BAO [19], the value of N_eff is also determined by the constraints from the 4He and deuterium abundances. The range of N_eff can be derived with the Planck data, and it is [19]

N_{\rm eff} = 3.14 \quad ({\rm D + Planck\ TT + lowP + BAO})\, , \qquad (14)
where the helium and deuterium abundances given by Aver et al. [34] and Cooke et al. [35] are taken. The updated Planck-only constraint on N_eff is [19]

N_{\rm eff} = 3.15 \pm 0.23 \quad ({\rm Planck\ TT + lowP + BAO})\, . \qquad (15)
Considering Eqs. (14) and (15), a lower bound N_eff ≳ 2.9 is adopted in the calculations. In the case that DM mainly couples to the electron-photon plasma and DM particles freeze out later than neutrino decoupling, the effective number N_eff can be written as [36,37]

N_{\rm eff} = 3.046 \left[\frac{I(0)}{I(T_d)}\right]^{4/3}\, , \qquad (16)

where I(T_γ) is given by

I(T_\gamma) = \frac{1}{T_\gamma^4}\left(\rho_{e^+e^-} + \rho_\gamma + \rho_{\rm DM} + p_{e^+e^-} + p_\gamma + p_{\rm DM}\right) = \frac{11}{45}\pi^2 + \frac{g}{2\pi^2}\int_{0}^{\infty} dy\, \frac{y^2}{e^{\xi}\pm 1}\left(\xi + \frac{y^2}{3\xi}\right)\, , \qquad (17)

and

\xi = \sqrt{y^2 + \left(m_{\rm DM}/T_\gamma\right)^2}\, . \qquad (18)
Here T_γ is the photon temperature, and the integration variable is y = p_DM/T_γ. The plus/minus sign is for fermionic/bosonic DM particles, respectively. For the bosonic DM of concern, the degrees of freedom g_B = 2 and g_B = 6 and the masses m_DM = m_φ and m_V correspond to scalar and vectorial DM, respectively. The effective number N_eff as a function of m_DM/T_d is shown in Fig. 4. Taking the lower bound N_eff ≳ 2.9, we obtain m_DM/T_d ≳ 5.2 and 6.8 for scalar and vectorial DM, respectively. As neutrino decoupling is not a sudden process (for more details, see e.g. Refs. [30][31][32][38]), here we take T_d ≳ 2 MeV as a lower bound. Thus, the mass range of DM is derived,
10.4 \lesssim m_\phi \lesssim 16.7~{\rm MeV} \quad ({\rm scalar\ DM})\, , \qquad 13.6 \lesssim m_V \lesssim 16.7~{\rm MeV} \quad ({\rm vectorial\ DM})\, . \qquad (19)
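A sketch of how Eqs. (16)-(18) can be evaluated numerically is given below. It is illustrative only (a simple scan with scipy quadrature and a hard-coded integration cutoff), and the precise crossing points depend on such numerical details; the bounds quoted above are those read off the authors' Fig. 4.

```python
# Illustrative evaluation of Eqs. (16)-(18): scan m_DM/T_d and report where
# N_eff reaches the adopted lower bound of 2.9 (bosonic DM, minus sign in Eq. (17)).
import math
from scipy.integrate import quad

def I_of(r, g):
    """Eq. (17) with r = m_DM / T_gamma."""
    def integrand(y):
        xi = math.sqrt(y**2 + r**2)                    # Eq. (18)
        return y**2 / (math.exp(xi) - 1.0) * (xi + y**2 / (3.0 * xi))
    val, _ = quad(integrand, 0.0, 50.0)
    return 11.0 * math.pi**2 / 45.0 + g / (2.0 * math.pi**2) * val

def n_eff(r, g):
    """Eq. (16); I(0) corresponds to T_gamma -> 0, where the DM term vanishes."""
    I0 = 11.0 * math.pi**2 / 45.0
    return 3.046 * (I0 / I_of(r, g))**(4.0 / 3.0)

for g, label in ((2, "scalar"), (6, "vector")):
    r = 1.0
    while n_eff(r, g) < 2.9:                           # DM still heats the photons too much
        r += 0.1
    print(f"{label} DM: N_eff > 2.9 requires m_DM/T_d >~ {r:.1f}")
```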
B. Numerical result for the X-DM coupling
With the DM mass range set, we turn to the X-DM coupling. The DM relic density is 0.1197 ± 0.0042 [19]. Using the DM thermally averaged annihilation cross section ⟨σ_ann v_r⟩ ≈ 6b/x_f at T_f, the numerical results for b are shown in Fig. 5, with the solid and dashed curves corresponding to scalar and vectorial DM, respectively. Once the value of b defined in Eq. (7) is obtained, the X-DM coupling is also determined. The numerical results for e_D² ε_e² are depicted in Fig. 6. Considering the value of ε_e given by Eq. (4), we obtain e_D²/4π < 1, and thus the X-DM coupling is sufficiently small that perturbation theory applies.
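One possible way to carry out this inversion numerically is sketched below. It is illustrative only and reuses the gamma_X, b_scalar, solve_xf and omega_h2 helpers from the scalar-DM sketch in Sec. II; the value of ε_e and the bisection bracket are assumptions.

```python
# Illustrative: fix e_D^2 eps_e^2 from the observed relic density by log-space
# bisection; requires the helper functions defined in the Sec. II sketch.
import math

def coupling_from_relic(m_phi, eps_e=1.0e-3, target=0.1197,
                        lo=1e-12, hi=1e-4, tol=1e-3):
    """Larger coupling -> larger <sigma v> -> smaller relic abundance."""
    Gamma = gamma_X(eps_e)
    mid = math.sqrt(lo * hi)
    for _ in range(200):
        mid = math.sqrt(lo * hi)              # geometric mean of the bracket
        b = b_scalar(m_phi, mid, Gamma)
        relic = omega_h2(m_phi, b, solve_xf(m_phi, b))
        if abs(relic - target) / target < tol:
            break
        if relic > target:                    # under-annihilating: raise the coupling
            lo = mid
        else:
            hi = mid
    return mid

print(coupling_from_relic(0.013))             # assumed 13 MeV scalar DM benchmark
```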
IV. DM-ELECTRON SCATTERING
Now let us turn to the possibility of detecting MeV-scale light DM with terrestrial detectors.
For light DM particles, the recoil of the target nucleus is too small to be observed, so one cannot detect the arrival of DM via scattering between MeV DM and the target nucleus. Instead, DM-electron scattering can be employed for MeV DM hunting; it has been investigated in Refs. [21][22][23]. The target atomic electron is in a bound state, and the typical momentum transfer q is of order αm_e, i.e. a few keV, which may cause excitation/ionization of the electron in inelastic scattering processes. In this work, we study the signals of individual electrons induced by DM-electron scattering. Here, we take the form of the DM-electron scattering cross section given by Ref. [39]; for scalar DM it is
\sigma_e = \frac{\mu_{\phi e}^2}{16\pi m_\phi^2 m_e^2}\, |M_{\phi e}(q)|^2\Big|_{q^2=\alpha^2 m_e^2} \times |F_{\rm DM}(q)|^2 \simeq \frac{4\alpha\, e_D^2 \varepsilon_e^2\, \mu_{\phi e}^2}{m_X^4}\, , \qquad (20)
with µ φe being the φ-electron reduced mass, and F DM (q) ≃ 1 for m X ≫ αm e .
For vectorial DM, the DM-electron scattering cross section is
\sigma_e = \frac{\mu_{V e}^2}{16\pi m_V^2 m_e^2}\, |M_{V e}(q)|^2\Big|_{q^2=\alpha^2 m_e^2} \times |F_{\rm DM}(q)|^2 \simeq \frac{4\alpha\, e_D^2 \varepsilon_e^2\, \mu_{V e}^2}{m_X^4}\, , \qquad (21)
with μ_Ve being the V-electron reduced mass. Once the value of e_D² ε_e² is fixed, the DM-electron scattering cross section σ_e can be obtained. The numerical result for σ_e is shown in Fig. 7, where it is noted that the scattering cross section is independent of the momentum transfer (F_DM(q) = 1). The upper solid and upper dashed curves are for scalar and vectorial DM, respectively, and the dot-dashed curve is the exclusion bound set by the XENON10 data [40]. It can be seen that, considering the constraint from XENON10, there exist parameter spaces for scalar and vectorial DM that satisfy the constraints. Now we give a brief discussion of the background in DM-electron scattering. One irreducible background is from neutrino-electron scattering, which sets the ultimate limit for sub-GeV DM direct detections. Fortunately, the annual modulation of the DM signal due to the motion of the earth can be employed to reduce the neutrino background [39,41,42]. The teens-MeV DM of concern could be probed via inelastic DM-electron scattering processes, e.g. the individual electron signals from future noble gas and semiconductor targets. For Ar, Xe [39] and Ge, Si [43] with a 1 kg·year exposure, the exclusion reach at 95% confidence level via single-electron detection is also shown in Fig. 7. Further explorations of DM-electron scattering are needed, both in theory and experiment.
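For orientation, the short sketch below evaluates the right-hand side of Eqs. (20)/(21) for an assumed coupling value (e.g. one returned by a relic-density fit like the sketch above); it is illustrative only, and Fig. 7 should be consulted for the actual curves and bounds.

```python
# Illustrative evaluation of Eqs. (20)/(21): sigma_e for F_DM(q) = 1.
import math

alpha = 1.0 / 137.036
m_e, m_X = 0.000511, 0.0167          # GeV
GEV2_TO_CM2 = 3.894e-28              # (hbar c)^2 in cm^2 GeV^2

def sigma_e(m_dm, eD2eps2):
    """sigma_e ~ 4 alpha e_D^2 eps_e^2 mu^2 / m_X^4, mu = DM-electron reduced mass."""
    mu = m_dm * m_e / (m_dm + m_e)
    return 4.0 * alpha * eD2eps2 * mu**2 / m_X**4 * GEV2_TO_CM2

print(sigma_e(0.013, 3e-9))          # assumed 13 MeV DM and an assumed coupling [cm^2]
```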
V. CONCLUSION AND DISCUSSION
MeV-scale scalar and vectorial DM has been studied in this work, with the new boson X indicated by the 8 Be anomalous transition acting as the mediator. Considering the constraints from DM direct detection and the CMB observation, we find that for the case m_X/2 < m_φ (m_V) < m_X, the p-wave dominated annihilation of DM at freeze out does not conflict with the data observed so far. The primordial abundances of light elements and the effective number of relativistic neutrinos N_eff at recombination are sensitive to DM with a mass of a few MeV to teens of MeV, so the corresponding observational results have been employed to set a lower bound on the DM mass. Taking the combined lower bounds N_eff ≳ 2.9 and the neutrino decoupling temperature T_d ≳ 2 MeV, we derive the DM mass ranges 10.4 ≲ m_φ ≲ 16.7 MeV for scalar DM and 13.6 ≲ m_V ≲ 16.7 MeV for vectorial DM. For the teens-MeV scalar and vectorial DM of concern, the numerical result for the DM-X coupling is derived in terms of the DM thermally averaged annihilation cross section. Once this coupling is set, the strength of the interaction between DM and SM particles is determined.
DM-electron scattering is employed for teens-MeV DM hunting. We investigate the signal of individual electrons in DM-electron scattering, and the scattering cross section σ_e is calculated for the DM mass range of concern. We find that, considering the constraint from XENON10, there is still parameter space left for the teens-MeV scalar and vectorial DM to be observed. Besides individual electrons, signals of individual photons, individual ions, and heat/phonons can also be employed to explore MeV DM-electron scattering (see, e.g., Refs. [39,44] for more), even though the ion signal is probably too weak for detection. The teens-MeV DM of concern could be probed by future noble gas and semiconductor targets via DM-electron scattering. In fact, the wave function of the bound-state electron in a given target material needs to be taken into account to guarantee predictive power. It is noted that the detection possibilities and efficiencies for DM are target dependent.
As discussed in Ref. [45], the new boson X may be detectable at e+e− colliders such as BESIII and BaBar. The new boson X may also offer an interpretation of the NuTeV anomaly [46]. For the teens-MeV scalar and vectorial DM of concern, further investigation in both theory and experiment is needed. We look forward to the exploration of X-portal DM in the future.
FIG. 2: The annihilation φφ* → XX. The case of V V* → XX is similar.

FIG. 3: The annihilations of φφ* → f f̄ (left) and V V* → f f̄ (right).

FIG. 4: The effective number N_eff as a function of m_DM/T_d. The solid and dashed curves are for the scalar and vectorial DM of concern, respectively. The dotted curve is the lower bound N_eff = 2.9.

FIG. 5: The parameter b as a function of the DM mass. The solid and dashed curves are for the scalar and vectorial DM of concern, respectively.

FIG. 6: The values of e_D² ε_e² as a function of the DM mass. The solid and dashed curves are for scalar and vectorial DM, respectively.

FIG. 7: The DM-electron scattering cross section σ_e as a function of the DM mass with the parameter F_DM(q) = 1. The upper solid and upper dashed curves are for scalar and vectorial DM, respectively. The dot-dashed curve is the exclusion bound set by the XENON10 data [40]. The lower solid and lower dashed curves are the 95% confidence level exclusion reach of single-electron detection set by a 1 kg·year exposure of Ar and Xe [39], respectively. The upper and lower square curves are the 95% confidence level exclusion reach of single-electron detection set by a 1 kg·year exposure of Ge and Si [43], respectively.
[1] G. Angloher et al. [CRESST Collaboration], Eur. Phys. J. C 76 (2016) no.1, 25 [arXiv:1509.01515 [astro-ph.CO]].
[2] R. Agnese et al. [SuperCDMS Collaboration], Phys. Rev. Lett. 116 (2016) no.7, 071301 [arXiv:1509.02448 [astro-ph.CO]].
[3] D. S. Akerib et al. [LUX Collaboration], Phys. Rev. Lett. 116 (2016) no.16, 161301 [arXiv:1512.03506 [astro-ph.CO]].
[4] E. Aprile et al. [XENON Collaboration], JCAP 1604, no. 04, 027 (2016) [arXiv:1512.07501 [physics.ins-det]].
[5] A. Tan et al. [PandaX-II Collaboration], Phys. Rev. Lett. 117 (2016) no.12, 121303 [arXiv:1607.07400 [hep-ex]].
[6] P. Fayet, Nucl. Phys. B 187 (1981) 184.
[7] C. Boehm, T. A. Ensslin and J. Silk, J. Phys. G 30 (2004) 279 [astro-ph/0208458].
[8] C. Boehm, D. Hooper, J. Silk, M. Casse and J. Paul, Phys. Rev. Lett. 92 (2004) 101301 [astro-ph/0309686].
[9] D. Hooper, F. Ferrer, C. Boehm, J. Silk, J. Paul, N. W. Evans and M. Casse, Phys. Rev. Lett. 93 (2004) 161302 [astro-ph/0311150].
[10] C. Boehm and P. Fayet, Nucl. Phys. B 683 (2004) 219 [hep-ph/0305261].
[11] P. Fayet, Phys. Rev. D 70 (2004) 023514 [hep-ph/0403226].
[12] P. D. Serpico and G. G. Raffelt, Phys. Rev. D 70 (2004) 043526 [astro-ph/0403417].
[13] P. Fayet, Phys. Rev. D 74 (2006) 054034 [hep-ph/0607318].
[14] A. J. Krasznahorkay et al., Phys. Rev. Lett. 116 (2016) no.4, 042501 [arXiv:1504.01527 [nucl-ex]].
[15] J. L. Feng, B. Fornal, I. Galon, S. Gardner, J. Smolinsky, T. M. P. Tait and P. Tanedo, Phys. Rev. Lett. 117 (2016) no.7, 071803 [arXiv:1604.07411 [hep-ph]].
[16] P. H. Gu and X. G. He, arXiv:1606.05171 [hep-ph].
[17] J. L. Feng, B. Fornal, I. Galon, S. Gardner, J. Smolinsky, T. M. P. Tait and P. Tanedo, arXiv:1608.03591 [hep-ph].
[18] M. Freytsis and Z. Ligeti, Phys. Rev. D 83 (2011) 115009 [arXiv:1012.5317 [hep-ph]].
[19] P. A. R. Ade et al. [Planck Collaboration], arXiv:1502.01589 [astro-ph.CO].
[20] T. R. Slatyer, Phys. Rev. D 93 (2016) no.2, 023527 [arXiv:1506.03811 [hep-ph]].
[21] R. Bernabei et al., Phys. Rev. D 77 (2008) 023506 [arXiv:0712.0562 [astro-ph]].
[22] A. Dedes, I. Giomataris, K. Suxho and J. D. Vergados, Nucl. Phys. B 826 (2010) 148 [arXiv:0907.0758 [hep-ph]].
[23] J. Kopp, V. Niro, T. Schwetz and J. Zupan, Phys. Rev. D 80 (2009) 083502 [arXiv:0907.3159 [hep-ph]].
[24] M. Srednicki, R. Watkins and K. A. Olive, Nucl. Phys. B 310 (1988) 693.
[25] P. Gondolo and G. Gelmini, Nucl. Phys. B 360 (1991) 145.
[26] E. W. Kolb and M. S. Turner, Front. Phys. 69 (1990) 1.
[27] K. Griest and D. Seckel, Phys. Rev. D 43 (1991) 3191.
[28] M. Drees, F. Hajkarim and E. R. Schmitz, JCAP 1506 (2015) no.06, 025 [arXiv:1503.03513 [hep-ph]].
[29] E. W. Kolb, M. S. Turner and T. P. Walker, Phys. Rev. D 34 (1986) 2197.
[30] K. Enqvist, K. Kainulainen and V. Semikoz, Nucl. Phys. B 374 (1992) 392.
[31] A. D. Dolgov, Phys. Rept. 370 (2002) 333 [hep-ph/0202122].
[32] G. Mangano, G. Miele, S. Pastor, T. Pinto, O. Pisanti and P. D. Serpico, Nucl. Phys. B 729 (2005) 221 [hep-ph/0506164].
[33] O. Pisanti, A. Cirillo, S. Esposito, F. Iocco, G. Mangano, G. Miele and P. D. Serpico, Comput. Phys. Commun. 178 (2008) 956 [arXiv:0705.0290 [astro-ph]].
[34] E. Aver, K. A. Olive, R. L. Porter and E. D. Skillman, JCAP 1311 (2013) 017 [arXiv:1309.0047 [astro-ph.CO]].
[35] R. Cooke, M. Pettini, R. A. Jorgenson, M. T. Murphy and C. C. Steidel, Astrophys. J. 781 (2014) no.1, 31 [arXiv:1308.3240 [astro-ph.CO]].
[36] C. M. Ho and R. J. Scherrer, Phys. Rev. D 87 (2013) no.2, 023505 [arXiv:1208.4347 [astro-ph.CO]].
[37] C. M. Ho and R. J. Scherrer, Phys. Rev. D 87 (2013) no.6, 065016 [arXiv:1212.1689 [hep-ph]].
[38] S. Hannestad, Phys. Rev. D 65 (2002) 083006 [astro-ph/0111423].
[39] R. Essig, J. Mardon and T. Volansky, Phys. Rev. D 85 (2012) 076007 [arXiv:1108.5383 [hep-ph]].
[40] R. Essig, A. Manalaysay, J. Mardon, P. Sorensen and T. Volansky, Phys. Rev. Lett. 109 (2012) 021301 [arXiv:1206.2644 [astro-ph.CO]].
[41] S. K. Lee, M. Lisanti, S. Mishra-Sharma and B. R. Safdi, Phys. Rev. D 92 (2015) no.8, 083517 [arXiv:1508.07361 [hep-ph]].
[42] A. K. Drukier, K. Freese and D. N. Spergel, Phys. Rev. D 33 (1986) 3495.
[43] R. Essig, M. Fernandez-Serra, J. Mardon, A. Soto, T. Volansky and T. T. Yu, JHEP 1605 (2016) 046 [arXiv:1509.01598 [hep-ph]].
[44] S. Derenzo, R. Essig, A. Massari, A. Soto and T. T. Yu, arXiv:1607.01009 [hep-ph].
[45] L. B. Chen, Y. Liang and C. F. Qiao, arXiv:1607.03970 [hep-ph].
[46] Y. Liang, L. B. Chen and C. F. Qiao, arXiv:1607.08309 [hep-ph].
| []
|
[
"Short proofs of coloring theorems on planar graphs",
"Short proofs of coloring theorems on planar graphs"
]
| [
"Oleg V Borodin \nSobolev Institute of Mathematics\nNovosibirsk State University\n630090NovosibirskRussia\n\nUniversity of Illinois at Urbana-Champaign\n61801UrbanaILUSA\n\nUniversity of Illinois at Urbana-Champaign\n61801UrbanaILUSA\n",
"Alexandr V Kostochka [email protected]. ",
"Bernard Lidický [email protected] ",
"Matthew Yancey [email protected]. "
]
| [
"Sobolev Institute of Mathematics\nNovosibirsk State University\n630090NovosibirskRussia",
"University of Illinois at Urbana-Champaign\n61801UrbanaILUSA",
"University of Illinois at Urbana-Champaign\n61801UrbanaILUSA"
]
| []
| A recent lower bound on the number of edges in a k-critical n-vertex graph by Kostochka and Yancey yields a half-page proof of the celebrated Grötzsch Theorem that every planar triangle-free graph is 3-colorable. In this paper we use the same bound to give short proofs of other known theorems on 3-coloring of planar graphs, among which is the Grünbaum-Aksenov Theorem that every planar graph with at most three triangles is 3-colorable. We also prove the new result that every graph obtained from a triangle-free planar graph by adding a vertex of degree at most four is 3-colorable. | 10.1016/j.ejc.2013.05.002 | [
"https://arxiv.org/pdf/1211.3981v1.pdf"
]
| 1,926,801 | 1211.3981 | 8abddfff3fe865219ac4aed311494f276547fa09 |
Short proofs of coloring theorems on planar graphs
November 19, 2012
Oleg V Borodin
Sobolev Institute of Mathematics
Novosibirsk State University
630090NovosibirskRussia
University of Illinois at Urbana-Champaign
61801UrbanaILUSA
University of Illinois at Urbana-Champaign
61801UrbanaILUSA
Alexandr V Kostochka [email protected].
Bernard Lidický [email protected]
Matthew Yancey [email protected].
Short proofs of coloring theorems on planar graphs
November 19, 2012. Research of this author is supported in part by grants 12-01-00448 and 12-01-00631 of the Russian Foundation for Basic Research. † University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA and Sobolev Institute of Mathematics, Novosibirsk 630090, Russia. Research of this author is partially supported by the Arnold O. Beckman Research Award of the University of Illinois at Urbana-Champaign and from National Science Foundation grant DMS 08-38434 "EMSW21-MCTP: Research Experience for Graduate Students."
A recent lower bound on the number of edges in a k-critical n-vertex graph by Kostochka and Yancey yields a half-page proof of the celebrated Grötzsch Theorem that every planar triangle-free graph is 3-colorable. In this paper we use the same bound to give short proofs of other known theorems on 3-coloring of planar graphs, among which is the Grünbaum-Aksenov Theorem that every planar graph with at most three triangles is 3-colorable. We also prove the new result that every graph obtained from a triangle-free planar graph by adding a vertex of degree at most four is 3-colorable.
Introduction
Graphs considered in this paper are simple, i.e., without loops or parallel edges. For a graph G, the set of its vertices is denoted by V (G) and the set of its edges by E(G).
An embedding σ of a graph G = (V, E) in a surface Σ is an injective mapping of V to a point set P in Σ and E to non-self-intersecting curves in Σ such that (a) for all v ∈ V and e ∈ E, σ(v) is never an interior point of σ(e), and σ(v) is an endpoint of σ(e) if and only if v is a vertex of e, and (b) for all e, h ∈ E, σ(h) and σ(e) can intersect only in vertices of P . A graph is planar if it has an embedding in the plane. A graph with its embedding in the (projective) plane is a (projective) plane graph. A cycle in a graph embedded in Σ is contractible if it splits Σ into two surfaces where one of them is homeomorphic to a disk.
A (proper) coloring ϕ of a graph G is a mapping from V (G) to a set of colors C such that ϕ(u) ≠ ϕ(v) whenever uv ∈ E(G). A graph G is k-colorable if there exists a coloring of G using at most k colors. A graph G is k-critical if G is not (k − 1)-colorable but every proper subgraph of G is (k − 1)-colorable. By definition, if a graph G is not (k − 1)-colorable then it contains a k-critical subgraph.
Dirac [12] asked to determine the minimum number of edges in a k-critical graph. Ore conjectured [22] that an upper bound obtained from Hajós' construction is tight. More details about Ore's conjecture can be found in [18, Problem 5.3] and in [20]. Recently, Kostochka and Yancey [20] confirmed Ore's conjecture for k = 4 and showed that the conjecture is tight in infinitely many cases for every k ≥ 5. In [19] they gave a 2.5-page proof of the case k = 4:
Theorem 1 ([19]). If G is a 4-critical n-vertex graph then |E(G)| ≥ (5n − 2)/3.
Theorem 1 yields a half-page proof [19] of the celebrated Grötzsch Theorem [14] that every planar triangle-free graph is 3-colorable. This paper presents short proofs of some other theorems on 3-coloring of graphs close to planar. Most of these results are generalizations of Grötzsch Theorem.
Examples of such generalizations are results of Aksenov [2] and Jensen and Thomassen [17].
Theorem 2 ( [2,17]). Let G be a triangle-free planar graph and H be a graph such that G = H − h for some edge h of H. Then H is 3-colorable.
Theorem 3 ( [17]). Let G be a triangle-free planar graph and H be a graph such that G = H − v for some vertex v of degree 3. Then H is 3-colorable.
We show an alternative proof of Theorem 2 and give a strengthening of Theorem 3.
Theorem 4. Let G be a triangle-free planar graph and H be a graph such that G = H − v for some vertex v of degree 4. Then H is 3-colorable.
Theorems 2 and 4 yield a short proof of the following extension theorem that was used by Grötzsch [14].
Theorem 5. Let G be a triangle-free planar graph and F be a face of G of length at most 5. Then each 3-coloring of F can be extended to a 3-coloring of G.
An alternative statement of Theorem 2 is that each coloring of two vertices of a triangle-free planar graph G by two different colors can be extended to a 3-coloring of G. Aksenov et al. [3] extended Theorem 2 by showing that each proper coloring of each induced subgraph on two vertices of G extends to a 3-coloring of G. Theorem 6 ([3]). Let G be a triangle-free planar graph. Then each coloring of two non-adjacent vertices can be extended to a 3-coloring of G.
We show a short proof of Theorem 6. Another possibility to strengthen Grötzsch's Theorem is to allow at most three triangles. Theorem 7 ([1,4,15]). Let G be a planar graph containing at most three triangles. Then G is 3-colorable.
The original proof by Grünbaum [15] was incorrect and a correct proof was provided by Aksenov [1]. A simpler proof was given by Borodin [4], but our proof is significantly simpler.
Youngs [30] constructed triangle-free graphs in the projective plane that are not 3-colorable. Thomassen [25] showed that if G is embedded in the projective plane without contractible cycles of length at most 4 then G is 3-colorable. We slightly strengthen the result by allowing two contractible 4-cycles or one contractible 3-cycle.
Theorem 8. Let G be a graph embedded in the projective plane such that the embedding has at most two contractible cycles of length 4 or one contractible cycle of length three such that all other cycles of length at most 4 are noncontractible. Then G is 3-colorable.
It turned out that restricting the number of triangles is not necessary. Havel conjectured [16] that there exists a constant c such that if every pair of triangles in a planar graph G is at distance at least c then G is 3-colorable. The conjecture was proven true by Dvořák, Král' and Thomas [13].
Without restriction on triangles, Steinberg conjectured [23] that every planar graph without 4- and 5-cycles is 3-colorable. Erdős suggested relaxing the conjecture and asked for the smallest k such that every planar graph without cycles of length 4 to k is 3-colorable. The best known bound for k is 7 [9]. A cycle C is triangular if it is adjacent to a triangle other than C. In [6], it is proved that every planar graph without triangular cycles of length from 4 to 7 is 3-colorable, which implies all results in [7,8,9,10,11,21,26,27,28,29].
We present the following result in the direction towards Steinberg's conjecture with a Havel-type constraint on triangles. As a free bonus, the graph can be in the projective plane instead of the plane.
Theorem 9. Let G be a 4-chromatic projective planar graph where every vertex is in at most one triangle. Then G contains a cycle of length 4,5 or 6.
There are numerous other results on the Three Color Problem in the plane. See a recent survey [5] or a webpage maintained by Montassier: http://janela.lirmm.fr/~montassier/index.php?n=Site.ThreeColorProblem.
The next section contains proofs of the presented theorems and Section 3 contains constructions showing that some of the theorems are best possible.
Proofs
Identification of non-adjacent vertices u and v in a graph G results in a graph G′ obtained from G − {u, v} by adding a new vertex x adjacent to every vertex that is adjacent to at least one of u and v.
The following lemma is a well-known tool to reduce the number of 4-faces. We show its proof for completeness.
Lemma 10. Let G be a plane graph and F = v 0 v 1 v 2 v 3 be a 4-face in G such that v 0 v 2 , v 1 v 3 ∉ E(G). Let G i be obtained from G by identifying v i and v i+2 , where i ∈ {0, 1}.
If the number of triangles increases in both G 0 and G 1 then there exists a triangle v i v i+1 z for some z ∈ V (G) and i ∈ {0, 1, 2, 3}. Moreover, G contains vertices x and y not in F such that v i+1 zxv i−1 and v i zyv i+2 are paths in G. Indices are modulo 4. See Figure 1.
Proof. Let G, F, G 0 and G 1 be as in the statement of the lemma. Since the number of triangles increases in G 0 there must be a path v 0 zyv 2 in G where z, y ∉ F . Similarly, a new triangle in G 1 implies a path v 1 wxv 3 in G where w, x ∉ F . By the planarity of G, {z, y} and {w, x} are not disjoint. Without loss of generality assume z = w. This results in triangle v 0 v 1 z and paths v 1 zxv 3 and v 0 zyv 2 . Note that x and y do not have to be distinct. See Figure 1(b).
Proof of Theorem 2. Let H be a smallest counterexample and G be a plane triangle-free graph such that G = H − h for some edge h = uv. Let H have n vertices and e edges and G have f faces. Note that G has n vertices and e − 1 edges. By the minimality of H, H is 4-critical. So Theorem 1 implies e ≥ (5n − 2)/3. CASE 1: G has at most one 4-face. Then 5f − 1 ≤ 2(e − 1) and hence f ≤ (2e − 1)/5. By this and Euler's Formula n − (e − 1) + f = 2 applied to G we have 5n − 3e + 1 ≥ 5, i.e., e ≤ (5n − 4)/3. This contradicts Theorem 1.
CASE 2: Every 4-face of G contains both u and v and there are at least two such 4-faces F x = ux 1 vx 2 and F y = uy 1 vy 2 . If there exists z ∈ {x 1 , x 2 } ∩ {y 1 , y 2 } then z has degree two in G which contradicts the 4-criticality of G.
Let G′ be obtained from G by identification of x 1 and x 2 into a new vertex x. If G′ is not triangle-free then there is a path P = x 1 q 1 q 2 x 2 in G where q 1 , q 2 ∉ F x . Since P must cross uy 1 v and uy 2 v, we may assume that y 1 = q 1 and y 2 = q 2 . However, y 1 y 2 ∉ E(G). This contradicts the existence of P . Hence G′ is triangle-free. Let H′ = G′ + h. By the minimality of H, there exists a 3-coloring ϕ of H′. This contradicts that H is not 4-colorable, since ϕ can be extended to H by letting ϕ(x 1 ) = ϕ(x 2 ) = ϕ(x).
CASE 3: G has a 4-face F with vertices v 0 v 1 v 2 v 3 in the cyclic order where h is neither v 0 v 2 nor v 1 v 3 . Since G is triangle-free, neither v 0 v 2 nor v 1 v 3
Proof of Theorem 5. Let the 3-coloring of F be ϕ. By symmetry, the other subcase is the following. CASE 1.2: ϕ(v 0 ) = ϕ(v 2 ) and ϕ(v 1 ) ≠ ϕ(v 3 ). Let G′ be obtained from G by adding the edge v 1 v 3 . Since G′ satisfies the assumptions of Theorem 2, there exists a 3-coloring ψ of G′. In any such 3-coloring, ψ(v 1 ) ≠ ψ(v 3 ) and hence ψ(v 0 ) = ψ(v 2 ). By renaming the colors in ψ we obtain an extension of ϕ to a 3-coloring of G. CASE 2: F is a 5-face where v 0 v 1 v 2 v 3 v 4 are its vertices in cyclic order. Observe that up to symmetry there is just one coloring of F . So without loss of generality assume that ϕ(v 0 ) = ϕ(v 2 ) and ϕ(v 1 ) = ϕ(v 3 ).
Let G′ be obtained from G by adding a vertex v adjacent to v 0 , v 1 , v 2 and v 3 . Since G′ satisfies the assumptions of Theorem 4, there exists a 3-coloring ψ of G′. Note that in any such 3-coloring ψ(v 0 ) = ψ(v 2 ) and ψ(v 1 ) = ψ(v 3 ). Hence by renaming the colors in ψ we can extend ϕ to a 3-coloring of G.
Proof of Theorem 6. Let G be a smallest counterexample and let u, v ∈ V (G) be the two non-adjacent vertices colored by ϕ. If ϕ(u) ≠ ϕ(v) then the result follows from Theorem 2 by considering the graph obtained from G by adding the edge uv. Hence assume that ϕ(u) = ϕ(v). CASE 1: G has at most two 4-faces. Let H be a graph obtained from G by identification of u and v. Any 3-coloring of H yields a 3-coloring of G where u and v are colored the same. By this and the minimality of G we conclude that H is 4-critical. Let G have e edges, n + 1 vertices and f faces.
Since G is planar 5f − 2 ≤ 2e. By this and Euler's formula, 2e + 2 + 5(n + 1) − 5e ≥ 10 and hence e ≤ (5n − 3)/3, a contradiction to Theorem 1. CASE 2: G has at least three 4-faces. Let F be a 4-face with vertices v 0 v 1 v 2 v 3 in the cyclic order. Since G is triangle-free, neither v 0 v 2 nor v 1 v 3 are edges of G. Hence Lemma 10 applies.
Without loss of generality let G 0 from Lemma 10 be triangle-free. By the minimality of G, G 0 has a 3-coloring ϕ where ϕ(u) = ϕ(v) unless uv ∈ E(G 0 ). Since uv ∉ E(G), without loss of generality v 0 = u and v 2 v ∈ E(G). Moreover, the same cannot happen to G 1 from Lemma 10, hence G 1 contains a triangle. Thus G contains a path v 1 q 1 q 2 v 3 where q 1 , q 2 ∉ F , and G also contains a 5-cycle C = uv 1 q 1 q 2 v 3 (see Figure 2). By Theorem 5, C is a 5-face.
Figure 2: Configuration from Theorem 6.
Hence v is a 2-vertex incident with only one 4-face.
By symmetric argument, u is also a 2-vertex incident with one 4-face and 5-face. However, G has at least one more 4-face where identification of vertices does not result in the edge uv, a contradiction to the minimality of G.
Proof of Theorem 8. Let G be a minimal counterexample with e edges, n vertices and f faces. By minimality, G is 4-critical and has at most two 4-faces or one 3-face. From the embedding, 5f − 2 ≤ 2e. By Euler's formula, 2e + 2 + 5n − 5e ≥ 5. Hence e ≤ (5n − 3)/3, a contradiction to Theorem 1.
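Spelling the counting out (using the bound 5f − 2 ≤ 2e above and Euler's formula for graphs embedded in the projective plane, n − e + f ≥ 1), the inequality chain is:

```latex
n - e + f \ge 1 \;\Rightarrow\; 5n - 5e + 5f \ge 5,
\qquad
5f \le 2e + 2 \;\Rightarrow\; 2e + 2 + 5n - 5e \;\ge\; 5n - 5e + 5f \;\ge\; 5,
\qquad\text{hence}\qquad
5n - 3e \ge 3,\quad\text{i.e.}\quad e \le \frac{5n-3}{3} < \frac{5n-2}{3}.
```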
Borodin used in his proof of Theorem 7 a technique called portionwise coloring. We avoid it and build the proof on the previous results arising from Theorem 1.
Proof of Theorem 7. Let G be a smallest counterexample. By minimality, G is 4-critical and every triangle is a face. By Theorem 5, for every separating 4-cycle and 5-cycle C, both the interior and exterior of C contain triangles. CASE 1: G has no 4-faces. Then 5f − 6 ≤ 2e and by Euler's Formula 2e + 6 + 5n − 5e ≥ 10, i.e., e ≤ (5n − 4)/3. This contradicts Theorem 1. CASE 3.1: G contains a 3-prism with one of its 4-cycles being a 4-face. We may assume that this face is our F and x = y, see Figure 1(b). Theorem 5 implies that one of zv 0 v 3 x, zv 1 v 2 x is a 4-face. Without loss of generality assume that zv 1 v 2 x is a 4-face. Let G 0 be obtained from G by identification of v 0 and v 2 to a new vertex v. Since G 0 is not 3-colorable, it contains a 4-critical subgraph G 0 ′. Note that G 0 ′ contains the triangle xvz that is not in G, but v 0 v 1 z is not in G 0 ′ since d(v 1 ) = 2 in G 0 . By the minimality of G, there exists another triangle T that is in G 0 ′ but not in G. By planarity, x ∈ T . Hence there is a vertex w 1 ≠ v 3 such that v 0 and x are neighbors of w 1 .
CASE 2: G has a 4-face F = v 0 v 1 v 2 v 3 such that v 0 v 2 ∈ E(G).
By considering the identification of v 1 and v 3 and by symmetry, we may assume that there is a vertex w 2 ≠ v 0 such that v 3 and z are neighbors of w 2 . By planarity we conclude that w 1 = w 2 . This contradicts the fact that G has at most three triangles. Therefore G is 3-prism-free. CASE 3.2: G contains no 3-prism with one of its 4-cycles being a 4-face. Then x ≠ y, see Figure 1(a). If v 0 x ∈ E(G) then G − v 0 is triangle-free and Theorem 3 gives a 3-coloring of G, a contradiction. Similarly, v 1 y ∉ E(G).
Suppose that zv 0 v 3 x is a 4-face. Let G′ be obtained from G − v 0 by adding the edge xv 1 . If the number of triangles in G′ is at most three, then G′ has a 3-coloring ϕ by the minimality of G. Let ϕ′ be a 3-coloring of G such that ϕ′(v) = ϕ(v) if v ∈ V (G′) and ϕ′(v 0 ) = ϕ(x). Since the neighbors of v 0 in G are neighbors of x in G′, ϕ′ is a 3-coloring, a contradiction. Therefore G′ has at least four triangles and hence G contains a vertex t ≠ z adjacent to v 1 and x. Since v 1 y ∉ E(G), the only possibility is t = v 2 . Having the edge xv 2 results in a 3-prism being a subgraph of G, which is already excluded. Hence zv 0 v 3 x is not a face and by symmetry zv 1 v 2 y is not a face either.
Since neither zv 0 v 3 x nor zv 1 v 2 y is a face, each of them contains a triangle in its interior. Since we know the location of all three triangles, Theorem 5 implies that zyv 2 v 3 x is a 5-face. It also implies that the common neighbors of z and v 3 are exactly v 0 and x, and the common neighbors of z and v 2 are exactly v 1 and y. Without loss of generality, let zyv 2 v 3 x be the outer face of G.
Let H 1 be obtained from the 4-cycle zv 0 v 3 x and its interior by adding edge zv 3 . The edge zv 3 is in only two triangles, and there is only one triangle in the interior of the 4-cycle. Hence by the minimality of G, there exists a 3-coloring ϕ 1 of H 1 .
Let H 2 be obtained from the 4-cycle zv 1 v 2 y and its interior by adding edge zv 2 . By the same argument as for H 1 , there is a 3-coloring of ϕ 2 of H 2 .
Rename the colors in ϕ 2 so that ϕ 1 (z) = ϕ 2 (z), ϕ 1 (v 0 ) = ϕ 2 (v 2 ) and ϕ 1 (v 3 ) = ϕ 2 (v 1 ). Then ϕ 1 ∪ ϕ 2 is a 3-coloring of G, a contradiction.
Proof of Theorem 9. Let G be a 4-chromatic projective plane graph where every vertex is in at most one triangle and let G be 4-, 5- and 6-cycle free. Then G contains a 4-critical subgraph G′. Let G′ have e edges, n vertices and f faces. Since G′ is also 4-, 5- and 6-cycle-free and every vertex is in at most one triangle, we get f ≤ n/3 + (2e − n)/7. By Euler's formula, 7n + 6e − 3n + 21n − 21e ≥ 21. Hence e ≤ 5n/3 − 21/15, a contradiction to Theorem 1.
Figure 3: First three 4-critical graphs from the family described by Thomas and Walls [24].
Tightness
This section shows examples where Theorems 2,4,5,6,7, and 8 are tight.
Theorem 2 is best possible because there exists an infinite family [24] of 4-critical graphs that become triangle-free and planar after removal of just two edges. See Figure 3. Moreover, the same family shows also the tightness of Theorem 7, since the construction has exactly four triangles.
Aksenov [1] showed that every plane graph with one 6-face F and all other faces being 4-faces has no 3-coloring in which the colors of vertices of F form the sequence (1, 2, 3, 1, 2, 3). This implies that Theorem 5 is best possible. It also implies that Theorems 4 and 6 are best possible. See Figure 4 for constructions where coloring of three vertices or an extra vertex of degree 5 force a coloring (1, 2, 3, 1, 2, 3) of a 6-cycle.
Theorem 8 is best possible because there exist embeddings of K 4 in the projective plane with three 4-faces or with two 3-faces and one 6-face.
Figure 1: Triangle adjacent to a 4-face in Lemma 10.
are edges of G. Lemma 10 implies that either v 0 and v 2 or v 1 and v 3 can be identified without creating a triangle. Without loss of generality assume that G′, obtained from G by identification of v 0 and v 2 to a new vertex v, is triangle-free. Let H′ = G′ + h. By the minimality of H, there is a 3-coloring ϕ of H′. The 3-coloring ϕ can be extended to H by letting ϕ(v 0 ) = ϕ(v 2 ) = ϕ(v), which contradicts the 4-criticality of H. Proof of Theorem 4. Let H be a smallest counterexample and G be a plane triangle-free graph such that G = H − v for some vertex v of degree 4. Let H have n vertices and e edges and G have f faces. Then G has n − 1 vertices and e − 4 edges. By minimality, H is 4-critical. So Theorem 1 implies e ≥ (5n − 2)/3. CASE 1: G has no 4-faces. Then 5f ≤ 2(e − 4) and hence f ≤ 2(e − 4)/5. By this and Euler's Formula (n − 1) − (e − 4) + f = 2 applied to G, we have 5n − 3e − 8 ≥ −5, i.e., e ≤ (5n − 3)/3. This contradicts Theorem 1. CASE 2: G has a 4-face F with vertices v 0 v 1 v 2 v 3 in the cyclic order. Since G is triangle-free, neither v 0 v 2 nor v 1 v 3 are edges of G and Lemma 10 applies. Without loss of generality assume that G 0 , obtained from G by identification of v 0 and v 2 , is triangle-free. By the minimality of H, the graph obtained from H by identification of v 0 and v 2 satisfies the assumptions of the theorem and hence has a 3-coloring. Then H also has a 3-coloring, a contradiction.
CASE 1: F is a 4-face where v 0 v 1 v 2 v 3 are its vertices in cyclic order. CASE 1.1: ϕ(v 0 ) = ϕ(v 2 ) and ϕ(v 1 ) = ϕ(v 3 ). Let G′ be obtained from G by adding a vertex v adjacent to v 0 , v 1 , v 2 and v 3 . Since G′ satisfies the assumptions of Theorem 4, there exists a 3-coloring ψ of G′. In any such 3-coloring, ψ(v 0 ) = ψ(v 2 ) and ψ(v 1 ) = ψ(v 3 ). Hence by renaming the colors in ψ we obtain an extension of ϕ to a 3-coloring of G.
By the minimality, v 0 v 1 v 2 and v 0 v 3 v 2 are both 3-faces and hence G has 4 vertices, 5 edges and it is 3-colorable. CASE 3: For every 4-face F = v 0 v 1 v 2 v 3 , neither v 0 v 2 nor v 1 v 3 are edges of G. By Lemma 10, there exist paths v 0 zyv 2 and v 1 zxv 3 .
Figure 4: Coloring of three vertices by colors 1, 2 and 3 in (a), (b) and (c), or an extra vertex of degree 5 in (d), forces a coloring of the 6-cycle by the sequence (1, 2, 3, 1, 2, 3) in cyclic order.
[1] V. A. Aksenov. On continuation of 3-colouring of planar graphs. Diskret. Anal. Novosibirsk, 26:3-19, 1974. In Russian.
[2] V. A. Aksenov. Chromatic connected vertices in planar graphs. Diskret. Analiz, 31:5-16, 1977. In Russian.
[3] V. A. Aksenov, O. V. Borodin, and N. A. Glebov. On the continuation of a 3-coloring from two vertices in a plane graph without 3-cycles. Diskretn. Anal. Issled. Oper. Ser. 1, 9:3-36, 2002. In Russian.
[4] O. V. Borodin. A new proof of Grünbaum's 3 color theorem. Discrete Math., 169:177-183, 1997.
[5] O. V. Borodin. Colorings of plane graphs: a survey. Discrete Math., accepted, DOI:10.1016/j.disc.2012.11.011, 2012.
[6] O. V. Borodin, A. N. Glebov, and A. Raspaud. Planar graphs without triangles adjacent to cycles of length from 4 to 7 are 3-colorable. Thomassen's special issue of Discrete Math., 310:2584-2594, 2010.
[7] O. V. Borodin, M. Montassier, and A. Raspaud. Planar graphs without adjacent cycles of length at most seven are 3-colorable. Discrete Math., 310:167-173, 2010.
[8] O. V. Borodin, A. N. Glebov, M. Montassier, and A. Raspaud. Planar graphs without 5- and 7-cycles and without adjacent triangles are 3-colorable. Journal of Combinatorial Theory, Series B, 99:668-673, 2009.
[9] O. V. Borodin, A. N. Glebov, A. Raspaud, and M. R. Salavatipour. Planar graphs without cycles of length from 4 to 7 are 3-colorable. Journal of Combinatorial Theory, Series B, 93:303-311, 2005.
[10] M. Chen, A. Raspaud, and W. Wang. Three-coloring planar graphs without short cycles. Inform. Process. Lett., 101:134-138, 2007.
[11] M. Chen and W. Wang. On 3-colorable planar graphs without short cycles. Appl. Math. Lett., 21:961-965, 2008.
[12] G. A. Dirac. A theorem of R. L. Brooks and a conjecture of H. Hadwiger. Proc. London Math., 7:161-195, 1957.
[13] Z. Dvořák, D. Král', and R. Thomas. Coloring planar graphs with triangles far apart. Submitted, 2009.
[14] H. Grötzsch. Ein Dreifarbenzatz für Dreikreisfreie Netze auf der Kugel. Math.-Natur. Reihe, 8:109-120, 1959.
[15] B. Grünbaum. Grötzsch's theorem on 3-coloring. Michigan Math. J., 10:303-310, 1963.
[16] I. Havel. On a conjecture of Grünbaum. Journal of Combinatorial Theory, Series B, 7:184-186, 1969.
[17] T. Jensen and C. Thomassen. The color space of a graph. J. Graph Theory, 34:234-245, 2000.
[18] T. R. Jensen and B. Toft. Graph Coloring Problems. Wiley-Interscience Series in Discrete Mathematics and Optimization, John Wiley & Sons, New York, 1995.
[19] A. V. Kostochka and M. Yancey. Ore's Conjecture for k = 4 and Grötzsch theorem. Submitted, 2012.
[20] A. V. Kostochka and M. Yancey. Ore's Conjecture is almost true. Submitted, 2012.
[21] X. Luo, M. Chen, and W. F. Wang. On 3-colorable planar graphs without cycles of four lengths. Inform. Process. Lett., 103:150-156, 2007.
[22] O. Ore. The Four Color Problem. Academic Press, New York, 1967.
[23] R. Steinberg. The state of the three color problem. Quo Vadis, Graph Theory? Ann. Discrete Math., 55:211-248, 1993.
[24] R. Thomas and B. Walls. Three-coloring Klein bottle graphs of girth five. J. Combin. Theory Ser. B, 92:115-135, 2004.
[25] C. Thomassen. Grötzsch's 3-color theorem and its counterparts for the torus and the projective plane. Journal of Combinatorial Theory, Series B, 62:268-279, 1994.
[26] W. Wang and M. Chen. On 3-colorable planar graphs without prescribed cycles. Discrete Math., 307:2820-2825, 2007.
[27] W. Wang and M. Chen. Planar graphs without 4,6,8-cycles are 3-colorable. Sci. China A, 50:1552-1562, 2007.
[28] Y. Q. Wang, X. H. Mao, H. J. Lu, and W. F. Wang. On 3-colorability of planar graphs without adjacent short cycles. Sci. China Math., 53:1129-1132, 2010.
[29] B. Xu. On 3-colorable plane graphs without 5- and 7-cycles. Journal of Combinatorial Theory, Series B, 96:958-963, 2006.
[30] D. A. Youngs. 4-chromatic projective graphs. J. Graph Theory, 21:219-227, 1996.
| []
|
[
"An Introduction to Hedge Funds",
"An Introduction to Hedge Funds"
]
| [
"Sovan Mitra "
]
| []
| []
| This report was originally written as an industry white paper on Hedge Funds. This paper gives an overview of Hedge Funds, with a focus on risk management issues. We define and explain the general characteristics of Hedge Funds, their main investment strategies and the risk models employed. We address the problems in Hedge Fund modelling, survey current Hedge Funds available on the market and those that have been withdrawn. Finally, we summarise the supporting and opposing arguments for Hedge Fund usage. A unique value of this paper, compared to other Hedge Fund literature freely available on the internet, is that this review is fully sourced from academic references (such as peer reviewed journals) and is thus a bona fide study. This paper will be of interest to: Hedge Fund and Mutual Fund Managers, Quantitative Analysts, "Front" and "Middle" office banking functions e.g. Treasury Management, Regulators concerned with Hedge Fund Financial Risk Management, Private and Institutional Investors, Academic Researchers in the area of Financial Risk Management and the general Finance community. | null | [
"https://arxiv.org/pdf/0904.2731v2.pdf"
]
| 14,145,773 | 0904.2731 | 6b28d1ddf07f60b4b4a5dc669ce3780432c3fd39 |
An Introduction to Hedge Funds
17 Apr 2009
Sovan Mitra
An Introduction to Hedge Funds
17 Apr 2009. arXiv:0904.2731v2 [q-fin.GN]. Hedge Funds; risk management; risk measurement; regulation.
This report was originally written as an industry white paper on Hedge Funds. This paper gives an overview of Hedge Funds, with a focus on risk management issues. We define and explain the general characteristics of Hedge Funds, their main investment strategies and the risk models employed. We address the problems in Hedge Fund modelling, survey current Hedge Funds available on the market and those that have been withdrawn. Finally, we summarise the supporting and opposing arguments for Hedge Fund usage. A unique value of this paper, compared to other Hedge Fund literature freely available on the internet, is that this review is fully sourced from academic references (such as peer reviewed journals) and is thus a bona fide study. This paper will be of interest to: Hedge Fund and Mutual Fund Managers, Quantitative Analysts, "Front" and "Middle" office banking functions e.g. Treasury Management, Regulators concerned with Hedge Fund Financial Risk Management, Private and Institutional Investors, Academic Researchers in the area of Financial Risk Management and the general Finance community.
Hedge Funds have a significant influence in financial markets, yet knowledge of them is relatively little.
In this paper we introduce Hedge Funds, attempting to firstly propose a definition for Hedge Funds as no common consensus has yet been agreed within the Finance community. We then explain the common investment strategies applied by Hedge Funds e.g. event driven, long-only investment. In the next section, we survey the main risk models applied to analysing Hedge Funds whilst also discussing the difficulties in actually measuring Hedge Fund risks. Finally we finish by surveying current Hedge Funds available on the market and famous Hedge Funds that have been withdrawn.
It is important to note that knowledge and performance of the Hedge Fund industry is guarded with substantial secrecy. Consequently, the quality of information used in any Hedge Fund study, can never be as good as those for other investment products e.g. Mutual Funds (see Fung [FH00a],Fung [FH99a], Do et al. [DFW05]).
Introduction to Hedge Funds
Within the investment industry, many fund types exist: Hedge Funds, investment trusts, unit trusts etc... yet the term Hedge Fund has no explicit definition. In fact the European Central Bank states in its report on Hedge Funds [Gar05] that no common Hedge Fund definition exists. Defining a Hedge Fund is in fact more problematic than it appears. To appreciate the difficulty in defining a Hedge Fund, it is instructive to know its brief history.
Brief History of the Hedge Funds Industry
According to Fung [FH99a], the first ever Hedge Fund was formed by Alfred Winslow Jones in 1949, so called because the main investment strategy was to take hedged equity investments. By hedging (the act of removing risk in one investment by taking a position in another, typically related, investment) Jones was able to eliminate some market risks.
Hedge Funds first became well known after an article in Fortune (1966) mentioning that Jones's fund significantly outperformed other Mutual Funds [FH99a]. Although this article initiated wide interest in Hedge Funds, their popularity diminished as the industry fell victim to the bear markets of 1969-70 and 1973-4. A decade later (1986), interest was revived by Robertson's famous Tiger Fund [FH99a], which achieved compound annual returns of 43% for 6 years after all expenses. Fung in [FH99a] corroborates the impact that the publicity of Robertson's fund had on the Hedge Fund industry by showing the rapid expansion of Hedge Funds and CTA funds (commodity trading advisor funds, similar to Hedge Funds) from 1985-97.
With numerous Hedge Fund investors, and the fact that Hedge Funds were virtually unregulated compared to other funds, a multitude of new Hedge Fund trading strategies evolved, including the use of derivatives, e.g. options. All these funds came to be known as Hedge Funds, yet many of them were using investment strategies beyond the simple "hedging" that Jones first employed (see [Gar05] for more details). To complicate matters further, as Hedge Fund strategies developed, funds other than Hedge Funds also began employing Jones's equity hedging strategy, so hedging was no longer unique to Hedge Funds. Today, the word "hedge" in Hedge Funds has become a misnomer, more a historical hangover from Alfred Winslow Jones than a description.
A Definition of Hedge Funds
As the European Central bank states [Gar05]:
"there is no common definition of what constitutes a Hedge Fund,it can be described as an unregulated or loosely regulated fund which can freely use various active investment strategies to achieve positive absolute returns".
As the European Central Bank implies, a Hedge Fund is difficult to define partly because of a lack of clarity of agreement on its term and also due to its diverse trading spectrum. They are typically characterised by high leveraging, derivatives trading and short selling compared to Mutual Funds. One way of defining a Hedge Fund is by comparing the similarities and differences with Mutual Funds. In a sense Hedge Funds are similar to any other portfolio investment in 3 respects:
• they are funded by capital from investors, rather than bank loans or other sources of capital;
• they invest in publicly traded securities e.g. equities and bonds;
• the capital is "managed" or invested by expert fund managers.
The key differences between Hedge Funds and Mutual Funds lies in the degree of regulation, the level and variety of risky investment strategies. Whereas Mutual Funds are required to adhere to strict financial regulations, including the types and levels of risks, Hedge Funds are free to pursue virtually any investment strategy with any level of risk.
Secondly, Hedge Fund investors are typically high net worth individuals or institutional investors e.g. pension funds [Gar05], partly because Hedge Funds typically require high minimum investment amounts. A graph taken from the European Central Bank [Gar05] shows the composition of Hedge Fund investors from 1992-2004.
Mutual Funds, on the other hand, are typically targeted at the general public and will accept any investor who can meet the minimum investment amount. Hedge Funds in fact are banned from advertising and in some cases the investors are required to be "accredited".
A third key difference is the fund portfolio composition. As Fung [FH99a] states, the majority of Mutual Funds are composed of equities and bonds. Hedge fund portfolio compositions are far more varied, with possibly a significant weighting in nonequity/bond assets e.g. derivatives.
A fourth key difference is that the historical return characteristics and distribution of Hedge Funds tend to differ significantly from Mutual Funds. For example, Capocci et al. [CH04] and Getmansky [GLM04] demonstrate that Hedge Funds empirically display serial correlation in returns. According to Brown [Bro01], Hedge Funds do not perform significantly better than most investment funds; Hedge Funds between 1989-95 earned 300 basis points below the S&P 500. However, other studies conclude that Hedge Funds produce excess market returns (see [CH04], [DFW05]). A graph below from [Gar05] gives the performance of Hedge Funds compared to key indexes. The CSFB/Tremont index is a Hedge Fund index, the "equivalent" of the FTSE-100 for UK stocks.
Hedge Fund Performance Benchmark Targets
With Mutual Funds only 1 type of performance benchmark typically exists; the fund is expected to match or excel a particular index e.g. FTSE-100 index, S&P 500 index. This is an example of a relative return target, which some Hedge Funds adopt as their benchmark. However for Hedge Funds another benchmark exists called absolute return targets.
An absolute return target is the typical benchmark choice for Hedge Funds and is the opposite of relative return. It is a fixed return target and the fund is expected to match/excel it regardless of the overall market performance. Hedge fund managers use two main approaches to achieve absolute return targets: Market Timing and the Non-Directional approach.
Market Timing this approach takes positions by anticipating the market trend or direction (either moving up/down). This approach potentially offers high returns, as demonstrated by Georg Soros in his Quantum Fund when speculating on the British Pound in 1992.
Non-Directional
An example of Non-Directional is A.Winslow's Hedge Fund; it is a fund that eliminates some market risks, hence it can be considered non-directional, whilst also benefitting from relative price movements of assets. According to Fung [FH99a] the non-directional approach has evolved over the last decade and is continuing to develop.
Hedge Fund Organisation
Hedge Funds typically prefer to concentrate their efforts on the key activity of maximising investment return, so non-essential operations are outsourced e.g. "back office" functions. Actual trading transactions too are outsourced to "Prime Brokers". Prime brokers are banks or securities firms, offering brokerage and other financial services to large institutional clients e.g. Pension Funds. It is also worth noting that Hedge Funds typically reside "offshore" to take advantage of more favourable tax treatments and regulations.
Fund of Hedge Funds (FOHF)
A Fund of Hedge Funds is simplistically a Mutual Fund that invests in multiple
Hedge Funds e.g 15-25 different Hedge Funds, furthermore F3 funds or fund of FOHF also exist. All these funds provide diversification benefits and a method of investing in Hedge Funds without requiring the skill to personally assess Hedge Funds individually.
Also, FOHF normally have significantly lower minimum investment levels compared to a standard Hedge Fund, thus increasing investment access to the general public. We now describe the 7 main Hedge Fund investment strategies as given by Fung [FH99a], which in turn are taken from MAR (Managed Account Reports (one of the oldest sources of global managed futures information )). The advantage of applying such strategy categorisation is that different Hedge Fund return characteristics can be explained by them (see [FH99a]).
Hedge Fund Investment Strategies
Event Driven
An event driven strategy means a position is taken to take advantage of price moves arising from the release of new market information or from events occurring. A good example of such a strategy is to capitalise on merger and acquisition announcements, which cause the target company's share price to rise. An example is given below; Marks and Spencer's share price rose on the announcement of a takeover by Philip Green at the end of May 2004.
Global
The Global strategy is an all-round category for funds that invest in assets beyond those based in their home market. Other than that, no more specific technique is associated with this. A typical example would be a Hedge Fund investing in an emerging market such as India.
Global/Macro
The Global/Macro strategies utilise macroeconomic analysis to capitalise on asset price changes that are strongly linked to macroeconomics, e.g. currencies, bonds, stock indices, and commodities. As the name implies, this strategy is applied on a global scale. For example, George Soros's Quantum Fund reputedly made US$1 billion in one day in September 1992 by speculating that the British Pound would exit the European Exchange Rate Mechanism.
Market Neutral
Market neutral investment refers to funds that hedge against market risk factors, thereby becoming "neutral" to the market. This strategy profits by speculating on relative price movements between assets or indexes. Examples of this method include long-short equity, stock index arbitrage and fixed income arbitrage. A good example of the long-short equity method is the classic 1949 A. W. Jones Hedge Fund, which took long and short positions in equities.
Sector
Sector Hedge Fund investing concentrates on specific sectors, e.g. the airlines, telecoms and utilities sectors. The investment instrument itself can be of a variety of types, e.g. short selling, long and leveraged positions.
Short Selling and Long-Only
Short selling and long-only Hedge Funds are those funds which will only invest by shorting or going long respectively.
Hedge Fund Risk Models
The necessity for Hedge Fund risk modelling and management originates from 2 areas:
• Hedge Funds experiencing some of the greatest losses ever witnessed by the investment community;
• new regulatory pressure enforcing more stringent Hedge Fund risk management. We now describe some of the quantitative risk models employed in modelling Hedge Fund risks.
Markowitz's Portfolio Theory
Markowitz's Portfolio Theory (from hereon MPT) is typically applied to assets/portfolios whose return probability distributions approximate to a Normal distribution. Although this approximation is not strictly correct for Hedge Funds, it is still a workable risk model. In fact Fung and Hsieh in [FH99b] apply it to rank Hedge Fund performances.
Markowitz proposed that a portfolio's risk is equal to the variance of the portfolio's returns. If we define the weighted expected return of a portfolio R_p as
R_p = Σ_{i=1}^{N} w_i µ_i ,   (1)
then the portfolio's variance σ²_p is
σ²_p = Σ_{i=1}^{N} Σ_{j=1}^{N} σ_ij w_i w_j ,   (2)
where • N is the number of assets in a portfolio;
• i,j are the asset indices and i, j ∈ {1, ..., N} ;
• w i is the asset weight, subject to the constraints:
0 ≤ w_i ≤ 1,  Σ_{i=1}^{N} w_i = 1;
• σ ij is the covariance of asset i with asset j;
• µ i is the expected return for asset i.
MPT also introduces the idea of an efficient frontier. For a given set of funds or assets available to invest in, an upper concave boundary exists on the maximum portfolio returns possible as risk or variance increases. Furthermore this concave relation between risk and return incorporates the theory of expected utility concavely increasing with risk.
Notice that MPT shows that some funds can perform lower than the risk free rate.
Naturally one wishes to choose the market portfolio which maximises return for a given level of risk/volatility as shown.
CAPM (Capital Asset Pricing Model)
Capocci and Hubner [CH04] state that in the 1980s CAPM and its variants (e.g.
Jensen's measure) were applied to Hedge fund risk measurement. The CAPM model, based on MPT, was invented by Sharpe [Sha64]:
R a = R f + β(R m − R f ) + ε, where
• R a is expected return of an asset;
• R f is the risk-free rate of return;
• R m is the expected market return;
• ε is the error term;
• β = σ_am / σ_mm ;
• σ am is the market and asset's covariance;
• σ mm is the market's variance.
The CAPM model is applied generally in finance to determine a theoretically appropriate return of an asset. It presumes that investors must be compensated for investing in a risky asset in two ways: 1) the time value of money and 2) the risk itself. The time value of money is accounted for by the risk-free rate R_f, whereas the return from risk arises from β(R_m − R_f). The term (R_m − R_f) represents the expected risk premium, which is the return obtained above the risk-free rate for investing in a risky asset. The beta term can be considered the "sensitivity" of the asset's risk to market risk (both measured by variance). Consequently, more "sensitive" assets ought to produce higher returns under
CAPM. The graph below shows how asset return is linearly related to beta, and that a beta of zero implies the risk-free rate of return.
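As an illustration, beta can be estimated from historical return series as the ratio of covariances defined above. The sketch below uses made-up return series and a made-up risk-free rate; it is not data from any actual Hedge Fund.

```python
import numpy as np

# Hypothetical monthly return series (illustrative only, not real fund data).
asset = np.array([0.020, -0.010, 0.030, 0.015, -0.020, 0.040])
market = np.array([0.015, -0.005, 0.025, 0.010, -0.015, 0.030])
rf = 0.002                                   # assumed per-period risk-free rate

cov = np.cov(asset, market)                  # 2x2 sample covariance matrix
beta = cov[0, 1] / cov[1, 1]                 # beta = sigma_am / sigma_mm
expected_return = rf + beta * (market.mean() - rf)   # CAPM expected return R_a
print(beta, expected_return)
```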
Sharpe Ratio and the Modified Sharpe Ratio
The Sharpe Ratio S, invented by Sharpe [Sha66], is based on MPT's risk measure (variance):
S = (R_p − R_f) / σ_p ,
where σ p is the portfolio return's standard deviation.
The Sharpe ratio can be interpreted as "(return − risk-free rate)/risk", since Sharpe considers standard deviation to be a risk measure. The Sharpe ratio provides a portfolio risk measure in terms of the quality of the portfolio's return at its given level of risk. A discussion on the Sharpe ratio can be found at Sharpe's website (www.stanford.edu/~wfsharpe/).
Fung and Hsieh in [FH00b] and [FH99b] use a modified version of the Sharpe ratio to rank Hedge Fund performance, so as to specifically cater for Hedge Fund return distributions. This is simply the Sharpe ratio without subtracting the risk-free rate from the numerator:
Modified Sharpe Ratio = R_p / σ_p .
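A minimal sketch of both ratios, computed from a hypothetical return series (the series and risk-free rate are invented for illustration), might look as follows.

```python
import numpy as np

# Hypothetical monthly portfolio returns (illustrative only).
r = np.array([0.030, -0.010, 0.020, 0.040, -0.020, 0.050])
rf = 0.002                                    # assumed per-period risk-free rate

sharpe = (r.mean() - rf) / r.std(ddof=1)      # Sharpe ratio
modified_sharpe = r.mean() / r.std(ddof=1)    # modified Sharpe ratio (no risk-free rate)
print(sharpe, modified_sharpe)
```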
Jensen's Alpha and Treynor Ratio
Based on CAPM, Jensen formulated a portfolio measure, called α, to quantify portfolio returns above those predicted by CAPM:
α = R p − [R f + β p (R m − R f )].
One can interpret α as a measure of "excess returns", or of the portfolio manager's investment ability, i.e. "beating the market".
The Treynor ratio is a less well known portfolio measure, similar to the Sharpe ratio, but it assesses portfolio performance on a CAPM basis:
Treynor Ratio = (R_p − R_f) / β_p .
Like the Sharpe ratio, the Treynor ratio can be interpreted as the "quality" of the portfolio return for the given level of risk, but with risk measured on a CAPM basis.
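Both measures follow directly from the CAPM quantities above. The sketch below, again using invented portfolio and market return series and an invented risk-free rate, computes the portfolio beta, Jensen's alpha and the Treynor ratio.

```python
import numpy as np

# Hypothetical portfolio and market return series (illustrative only).
portfolio = np.array([0.025, -0.010, 0.030, 0.020, -0.015, 0.040])
market = np.array([0.015, -0.005, 0.025, 0.010, -0.015, 0.030])
rf = 0.002                                    # assumed per-period risk-free rate

cov = np.cov(portfolio, market)
beta_p = cov[0, 1] / cov[1, 1]                                     # portfolio beta
alpha = portfolio.mean() - (rf + beta_p * (market.mean() - rf))    # Jensen's alpha
treynor = (portfolio.mean() - rf) / beta_p                         # Treynor ratio
print(beta_p, alpha, treynor)
```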
Three Factor Model of Fama and French
The CAPM model is a single factor model that compares a portfolio with the market as a whole. Fama and French modified this model in [FF93] to take into account 2 empirical observations about asset classes that tend to have higher returns:
• small sized companies;
• value stocks (companies with high book to market value).
Having a higher return implies a higher risk premium associated with them. The 3 factor model accounts for these higher premiums with the following equation:
R a = R f + β p1 (R m − R f ) + β p2 SMB + β p3 HML + ε,
where • SMB is the difference in return for small and large sized companies;
• HML is the difference in return for high book to market value and low book to market value companies;
• β p1 , β p2 , β p3 are regression gradients (slopes).
Essentially the 3 factor model is a multiple linear regression equation. Jegadeesh and
Titman in [JT93] modify the CAPM model by adding a momentum factor to account for return. Fung and Hsieh in [FH04] apply both these models to long/short equity Hedge Funds, giving regression results.
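Since the three factor model is a multiple linear regression, it can be estimated by ordinary least squares. The sketch below uses invented factor and fund excess-return series (not the actual Fama-French factor data) purely to show the mechanics.

```python
import numpy as np

# Hypothetical factor and fund excess-return series (illustrative only).
mkt_excess = np.array([0.013, -0.007, 0.023, 0.008, -0.017, 0.028])   # R_m - R_f
smb = np.array([0.004, 0.001, -0.002, 0.006, 0.003, -0.001])
hml = np.array([0.002, -0.001, 0.005, 0.000, 0.004, 0.001])
fund_excess = np.array([0.020, -0.008, 0.027, 0.012, -0.014, 0.033])  # R_a - R_f

# Design matrix with an intercept column (the regression constant).
X = np.column_stack([np.ones_like(smb), mkt_excess, smb, hml])
coef, *_ = np.linalg.lstsq(X, fund_excess, rcond=None)
intercept, beta_p1, beta_p2, beta_p3 = coef
print(intercept, beta_p1, beta_p2, beta_p3)
```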
Sharpe's Asset Class Factor Model
Sharpe in [Sha92] invented an asset factor model for risk measurement of Mutual Funds but Fung and Hsieh in [FH97] have applied it to Hedge Funds. This model essentially suggests that most Mutual Fund performances can be replicated by a small number of major asset classes e.g. large capitalisation growth stocks, large capitalisation value stocks, small capitalisation stocks etc... . Using Fung and Hsieh [FH97] notation Sharpe's model is:
R_p = Σ_k w_k F_k + ε,
subject to:
• w_k = Σ_j x_j λ_j ;  • ε = Σ_j x_j ε_j ,
where • j is the asset class;
• k is the total number of asset classes;
• x j is the weighting of asset class j;
• λ j is the factor loading for asset j (change in fund return/change in asset j return);
• ε_j is the error term for asset j.
Thus Hedge Fund return is a weighted average of a small number of asset class returns, rather than a weighted average of a large number of individual asset returns as in MPT.
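Sharpe's model is commonly estimated by returns-based style analysis: choosing the asset-class weights that minimise the variance of the residual term, subject to the weights being non-negative and summing to one. The sketch below illustrates this idea with scipy's SLSQP optimiser on invented return series; it is an illustration of the general approach, not Fung and Hsieh's exact procedure.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical asset-class returns (rows: periods, columns: asset classes) and fund returns.
F = np.array([[0.020, 0.005, -0.010],
              [-0.010, 0.002, 0.015],
              [0.030, 0.001, -0.005],
              [0.015, 0.004, 0.000],
              [-0.020, 0.003, 0.020],
              [0.040, 0.002, -0.010]])
fund = np.array([0.012, 0.002, 0.018, 0.010, -0.003, 0.022])

def tracking_error_variance(w):
    # Variance of the residual term: fund return minus sum_k w_k F_k
    return np.var(fund - F @ w)

w0 = np.full(3, 1.0 / 3.0)                     # start from equal weights
result = minimize(tracking_error_variance, w0, method="SLSQP",
                  bounds=[(0.0, 1.0)] * 3,
                  constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
print(result.x)                                # estimated asset-class weights w_k
```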
VaR (Value at Risk)
VaR (value at risk) was invented by JP Morgan in 1994 as a general risk management tool and has now become the industry standard for risk. It has become a popular and important risk measure primarily because of the Basel Committee, which standardises international banking regulations and practices. Gupta and Liang in [GL05] applied VaR to Hedge Funds, specifically for assessing a Hedge Fund's capital adequacy.
VaR tells us in monetary terms how much one's portfolio can expect to lose, for a given cumulative probability and a given time horizon. For example, for a cumulative probability of 99% over a period of 1 day, the VaR figure is the loss, e.g. $100, that the portfolio is not expected to exceed.
VaR can be calculated by simulation using historical data or some mathematical formula. VaR can also be calculated by the "variance-covariance method" (also known as the delta-normal method) but makes unrealistic assumptions about portfolio returns e.g. returns are normally distributed.
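Both ways of computing VaR mentioned above can be illustrated in a few lines; the portfolio value, confidence level, and return series below are hypothetical.

```python
import numpy as np

def historical_var(returns, portfolio_value, confidence=0.99):
    """Historical-simulation VaR: loss not exceeded with the given confidence."""
    returns = np.asarray(returns, dtype=float)
    worst_return = np.quantile(returns, 1.0 - confidence)
    return -worst_return * portfolio_value

def variance_covariance_var(returns, portfolio_value, confidence=0.99):
    """Delta-normal VaR, assuming normally distributed returns."""
    from scipy.stats import norm
    mu, sigma = np.mean(returns), np.std(returns, ddof=1)
    return -(mu + sigma * norm.ppf(1.0 - confidence)) * portfolio_value

# Hypothetical daily returns for a $1m portfolio.
rng = np.random.default_rng(2)
daily = rng.normal(0.0003, 0.012, 1000)
print(historical_var(daily, 1_000_000), variance_covariance_var(daily, 1_000_000))
```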
Problems with Hedge Fund Risk Modelling
Most portfolio risk measures make unrealistic modelling assumptions, particularly with respect to the assumed return probability distributions for mutual funds. Risk measurement assumptions become even more unrealistic for Hedge Funds. We now explain the difficulties in Hedge Fund risk measurement.
Investment Strategy and Return Distribution
It has been empirically observed that different investment strategies significantly alter the return distribution, particularly the mean and standard deviation. For example, standard deviation, a common risk metric, varies from a low of 2.1% in market neutral funds to 16.3% in Global/Macro funds [FH99a]. Consequently, it has been argued that it would be better to apply separate risk measures for each Hedge Fund type (according to its strategy), rather than treating all Hedge Funds as part of one homogeneous class.
Hedge Fund Failure Rate
Hedge fund survival rates are significantly lower than those of other funds [Gar05] and vary substantially; cumulative failure rates after 7 years range from 32-66% depending on the Hedge Fund's size, as tabulated by the European Central Bank [Gar05]. Thus the inclusion of non-existent Hedge Funds poses a problem when assessing the overall performance of Hedge Funds (similar to the survivorship bias issue with Mutual Fund performance).

6. Hedge Funds Available On The Market

6.1. Close Man Hedge Fund

Close Man Hedge Fund is an absolute return Hedge Fund. This fund applies the market neutral investment strategy (specifically fixed income arbitrage) by investing solely in Capital Guaranteed Bonds issued by The Royal Bank of Scotland. Thus the fund is theoretically insulated from market risks but can still benefit from price movements using a variety of techniques. For this particular fund, Close Man will engage in leveraging and using swaps (a type of derivative) to boost returns.

See Close Man's website http://www.closefm.com/ for more detail.
RAB Capital
RAB Capital is a unique Hedge Fund in that it is one of the few UK Hedge Funds (or more specifically FOHF) that is listed on the London Stock Exchange (ticker symbol RAB.L).Their funds are accessible to the general public rather than high net worth individuals, although RAB warns "These funds are not appropriate for a novice investor". They specialise in a variety of absolute return funds, some of which employ the long-only investment strategy, where assets are bought on the basis that they are considered undervalued.
See RAB Capital's website http://www.rabcap.com/ for more detail.
Thames River Capital
Thames River Capital is an absolute return based Hedge Fund, offering a range of regulated and unregulated funds. Each fund uses various investment strategies, ranging from Global strategies (see Global Emerging Market Fund) to market neutral strategies using high leverage.
See Thames River Capital's website http://www.thamesriver.co.uk/ for more detail.
Ikos Hedge Fund
Founding and co-owning her own hedge fund has made Elena Ambrosiadou one of the richest women in Britain, according to the 2006 Sunday Times Rich List. This hedge fund engages in "program trading", whereby trades are executed according to a computer program. This method of trading has the advantage of removing any subjective decision making from speculation, but can also result in investments that one would strongly and intuitively disapprove of. Ikos focuses on exchange rate investing but also speculates in equities.
For more information on Ikos see http://www.ikosam.com/.
Famous Hedge Funds Withdrawn From The Market
All major funds are susceptible to collapsing, however, in the case of Hedge Funds this is more frequent and the losses tend to be substantially higher. It is therefore quite informative to understand some of the spectacular Hedge Fund losses. We now describe some Hedge Funds that were previously available on the market but have now ceased trading. before the speculative bubble itself collapsed.
The Case For and Against Hedge Funds
Despite the potential to provide substantial returns, it would appear conclusive that Hedge Funds ought to be abolished or at least highly regulated. However the issue is far more complex than one assumes. We now elaborate on the benefits and disadvantages of Hedge Funds.
The Case for Preserving Hedge Funds
It can be argued Hedge Funds provide an economic benefit to markets, in particular they aid price discovery. It has been suggested that Hedge Funds take contrarian positions; they do not engage in "herd-mentality" trading, unlike Mutual Funds. Therefore
Hedge Funds buy or sell assets according to the perceived fair value.
A second economic benefit of Hedge Funds is that they aid competition and the economic concept of the "invisible hand" [DTZ05] and thrive on market inefficiencies.
As traders do not have instantaneous and costless access to market information, asset mispricing or an arbitrage opportunities must occur e.g. an asset trading in 2 different markets may have different prices. Hedge Funds take advantage of such arbitrage opportunities and so push prices to their no-arbitrage price.
Another important economic benefit of Hedge Funds is liquidity provision. Hedge
Funds typically invest in riskier assets that many investors would not consider. Hedge
Funds therefore provide much needed capital for investments.
Hedge Funds can actually reduce overall risk rather than increase it. Firstly, Hedge
Funds take on riskier investments, thereby "absorbing" some of the risk that would be concentrated in a smaller number of funds. Additionally Hedge Funds are more willing to invest in volatile markets, thereby "absorbing" the effects of market shocks.
Hedge Funds are important as an investment product in itself. They provide sophisticated investors with another vehicle for high returns that would not be available in traditional Mutual Funds [DTZ05]. They also provide diversification (a method of reducing risk without reducing return by investing in more than 1 asset) as they represent a different investment class.
A second benefit from a investor's perspective is that Hedge Funds can provide "absolute" returns. Hedge Funds can achieve this because they pursue a variety of sophisticated investment strategies. Traditional Mutual Funds are limited in trading strategies due to heavy regulation.
The Case Against Hedge Funds
Rather than aid market functioning, Hedge Funds have been criticized for doing more harm than good. Firstly, rather than contrarian investing, Hedge Funds engage in "herding" [DTZ05]. Notable examples include the 1992 ERM crisis and the 1997 Asian Currency Crisis.
Secondly, it was suggested that Hedge Funds provide much needed capital by investing in risky assets, yet Hedge Funds have been blamed for exhausting liquidity in the market [DTZ05]. Because Hedge Funds typically take large positions and pursue aggressive trading strategies, they are unable to make trades without causing massive price moves due to illiquidity (Fung supports this idea in [FH00a]). Additionally, Hedge Funds are usually heavily leveraged, increasing the likelihood of illiquidity, e.g. LTCM. However, Gupta in [GL05] investigates capital adequacy using VaR (value at risk) measures and concludes that most Hedge Funds are adequately funded.

Thirdly, Hedge Funds can prevent efficient market functioning by causing market price distortions, rather than aiding price discovery. Large volume trades can cause significant price movements, rather than price movements occurring due to company/economic fundamentals. Fung in [FH00a] cites examples such as the 1992 ERM Crisis but concludes that Hedge Funds overall do not distort prices beyond their company/economic fundamentals.
The Hedge Fund as a viable alternative investment product has also been heavily disapproved. For instance some quotes from leading academics on Hedge Funds:
• "If you want to invest in something where they steal your money and don't tell you what they're doing, be my guest., Eugene Fama.
• "If there's a license to steal, it's in the hedge fund arena", Burton Malkiel.
In an article in Forbes (May 14, 2004) Bernard Condon claims that "You would do better giving your money to a monkey" than investing in Hedge Funds. As a managed investment product Hedge Funds command the highest management fees, typically around 20%, compared to mutual funds that normally charge around 1%.
The investment strategies employed by various Mutual funds are well documented, ranging from value investing to buying growth stocks, with each having particular risk and return implications. On the contrary, Hedge Fund investment strategies are far less well documented and the variety of strategies are greater than for Mutual Funds. Consequently, there is no widely accepted categorisation of Hedge Fund strategies,for example, Stonham in [Sto99b] identifies 14 Hedge Fund strategy categories whereas Fung [FH99a] only has 7.
Firstly, Hedge Funds have been responsible for numerous catastrophic losses, causing them to completely collapse and initiate a contagion effect by affecting numerous economic and financial sectors. The most notorious example of such a catastrophic loss is the Long Term Capital Management Hedge Fund, which lost US$2.1 Billion [Sto99b] and almost brought down the entire US financial system.

Secondly, as already mentioned, Mutual Funds are tightly regulated whereas Hedge Funds face little regulation. However, as Hedge Funds have gained public attention and therefore more investment interest, this along with spectacular Hedge Fund disasters has prompted increased Hedge Fund regulation. It was not until after the 1997 Asian Currency Crisis that regulators became interested in regulating Hedge Fund activities [FH00a]. The IMF (International Monetary Fund) initiated a study on the market influence of Hedge Funds by Eichengreen [ES98]. This study described Hedge Fund activities and the potential problem of the market impact of Hedge Funds. Moreover, in 2004 the Securities and Exchange Commission began requiring Hedge Fund managers and sponsors to register as investment advisors under the Investment Advisers Act of 1940. This greatly increases the number of requirements placed on Hedge Funds, e.g. keeping records and creating a code of ethics. For more information on SEC regulation visit the SEC website http://www.sec.gov/.
Additionally, Hedge Fund investors face tougher withdrawal constraints. Secondly, as Fama mentions, Hedge Funds have poor transparency. Regulatory bodies such as the SEC do not dictate the same strict rules for Hedge Funds that they do for Mutual Funds: there are no rules on publishing records on asset holdings and financial performance, and the lack of transparency increases the chances of investors being unable to effectively assess risk. Finally, Hedge Funds have a higher failure rate than Mutual Funds and thus a higher credit risk. Hedge Funds face less regulation on leveraging and investment strategies, and are thus susceptible to a higher probability of default, e.g. LTCM. Consequently there is less likelihood of capital recovery.

9. Conclusion

Hedge Funds are clearly a complex and unique investment product that can produce extraordinary gains as well as losses. They have and continue to thrive on the unregulated aspects of the business, spawning a variety of innovative investment techniques. It has only been in the past 10 years that regulatory bodies have focussed on Hedge Fund regulation to avert previous Hedge Fund disasters, e.g. LTCM. Despite the clear necessity to understand such a powerful investment, knowledge and understanding of the Hedge Fund industry remains relatively poor. There is no consensus on the specific definition of a Hedge Fund, and very little literature is devoted to Hedge Fund risk modelling and their various investment techniques. Consequently there is a large scope for future research into Hedge Fund risk management.
day. However years later, his fund suffered massive losses; in 1998 Russia's defaulting crisis created a loss of US $2 billion.
SJ Brown. Hedge funds: Omniscient or just plain wrong. Pacific-Basin Finance Journal, 9(4):301-311, 2001.
D. Capocci and G. Hübner. Analysis of hedge fund performance. Journal of Empirical Finance, 11(1):55-89, 2004.
V. Do, R. Faff, and J. Wickramanayake. An empirical analysis of hedge fund performance: The case of Australian hedge funds industry. Journal of Multinational Financial Management, 15(4-5):377-393, 2005.
J. Danielsson, A. Taylor, and J.P. Zigrand. Highwaymen or heroes: Should hedge funds be regulated?: A survey. Journal of Financial Stability, 1(4):522-543, 2005.
B. Eichengreen, D. Mathieson, B. Chadha, A. Jansen, L. Kodres, and S. Sharma. Hedge Fund and Financial Market Dynamics. International Monetary Fund, 1998.
E.F. Fama and K.R. French. Common risk factors in the returns on stocks and bonds. Journal of Financial Economics, 33(1):3-56, 1993.
W. Fung and D.A. Hsieh. Empirical characteristics of dynamic trading strategies: the case of hedge funds. Review of Financial Studies, 1997.
W. Fung and D.A. Hsieh. A Primer on Hedge Funds. Journal of Empirical Finance, 6(3):309-331, 1999.
W. Fung and D.A. Hsieh. Is Mean-Variance Analysis Applicable to Hedge Funds? Economic Letters, 62(1):53-58, 1999.
W. Fung and D.A. Hsieh. Measuring the market impact of hedge funds. Journal of Empirical Finance, 7:1-36, 2000.
W. Fung and D.A. Hsieh. Performance Characteristics of Hedge Funds and Commodity Funds: Natural vs. Spurious Biases. The Journal of Financial and Quantitative Analysis, 35(3):291-307, 2000.
W. Fung and D.A. Hsieh. Extracting portable alphas from equity long-short hedge funds. Journal of Investment Management, 2(4):1-19, 2004.
T. Garbaravicius and F. Dierick. Hedge Funds and Their Implications for Financial Stability. ECB Occasional Paper Series, 34, 2005.
A. Gupta and B. Liang. Do hedge funds have enough capital? A value-at-risk approach. Journal of Financial Economics, 77(1):219-253, 2005.
M. Getmansky, A.W. Lo, and I. Makarov. An Econometric Model of Serial Correlation and Illiquidity in Hedge Fund Returns. Journal of Financial Economics, 74(3):529-610, 2004.
N. Jegadeesh and S. Titman. Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency. The Journal of Finance, 48(1):65-91, 1993.
W.F. Sharpe. Capital Asset Prices: A Theory of Market Equilibrium under Conditions of Risk. The Journal of Finance, 19(3):425-442, 1964.
W.F. Sharpe. Mutual Fund Performance. The Journal of Business, 39(1):119-138, 1966.
W.F. Sharpe. Asset Allocation: Management Style and Performance Measurement. Journal of Portfolio Management, 18(2):7-19, 1992.
Paul Stonham. Too Close To The Hedge: The Case Of Long Term Capital Management. Part Two: Near-Collapse And Rescue. European Management Journal, 17(4):382-390, 1999.
Paul Stonham. Too Close To The Hedge: The Case of Long Term Capital Management LP. Part One: Hedge Fund Analytics. European Management Journal, 17:282-289, 1999.
| []
|
[
"Structure-Preserving H ∞ Control for Port-Hamiltonian Systems",
"Structure-Preserving H ∞ Control for Port-Hamiltonian Systems",
"Structure-Preserving H ∞ Control for Port-Hamiltonian Systems",
"Structure-Preserving H ∞ Control for Port-Hamiltonian Systems",
"Structure-Preserving H ∞ Control for Port-Hamiltonian Systems",
"Structure-Preserving H ∞ Control for Port-Hamiltonian Systems"
]
| [
"Tobias Breiten [email protected] ",
"Attila Karsai [email protected] ",
"\nInstitute of Mathematics Technische\nInstitute of Mathematics Technische\nUniversität Berlin\nStraße des 17. Juni 13610623BerlinGermany\n",
"\nUniversität Berlin\nStraße des 17. Juni 13610623BerlinGermany\n",
"\nIntroduction\n\n",
"Tobias Breiten [email protected] ",
"Attila Karsai [email protected] ",
"\nInstitute of Mathematics Technische\nInstitute of Mathematics Technische\nUniversität Berlin\nStraße des 17. Juni 13610623BerlinGermany\n",
"\nUniversität Berlin\nStraße des 17. Juni 13610623BerlinGermany\n",
"\nIntroduction\n\n",
"Tobias Breiten [email protected] ",
"Attila Karsai [email protected] ",
"\nInstitute of Mathematics Technische\nInstitute of Mathematics Technische\nUniversität Berlin\nStraße des 17. Juni 13610623BerlinGermany\n",
"\nUniversität Berlin\nStraße des 17. Juni 13610623BerlinGermany\n",
"\nIntroduction\n\n"
]
| [
"Institute of Mathematics Technische\nInstitute of Mathematics Technische\nUniversität Berlin\nStraße des 17. Juni 13610623BerlinGermany",
"Universität Berlin\nStraße des 17. Juni 13610623BerlinGermany",
"Introduction\n",
"Institute of Mathematics Technische\nInstitute of Mathematics Technische\nUniversität Berlin\nStraße des 17. Juni 13610623BerlinGermany",
"Universität Berlin\nStraße des 17. Juni 13610623BerlinGermany",
"Introduction\n",
"Institute of Mathematics Technische\nInstitute of Mathematics Technische\nUniversität Berlin\nStraße des 17. Juni 13610623BerlinGermany",
"Universität Berlin\nStraße des 17. Juni 13610623BerlinGermany",
"Introduction\n"
]
| []
| We study H ∞ control design for linear time-invariant port-Hamiltonian systems. By a modification of the two central algebraic Riccati equations, we ensure that the resulting controller will be port-Hamiltonian. Using these modified equations, we proceed to show that a corresponding balanced truncation approach preserves port-Hamiltonian structure. We illustrate the theoretical findings using numerical examples and observe that the chosen representation of the port-Hamiltonian system can have an influence on the approximation qualities of the reduced order model. | null | [
"https://export.arxiv.org/pdf/2206.08706v1.pdf"
]
| 249,847,825 | 2206.08706 | 678301c93eafea80c3499b26125c028b2d458ea5 |
Structure-Preserving H ∞ Control for Port-Hamiltonian Systems
June 20, 2022
Tobias Breiten [email protected]
Attila Karsai [email protected]
Institute of Mathematics Technische
Institute of Mathematics Technische
Universität Berlin
Straße des 17. Juni 13610623BerlinGermany
Universität Berlin
Straße des 17. Juni 13610623BerlinGermany
Introduction
Structure-Preserving H ∞ Control for Port-Hamiltonian Systems
Keywords: port-Hamiltonian systems, H ∞ control design, model order reduction
We study H ∞ control design for linear time-invariant port-Hamiltonian systems. By a modification of the two central algebraic Riccati equations, we ensure that the resulting controller will be port-Hamiltonian. Using these modified equations, we proceed to show that a corresponding balanced truncation approach preserves port-Hamiltonian structure. We illustrate the theoretical findings using numerical examples and observe that the chosen representation of the port-Hamiltonian system can have an influence on the approximation qualities of the reduced order model.
Introduction
Linear systems are an important tool in mathematical modeling. Large model classes can be written in a linear form, and many nonlinear systems can be linearized around equilibria to obtain a linear approximation. Control of such systems is an essential part of many applications. A common approach is control by interconnection, where the original system, the plant, is connected to a second linear system, the controller. Two well-known examples of this technique are linear quadratic Gaussian (LQG) control [20] and H ∞ control [13]. The latter is particularly interesting for real-world applications due to the poor robustness properties of LQG control [12].

Often, the considered linear systems have additional mathematical properties that can be interpreted physically, such as, for example, passivity or port-Hamiltonian structure. Although the roots of port-Hamiltonian (pH) systems theory date back as far as the late 1950s [36], they continue to be the focus of active research. For an overview of port-Hamiltonian systems, see, e.g., [24,35,36] and the references therein. In the context of control methods, the property that power-conserving interconnections of pH systems can again be formulated as port-Hamiltonian systems [34,35] is particularly important. Unfortunately, classical LQG and H ∞ control do not necessarily yield port-Hamiltonian controllers, even when the considered plant is port-Hamiltonian. Similarly, they do not preserve other important properties of the plant, which has led to the development of modified techniques in the past, see for example [8,21] for the LQG setting and [18] for the H ∞ setting.

Our contribution is the introduction of a structure-preserving controller synthesis method which ensures that the resulting closed loop transfer function stays within a prescribed H ∞ margin. To achieve this goal, we take a similar approach as in [18] and make use of a result established in [6]. Theorem 6 demonstrates how the algebraic Riccati equations used for classical H ∞ control need to be altered to ensure that the resulting controller has port-Hamiltonian structure. As in [8], the Hamiltonian of the pH plant plays an important role in the construction. To show the closed loop H ∞ bound in Theorem 6, we need to make an additional assumption on the resulting control system, which involves the existence of solutions to specific Lur'e equations. In order to overcome this additional assumption, in Theorem 11 we present an extension of the method. Subsequently, using the algebraic Riccati equations that play a central role in our approach, we are able to develop a model order reduction method which preserves port-Hamiltonian structure. For that, ideas from system balancing and the effort-constrained method from [30] are used. Let us point out that other structure-preserving model reduction methods have been proposed. These include approaches based on Riccati equations [8], spectral factorization [10], tangential interpolation [15], symplectic geometry [22], Krylov methods [28,29], and optimization [32].
The paper is structured as follows. In Section 2, we collect the necessary background, introduce the notion of port-Hamiltonian systems, and mention connections to Kálmán-Yakubovich-Popov linear matrix inequalities (KYP LMIs). Further, we shortly discuss the notion of (strictly) positive real systems and recall a definition from [37] relying on Lur'e equations to circumvent inconsistencies found in the literature. At the end of the section, we adapt a known statement regarding the power-conserving interconnection of such systems to this setting. Our main results are stated in Section 3. After a precise problem formulation and a motivating example, Theorems 6 and 11 are developed. Both theorems state a pair of modified algebraic Riccati equations that allow for the construction of a controller that has a port-Hamiltonian formulation and ensures an H ∞ bound for the closed loop transfer function. As a consequence of these results, in Section 4 a balancing based model order reduction method is presented. In Section 5, the theoretical results are illustrated using numerical examples. Finally, Section 6 concludes the findings and outlines possible future research objectives.
Notation
We denote the open right and the open left half-plane by C + and C − , respectively, and the imaginary axis by iR. We denote the real part of a complex number z ∈ C by Re(z). The identity matrix with size inferred from the context is denoted by I. Besides, the conjugate transpose of a matrix A is denoted with A H , its spectrum by σ(A), and its kernel by ker(A). Further, the smallest singular value in the economy sized singular value decomposition is termed σ min (A). In other words, if the matrix A has full rank, then σ min (A) is positive. For a symmetric matrix A ∈ C n,n , the smallest eigenvalue is denoted by λ min (A). We write A ⪰ 0 if x H Ax ≥ 0 for all x ∈ C n , and A ≻ 0 if x H Ax > 0 for all x ∈ C n \ {0}. Similarly, for matrices A and B we write A ⪰ B if A − B ⪰ 0. We call a matrix A ∈ C n,n stable if σ(A) ⊆ C − ∪ iR and all purely imaginary eigenvalues of A are semi-simple, and asymptotically stable if σ(A) ⊆ C − .
Preliminaries
In this paper, we consider linear port-Hamiltonian systems without direct feedthrough, which are special cases of linear time-invariant systems. General linear time-invariant systems take the form
ẋ = Ax + Bu, y = Cx + Du,
where A ∈ R n,n , B ∈ R n,m ,C ∈ R p,n , and D ∈ R p,m . We will abbreviate the system as (A, B,C, D) and write (A, B,C) if D = 0. Throughout this paper, we assume m, p ≤ n and call these systems square if m = p. A linear system (A, B,C, D) is termed minimal if the pair (A, B) is controllable and the pair (A,C) is observable.
Linear port-Hamiltonian (pH) systems are systems of the form
ẋ = (J − R)Qx + (B − P)u, y = (B + P) T Qx + Du,
where B ∈ R n,m and J, R, Q ∈ R n,n are such that • J is skew-symmetric, • Q is symmetric positive definite, and • D, P and R = R T satisfy
$$\begin{bmatrix} R & P \\ P^T & \tfrac{1}{2}(D + D^T) \end{bmatrix} \succeq 0.$$
We will be interested in the case D = 0 and P = 0. In this case we will write (J, R, Q, B) to characterize the system and use the abbreviations A := (J − R)Q and C := B T Q. The function H : x → 1 2 x T Qx is usually called the Hamiltonian of (J, R, Q, B). We will also call Q the Hamiltonian. Note that by our definition, all pH systems are square.
To prove our main results, we make use of a preliminary result (Proposition 2) which is concerned with the asymptotic stability of the closed loop system matrix of a system resulting from power-conserving interconnection of two systems which satisfy Lur'e equations related to the notions of (strict) positive realness. Although the definitions of positive real systems are consistent in the literature, for strict positive realness there exist multiple definitions. For example, [19] distinguishes between "weak strict positive realness" and "strict positive realness", whereas in [2] this distinction is not made. An overview of different definitions and their connections is given in [37]. There it was argued that, due to their importance for stability analysis, the Lur'e equations should be used for the definition of strict positive realness. We follow this argumentation, but emphasize the distinction from the existing definitions by avoiding the term "positive real". Solutions of Lur'e equations were studied in, e.g., [31].
We say that a square linear system (A, B,C, D) satisfies • the Lur'e equations, if A is stable and there exists a symmetric positive definite matrix P ∈ R n,n and matrices L ∈ R n,m ,W ∈ R m,m that satisfy
A T P + PA = −LL T , C T − PB = LW, W T W = D + D T . (1)
• the strong Lur'e equations, if A is asymptotically stable, B has full column rank and there exist symmetric positive definite matrices P, S ∈ R n,n and matrices L ∈ R n,m ,W ∈ R m,m that satisfy
A T P + PA = −LL T − S, C T − PB = LW, W T W = D + D T . (2)
Let us briefly point out a few differences to the standard notions of positive realness. The usual definition, e.g., [1,7,39] of positive real systems is in terms of the transfer function G(s) = C(sI − A) −1 B + D of (A, B,C, D), which has to be analytic in C + and satisfy
G(s) + G(s) H ⪰ 0 for all s ∈ C + ∪ iR.
It can then be shown that, under the assumption of minimality of (A, B,C, D), positive realness is equivalent to the system satisfying (1) [7,38]. For strict positive realness, there are multiple slightly different definitions in the literature. For example, in [16] a transfer function is termed strictly positive real if it is analytic in C + and satisfies G(s) + G(s) H ηI for all s ∈ C + for some η > 0. On the other hand, in [2] a transfer function G that is termed strictly positive real satisfies
G(iω) + G(iω) H ≻ 0 for all ω ∈ R.(3)
There, it is argued that for a strictly positive real transfer function G and a sufficiently small ε > 0 also G(· − ε) is positive real. This may contradict the claims made in [19,Examples 3 and 4], where (3) is the definition of "weak strict positive realness".
Remark 1. We may reformulate (1) as the matrix inequality
$$\begin{bmatrix} -PA - A^T P & C^T - PB \\ C - B^T P & D + D^T \end{bmatrix} = \begin{bmatrix} LL^T & LW \\ W^T L^T & W^T W \end{bmatrix} = \begin{bmatrix} L \\ W^T \end{bmatrix} \begin{bmatrix} L^T & W \end{bmatrix} \succeq 0, \qquad P = P^T \succ 0. \tag{4}$$
If D = 0, then W = 0 and the above inequality simplifies to
$$\begin{bmatrix} -PA - A^T P & C^T - PB \\ C - B^T P & 0 \end{bmatrix} \succeq 0,$$
which is equivalent to PA + A T P ⪯ 0 and C = B T P.
We can proceed similarly for (2) and observe that for a linear system (A, B,C) where A is asymptotically stable and B has full column rank a sufficient condition for the satisfaction of the strong Lur'e equations is the existence of a symmetric positive definite matrix P ∈ R n,n such that PA + A T P ≺ 0 and C = B T P.
The inequality (4) is linear in P and is commonly known as Kálmán-Yakubovich-Popov linear matrix inequality (KYP-LMI). It was shown in [38] that there exist extremal solutions X min and X max to (4) in the sense that 0 X min P X max for all solutions P of (4). Similarly, there exist extremal solutions Y min and Y max to a dual KYP-LMI. Further, provided that D + D T is nonsingular, the inequality (4) can be associated with an algebraic Riccati equation using the Schur complement. The solutions X min and Y min are used for positive real balancing [27].
For the sake of completeness, let us recall some well-known observations concerning pH systems, which are also stated in, e.g., [4]. Suppose (J, R, Q, B) is a pH system and A and C are defined as usual. Then AQ −1 = J − R, which shows that J and −R are the respective skewsymmetric and symmetric parts of AQ −1 . Hence, we have
J = 1 2 (AQ −1 − Q −1 A T ) and R = − 1 2 (AQ −1 + Q −1 A T ).(5)
Since R is symmetric positive semi-definite by assumption, also
0 ⪰ −2QRQ = QA + A T Q.
As (by our definition) all port-Hamiltonian systems are stable [23, Lemma 3.1], this shows that (J, R, Q, B) satisfies the Lur'e equations. With Remark 1 in mind, let us assume that for a general linear system (A, B,C) there exists a symmetric positive definite matrix P such that PA + A T P ⪯ 0 and C = B T P.
Then by Sylvester's law of inertia for the symmetric part of AP −1 we have
P −1 (PA + A T P)P −1 = AP −1 + P −1 A T ⪯ 0.
Since C = B T P and P is symmetric positive definite, we can choose Q := P and define J and R as in (5) to arrive at a port-Hamiltonian formulation (J, R, Q, B) of (A, B,C). We will use this construction in the proofs of Theorems 6 and 11. Note that R is definite and the system satisfies the strong Lur'e equations if PA+A T P ≺ 0 and B has full column rank. For that, recall Remark 1 and see [23,Lemma 3.1] for the asymptotic stability of A.
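A small numpy sketch of this construction: given a symmetric positive definite Q with C = B T Q, it forms J and R via (5) and checks the sign conditions. The example system below is hypothetical and only meant to illustrate the recipe.

```python
import numpy as np

def ph_representation(A, B, C, Q, tol=1e-10):
    """Given a KYP solution Q = Q^T > 0 with C = B^T Q, return the
    port-Hamiltonian representation (J, R, Q, B) of (A, B, C) via equation (5)."""
    Qinv = np.linalg.inv(Q)
    J = 0.5 * (A @ Qinv - Qinv @ A.T)      # skew-symmetric part of A Q^{-1}
    R = -0.5 * (A @ Qinv + Qinv @ A.T)     # negative symmetric part of A Q^{-1}
    assert np.allclose(C, B.T @ Q, atol=tol), "C = B^T Q violated"
    assert np.min(np.linalg.eigvalsh(R)) >= -tol, "R not positive semi-definite"
    return J, R, Q, B

# Small hypothetical example: a damped oscillator written as (A, B, C).
Q = np.diag([2.0, 1.0])                    # Hamiltonian, i.e. KYP solution
J_true = np.array([[0.0, -1.0], [1.0, 0.0]])
R_true = np.diag([0.0, 0.5])
B = np.array([[0.0], [1.0]])
A, C = (J_true - R_true) @ Q, B.T @ Q
J, R, _, _ = ph_representation(A, B, C, Q)
print(np.allclose(J, J_true), np.allclose(R, R_true))
```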
We have seen that any positive definite solution X to
$$\begin{bmatrix} -A^T X - XA & C^T - XB \\ C - B^T X & 0 \end{bmatrix} \succeq 0, \qquad X = X^T \succ 0 \tag{6}$$
defines a port-Hamiltonian representation of (A, B,C) with X as the Hamiltonian. Since multiple solutions to (6) may exist, this shows that pH formulations are not unique. This fact is, for example, exploited in [8], where the authors determine a pH representation which leads to the minimization of an a priori error bound for model reduction. Another example of this principle is [4], where maximally robust pH representations are studied. We will also make use of the non-uniqueness in Section 5.
Since there is a relationship between the Lur'e equations (1) and (2) and the possibility of a port-Hamiltonian realization, we can ask how these properties influence a closed loop system resulting from power-conserving interconnection of two such systems. Proposition 2 answers this question and appeared in similar form in [5] and [17].
Proposition 2. Suppose Σ 1 = (A 1 , B 1 ,C 1 ) and Σ 2 = (A 2 , B 2 ,C 2 )
are systems that satisfy the Lur'e equations and can be coupled via power-conserving interconnection. If Σ 1 is minimal and Σ 2 satisfies the strong Lur'e equations, then
$$A := \begin{bmatrix} A_1 & -B_1 C_2 \\ B_2 C_1 & A_2 \end{bmatrix}$$
is asymptotically stable.
Proof. Since Σ 1 and Σ 2 satisfy the (strong) Lur'e equations, there exist symmetric positive definite matrices P 1 and P 2 , matrices L 1 and L 2 , and a symmetric positive definite matrix S 2 of appropriate dimension such that
$$A_1^T P_1 + P_1 A_1 + L_1 L_1^T = 0, \qquad B_1^T P_1 - C_1 = 0, \qquad A_2^T P_2 + P_2 A_2 + L_2 L_2^T + S_2 = 0, \qquad B_2^T P_2 - C_2 = 0.$$
To prove the asymptotic stability of A, define P and R as
$$P := \begin{bmatrix} P_1 & 0 \\ 0 & P_2 \end{bmatrix} \quad\text{and}\quad R := \begin{bmatrix} R_1 & 0 \\ 0 & R_2 \end{bmatrix} := \begin{bmatrix} L_1 L_1^T & 0 \\ 0 & L_2 L_2^T + S_2 \end{bmatrix}.$$
and notice that they satisfy the Lyapunov equation
A T P + PA + R = 0.(7)
Now, assume v is a right eigenvector of A associated with the eigenvalue λ ∈ C. Multiplying (7) by v H from the left and v from the right, we obtain
2Re(λ )v H Pv = −v H Rv.
Since P ≻ 0 and R ⪰ 0, we immediately see that Re(λ) ≤ 0. Now assume Re(λ) = 0. Then also Rv = 0. Partitioning $v = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}$ appropriately and noticing ker(R 2 ) = {0}, we see that v 2 = 0. Rewriting Av = λv as
$$\begin{bmatrix} A_1 & -B_1 C_2 \\ B_2 C_1 & A_2 \end{bmatrix} \begin{bmatrix} v_1 \\ 0 \end{bmatrix} = \lambda \begin{bmatrix} v_1 \\ 0 \end{bmatrix}, \tag{8}$$
we notice that the second row reads
B 2 C 1 v 1 = 0.(9)
By assumption, the system (A 2 , B 2 ,C 2 ) satisfies the strong Lur'e equations. Hence, B 2 has full column rank and the matrix B T 2 B 2 is invertible. Multiplying (9) by B T 2 from the left, we obtain C 1 v 1 = 0. But the first row of (8) gives A 1 v 1 = λ v 1 , which together with C 1 v 1 = 0 contradicts the assumed observability of (A 1 ,C 1 ) using the Hautus test. This shows that Re(λ ) = 0 is not possible, so λ ∈ C − and A is asymptotically stable.
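As a quick numerical illustration of Proposition 2, the following sketch couples two small hypothetical pH systems, where the second one has R ≻ 0 and an input matrix of full column rank, and checks that the interconnection matrix is asymptotically stable. The data are not taken from the paper.

```python
import numpy as np

def interconnection_matrix(A1, B1, C1, A2, B2, C2):
    """Block matrix of the power-conserving interconnection used in Proposition 2."""
    top = np.hstack([A1, -B1 @ C2])
    bottom = np.hstack([B2 @ C1, A2])
    return np.vstack([top, bottom])

def ph_system(J, R, Q, B):
    """State-space matrices (A, B, C) of the pH system (J, R, Q, B)."""
    return (J - R) @ Q, B, B.T @ Q

# Hypothetical first system: a lightly damped oscillator (minimal).
J1, R1, Q1, B1 = np.array([[0.0, -1.0], [1.0, 0.0]]), np.diag([0.0, 0.2]), np.eye(2), np.array([[0.0], [1.0]])
# Hypothetical second system: scalar, R2 > 0 and B2 of full column rank,
# so it satisfies the strong Lur'e equations.
J2, R2, Q2, B2 = np.zeros((1, 1)), np.array([[0.5]]), np.array([[2.0]]), np.array([[1.0]])

A1, B1, C1 = ph_system(J1, R1, Q1, B1)
A2, B2, C2 = ph_system(J2, R2, Q2, B2)
Acl = interconnection_matrix(A1, B1, C1, A2, B2, C2)
print(np.max(np.linalg.eigvals(Acl).real))   # negative: asymptotically stable
```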
H ∞ Control and Structure Preserving Modifications
Let us state the H ∞ control problem that we will consider. Assume a linear system under the influence of an additional control w in the form of
ẋ = Ax + Bu + D 1 w, y = Cx + D 2 w
is given, where A ∈ R n,n , B ∈ R n,m ,C ∈ R m,n and w has length ℓ ≥ n + m. In the literature, w is often assumed to be a Gaussian white noise process. Here, we assume that w is an additional deterministic control input and refer to, e.g., [14,33] for the stochastic background. We require the matrices D 1 ∈ R n,ℓ and D 2 ∈ R m,ℓ to satisfy D 1 D T 2 = 0. Consider the observed variable z as
z = E 1 x + E 2 u,
where E 1 ∈ R ℓ,n and E 2 ∈ R ℓ,m are such that E T 1 E 2 = 0. Now assume that the linear system
$$\dot{\hat{x}} = \hat{A}\hat{x} + \hat{B}\hat{u}, \qquad \hat{y} = \hat{C}\hat{x}$$
is connected to the former system via power-conserving interconnection, i.e. $\hat{u} = y$ and $u = -\hat{y}$. Then the dynamics of the closed loop system are determined by
$$\begin{bmatrix} \dot{x} \\ \dot{\hat{x}} \end{bmatrix} = \begin{bmatrix} A & -B\hat{C} \\ \hat{B}C & \hat{A} \end{bmatrix} \begin{bmatrix} x \\ \hat{x} \end{bmatrix} + \begin{bmatrix} D_1 w \\ \hat{B} D_2 w \end{bmatrix}$$
and z takes the form
$$z = \begin{bmatrix} E_1 & -E_2 \hat{C} \end{bmatrix} \begin{bmatrix} x \\ \hat{x} \end{bmatrix}.$$
Note that the closed loop system matrix has the same form as A in Proposition 2, which is why we denote
$$A := \begin{bmatrix} A & -B\hat{C} \\ \hat{B}C & \hat{A} \end{bmatrix}, \qquad D := \begin{bmatrix} D_1 \\ \hat{B}D_2 \end{bmatrix}, \qquad E := \begin{bmatrix} E_1 & -E_2\hat{C} \end{bmatrix}. \tag{10}$$
The closed loop transfer function T zw = T z←w from w to z is then given by
T zw = E(sI − A) −1 D.(11)
The goal of H ∞ control design is to choose the controller ( A, B, C) so that the closed loop system matrix A is asymptotically stable and the transfer function T zw satisfies
$$\|T_{zw}\|_{H_\infty} = \sup_{\omega \in \mathbb{R}} \|T_{zw}(i\omega)\|_2 < \gamma \tag{12}$$
for some γ ∈ (0, ∞). Such a controller is termed admissible. Typically, the parameter γ is not required to be optimal, i.e. a smaller bound γ and a corresponding admissible controller might exist. We denote the smallest value of γ for which an admissible controller exists as γ 0 , only consider the case γ > γ 0 , and call the corresponding controllers suboptimal. Suboptimal H ∞ controllers were extensively studied in [13]. Unfortunately, even if the original system is port-Hamiltonian, this is not necessarily also the case for the H ∞ controller stated in [13]. This is demonstrated by Example 4.
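The quantities in (10)-(12) can be checked numerically; the sketch below assembles the closed loop matrices for a given controller and estimates the H ∞ norm by sampling the frequency axis. This is only a crude grid-based check, not a certified bound, and the smoke-test data are arbitrary.

```python
import numpy as np

def closed_loop(A, B, C, Ahat, Bhat, Chat, D1, D2, E1, E2):
    """Assemble the closed loop matrices (A, D, E) of (10) for a controller (Ahat, Bhat, Chat)."""
    Acl = np.block([[A, -B @ Chat], [Bhat @ C, Ahat]])
    Dcl = np.vstack([D1, Bhat @ D2])
    Ecl = np.hstack([E1, -E2 @ Chat])
    return Acl, Dcl, Ecl

def hinf_norm_sampled(A, D, E, omegas):
    """Estimate of sup_w ||E (iw I - A)^{-1} D||_2 over a frequency grid."""
    n = A.shape[0]
    norms = []
    for w in omegas:
        T = E @ np.linalg.solve(1j * w * np.eye(n) - A, D)
        norms.append(np.linalg.norm(T, 2))        # largest singular value
    return max(norms)

# Tiny smoke test with a hypothetical asymptotically stable closed loop.
Acl = np.array([[-1.0, 2.0], [0.0, -3.0]])
Dcl = np.array([[1.0], [1.0]])
Ecl = np.array([[1.0, 0.0]])
grid = np.logspace(-2, 2, 400)
print(hinf_norm_sampled(Acl, Dcl, Ecl, grid))
```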
Remark 3.
The H ∞ controller stated in [13] (which we will refer to as the classical H ∞ controller in the following) is, in the simplest setting, constructed as follows. Suppose the stabilizing solutions X and Y of the algebraic Riccati equations
A T X + X A − (1 − γ −2 )X BB T X +C T C = 0(13)
and
AY +YA T − (1 − γ −2 )YC T CY + BB T = 0(14)
exist and that ρ(XY ) < γ 2 . Define Z :
= (I − γ −2 Y X ) −1 and A := A − (1 − γ −2 )YC T C − BB T X Z, B := YC T , C := B T X Z.
Then ( A, B, C) yields T zw H ∞ < γ. With our problem formulation, these AREs follow from the results of [13] when the matrices D 1 , D 2 , E 1 and E 2 are chosen as If ( A, B, C) was port-Hamiltonian, then there would exist a symmetric positive definite matrix S such that C = B T S. Inserting the above equation and the definition of C, we see that S has to satisfy
$$D_1 = \begin{bmatrix} B & 0 \end{bmatrix}, \qquad D_2 = \begin{bmatrix} 0 & I \end{bmatrix}, \qquad E_1 = \begin{bmatrix} C \\ 0 \end{bmatrix}, \qquad E_2 = \begin{bmatrix} 0 \\ I \end{bmatrix}.$$
If ( A, B, C) was port-Hamiltonian, then there would exist a symmetric positive definite matrix S such that C = B T S. Inserting the above equation and the definition of C, we see that S has to satisfy
which can be written as a system of linear equations PS = F. For J, R, Q and B defined as above, the matrices P and F are given by Similar to the LQG case discussed in [8], our approach is to alter the algebraic Riccati equations (13) and (14) to ensure that the constructed controller has port-Hamiltonian structure. To show that the modified Riccati equations indeed yield a controller that ensures the closed loop error bound (12), we will follow the general idea of [18] and use multiple results of [6]. In contrast to our problem formulation, [6] considers the case where the original system and the control system are not connected by power-conserving interconnection but rather interconnected in the form u = y and u = y. Here, we have adapted the results of [6] to fit our setting. Further, for simplicity we define
V 1 := D 1 D T 1 , V 2 := D 2 D T 2 , R 1 := E T 1 E 1 , and R 2 := E T 2 E 2
and assume that both V 2 and R 2 are positive definite. Later, by an appropriate choice for these matrices, we will be able to use Proposition 5 to show our main results. In this context, the key observation from Proposition 5 is the guaranteed bound T zw H ∞ < γ.
Proposition 5 ( [6, Proposition 5.6]). Suppose that (A, B,C) is a linear system, (A, B) is stabi- lizable, (A,C) is detectable, γ > 0 and that there exist solutions X = X T ≻ 0 and Y = Y T 0 of the AREs AY +YA T +V 1 + γ −2 Y R 1 Y −YC T V −1 2 CY = 0 and (A + γ −2 Y R 1 ) T X + X (A + γ −2 Y R 1 ) + R 1 −X BR −1 2 B T X + γ −2 XYC T V −1 2 CY X = 0.
Further, assume that
A + γ −2 Y R 1 + (γ −2 YC T V −1 2 CY − BR −1 2 B T )X
is asymptotically stable and that
A + γ −2 Y R 1 + X −1 R 1 , γ −1 [R 1 + X BR −1 2 B T X ] 1/2 is observable. Define a control system ( A, B, C) via A := A −YC T V −1 2 C − BR −1 2 B T X + γ −2 Y R 1 , B := YC T V −1 2 , C := R −1 2 B T X .
If A as in (10) is asymptotically stable, then the closed loop transfer function T zw satisfies T zw H ∞ < γ.
To state our first result, we make some assumptions regarding the matrices V 1 ,V 2 , R 1 and R 2 . We assume V 2 = R 2 = I, R 1 = C T C and V 1 = 2R + (1 − γ −2 )BB T . Under these assumptions, the algebraic Riccati equations found in Proposition 5 become (15) and (16). The physical interpretation of the terms D 1 , D 2 , E 1 and E 2 is, at least partially, lost. As we have mentioned earlier, a similar approach was taken in [18].
Theorem 6 (structure-preserving H ∞ control). Suppose (J, R, Q, B) is a minimal port-Hamiltonian system, define A := (J − R)Q and C := B T Q, and assume γ > 1. Let Y = Y T 0 and X = X T 0 be the respective stabilizing solutions of the modified H ∞ filter equation
A Y + Y A T − (1 − γ −2 ) YC T C Y + (1 − γ −2 )BB T + 2R = 0(15)
and the modified H ∞ control equation
(A + γ −2 YC T C) T X + X(A + γ −2 YC T C) − (1 − γ −2 ) X BB T X +C T C = 0.(16)
Define a control system via A := A − (1 − γ −2 ) YC T C − BB T X, B := YC T , C := B T X. If ( A, B, C) satisfies the strong Lur'e equations, then ( A, B, C) is port-Hamiltonian and the transfer function T zw of the closed loop system satisfies T zw H ∞ < γ.
Proof. The proof is carried out in four steps.
(i) We show that Y is given by Y = Q −1 and that X is positive definite.
(ii) We show that the control system is port-Hamiltonian.
(iii) We show that A + γ −2 YC T C + (γ −2 YC T C Y − BB T ) X
is asymptotically stable and that
A + γ −2 YC T C + X −1 C T C, γ −1 [C T C + XBB T X] 1/2
is observable. (iv) Using Proposition 2, we argue that A is asymptotically stable. Then, together with (iii), Proposition 5 shows the claim.
Let us begin to show (i). Due to the assumed minimality of (A, B,C), it follows from γ > 1 and Hautus tests that (A T , (1 − γ −2 )C T C) is stabilizable and (A T , (1 − γ −2 )BB T + 2R) is detectable. Hence, (15) has a unique stabilizing solution. This solution is given by Y = Q −1 , since Q is symmetric positive definite and we have
AQ −1 + Q −1 A T − (1 − γ −2 )Q −1 C T CQ −1 + (1 − γ −2 )BB T + 2R = (J − R) + (−J − R) − (1 − γ −2 )BB T + (1 − γ −2 )BB T + 2R = 0.
Concerning X, notice that the stabilizability of (A + γ −2 BB T Q, (1 − γ −2 )BB T ) and the observability of A + γ −2 YC T C, C T C follow from the controllability of (A, B) and the observability of (A,C) using Hautus tests. Hence, the stabilizing solution X is symmetric positive definite.
To show that ( A, B, C) is port-Hamiltonian, we define
J := 1 2 ( A X −1 − X −1 A T ), R := − 1 2 ( A X −1 + X −1 A T )
, and Q := X.
By definition we have
A = ( J − R) Q and B = B, so B T X = B T X = C.
Since J is skew-symmetric by definition and Q = X is symmetric positive definite, it remains to show that R is positive semi-definite. Notice that
A T X + X A = (A T − (1 − γ −2 )C T C Y − XBB T ) X + X(A − (1 − γ −2 ) YC T C − BB T X) = (A T + γ −2 C T C Y ) X + X(A + γ −2 YC T C) − 2 XBB T X −C T C Y X − X YC T C.
By plugging in (16) and using Y = Q −1 , we obtain
A T X + X A = − γ −2 XBB T X −C T C − XBB T X −C T B T X − XBC = − (C + B T X) T (C + B T X) − γ −2 XBB T X 0.(17)
Using Sylvester's law of inertia and X −1 = X −T , we conclude
−2 R = A X −1 + X −1 A T = X −1 ( A T X + X A) X −1 0,
so R is symmetric positive semi-definite and the control system is port-Hamiltonian. Regarding (iii), first notice that
A + γ −2 YC T C + (γ −2 YC T C Y − BB T ) X = A + γ −2 YC T C − (1 − γ −2 )BB T X =: A 1 .
To show that this matrix is asymptotically stable, we use a standard fact regarding the solutions of Lyapunov equations associated with observable systems. For that, first note that
(A 1 , [C T C + (1 − γ −2 ) XBB T X] 1/2 ) is observable if and only if (A 1 ,C T C + (1 − γ −2 ) X BB T X) is observable.
Again using Hautus tests and γ > 1, we see that the latter matrix pair is indeed observable. Now we may deduce that A 1 is asymptotically stable if the Lyapunov equation
(A + γ −2 YC T C − (1 − γ −2 )BB T X) T P + P(A + γ −2 YC T C − (1 − γ −2 )BB T X) +C T C + (1 − γ −2 ) XBB T X = 0
has a solution P = P T ≻ 0. In fact, this solution is P = X, since we may rearrange (16) as
(A + γ −2 YC T C) T X + X(A + γ −2 YC T C) − (1 − γ −2 ) X BB T X = −C T C.
It remains to show that
A + γ −2 YC T C + X −1 C T C, γ −1 [C T C + XBB T X] 1/2 is observable.
Again, this is equivalent to the observability of
A + γ −2 YC T C + X −1 C T C, γ −2 (C T C + XBB T X) ,
which follows from the observability of (A,C).
To show (iv), first note that both (A, B,C) and ( A, B, C) satisfy the Lur'e equations. If ( A, B, C) satisfies the strong Lur'e equations, then the asymptotic stability of A follows from Proposition 2.
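A minimal numerical sketch of the construction behind Theorem 6 (case P = 0) is given below; it uses scipy's Riccati solver for (16) and the closed-form solution Yhat = Q^{-1} of (15), assumes γ > 1 and that the stabilizing solution exists, and works on a hypothetical oscillator. It is not the authors' implementation, which, per Section 5, is in MATLAB.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def modified_hinf_controller(J, R, Q, B, gamma):
    """Sketch of the structure-preserving Hoo controller of Theorem 6 (case P = 0).
    Assumes gamma > 1 and that (J, R, Q, B) is minimal."""
    A, C = (J - R) @ Q, B.T @ Q
    Yhat = np.linalg.inv(Q)                      # solves the modified filter equation (15)
    beta2 = 1.0 - gamma ** (-2)
    # Modified control equation (16) in standard ARE form with b = sqrt(beta2) * B.
    Ashift = A + gamma ** (-2) * Yhat @ C.T @ C
    Xhat = solve_continuous_are(Ashift, np.sqrt(beta2) * B, C.T @ C, np.eye(B.shape[1]))
    Ahat = A - beta2 * Yhat @ C.T @ C - B @ B.T @ Xhat
    Bhat = Yhat @ C.T                            # equals B, since Yhat = Q^{-1}
    Chat = B.T @ Xhat
    return Ahat, Bhat, Chat, Xhat

# Small hypothetical oscillator with gamma = 2.
J1 = np.array([[0.0, -1.0], [1.0, 0.0]])
R1 = np.diag([0.0, 0.2])
Q1 = np.eye(2)
B1 = np.array([[0.0], [1.0]])
Ahat, Bhat, Chat, Xhat = modified_hinf_controller(J1, R1, Q1, B1, gamma=2.0)
# pH structure of the controller: Chat = Bhat^T Xhat with Xhat = Xhat^T > 0.
print(np.allclose(Chat, Bhat.T @ Xhat), np.all(np.linalg.eigvalsh(Xhat) > 0))
```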
As we have already mentioned, our approach is based on the results of [18]. Let us remark some key differences of the two approaches.
Remark 7. In [18], the assumption R 1 ≻ C T R −1 2 C is made, which guarantees that the transfer function of the controller ( A, B, C) is strictly positive real in the sense of (3). Our choices for R 1 and R 2 were made to more closely resemble the unmodified H ∞ control and filter equations. As a consequence, the matrix inequality R 1 ≻ C T R −1 2 C becomes an equality and the transfer function of ( A, B, C) does not necessarily have to satisfy the strong Lur'e equations. Since Proposition 2 is used to show the asymptotic stability of the closed loop matrix A, and in turn that T zw H ∞ < γ holds true, we needed to assume that ( A, B, C) satisfies the strong Lur'e equations. This differs from the results of [8], where only minimality of the port-Hamiltonian system (A, B,C) was assumed.
Remark 8.
We also see that the technical assumption γ > 1 was required to ensure that (1 − γ −2 )BB T is positive semi-definite. This allowed us to show the asymptotic stability of A 1 = A + γ −2 YC T C − (1 − γ −1 )BB T X via the observability of a surrogate system. As noted in [26], our assumption γ > 1 is not a severe restriction, since for minimal systems γ 0 ≤ 1 is only possible when the system matrix A is asymptotically stable and has Hankel norm less than one.
Remark 9. If we compare the classical and modified H ∞ filter equations, which read as
AY +YA T − (1 − γ −2 )YC T CY + BB T = 0 and A Y + Y A T − (1 − γ −2 ) YC T C Y + (1 − γ −2 )BB T + 2R = 0,
respectively, we see that, similar to the LQG case discussed in [8], the filter equation is modified by adding the term −γ −2 BB T + 2R. As we have seen in the proof Theorem 6, this ensures that Y = Q −1 , which is another similarity to the LQG case. Unlike the LQG case, the classical and modified H ∞ control equations differ. They read as
A T X + X A − (1 − γ −2 )X BB T X +C T C = 0 and (A + γ −2 YC T C) T X + X(A + γ −2 YC T C) − (1 − γ −2 ) X BB T X +C T C = 0.
Since the solution Y = Q −1 is known a priori and YC T = B, the modified H ∞ filter and control equations remain decoupled.
Remark 10.
Notice that similar to the limiting behavior of the classical H ∞ controller, which approaches the classical LQG controller as γ → ∞, taking the limit γ → ∞ recovers the structurepreserving LQG controller developed in [8].
In Theorem 6, the assumption that ( A, B, C) satisfies the strong Lur'e equations is a significant loss of generality and not satisfactory. In order to overcome this assumption, note that ( A, B, C) satisfies the strong Lur'e equations when the matrix in (17) is definite and B has full column rank. As we will see, the former is ensured if we choose a symmetric positive definite matrix P ∈ R n,n and replace R 1 = C T C by R 1 = C T C + P. In turn, we also need to alter V 1 to
V 1 = 2R + (1 − γ −2 )BB T − γ −2 Q −1 PQ −1 .
The resulting modifications of the AREs are stated in Theorem 11. Note that in order to satisfy our assumption V 1 = D 1 D T 1 , the matrix V 1 needs to be positive semi-definite.
Theorem 11 (structure-preserving H ∞ control -version with P). Suppose (J, R, Q, B) is a minimal port-Hamiltonian system and that B has full column rank, define A := (J − R)Q and C := B T Q, and assume γ > 1. Let the symmetric positive definite matrix P ∈ R n,n be chosen such that 2R
+ (1 − γ −2 )BB T − γ −2 Q −1 PQ −1 is positive semi-definite. Then Y = Q −1 is a symmetric positive definite solution to the modified H ∞ filter equation A Y + Y A T + Y ((γ −2 − 1)C T C + γ −2 P) Y +(1 − γ −2 )BB T + 2R − γ −2 Q −1 PQ −1 = 0.(18)
Assume that X = X T ≻ 0 is a solution to the modified H ∞ control equation
(A + γ −2 Y (C T C + P)) T X + X(A + γ −2 Y (C T C + P)) −(1 − γ −2 ) XBB T X +C T C + P = 0,(19)
and define a control system via
A := A − (1 − γ −2 ) YC T C − BB T X + γ −2 Y P, B := YC T , C := B T X.
Then ( A, B, C) is port-Hamiltonian and the transfer function T zw of the closed loop system (A, D, E) satisfies T zw H ∞ < γ.
Proof. The proof can be carried out very similarly to the proof of Theorem 6. To see that a solution to (18) is Y = Q −1 , notice that
(J − R) + (−J − R) + (γ −2 − 1)BB T + γ −2 Q −1 PQ −1 +(1 − γ −2 )BB T + 2R − γ −2 Q −1 PQ −1 = 0.
To show that ( A, B, C) is port-Hamiltonian, we proceed as in Theorem 6 and define
J := 1 2 ( A X −1 − X −1 A T ), R := − 1 2 ( A X −1 + X −1 A T ), and Q := X.
Then B = YC T = B and C = B T X = B T X. As J is skew-symmetric by definition and Q = X is symmetric positive definite, it remains to show that R is symmetric positive semi-definite. Notice that
A T X + X A = −P − (C + B T X) T (C + B T X ) − γ −2 XBB T X ≺ 0, so R ≻ 0.
To see that A is asymptotically stable, either see [23,Lemma 3.1] or use the observability of ( A, P) together with the Hautus test and the existence of positive definite solutions to Lyapunov equations associated with observable systems. Further, notice that σ min ( B) = σ min (B) > 0 since B has full column rank. In particular, ( A, B, C) satisfies the strong Lur'e equations and the asymptotic stability of A follows from Proposition 2. To use Proposition 5 and finish the proof, we need to show that
A + γ −2 YC T C + γ −2 Y P + (γ −2 YC T C Y − BB T ) X
is asymptotically stable and that
A + γ −2 Y (C T C + P) + X −1 (C T C + P), γ −1 [C T C + P + XBB T X ] 1/2 (20)
is observable. The proof of both of these claims follows along the lines of Theorem 6 and is given only briefly here. Observe that
A + γ −2 YC T C + γ −2 Y P + (γ −2 YC T C Y − BB T ) X = A + γ −2 Y (C T C + P) − (1 − γ −2 )BB T X =: A 2 .
Again, note that (A 2 ,C T C + (1 − γ −2 ) X BB T X + P) is observable, which allows us to conclude that A 2 is asymptotically stable, since X solves the Lyapunov equation
(A + γ −2 Y (C T C + P) − (1 − γ −2 )BB T X) T X + X(A + γ −2 Y (C T C + P) − (1 − γ −2 )BB T X) + C T C + (1 − γ −2 ) X BB T X + P = 0.
The observability of (20) can be seen with the Hautus test using the invertibility of P.
Let us note that in Theorem 6 we were able to deduce that stabilizing solutions to the Riccati equations exist, whereas in Theorem 11 we did not show that Y = Q −1 is stabilizing and needed to assume that a solution X = X T ≻ 0 to (19) exists. The former is because the quadratic term in (18) is indefinite and hence the positive definite solution Y = Q −1 does not necessarily have to be stabilizing. Necessary and sufficient conditions for the indefinite case are discussed in, e.g., [11]. Regarding the existence of a suitable solution to (19), see Remark 12.
Remark 12. We can ensure that a solution X = X T ≻ 0 to (19) exists if
A + γ −2 Y (C T C + P), C T C + P(21)
is observable and
A + γ −2 Y (C T C + P), (1 − γ −2 )BB T(22)
is stabilizable. The observability of (21) follows from the invertibility of P. Regarding the stabilizability of (22), Hautus tests, the fact that B is assumed to have full rank and γ > 1 reveal that the matrix pair is stabilizable if and only if
(A + γ −2 Y P, B)
is stabilizable. As we will see in Section 4, taking P as a multiple of the Hamiltonian Q is quite natural. In this special case, where P = αQ with α ∈ [0, ∞), we have A + γ −2 Y P = A + γ −2 αI. Since identity shifts do not change the rank of the Kálmán matrix, we can deduce the controllability of (A + γ −2 αI, B). Hence, in this case a solution X = X T ≻ 0 to (19) exists.
Applications to Model Reduction
In this section, we will show how the algebraic Riccati equations from Theorem 6 and Theorem 11 can be used to develop a structure-preserving model reduction method. Our method is based on system balancing, which utilizes a change of coordinates that simultaneously diagonalizes the solutions to a pair of Lyapunov or Riccati equations. Then, certain parts of the balanced system and their corresponding states are truncated. For a general overview on balancing-related methods for model reduction we refer to, e.g., [3,9]. Classical H ∞ balancing, i.e. balancing with respect to the classical H ∞ algebraic Riccati equations (13) and (14), was extensively studied in [26]. However, the classical approach has a major drawback when it comes to the approximation of pH systems: the port-Hamiltonian structure is not preserved during the model reduction process. In other words, even when the full order system is port-Hamiltonian, the reduced order model constructed by the balancing approach will not necessarily be pH. As we will see, if the algebraic Riccati equations found in Theorem 6 and Theorem 11 are used for system balancing, then the resulting balanced truncation method will be structure-preserving. Accordingly, we will call this procedure modified H ∞ balanced truncation. Unfortunately, a few difficulties arise during the study of the model reduction error, and it is unclear how an a priori error bound in the fashion of [26] can be stated. Now, let us proceed to our modified H ∞ balanced truncation method. The first step is to balance the Gramians in question, i.e. to simultaneously diagonalize the solutions Y = Y = Q −1 and X, X to the algebraic Riccati equations from Theorem 6 and Theorem 11. Here, we assume that the matrix pair (A + γ −2 Y P, B) is stabilizable, which ensures that (19) has a positive definite stabilizing solution X. To check if standard balancing methods such as square root balancing are applicable, we examine if a state transformation x → T x transforms the Gramians as Y → TY T T and X → T −T XT −1 . Here Y ∈ { Y , Y } and X ∈ { X , X}, i.e. this transformation property has to hold for both pairs of algebraic Riccati equations found in Theorems 6 and 11. Let us focus on the second pair of algebraic Riccati equations, the pair (18) and (19), which is sufficient since for P = 0 the pair (15) and (16) is recovered. Is is easy to see that a state transformation x → T x transforms R → T RT T and Q → T −T QT −1 . Let us assume that the state transformation yields P → P. If Y is a solution to
A Y + Y A T + Y ((γ −2 − 1)C T C + γ −2 P) Y + (1 − γ −2 )BB T + 2R + γ −2 Q −1 PQ −1 = 0, then we wantȲ = T Y T T to solve TAT −1Ȳ +Ȳ T −T A T T T +Ȳ ((γ −2 − 1)T −T C T CT −1 + γ −2 P)Ȳ +(1 − γ −2 )T BB T T T + 2T RT T + γ −2 T Q −1 T T PT Q −1 T T = 0.
This is the case if and only if the state transformation acts on P as
P → T −T PT −1 = P,(23)
which is satisfied for P = αQ, where α ∈ [0, ∞), or more generally P = αS, where S is a solution to the KYP-LMI
−SA − A T S C T − SB C − B T S 0 0, S = S T ≻ 0.(24)
If (23) holds and X is a solution to
(A + γ −2 Y (C T C + P)) T X + X(A + γ −2 Y (C T C + P)) − (1 − γ −2 ) X BB T X +C T C + P = 0,
then the state transformation yields (23) is sufficient to ensure that balancing is possible.
(TAT −1 + γ −2 T Y (C T C + P)T −1 ) TX +X(TAT −1 + γ −2 T Y (C T C + P)T −1 ) −(1 − γ −2 )XT BB T T TX + T −T C T CT −1 + T −T PT −1 = 0 which is solved byX = T −T XT −1 . Hence,
In [30] it was shown that a partitioning
$$J = \begin{bmatrix} J_{11} & J_{12} \\ J_{21} & J_{22} \end{bmatrix}, \quad R = \begin{bmatrix} R_{11} & R_{12} \\ R_{21} & R_{22} \end{bmatrix}, \quad Q = \begin{bmatrix} Q_{11} & Q_{12} \\ Q_{21} & Q_{22} \end{bmatrix}, \quad\text{and}\quad B = \begin{bmatrix} B_1 \\ B_2 \end{bmatrix} \tag{25}$$
leads to the system (J 11 , R 11 , Q 11 − Q 12 Q −1 22 Q 21 , B 1 ) being port-Hamiltonian. Using this result we can now show that our modified H ∞ balanced truncation method preserves port-Hamiltonian structure.
Theorem 14. Suppose (J, R, Q, B) is a minimal port-Hamiltonian system and P = ηS, where η ∈ [0, ∞) and S solves the KYP-LMI (24). If (J, R, Q, B) is balanced with respect to (18) and (19), and partitioned as in (25), then the system
(J 11 , R 11 , Q 11 , B 1 )
is port-Hamiltonian. In particular, truncation of ((J − R)Q, B, B T Q) will preserve the port-Hamiltonian structure.
Proof. Since the solution Y of (18) is given by Y = Q −1 , in balanced coordinates with respect to (18) and (19) the matrix Y −1 = Q is diagonal. In particular, the off-diagonal blocks Q 12 and Q 21 are zero, which implies Q 11 − Q 12 Q −1 22 Q 21 = Q 11 . The claim then immediately follows from the previously mentioned results of [30].
Note that to state Theorem 14 we do not need to assume that B has full column rank. This assumption is only needed to prove the error bound ‖T_zw‖_{H∞} < γ, which is not required for Theorem 14.
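The balancing and truncation steps behind Theorem 14 can be sketched as follows: square-root balancing of Y = Q^{-1} against a solution X of (19), followed by truncation of the leading blocks. This is only a schematic illustration under the assumption that both Gramians are symmetric positive definite; it is not the implementation used for the experiments below.

```python
import numpy as np

def ph_hinf_balanced_truncation(J, R, Q, B, X, r):
    """Balance Y = Q^{-1} against X and keep the leading r x r blocks.

    The transformation T satisfies T Y T^T = T^{-T} X T^{-1} = diag(s),
    and the pH data transform as J -> T J T^T, R -> T R T^T,
    Q -> T^{-T} Q T^{-1}, B -> T B.
    """
    Y = np.linalg.inv(Q)
    Ly, Lx = np.linalg.cholesky(Y), np.linalg.cholesky(X)
    U, s, Vt = np.linalg.svd(Lx.T @ Ly)
    T = np.diag(s ** -0.5) @ U.T @ Lx.T
    Tinv = Ly @ Vt.T @ np.diag(s ** -0.5)
    Jb, Rb = T @ J @ T.T, T @ R @ T.T
    Qb, Bb = Tinv.T @ Q @ Tinv, T @ B
    # By Theorem 14 the truncated system (J_r, R_r, Q_r, B_r) is again pH.
    return Jb[:r, :r], Rb[:r, :r], Qb[:r, :r], Bb[:r, :]
```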
For most balanced truncation methods, there exist a priori error bounds for the model reduction error. For the procedure presented here, such an error bound is not easily established using the standard methods. The next remarks highlight the main difficulties in that regard.
Remark 15.
To state an a priori error bound for classical H ∞ balanced truncation, in [26] the authors used explicit constructions of coprime factorizations of the transfer function. The coprime factors are constructed using the results of [25] and the algebraic Riccati equation (14). The authors could then establish a connection between the coprime factors and the characteristic values of the method. Unfortunately, this procedure is no longer viable for the pair of modified H ∞ equations (15) and (16) (and (18) and (19)). We will focus on the first pair and define β := (1 − γ −2 ) 1/2 as in [26].
Concerning (15), with our definition of β the equation may be rewritten as
A Y + Y A^T − β^2 Y C^T C Y + β^2 B B^T + 2R = 0.
Scaling by β 2 results in
A(β^2 Y) + (β^2 Y) A^T − (β^2 Y) C^T C (β^2 Y) + β^2 ((β B)(β B)^T + 2R) = 0. Hence, if [(β B)(β B)^T + 2R]^{1/2} = L, then Ȳ := β^2 Y solves A Ȳ + Ȳ A^T − Ȳ C^T C Ȳ + (β L)(β L)^T = 0.
Using the results of [25], we obtain a (left) coprime factorization of the transfer function
C(sI − A)^{-1} (β L),
which is in general different from β G. Similarly, scaling (16) by β 2 yields
(A + γ −2 YC T C) T (β 2 X) + (β 2 X)(A + γ −2 YC T C) − (β 2 X)BB T (β 2 X) + (βC) T (βC) = 0,
which is solved byX := β 2 X. Using the results of [25] we can only state a (right) coprime factorization of
βḠ = (βC)(sI − A − γ −2 YC T C) −1 B,
which is, again, generally different from β G.
Remark 16.
Another difficulty arises when the second pair of algebraic Riccati equations, (18) and (19), is used. Since the quadratic term in (18) is inherently indefinite, we can no longer guarantee that a stabilizing solution exists. Even if such a stabilizing solution exists, it is not immediately clear that Q^{-1} is this solution. Since the results of [25] rely on stabilizing solutions to the Riccati equations, this approach can no longer be used to construct normalized coprime factorizations. If we assume that Y = Q^{-1} is the stabilizing solution to (18), then a similar reasoning as in Remark 15 applies to the second pair of AREs as well.
Numerical Experiments
In this section, we provide two numerical examples that naturally allow for a port-Hamiltonian formulation. We consider a mass-spring-damper system as in [15], and an example of a DC motor from [36]. Our focus is not on demonstrating that the new methods outperform existing ones, but rather on illustrating the theory. This is why we only consider systems of moderate state space dimension. We compare the H ∞ control scheme from Theorems 6 and 11 with the classical H ∞ control scheme from [13]. Further, we compare the modified H ∞ balanced truncation scheme with classical H ∞ balanced truncation from [26]. Since in [8] it was shown that the extremal solutions to the associated KYP-LMI play a particularly important role for model reduction, we pay special attention to these solutions as Hamiltonians. All simulations were obtained using MATLAB ® R2021b (i64) in conjunction with Rosetta 2 on an Apple M1 Pro Processor with 8 cores and 16GB of unified memory. Further, let us make the following remarks on the implementation.
• As we have mentioned earlier, Theorem 11 does not require the solutions to (18) and (19) to be stabilizing. Nevertheless, all considered algebraic Riccati equations were solved using the MATLAB® routine icare, which computes stabilizing solutions. The features of the Control System Toolbox were used to calculate the H∞ errors. For the experiments involving the modified H∞ balanced truncation method we chose γ = 2.
• The extremal solutions X_min and X_max to the KYP-LMI
\begin{bmatrix} -A^T X - XA & C^T - XB \\ C - B^T X & 0 \end{bmatrix} \succeq 0, \qquad X = X^T \succeq 0   (26)
are computed using a regularization approach, see [39] and the discussion above Theorem 2 therein, and an artificial feedthrough term D + D T = 10 −12 I.
• To ensure numeric stability, numerically minimal realizations of the systems were obtained by following the approach of [8], for which the truncation tolerance was chosen as ε_trunc = 10^{-12}. Additionally, a sign convention for encountered singular value and QR decompositions was enforced.

For details on the mass-spring-damper system, we refer to [15]. In the DC motor model of [36], the system matrices J, R, Q and B are given by
J = \begin{bmatrix} 0 & -k \\ k & 0 \end{bmatrix}, \quad R = \begin{bmatrix} r & 0 \\ 0 & b \end{bmatrix}, \quad Q = \begin{bmatrix} 1/l & 0 \\ 0 & 1/j \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 1 \\ 0 \end{bmatrix},
where k is the gyrator constant, r > 0 is associated with a resistor in the circuit, b > 0 models friction in the motor, l > 0 is the inductor constant, and j > 0 models the rotational inertia of the motor.
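For reference, the DC motor data and the associated realization A = (J − R)Q, C = B^T Q can be assembled as follows; the default constants are the ones used in the comparison below, and the function name is illustrative.

```python
import numpy as np

def dc_motor_ph(k=1.0, r=2.0, b=1.0, l=1.0, j=2.0):
    """Port-Hamiltonian data of the DC motor model and its realization."""
    J = np.array([[0.0, -k], [k, 0.0]])
    R = np.array([[r, 0.0], [0.0, b]])
    Q = np.array([[1.0 / l, 0.0], [0.0, 1.0 / j]])
    B = np.array([[1.0], [0.0]])
    A, C = (J - R) @ Q, B.T @ Q
    return J, R, Q, B, A, C
```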
H ∞ Performance
First, we compare the structure-preserving H∞ controller to the classical H∞ controller in terms of the achieved closed-loop H∞ norm ‖T_zw‖_{H∞}. For the classical setting, the matrices D_1 and E_1 were obtained as in Remark 3. In the modified case, these matrices were constructed by calculating (potentially semi-definite) Cholesky-like factors of V_1 and R_1 and extending the calculated factors with zero padding. In both cases, the matrices D_2 and E_2 were chosen as
D 2 = 0 I and E 2 = 0 I .
For γ, we considered values from [1.05, 3.95].
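In the experiments the H∞ norms were evaluated with the Control System Toolbox (see the remarks above). As a self-contained illustration only, the H∞ norm of a stable transfer function can be approximated by sampling the largest singular value along the imaginary axis; the frequency range and grid size below are arbitrary choices.

```python
import numpy as np

def hinf_norm(A, B, C, D=None, wmin=1e-3, wmax=1e3, num=2000):
    """Grid-based approximation of || C (iw I - A)^{-1} B + D ||_{H-infinity}."""
    n = A.shape[0]
    if D is None:
        D = np.zeros((C.shape[0], B.shape[1]))
    peak = 0.0
    for w in np.logspace(np.log10(wmin), np.log10(wmax), num):
        G = C @ np.linalg.solve(1j * w * np.eye(n) - A, B) + D
        peak = max(peak, np.linalg.svd(G, compute_uv=False)[0])
    return peak
```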
In Figure 1, the mass-spring-damper system from [15] is considered. We see that our modified H ∞ controller is outperformed by the classical controller, although both controllers yield a closed loop performance within the prescribed H ∞ bound. Here, only the controller constructed by Theorem 6 is considered, since with P = αQ the matrix V 1 = 2R + (1 − γ −2 )BB T − γ −2 Q −1 PQ −1 from Theorem 11 was indefinite even for small values of α.
Similar observations can be made in Figure 2, where we compare the controller synthesis methods for a model of a DC motor [36]. The model constants were chosen as k = 1, r = 2, b = 1, l = 1 and j = 2. Here, our modified H ∞ controller with P = 0 is again outperformed by the classical H ∞ controller. Further, the controller from Theorem 11 with P = Q is added to the comparison, and yields the worst performance of all considered controllers, but still clearly stays within the prescribed H ∞ bound.
Model Reduction
Now, let us illustrate the theoretical results of Section 4.
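The model reduction errors reported in the following figures are H∞ norms of the difference G − G_r between the full and the reduced transfer functions. A realization of this error system is easily assembled and can, for instance, be passed to the hinf_norm sketch above; the function below is again only an illustration.

```python
import numpy as np

def error_system(A, B, C, Ar, Br, Cr):
    """State-space realization of G - G_r for (A, B, C) and (Ar, Br, Cr)."""
    n, nr = A.shape[0], Ar.shape[0]
    Ae = np.block([[A, np.zeros((n, nr))],
                   [np.zeros((nr, n)), Ar]])
    Be = np.vstack([B, Br])
    Ce = np.hstack([C, -Cr])
    return Ae, Be, Ce
```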
In Figure 3, we show the results obtained for the balancing method when the algebraic Riccati equations (15) and (16) from Theorem 6 are used for balancing. Besides the canonical realization of the pH system, the results for the realizations associated with the extremal solutions of (26) are included. For comparison, the classical H ∞ balanced truncation approach is also added. As was already observed similarly in [8], the error clearly depends on the chosen representation, and the maximal solution X max corresponds to the smallest model reduction error. In this case, the approximation quality is close to the approximation quality of the classical H ∞ balanced truncation approach. When the minimal solution X min is used to represent the system, the reduced order model fails to approximate the full-order system and the error stagnates. The canonical representation of the system leads to much better performance, but is still clearly outperformed by the representation based on X max .
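The extremal solutions X_min and X_max entering these comparisons were computed with the regularization approach of [39] mentioned above. Purely as an illustration of what they are: for a minimal passive realization they are the trace-minimal and trace-maximal solutions of the KYP-LMI (26), which a semidefinite-programming tool such as CVXPY can compute directly for small systems. The sketch below is not the procedure used for the experiments.

```python
import numpy as np
import cvxpy as cp

def kyp_extremal(A, B, C, eps=1e-12):
    """Trace-minimal and trace-maximal solutions of the KYP-LMI (26),
    regularized with the artificial feedthrough D + D^T = eps * I."""
    n, m = B.shape
    X = cp.Variable((n, n), symmetric=True)
    W = cp.bmat([[-A.T @ X - X @ A, C.T - X @ B],
                 [C - B.T @ X, eps * np.eye(m)]])
    constraints = [W >> 0, X >> 0]
    cp.Problem(cp.Minimize(cp.trace(X)), constraints).solve()
    X_min = X.value
    cp.Problem(cp.Maximize(cp.trace(X)), constraints).solve()
    X_max = X.value
    return X_min, X_max
```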
In Figure 4, the same system is considered, and the results for the balancing method with (18) and (19) from Theorem 11 are shown. Except for the representation associated with X_min, which is omitted from the figure, the same parameters are considered. The matrix P is chosen as P = 0.001Q, where Q stems from the canonical representation of the pH system. We observe that the error significantly increases, and that the maximal representation still outperforms the canonical representation.
Example 4. Let us consider the port-Hamiltonian system (J, R, Q, B). As usual, let us define A := (J − R)Q and C := B^T Q and assume that the classical H∞ controller (Â, B̂, Ĉ) is constructed as above for γ = 2. Since (A, B, C) is port-Hamiltonian, we can rewrite the matrix B̂ as B̂ = Y C^T = Y Q B.
Â := A − (1 − γ^{-2}) Y C^T C − B B^T X, B̂ := Y C^T, Ĉ := B^T X. Then (Â, B̂, Ĉ) is port-Hamiltonian. If additionally (Â, B̂, Ĉ) satisfies the strong Lur'e equations, then the transfer function T_zw of the closed-loop system (A, D, E) satisfies ‖T_zw‖_{H∞} < γ.
Remark 13. Let us point out that the results can easily be extended to co-energy variable formulations of pH systems. Such formulations are obtained by defining the new variable z := Qx and the matrix E := Q^{-1} and writing the system (J, R, Q, B) as E ż = (J − R) z + B u, y = B^T z.
Figure 1. Comparison of the H∞ performance of the modified H∞ controller (P = 0) and the classical H∞ controller for a mass-spring-damper system of dimension n = 10.
Figure 2. Comparison of the H∞ performance of the modified H∞ controller (P = 0 and P = Q) and the classical H∞ controller for a model of a DC motor.
Figure 3. Comparison of different choices of the Hamiltonian for MOR using modified H∞ balanced truncation for a mass-spring-damper system of dimension n = 200 (numerically minimal dimension 79). We chose P = 0 and added classical H∞ balanced truncation for comparison.
Figure 4. Comparison of different choices of the Hamiltonian for MOR using modified H∞ balanced truncation for the mass-spring-damper system of dimension n = 200 (numerically minimal dimension 79). We chose P = 0.001Q. The minimal solution X_min was not considered.
P ≈ \begin{bmatrix} 1.6940 & -0.1497 \\ -0.0749 & 0.4800 \end{bmatrix} \quad\text{and}\quad F ≈ \begin{bmatrix} 2.0592 & 0.1736 \\ 0.0868 & 0.5093 \end{bmatrix},

where we only show the first five relevant digits. In particular, the matrix P is invertible and the unique solution S is given by

S ≈ \begin{bmatrix} 1.2488 & 0.1990 \\ 0.3756 & 1.0919 \end{bmatrix},

which is clearly not symmetric. Hence (Â, B̂, Ĉ) cannot be represented as a port-Hamiltonian system.
Conclusion

In this paper, we propose a new method for the design of port-Hamiltonian controllers with a guaranteed H∞ bound for the closed loop transfer function. To achieve this goal, we propose modifications of the algebraic Riccati equations used in classical H∞ controller design. Based on the modified algebraic Riccati equations, we additionally develop a structure preserving model reduction method. Using numerical experiments, we illustrate that the approximation quality of the reduced order model depends on the chosen representation of the port-Hamiltonian system, and that the maximal solution of the associated KYP-LMI appears to be best suited for the purpose of model reduction.

A natural first step for future research is to establish an a priori error bound for the presented H∞ balanced truncation method. With such an error bound, the effect of the choice of the matrix P, the parameter γ and the Hamiltonian Q on the approximation quality can be studied in more detail. Furthermore, to make the presented modified approaches feasible, future research should explore efficient and robust implementations, for example to compute the maximal solution X_max.
Acknowledgements

We thank the Deutsche Forschungsgemeinschaft for their support within the project B03 in the Sonderforschungsbereich/Transregio 154 "Mathematical Modelling, Simulation and Optimization using the Example of Gas Networks".
B. Anderson, A system theory criterion for positive real matrices, SIAM Journal on Control, 5 (1967), pp. 171–182.
B. Anderson, A simplified viewpoint of hyperstability, IEEE Transactions on Automatic Control, 13 (1968), pp. 292–294.
Approximation of Large-Scale Dynamical Systems. A Antoulas, Society for Industrial and Applied Mathematics. A. ANTOULAS, Approximation of Large-Scale Dynamical Systems, Society for Industrial and Applied Mathematics, 2005.
Robust port-Hamiltonian representations of passive systems. C Beattie, V Mehrmann, And P Van Dooren, Automatica. 100C. BEATTIE, V. MEHRMANN, AND P. VAN DOOREN, Robust port-Hamiltonian repre- sentations of passive systems, Automatica, 100 (2019), pp. 182-186.
R Benhabib, R Iwens, And R Jackson, Stability of large space structure control systems using positivity concepts. 4R. BENHABIB, R. IWENS, AND R. JACKSON, Stability of large space structure control systems using positivity concepts, Journal of Guidance Control and Dynamics, 4 (1981), pp. 487-494.
LQG control with an H ∞ performance bound: A Riccati equation approach. D Bernstein And W, Haddad, IEEE Transactions on Automatic Control. 34D. BERNSTEIN AND W. HADDAD, LQG control with an H ∞ performance bound: A Riccati equation approach, IEEE Transactions on Automatic Control, 34 (1989), pp. 293- 305.
S Boyd, L El, E Ghaoui, And V Feron, Balakrishnan, Linear Matrix Inequalities in System and Control Theory. SIAMS. BOYD, L. EL GHAOUI, E. FERON, AND V. BALAKRISHNAN, Linear Matrix Inequal- ities in System and Control Theory, SIAM, 1994.
T Breiten, R Morandin, And P Schulze, Error bounds for port-Hamiltonian model and controller reduction based on system balancing. T. BREITEN, R. MORANDIN, AND P. SCHULZE, Error bounds for port-Hamiltonian model and controller reduction based on system balancing, Computers & Mathematics with Applications, (2021).
Balancing-Related Model Reduction Methods. T Breiten And T, Stykel, T. BREITEN AND T. STYKEL, Balancing-Related Model Reduction Methods, De Gruyter, 2021, pp. 15-56.
Passivity preserving model reduction via spectral factorization. T Breiten And B, Unger, arxiv preprint 2103.13194v3, 2021T. BREITEN AND B. UNGER, Passivity preserving model reduction via spectral factor- ization, arxiv preprint 2103.13194v3, 2021.
Necessary and sufficient conditions for the existence of positive solutions to algebraic Riccati equations with indefinite quadratic term. S Chen, Applied Mathematics and Optimization. 26S. CHEN, Necessary and sufficient conditions for the existence of positive solutions to algebraic Riccati equations with indefinite quadratic term, Applied Mathematics and Op- timization, 26 (1992), pp. 95-110.
Guaranteed margins for LQG regulators. J Doyle, IEEE Transactions on Automatic Control. 23J. DOYLE, Guaranteed margins for LQG regulators, IEEE Transactions on Automatic Control, 23 (1978), pp. 756-757.
State-space solutions to standard H 2 and H ∞ control problems. J Doyle, K Glover, P Khargonekar, And B Francis, IEEE Transactions on Automatic Control. 34J. DOYLE, K. GLOVER, P. KHARGONEKAR, AND B. FRANCIS, State-space solutions to standard H 2 and H ∞ control problems, IEEE Transactions on Automatic Control, 34 (1989), pp. 831-847.
G Dullerud And F, Paganini, A Course in Robust Control Theory: A Convex Approach. Springer36G. DULLERUD AND F. PAGANINI, A Course in Robust Control Theory: A Convex Ap- proach, vol. 36, Springer, 2013.
Structurepreserving tangential interpolation for model reduction of port-Hamiltonian systems. S Gugercin, R Polyuga, C Beattie, And A Van Der, Schaft, Automatica. 48S. GUGERCIN, R. POLYUGA, C. BEATTIE, AND A. VAN DER SCHAFT, Structure- preserving tangential interpolation for model reduction of port-Hamiltonian systems, Au- tomatica, 48 (2012), pp. 1963 -1974.
Error bounds in the gap metric for dissipative balanced approximations. C Guiver And M, Opmeer, Linear Algebra and its Applications. 439C. GUIVER AND M. OPMEER, Error bounds in the gap metric for dissipative balanced approximations, Linear Algebra and its Applications, 439 (2013), pp. 3659-3698.
Explicit construction of quadratic Lyapunov functions for the small gain, positivity, circle and Popov theorems and their application to robust stability. W Haddad And D, Bernstein, Proceedings of the 30th IEEE Conference on Decision and Control. the 30th IEEE Conference on Decision and Control3W. HADDAD AND D. BERNSTEIN, Explicit construction of quadratic Lyapunov functions for the small gain, positivity, circle and Popov theorems and their application to robust stability, in Proceedings of the 30th IEEE Conference on Decision and Control, vol. 3, 1991, pp. 2618-2623.
Dissipative H 2 /H ∞ controller synthesis. W Haddad, D Bernstein, And Y Wang, American Control Conference. W. HADDAD, D. BERNSTEIN, AND Y. WANG, Dissipative H 2 /H ∞ controller synthesis, in American Control Conference, 1993, pp. 243-244.
Positive real and strictly positive real MIMO systems: Theory and application. M Hakimi-Moghadam, International Journal of Dynamics and Control. 8M. HAKIMI-MOGHADAM, Positive real and strictly positive real MIMO systems: Theory and application, International Journal of Dynamics and Control, 8 (2020), pp. 1-11.
New results in linear filtering and prediction theory. R Kálmán And R, Bucy, Journal of Basic Engineering. 83R. KÁLMÁN AND R. BUCY, New results in linear filtering and prediction theory, Journal of Basic Engineering, 83 (1961), pp. 95-108.
On the design of the dissipative LQG-type controllers. R Lozano-Leal, And S Joshi, Proceedings of the 27th IEEE Conference on Decision and Control. the 27th IEEE Conference on Decision and Control2R. LOZANO-LEAL AND S. JOSHI, On the design of the dissipative LQG-type controllers, in Proceedings of the 27th IEEE Conference on Decision and Control, vol. 2, 1988, pp. 1645-1646.
M Mamunuzzaman And H, Zwart, Structure preserving model order reduction of port-Hamiltonian systems. arXiv preprint 2203.07751v1, 2022M. MAMUNUZZAMAN AND H. ZWART, Structure preserving model order reduction of port-Hamiltonian systems, arXiv preprint 2203.07751v1, 2022.
Stability radii for linear Hamiltonian systems with dissipation under structure-preserving perturbations. C Mehl, V Mehrmann, And P Sharma, SIAM Journal on Matrix Analysis and Applications. 37C. MEHL, V. MEHRMANN, AND P. SHARMA, Stability radii for linear Hamiltonian sys- tems with dissipation under structure-preserving perturbations, SIAM Journal on Matrix Analysis and Applications, 37 (2016), pp. 1625-1654.
V And B Mehrmann, Unger, 2201.06590v1Control of port-Hamiltonian differential-algebraic systems and applications. arXiv preprintV. MEHRMANN AND B. UNGER, Control of port-Hamiltonian differential-algebraic sys- tems and applications, arXiv preprint 2201.06590v1, 2022.
A connection between normalized coprime factorizations and linear quadratic regulator theory. D Meyer And G, Franklin, IEEE Transactions on Automatic Control. 32D. MEYER AND G. FRANKLIN, A connection between normalized coprime factoriza- tions and linear quadratic regulator theory, IEEE Transactions on Automatic Control, 32 (1987), pp. 227-228.
Controller reduction by H ∞ -balanced truncation. D Mustafa And K, Glover, IEEE Transactions on Automatic Control. 36D. MUSTAFA AND K. GLOVER, Controller reduction by H ∞ -balanced truncation, IEEE Transactions on Automatic Control, 36 (1991), pp. 668-682.
Balanced parametrization of classes of linear systems. R Ober, SIAM Journal on Control and Optimization. 29R. OBER, Balanced parametrization of classes of linear systems, SIAM Journal on Con- trol and Optimization, 29 (1991), pp. 1251-1287.
Moment matching for linear port-Hamiltonian systems. R Polyuga And A, Van Der, Schaft, European Control ConferenceR. POLYUGA AND A. VAN DER SCHAFT, Moment matching for linear port-Hamiltonian systems, in 2009 European Control Conference (ECC), 2009, pp. 4715-4720.
Structure preserving model reduction of port-Hamiltonian systems by moment matching at infinity. Automatica. 46, Structure preserving model reduction of port-Hamiltonian systems by moment matching at infinity, Automatica, 46 (2010), pp. 665-672.
Effort-and flow-constraint reduction methods for structure preserving model reduction of port-Hamiltonian systems. Systems & Control Letters. , Effort-and flow-constraint reduction methods for structure preserving model reduc- tion of port-Hamiltonian systems, Systems & Control Letters, 61 (2012), pp. 412-421.
Lur'e equations and even matrix pencils. T Reis, Linear Algebra and its Applications. 434T. REIS, Lur'e equations and even matrix pencils, Linear Algebra and its Applications, 434 (2011), pp. 152-173.
P And M Schwerdtner, Voigt, SOBMOR: Structured optimization-based model order reduction. arXiv preprintP. SCHWERDTNER AND M. VOIGT, SOBMOR: Structured optimization-based model or- der reduction, arXiv preprint 2011.07567v2, 2020.
Optimal Control and Estimation, Courier Corporation. R Stengel, R. STENGEL, Optimal Control and Estimation, Courier Corporation, 1994.
A Van Der, Schaft, -Gain and Passivity Techniques in Nonlinear Control. Springer2A. VAN DER SCHAFT, L 2 -Gain and Passivity Techniques in Nonlinear Control, vol. 2, Springer, 2000.
Port-Hamiltonian systems: An introductory survey. A Van Der, Schaft, Proceedings of the International Congress of Mathematicians. the International Congress of MathematiciansEuropean Mathematical Society Publishing HouseIIIA. VAN DER SCHAFT, Port-Hamiltonian systems: An introductory survey, in Proceedings of the International Congress of Mathematicians Vol. III, no. suppl 2, European Mathe- matical Society Publishing House (EMS Ph), 2006, pp. 1339-1365.
Port-Hamiltonian systems theory: An introductory overview. A Van Der, Schaft And D, Jeltsema, Foundations and Trends in Systems and Control. 1A. VAN DER SCHAFT AND D. JELTSEMA, Port-Hamiltonian systems theory: An intro- ductory overview, Foundations and Trends in Systems and Control, 1 (2014), pp. 173-378.
Time domain and frequency domain conditions for strict positive realness. J Wen, IEEE Transactions on Automatic Control. 33J. WEN, Time domain and frequency domain conditions for strict positive realness, IEEE Transactions on Automatic Control, 33 (1988), pp. 988-992.
Least squares stationary optimal control and the algebraic Riccati equation. J Willems, IEEE Transactions on Automatic Control. 16J. WILLEMS, Least squares stationary optimal control and the algebraic Riccati equation, IEEE Transactions on Automatic Control, 16 (1971), pp. 621-634.
Dissipative dynamical systems part II: Linear systems with quadratic supply rates, Archive for Rational Mechanics and Analysis. 45, Dissipative dynamical systems part II: Linear systems with quadratic supply rates, Archive for Rational Mechanics and Analysis, 45 (1972), pp. 352-393.
| arXiv:quant-ph/9511018v1 16 Nov 1995 Q uantum N etw orks for E lem entary A rithm etic O perations V l atko Vedral ,A dri ano B arenco and A rtur Ekert C l arendon Laboratory, D epartm ent ofPhysics U niversity ofO xford,O xford,O X 1 3PU ,U .K .(Subm i tted to Phys. R ev. A ) C urrent address: B l ackett Laboratory,Im peri alC ol l ege,Pri nce C onsort R oad,London SW 7 2B Z,U . K . | 10.1103/physreva.54.147 | [
"https://export.arxiv.org/pdf/quant-ph/9511018v1.pdf"
]
| 21,359,301 | quant-ph/9511018 | dc0cda5c1421e0bc2bedadaa5b1656a5c9a87969 |
arXiv:quant-ph/9511018v1 16 Nov 1995
Quantum Networks for Elementary Arithmetic Operations
Vlatko Vedral, Adriano Barenco and Artur Ekert
Clarendon Laboratory, Department of Physics, University of Oxford, Oxford, OX1 3PU, U.K.
(Submitted to Phys. Rev. A)
Current address: Blackett Laboratory, Imperial College, Prince Consort Road, London SW7 2BZ, U.K.
Quantum computers require quantum arithmetic. We provide an explicit construction of quantum networks effecting basic arithmetic operations: from addition to modular exponentiation. Quantum modular exponentiation seems to be the most difficult (time and space consuming) part of Shor's quantum factorising algorithm. We show that the auxiliary memory required to perform this operation in a reversible way grows linearly with the size of the number to be factorised. 03.65.Ca, 07.05.Bx, 89.80.+h
I. INTRODUCTION
A quantum com puter i s a physi calm achi ne that can accept i nput states w hi ch represent a coherent superpositi on ofm any di erent possi bl e i nputs and subsequentl y evol ve them i nto a correspondi ng superposi ti on ofoutputs. C om putati on,i.e. a sequence ofuni tary transform ati ons,a ects si m ul taneousl y each el em ent ofthe superposi ti on, generati ng a m assi ve paral l eldata processi ng al bei t w i thi n one pi ece ofquantum hardware [ 1] . T hi s way quantum com puters can e ci entl y sol ve som e probl em s w hi ch are bel i eved to be i ntractabl e on any cl assi calcom puter [ 2,3] . A partfrom changi ng the com pl exi ty cl asses,the quantum theory ofcom putati on reveal sthe fundam entalconnecti ons between the l aw s ofphysi cs and the nature ofcom putati on and m athem ati cs [ 4] .
For the purpose ofthi s paper a quantum com puter w i l lbe vi ewed as a quantum network (or a fam i l y ofquantum networks) com posed ofquantum l ogi c gates;each gate perform i ng an el em entary uni tary operati on on one,two or m ore two{state quantum system s cal l ed qubits [ 5] . Each qubi t represents an el em entary uni t ofi nform ati on;i t has a chosen \com putati onal " basi s fj 0i;j 1ig correspondi ng to the cl assi calbi t val ues 0 and 1. B ool ean operati ons w hi ch m ap sequences of 0' s and 1' s i nto another sequences of 0' s and 1' s are de ned w i th respect to thi s com putati onal basi s.
A ny uni tary operati on i s reversi bl e that i s w hy quantum networks e ecti ng el em entary ari thm eti c operati ons such as addi ti on,m ul ti pl i cati on and exponenti ati on cannot be di rectl y deduced from thei r cl assi calB ool ean counterparts (cl assi call ogi cgatessuch asAN D orO R arecl earl y i rreversi bl e:readi ng 1 atthe outputofthe O R gatedoesnotprovi de enough i nform ati on to determ i ne the i nput w hi ch coul d be ei ther (0;1)or(1;0) or(1;1)). Q uantum ari thm eti c m ust be bui l t from reversi bl e l ogi calcom ponents. It has been show n that reversi bl e networks (a prerequi si te for quantum com putati on)requi resom eaddi ti onalm em ory forstori ng i nterm edi ateresul ts [ 6,7] .H encetheartofbui l di ng quantum networks i s often reduced to m i ni m i si ng thi s auxi l i ary m em ory or to opti m i si ng the trade{o between the auxi l i ary m em ory and a num ber ofcom putati onalsteps requi red to com pl ete a gi ven operati on i n a reversi bl e way.
In thi s paper we provi de an expl i ci t constructi on ofseveralel em entary quantum networks. W e focus on the space com pl exi ty i.e. on the opti m al use of the auxi l i ary m em ory. In our constructi ons, we save m em ory by reversi ng som e com putati onsw i th di erentcom putati ons(ratherthan w i th the sam e com putati on butrun backwards [ 7] ).T he networks are presented i n the ascendi ng order ofcom pl i cati on. W e start from a si m pl e quantum addi ti on,and end up w i th a m odul ar exponenti ati on
U a;N j xi j 0i! j xi j a x m od N i;(1)
w here a and N are predeterm i ned and know n param eters.T hi sparti cul aroperati on pl aysan i m portantrol e i n Shor' s quantum factori ng al gori thm [ 3]and seem s to be i ts m ost dem andi ng part. T he structure ofthe paperi sasfol l ow s:i n Secti on IIwe de ne som e basi c term sand descri be m ethodsofreversi ng som e types com putati on,i n Secti on III we provi de a detai l ed descri pti on ofthe sel ected quantum networks and i n Secti on IV we di scuss thei r com pl exi ty.
II. BASIC CONCEPTS
For com pl eteness l et us start w i th som e basi c de ni ti ons. A quantum network i s a quantum com puti ng devi ce consi sti ng ofquantum l ogi c gates w hose com putati onalsteps are synchroni sed i n ti m e. T he outputs ofsom e ofthe gates are connected by w i res to the i nputs ofothers. T he si ze ofthe network i s i ts num ber ofgates. T he si ze ofthe i nput ofthe network i s i ts num ber ofi nput qubi ts i.e. the qubi ts that are prepared appropri atel y at the begi nni ng of each com putati on perform ed by the network. Inputs are encoded i n bi nary form i n the com putati onalbasi s of sel ected qubi ts often cal l ed a quantum register,or si m pl y a register. For i nstance,the bi nary form ofnum ber 6 i s 110 and l oadi ng a quantum regi ster w i th thi s val ue i s done by prepari ng three qubi ts i n state j 1i j 1i j 0i. In the fol l ow i ng we use a m ore com pactnotati on:j aistandsforthe di rectproductj a n i j a n 1 i:::j a 1 i j a 0 iw hi ch denotes a quantum regi sterprepared w i th the val ue a = 2 0 a 0 + 2 1 a 1 + :::2 n a n . C om putati on i sde ned asa uni tary evol uti on ofthe network w hi ch takes i ts i ni ti alstate \i nput" i nto som e nalstate \output".
B oth the i nput and the output can be encoded i n severalregi sters.Even w hen f i s a one{to{one m ap between the i nput x and the output f(x) and the operati on can be form al l y w ri tten as a uni tary operator U f
U f j xi! j f(x)i;(2)
we m ay sti l lneed an auxi l i ary regi ster to store the i nterm edi ate data. W hen f i s not a bi jecti on we have to use an addi ti onalregi ster i n order to guarantee the uni tari ty ofcom putati on. In thi s case the com putati on m ust be vi ewed as a uni tary transform ati on U f of(at l east) two regi sters
U f j x;0i! j x;f(x)i;(3)
w here the second regi ster i s ofappropri ate si ze to accom m odate f(x). A s an exam pl e,consi der a functi on f a;N :x ! ax m od N . A quantum network that e ects thi s com putati on takes the val ue x from a regi sterand m ul ti pl i es i tby a param etera m odul o anotherparam eterN . Ifa and N are copri m e, the functi on i s bi jecti ve i n the i ntervalf0;1;:::;N 1g,and i t i s possi bl e to construct a network that w ri tes the answer i nto the sam e regi sterw hi ch i ni ti al l y contai ned the i nput x (as i n the equati on (2)). T hi s can be achi eved by i ntroduci ng an auxi l i ary regi ster and perform i ng
U a;N j x;0i! j x;ax m od N i:(4)
T hen we can precom pute a 1 m od N , the i nverse of a m odul o N (thi s can be done cl assi cal l y i n an e ci ent way usi ng Eucl i d' s al gori thm [ 8] ),and,by exchangi ng the two regi sters and appl yi ng U 1 a 1 m od N ;N to the resul ti ng state, we obtai n
U 1 a 1 m od N ;N Sj x;ax m od N i! U 1 a 1 m od N ;N j ax m od N ;xi! j ax m od N ;0i;(5)
w here S i s a uni tary operati on that exchanges the states ofthe two regi sters.T hus,
U 1 a 1 m od N ;N SU a;N j x;0i! j ax m od N ;0i(6)
e ecti vel y perform s
j xi! j f(x)i(7)
w here the second regi ster i s treated as an i nternalpart ofthe network (tem porary regi ster).
III. NETWORK ARCHITECTURE
Q uantum networks for basi c ari thm eti c operati ons can be constructed i n a num ber of di erent ways. A l though al m ostany non-tri vi alquantum gate operati ng on two orm ore qubi ts can be used asan el em entary bui l di ng bl ock of the networks [ 9]we have deci ded to use the three gates descri bed i n Fi g.1,hereafter refered to as el em entary gates. N one ofthese gates i s uni versalfor quantum com putati on,however,they su ce to bui l d any B ool ean functi ons as the To ol igate al one su ces to support any cl assicalreversi bl e com putati on. T he N O T and the C ontrol {N O T gates are added for conveni ence (they can be easi l y obtai ned from the To ol igates).
A. Plain adder

The addition of two registers |a⟩ and |b⟩ is probably the most basic operation; in its simplest form it can be written as

|a, b, 0⟩ → |a, b, a + b⟩.

Here we will focus on a slightly more complicated (but more useful) operation that rewrites the result of the computation into one of the input registers, i.e.

|a, b⟩ → |a, a + b⟩.

As one can reconstruct the input (a, b) out of the output (a, a + b), there is no loss of information, and the calculation can be implemented reversibly. To prevent overflows, the second register (initially loaded in state |b⟩) should be sufficiently large, i.e. if both a and b are encoded on n qubits, the second register should be of size n + 1. In addition, the network described here also requires a temporary register of size n − 1, initially in state |0⟩, to which the carries of the addition are provisionally written (the last carry is the most significant bit of the result and is written in the last qubit of the second register). Subsequently we reverse all these operations (except for the last one, which computed the leading bit of the result) in order to restore every qubit of the temporary register to its initial state |0⟩. This enables us to reuse the same temporary register, should the problem, for example, require repeated additions. During the resetting process the other n qubits of the result are computed through the relation b_i ← a_i XOR b_i XOR c_{i−1} and stored in the second register. This operation effectively computes the n first digits of the sum (the basic network that performs the summation of three qubits modulo 2 is depicted in Fig. 3ii).

If we reverse the action of the above network (i.e. if we apply each gate of the network in the reversed order) with the input (a, b), the output will produce (a, a − b) when a ≥ b. When a < b, the output is (a, 2^{n+1} − (b − a)), where n + 1 is the size of the second register. In this case the most significant qubit of the second register will always contain 1. By checking this "overflow bit" it is therefore possible to compare the two numbers a and b; we will use this operation in the network for modular addition.
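To make the carry and sum bookkeeping concrete, the following is a purely classical, bit-level simulation of the plain adder acting on computational-basis states (it is not a quantum simulation). The little-endian bit ordering and the helper names are our own conventions; the simulation uses n carry slots, of which the first is the constant zero input carry, so the circuit itself only needs n − 1 temporary qubits as stated above.

```python
def carry(ci, ai, bi, co):
    # CARRY block: two Toffoli gates around a CNOT (cf. Fig. 3i).
    co ^= ai & bi
    bi ^= ai
    co ^= ci & bi
    return bi, co

def carry_inv(ci, ai, bi, co):
    # The same block run backwards.
    co ^= ci & bi
    bi ^= ai
    co ^= ai & bi
    return bi, co

def sum_gate(ci, ai, bi):
    # SUM block: b_i <- a_i XOR b_i XOR c_i (cf. Fig. 3ii).
    return bi ^ ai ^ ci

def plain_adder(a_bits, b_bits):
    """(a, b) -> (a, a + b) on basis states; a_bits has n entries and
    b_bits has n + 1 entries, the last of which is 0 (overflow slot)."""
    n = len(a_bits)
    a, b = list(a_bits), list(b_bits)
    c = [0] * n                       # temporary carries (c[0] stays 0)
    for i in range(n - 1):            # forward pass: compute the carries
        b[i], c[i + 1] = carry(c[i], a[i], b[i], c[i + 1])
    b[n - 1], b[n] = carry(c[n - 1], a[n - 1], b[n - 1], b[n])
    b[n - 1] ^= a[n - 1]
    b[n - 1] = sum_gate(c[n - 1], a[n - 1], b[n - 1])
    for i in range(n - 2, -1, -1):    # backward pass: uncompute carries, write sums
        b[i], c[i + 1] = carry_inv(c[i], a[i], b[i], c[i + 1])
        b[i] = sum_gate(c[i], a[i], b[i])
    assert all(x == 0 for x in c)     # temporary register restored to |0>
    return a, b

# Example: 3 + 1 with n = 2 gives b = [0, 0, 1], i.e. 4.
print(plain_adder([1, 1], [1, 0, 0]))
```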
B. Adder modulo N
A sl i ght com pl i cati on occurs w hen one attem pts to bui l d a network that e ects j a;bi! j a;a + b m od N i;
w here 0 a;b < N . A s i n the case of the pl ai n adder, there i s no a pri orivi ol ati on of uni tari ty si nce the i nput (a;b) can be reconstructed from the output (a;a + b m od N ),w hen 0 a;b < N (as i t w i l lal waysbe the case). O ur approach i s based on taki ng the output ofthe pl ai n adder network,and subtracti ng N ,dependi ng on w hether the val ue a + b i s bi gger or sm al l er than N . T he m ethod,however,m ust al so accom odate a superposi ti on ofstates for w hi ch som e val ues a + b are bi gger than N and som e sm al l er than N .
Fi g.4 i l l ustratesthe vari ousstepsneeded to i m pl em entm odul araddi ti on.T he rstadderperform sa pl ai n addi ti on on the state j a;bi returni ng j a;a + bi;the rst regi ster i s then swapped w i th a tem porary regi ster form erl y l oaded w i th N ,and a subtractor (i.e. an adder w hose network i s run backwards)i s used to obtai n the state j N ;a + b N i. A t thi s stage the m ost si gni cant bi t of the second regi ster i ndi cates w hether or not an over ow occurred i n the subtracti on,i.e. w hether a + b i s sm al l er than N or not. T hi s i nform ati on i s \copi ed" i nto a tem porary qubi t j ti (i ni ti al l y prepared i n state j 0i) through the C ontrol {N O T gate. C ondi ti onal l y on the val ue ofthi s l ast qubi t j ti,N i s added back to the second regi ster,l eavi ng i t w i th the val ue a + b m od N . T hi s i s done by ei ther l eavi ng the rst regi sterw i th the val ue N (i n caseofover ow ),orresetti ng i tto 0 (i fthere i sno over ow )and then usi ng a pl ai n adder. A fter thi s operati on,the val ue ofthe rst regi ster can be reset to i ts ori gi nalval ue and the rst and the tem porary regi stercan be swapped back,l eavi ng the rsttwo regi stersi n state j a;a + bm od N i and the tem porary one i n state j 0i. A t thi s poi nt the m odul ar addi ti on has been com puted, but som e i nform ati on i s l eft i n the tem porary qubi t j ti that recorded the over ow ofthe subtracti on. T hi s tem porary qubi t cannot be reused i n a subsequent m odul ar addi ti on,unl ess i t i s coherentl y reset to zero. T he l ast two bl ocks ofthe network take care ofthi s resetti ng: rst the val ue i n the rst regi ster (= a) i s subtracted from the val ue i n the second (= a + b m od N ) yi el di ng a totalstate j a;(a + b m od N ) ai. A s before,the m ost si gni cant bi t ofthe second regi ster contai ns the i nform ati on about the over ow i n the subtracti on,i ndi cati ng w hether or not the val ue N was subtracted after the thi rd network. T hi s bi t i s then used to reset the tem porary bi t j ti to j 0i through a second C ontrol {N O T gate. Fi nal l y the l ast subtracti on i s undone,returni ng the two regi sters to the state j a;a + b m od N i.
To accountforthi sfactatthe ith m odul araddi ti on stagethe rstregi steri sl oaded w i th the val ue 2 i a i fj c;x i i= j 1;1i and w i th val ue 0 otherw i se. T hi s i s done by appl yi ng the To ol i gate to the controlqubi ts j ci and j x i i and the appropri ate target qubi t i n the regi ster;the gate i s appl i ed each ti m e val ue \1" appears i n the bi nary form ofthe num ber 2 i a. R esetti ng the regi sterto i ts i ni ti alstate i s done by appl yi ng the sam e sequence ofthe To ol igatesagai n (the order ofthe gates i s i rrel evant as they act on di erent target qubi ts). Ifj ci = j 0i onl y 0 val ues are added at each ofthe n stages to the resul t regi ster gi vi ng state j c;x;0i. Si nce we want the state to be j c;x;xi we copy the content ofthe i nput regi ster to the resul t regi ster i fj ci = j 0i. T hi s l ast operati on i s perform ed by the ri ghtm ost el em ents ofthe network ofFi g.5. T he condi ti onalcopy i s i m pl em ented usi ng an array ofTo ol igates.
D. Exponentiation Modulo N
A reversi bl e network that com putes the functi on f a;N (x) = a x m od N can now be desi gned usi ng the previ ous constructi ons. N oti ce rst that a x can be w ri tten as a x = a 2 0 x0 a 2 1 x1 :::a 2 m 1 xm 1 ,thus m odul ar exponenti ati on can be com puted by setti ng i ni ti al l y the resul t regi ster to j 1i, and successi vel y e ecti ng n m ul ti pl i cati ons by a 2 i (m odul o N ) dependi ng on the val ue ofthe qubi t j x i i;i fx i = 1,we want the operati on j a 2 0 x0 + :::2 i 1 xi 1 ;0i! j a 2 0 x0 + :::2 i 1 xi 1 ;a 2 0 x0 + :::2 i 1 xi 1 a 2 i i (12) to be perform ed,otherw i se,w hen x i = 0 we just requi re j a 2 0 x0 + :::2 i 1 xi 1 ;0i! j a 2 0 x0 + :::2 i 1 xi 1 ;a 2 0 x0 + :::2 i 1 xi 1 i:
N ote that i n both cases the resul t can be w ri tten as j a 2 0 x0 + :::2 i 1 xi 1 ;a 2 0 x0 + :::2 i xi i. To avoi d an accum ul ati on of i nterm edi ate data i n the m em ory ofthe quantum com puter,a parti cul ar care shoul d be taken to erase the parti al i nform ati on generated. T hi s i s done, as expl ai ned i n Sect. II, by runni ng backwards a control l ed m ul ti pl i cati on network w i th the val ue a 2 i m od N . T hi squanti ty can be e ci entl y precom puted i n a cl assi calway [ 8] . T he si ze ofthe descri bed networks depends on the si ze ofthei r i nput n. T he num ber ofel em entary gates i n the pl ai n adder,the m odul araddi ti on and the control l ed{m odul araddi ti on network scal esl i nearl y w i th n.T he control l ed m odul ar m ul ti pl i cati on contai ns n control l ed m odul ar addi ti ons, and thus requi res of the order of n 2 el em entary operati ons. Si m i l arl y the network for exponenti ati on contai ns ofthe order ofn control l ed m odul ar m ul ti pl i cati ons and the total num ber of el em entary operati ons i s of the order of n 3 . T he m ul ti pl i cati ve overhead factor i n front depends very m uch on w hat i s consi dered to be an el em entary gate. For exam pl e,i fwe choose the C ontrol {N O T to be our basi c uni t then the To ol igate can be si m ul ated by 6 C ontrol {N O T gates [ 10] .
Let us have a cl oser l ook at the m em ory requi rem ents for the m odul ar exponenti ati on;thi s can hel p to asses the di cul ty ofquantum factori sati on. W e set n to be the num ber ofbi ts needed to encode the param eter N ofEq.(1). In Shor' sal gori thm ,x can be asbi g asN 2 ,and thereforethe regi sterneeded to encode i trequi resup to 2n qubi ts.N ot counti ng the two i nputregi stersand an addi ti onalbi tto store the m ostsi gni cantdi gi tofthe resul t,the pl ai n adder network requi resan extra (n 1){qubi ttem porary regi sterforstori ng tem porary (carry)qubi ts. T hi sregi steri sreset to i ts i ni ti alval ue,j 0i,after each operati on ofthe network and can be reused l ater. T he m odul ar addi ti on network, i n addi ti on to the tem porary qubi t needed to store over ow s i n subtracti ons, requi res another n{qubi t tem porary regi ster;i n totalthi sm akestwo n{qubi ttem porary regi stersform odul araddi ti on.C ontrol l ed m odul arm ul ti pl i cati on i s done by repeated m odul ar addi ti ons,and requi res three tem porary n{qubi t regi sters: one for i ts ow n operati on and two for the m odul ar addi ti on (control l ed m odul ar m ul ti pl i cati on al so requi res a tem porary qubi t used by the m odul ar addi ti on network). Fi nal l y,the network for exponenti ati on needs four tem porary n{qubi t regi sters,one for i ts ow n operati on and three for the control l ed m odul ar m ul ti pl i cati on (pl us an addi ti onalqubi t used by the m odul ar addi ti on). A l together the totalnum ber of qubi ts requi red to perform the rst part of the factori sati on al gori thm i s 7n + 1,w here 2n qubi ts are used to store x,n qubi ts store the resul t a x m od N and 4n + 1 qubi ts are used as tem porary qubi ts.
T he networkspresented i n thi s paper are by no m eans the onl y orthe m ostopti m alones.T here are m any waysto construct operati on such as a x m od N ,gi ven param eters a and N . U sual l y a dedi cated network com posed ofseveral sub{uni ts does not have to be a si m pl e sum of the sub{uni ts. In the m odul ar exponenti ati on, for exam pl e, i t i s rel ati vel y easy to reduce the m em ory i.e. the constantoverhead factor(7 i n ourcase)by noti ng thatthe rstregi ster i n the pl ai n adder network al ways stores speci c cl assi calval ues: ei ther 0 and N . T he sam e hol ds for the tem porary regi ster i n the adder m odul o N w hi ch al ways stores ei ther 0 and 2 i a m od N . T here i s no need to use a ful lquantum regi ster for thi s: a cl assi calregi ster pl us a si ngl e qubi t (that keeps track of the entangl em ent) are su ci ent. T hi s reducesthe num berofqubi ts to 5n + 2.O ne furtherregi stercan be rem oved by usi ng the addi ti on network thatdoes notrequi re a tem porary regi ster [ 11] ;the tri ck i sto use the n{bi tTo ol igatesto add n{bi tnum bers. Ifthe di cul ty ofthe practi cali m pl em entati ons ofthe n{bi tTo ol igates i s com parabl e to that ofthe regul arTo ol igate,then thi s can be a good way ofsavi ng m em ory. A l ltogether the num ber ofqubi ts can be reduced from 7n + 1 to 4n + 3. T hi s m eans that apart from the regi ster stori ng x and another one stori ng a x m od N we need addi ti onaln + 3 tem porary qubi tsto perform quantum m odul arexponenti ati on i n Shor' sal gori thm .T he requi red m em ory grow sonl y asa l i near functi on ofthe si ze ofN .
V . C O N C L U SIO N
In thi s paper we have expl i ci tl y constructed quantum networks perform i ng el em entary ari thm eti c operati ons i ncl udi ng the m odul ar exponenti ati on w hi ch dom i nates the overal lti m e and m em ory com pl exi ty i n Shor' s quantum factori sati on al gori thm .O urnetwork forthe m odul arexponenti ati on achi evesonl y a l i neargrow th ofauxi l i ary m emory by expl oi ti ng the fact that f a;N (x) = ax m od N i s a bi jecti on (w hen a and N are copri m e) and can be m ade reversi bl e by si m pl e auxi l i ary com putati ons. In m ore practi calterm s ourresul ts i ndi cate thatw i th the \trapped i ons com puter" [ 12]about 20 i ons su ce (at l east i n pri nci pl e) to factor N = 15. N eedl ess to say,the form ofthe actual network thatw i l lbe used i n the rstquantum com puterw i l lgreatl y depend on the type oftechnol ogy em pl oyed;the noti on ofan opti m alnetwork i sarchi tecture dependentand any furtheropti m i sati on hasto awai tfuture experi m ental progress. T he rst and the second netw ork add a and b together and then subtract N . T he over ow i s recorded i nto the tem porary qubi t j ti. T he next netw ork cal cul ates (a + b)m od N . A t thi s stage w e have extra i nform ati on about the val ue of the over ow stored i n j ti. T he l ast tw o bl ocks restore j ti to j 0i. T he arrow before the thi rd pl ai n adder m eans that the rst regi ster i s set to j 0i i fthe val ue ofthe tem porary qubi t j ti i s 1 and i s otherw i se l eft unchanged (thi s can be easi l y done w i th C ontrol {N O T gates,as w e know that the rst regi ster i s i n the state j N i). T he arrow after the thi rd pl ai n adderresets the rstregi ster to i tsori gi nalval ue (here j N i). T he si gni cance ofthe thi ck bl ack barsi s expl ai ned i n the capti on ofFi g.2.
FIG .5. C ontrol l ed m ul ti pl i cati on m odul o N consi sts ofconsecuti ve m odul ar addi ti ons of2 i a or 0 dependi ng on the val ues of c and xi. T he operati on before the ith m odul ar adder consi sts i n stori ng 2 i 1 a or 0 i n the tem porary regi ster dependi ng on w hether j c;xii = j 1;1i or not respecti vel y. Im m edi atel y after the addi ti on has taken pl ace,thi s operati on i s undone. A t the end,w e copy the content ofthe i nput regi ster i n the resul t regi ster onl y i fj ci= j 0i,prepari ng to account for the fact that the naloutput state shoul d be j c;x;xi and not j c;x;0i w hen c = 0. T he si gni cati on ofthe thi ck bl ack bars i s gi ven i n the capti on ofFi g.2.
T he operati on ofthe ful laddi ti on network i s i l l ustrated i n Fi g.2 and can be understood as fol l ow s: W e com pute the m ost si gni cantbi t ofthe resul t a + b. T hi s step requi rescom puti ng al lthe carri esc i through the rel ati on c i a i AN D b i AN D c i 1 , w here a i , b i and c i represent the ith qubi t of the rst, second and tem porary (carry)regi ster respecti vel y. Fi g.3i ) i l l ustrates the sub{network that e ects the carry cal cul ati on.
Functi on f a;N (x) = ax m od N can be i m pl em ented by repeated condi ti onaladdi ti ons (m odul o N ): ax = 2 0 ax 0 + 2 1 ax 1 + :::2 n 1 ax n 1 . Starti ng from a regi ster i ni ti al l y i n the state j 0i,the network consi sts si m pl y ofn stages i n w hi ch the val ue 2 i a i s added condi ti onal l y,dependi ng on the state ofthe qubi t j x i i. Fi g.5 show s the correspondi ng network;i t i s sl i ghtl y com pl i cated by the fact that we want the m ul ti pl i cati on to be e ected condi ti onal l y upon the val ue ofsom e externalqubi t j ci,nam el y,we want to i m pl em ent j c;x;0i! j c;x;a x m od N i i fj ci= j 1i j c;x;xi i fj ci= j 0i
Fi g.6 show s the network for a com pl ete m odul ar exponenti ati on. It i s m ade out ofm stages;each stage perform s the fol l ow i ng sequence ofoperati ons: j a 2 0 x0 + :::2 i 1 xi 1 ;0i! (m ul ti pl i cati on) j a 2 0 x0 + :::2 i 1 xi 1 ;a 2 0 x0 + :::2 i xi i! (sw appi ng) j a 2 0 x0 + :::2 i xi ;a 2 0 x0 + :::2 i 1 xi 1 i! (resetti ng) j a 2 0 x0 + :::2 i xi ;0i
V I. A C K N O W L E D G M E N T S V .V .thanksthe R oyalSoci ety forthe vacati on schol arshi p w hi ch enabl ed hi m to undertake the research projecton the subject ofthe paper. A .B .acknow l edgesthe nanci alsupport ofthe B errow s Fund at Li ncol n C ol l ege,O xford.
FIG . 1 .
1Truth tabl es and graphi cal representati ons of the el em entary quantum gates used for the constructi on of m ore com pl i cated quantum netw orks. T he controlqubi ts are graphi cal l y represented by a dot,the target qubi ts by a cross. i ) N O T operati on. i i ) C ontrol {N O T.T hi s gate can be seen as a \copy operati on" i n the sense that a target qubi t (b) i ni ti al l y i n the state 0 w i l lbe after the acti on ofthe gate i n the sam e state as the controlqubi t. i i i ) To ol igate. T hi s gate can al so be seen as a C ontrol {control {N O T:the target bi t (c) undergoes a N O T operati on onl y w hen the tw o control s (a and b) are i n state 1. . + . mod N + + FIG .2. Pl ai n adder netw ork. In a rst step, al lthe carri es are cal cul ated unti lthe l ast carry gi ves the m ost si gni cant di gi t of the resul t. T hen al lthese operati ons apart from the l ast one are undone i n reverse order,and the sum of the di gi ts i s perform ed correspondi ngl y. N ote the posi ti on of a thi ck bl ack bar on the ri ght or l eft hand si de of basi c carry and sum netw orks. A netw ork w i th a bar on the l eft si de represents the reversed sequence of el em entary gates em beded i n the sam e netw ork w i th the bar on the ri ght si de. FIG .4. A dder m odul o N .
Pel l i zzari ,and P.Zol l er for usefuldi scussi ons. D Eutsch, D D I V I Ncenzo, S Ardi Ner, H J , P L Ni Ght, E Ni L L, T , T he authors woul d l i ke to thank D . D eutsch, D . D i V i ncenzo, S. G ardi ner, H . J. K i m bl e, P. L. K ni ght, E. K ni l l , T .Pel l i zzari ,and P.Zol l er for usefuldi scussi ons.
D Eutsch, Proc.R .Soc.Lond.A. .R .Soc.Lond.A40097D .D eutsch,Proc.R .Soc.Lond.A 400,97 (1985).
D Eutsch, R Jozsa, Proc. R . Soc. Lond. A. R . Soc. Lond. A439553D . D eutsch and R . Jozsa, Proc. R . Soc. Lond. A 439, 553 (1992);
E , U Vazi Rani, Proc. 25th A C M Sym posium on the T heory of C om putation. 25th A C M Sym posium on the T heory of C om putation11E. B ernstei n and U . Vazi rani , i n Proc. 25th A C M Sym posium on the T heory of C om putation, 11 (1993);
D S Si M On, Proceedings of the 35th A nnual Sym posium on the Foundations ofC om puter Science. S.G ol dw asserthe 35th A nnual Sym posium on the Foundations ofC om puter ScienceIEEE C om puter Soci ety Press16Los A l am i tos,C A )D . S. Si m on, Proceedings of the 35th A nnual Sym posium on the Foundations ofC om puter Science,edi ted by S.G ol dw asser (IEEE C om puter Soci ety Press,Los A l am i tos,C A ),16 (1994);
P W Shor, Proceedings of the 35th A nnualSym posium on the T heory of C om puter Science. S.G ol dw asserthe 35th A nnualSym posium on the T heory of C om puter ScienceIEEE C om puter Soci ety Press124Los A l am i tos,C A )P. W .Shor,i n Proceedings of the 35th A nnualSym posium on the T heory of C om puter Science, edi ted by S.G ol dw asser (IEEE C om puter Soci ety Press,Los A l am i tos,C A ),p. 124 (1994).
T he Fabric ofReal ity (V i ki ng{Pengui n Publ i shers,London,i n pri nt). D Eutsch, D .D eutsch,T he Fabric ofReal ity (V i ki ng{Pengui n Publ i shers,London,i n pri nt).
D Eutsch, Proc.R .Soc.Lond.A. .R .Soc.Lond.A42573D .D eutsch,Proc.R .Soc.Lond.A 425,73 (1989).
. R Landauer, Ib M J.R Es, Ev, 5183R .Landauer,IB M J.R es.D ev.5,183 (1961);
. C H B Ennett, Ib M J.R Es, Ev, 3216C . H B ennett,IB M J.R es.D ev.32,16 (1988);
System s T heory. T To Ol I, 1413T .To ol i ,M ath.System s T heory 14,13 (1981).
. C H Ennett, Sia M J.C Om, Put, 18766C . H .B ennett,SIA M J.C om put.18(4),766 (1989).
Sem inum ericalA l gorithm s (A ddi son-W esl ey. D E Nuth, T he A rtofC om puter Program m ing. 2N ew YorkD . E.K nuth,T he A rtofC om puter Program m ing,Vol um e 2: Sem inum ericalA l gorithm s (A ddi son-W esl ey,N ew York,1981).
A , Proc. R . Soc. Lond. A. R . Soc. Lond. A449679A . B arenco, Proc. R . Soc. Lond. A , 449, 679 (1995);
. T Sl, H , Phys. R ev. Lett. 744087T . Sl eator and H . W ei nfurter, Phys. R ev. Lett. 74 4087 (1995);
D Eutsch, A Arenco, A Ekert, Proc.R .Soc.Lond.A. .R .Soc.Lond.A449669D .D eutsch,A .B arenco and A .Ekert,Proc.R .Soc.Lond.A 449 669 (1995);
. S Oyd, Phys.R ev.Lett. 75346S.Ll oyd,Phys.R ev.Lett.75,346 (1995).
. A Arenco, C H Ennett, R , D P . D I V I Cenzo, N Us, P Shor, T Sl Eator, J Sm Ol I N, H , Phys.R ev.A. 523457A . B arenco, C . H . B ennett, R . C l eve, D . P. D i V i cenzo, N . M argol us, P. Shor, T . Sl eator, J. Sm ol i n and H . W ei nfurter, Phys.R ev.A 52,3457 (1995).
Pel l i zzariand P.Zol l er,private com m unication. S A Ner, T , S. A .G ardi ner,T .Pel l i zzariand P.Zol l er,private com m unication.
. J I , P , Phys.R ev.Lett. 744091J. I.C i rac and P.Zol l er,Phys.R ev.Lett 74,4091 (1995).
B asi c carry and sum operati onsforthe pl ai n addi ti on netw ork. i )the carry operati on. note thatthe carry operati onFIG .3. B asi c carry and sum operati onsforthe pl ai n addi ti on netw ork. i )the carry operati on (note thatthe carry operati on
FIG .6. M odul arexponenti ati on consi stsofsuccessi ve m odul arm ul ti pl i cati onsby a 2 i . T he even netw orksperform the reverse. FIG .6. M odul arexponenti ati on consi stsofsuccessi ve m odul arm ul ti pl i cati onsby a 2 i . T he even netw orksperform the reverse
| []
Gravitational Waves from Generalized Newtonian Sources
J W Van Holten
DOI: 10.1002/prop.201800083
I review the elementary theory of gravitational waves on a Minkowski background and the quadrupole approximation. The modified conservation laws for energy and momentum keeping track of the gravitational-wave flux are presented. The theory is applied to two-body systems in bound and scattering states subject to newtonian gravity generalized to include a 1/r 3 force allowing for orbital precession. The evolution of the orbits is studied in the adiabatic approximation. From these results I derive the conditions for capture of two bodies to form a bound state by the emission of gravitational radiation.
Introduction and Overview
The existence of gravitational waves is now well-established from both direct and indirect observations. [1][2][3][4] A completely new field of astronomy is opening up which will no doubt have an impact also on other branches of astronomy and astrophysics such as dynamics and evolution of stars and galaxies. The supermassive black holes in the centers of galaxies, and possibly intermediatemass black holes in stellar clusters, will by the relatively large curvature they create in the surrounding space enhance the emission of gravitational waves from massive objects on trajectories passing close to them, whether these are on bound or open orbits. The emission of gravitational waves can even lead to the capture of objects originally in open orbits to end up in a bound state.
Apart from these radiative phenomena involving very massive black holes, the emission of gravitational waves also affects more common binary star systems like the well-known close binary neutron stars, the recently discovered binary black holes and presumably systems containing white dwarfs. [5] No doubt radiation has an impact on three- and many-body systems, especially on their stability. Detailed investigations of close binary star systems using high-order post-newtonian expansions of the Einstein equations of General Relativity have been carried out with great success; for a review see e.g. [6]. The inspiral and merger of extreme mass-ratio binaries involving a very massive black hole has also been studied directly in the background geometry of the black hole. [7][8][9][10][11] Whenever these theoretical investigations can be compared with data they seem to describe the dynamics of these systems very well, thereby also confirming General Relativity to be the best available theory for gravitational interactions. [12] The study of radiation from two-body scattering has been addressed as well, [13] although no corresponding observations have been announced so far.
Even though they may carry large amounts of energy and momentum, the deformations of space-time created by gravitational waves are extremely small. For example a flux of monochromatic gravitational waves with a frequency of 100 Hz and an extreme intensity of 1 W/m 2 will create spatial deformations of less than 1 part in 10 19 , the diameter of a proton over a distance of 1 km. This testifies as to the extreme stiffness of space and explains both why it is so difficult to create gravitational waves and to observe them. It also implies that most potential sources of gravitational waves are weak and many move on close-to-stationary almost-newtonian orbits.
This review is devoted to gravitational radiation from such weak or very weak sources. They produce the most abundant, though maybe not the most spectacular, form of gravitational waves in the universe and may eventually become relevant to a wide range of astronomical and astrophysical observations. To lowest order their description and propagation involve straightforward applications of linear field theory in Minkowski spacetime. This also provides the starting point for many more elaborate and precise calculations.
We will begin by recapturing in fairly standard fashion the wave equation for gravitational waves, its gauge invariance and its implications for the propagation and polarization states of gravitational waves. We address the quadrupole nature of the waves and the associated sources, and explain how dynamical mass quadrupole motion generates the simplest and most common weak gravitational waves. Next we derive the modification of the conservation laws for energy, momentum and angular momentum by taking account of gravitational radiation. We present equations for the transport of energy and angular momentum by gravitational waves, keeping track of the anisotropic dependence on directions.
This theory is then applied to systems of massive objects moving on generalized newtonian orbits, either in bound states or on open scattering trajectories. The generalization includes the effects of possible 1/r 3 forces causing orbital precession, which may result e.g. from many-body or post-newtonian interactions. We calculate the evolution of orbital parameters due to emission of gravitational radiation and their relations. We finish by establishing which binary scattering orbits are turned into bound states by emission of radiation.
The Wave Equation
Weak gravitational waves are dynamical fluctuations of the spacetime metric about flat Minkowski geometry. [14][15][16] Thus we can split the full space-time metric as
g μν = η μν + 2κh μν ,(1)
where κ is the positive root of
κ² = 8πG/c⁴ ≈ 2.1 × 10⁻⁴³ kg⁻¹ m⁻¹ s² , (2)
G being the newtonian constant of gravity and c the speed of light in vacuum. This endows h μν with the standard dimensions of a bosonic tensor field. Up to non-linear corrections the tensor field is postulated to satisfy the field equation
h μν − ∂ μ ∂ λ h λν − ∂ ν ∂ λ h λμ + ∂ μ ∂ ν h λ λ − η μν h λ λ − ∂ κ ∂ λ h κλ = −κ T μν ,(3)
where = η μν ∂ μ ∂ ν is the d'Alembertian and the inhomogeneous term T μν on the right-hand side represents the sources of the field. By factoring out the constant κ this tensor has the dimensions of energy per unit of volume or force per unit of area. In this treatise we always use the flat Minkowski metric η μν with signature (−, +, +, +) and its inverse η μν to raise and lower indices on components of mathematical objects like vectors and tensors. The motivation for postulating this field equation comes from the physical properties of the tensor field h μν implied by its structure. First note that defining the linear Ricci tensor
R μν = κ h μν − ∂ μ ∂ λ h λν − ∂ ν ∂ λ h λμ + ∂ μ ∂ ν h λ λ ,(4)
the trace of which reads
R = R λ λ = 2κ h λ λ − ∂ κ ∂ λ h κλ ,(5)
the field equation takes the form
R μν − 1 2 η μν R = −κ 2 T μν .(6)
This is the linearized version of Einstein's gravitational field equation in a flat background. Note also that
∂ μ R μν = 1 2 ∂ ν R,(7)
and as a result the inhomogeneous field Equation (6) is seen to imply a conservation law for the source terms:
∂ μ T μν = 0.(8)
As the energy-momentum tensor of matter and radiation has the required physical dimensions and satisfies the condition (8) in
Minkowski space it is the obvious source for the tensor field. As all physical systems possess energy and momentum this explains the universality of gravity 1 . An observation closely related to (7) is that the linear Ricci tensor is invariant under gauge transformations
h μν → h μν = h μν + ∂ μ ξ ν + ∂ ν ξ μ , R μν = R μν .(9)
By such gauge transformations one can straightforwardly eliminate four components of the field to reduce the number of independent components from ten to six. To achieve such a reduction in practice the standard procedure is to impose the De Donder condition
∂ μ h μν = 1 2 ∂ ν h μ μ .(10)
This condition reduces the linear Ricci tensor and its trace to the expressions
R μν = κ h μν , R = κ h λ λ ,(11)
and therefore the field equation turns into the inhomogeneous wave equation
h μν − 1 2 η μν h λ λ = −κ T μν .(12)
It is then convenient to redefine the field components by
h μν ≡ h μν − 1 2 η μν h λ λ ,(13)
which transform under gauge transformations as
h μν = h μν + ∂ μ ξ ν + ∂ ν ξ μ − η μν ∂ λ ξ λ .(14)
After implementing the De Donder condition the field is divergence-free and satisfies the inhomogeneous wave equation:
∂ μ h μν = 0, h μν = −κ T μν .(15)
Finally a second gauge transformation can be made without changing the De Donder condition provided the parameter satisfies itself the homogeneous wave equation:
∂ μ h μν = ∂ μ h μν + ξ ν = 0 ⇔ ξ ν = 0.(16)
Such a residual gauge transformation can be made in particular on free fields to remove the trace of the tensor field:
h λ λ = h λ λ − 2 ∂ λ ξ λ = 0,(17)
in agreement with the Equations (15) provided ξ ν = 0 and T λ λ = 0. It follows automatically that the same condition holds for the original tensor field: h λ λ = 0. Removal of the trace reduces the number of independent components of free fields to five, equal to the dimension of the irreducible spin-2 representation of the rotation group. However, as dynamical free wave fields propagate on the light cone and have only transverse polarization states, the actual number of independent dynamical components of gravitational wave fields is two. This will be discussed in the following.
Solutions of the Inhomogeneous Wave Equation
The inhomogeneous linear wave Equation (15) has many solutions: to a given solution one can always add any solution of the homogeneous equation representing free gravitational waves. Free gravitational waves can therefore appear as a background to gravitational wave signals from specific sources.
In the absence of such a background the standard causal solution for sources localized in a finite region of space is the retarded solution
h μν (x, t) = (κ/4π) ∫_{S r} d³x′ T μν (x′, t − |x − x′|) / |x − x′| , (18)
where the integration volume S r can be taken to be a large sphere of radius r = |x| containing the finite region of the sources where T μν ≠ 0 in its center. To evaluate the field by performing the integration is difficult in practice for any realistic type of sources. In order to make progress it makes sense to consider the situation in which the waves are evaluated at large distance from the sources: the radius r of the sphere is taken to be much larger than any typical dimension of the sources. For example we evaluate the waves emitted by a binary star system of orbital extension d at a distance r ≫ d. Under this assumption one can expand the integral expression on the right-hand side of (18) in inverse powers of r keeping only terms which do not fall off faster than 1/r. This results in the simpler integral
h μν (x, t) = κ 4π r Sr d 3 x T μν (x , t − r).(19)
Another simplification is possible as it is straightforward to show that for localized sources these solutions have no dynamical time components:
∂ 0 h 0μ = κ 4π r Sr d 3 x ∂ 0 T 0μ = κ 4π r Sr d 3 x ∂ i T iμ = κ 4π r ∂ Sr d 2 σr i T iμ = 0.(20)
The second equality on the first line follows from energymomentum conservation, whilst the last equality uses Gauss' theorem to convert the volume integral to a surface integral over the corresponding normal component of the energy-momentum tensor,r being the radial unit vector pointing out of the spherical surface ∂ S r . Finally the localization of the sources in a finite region near the center of the sphere guarantee the vanishing of the energy-momentum tensor on the boundary. We infer that the time components may represent static newtonian fields, but they cannot contribute to the flux of dynamical waves across the boundary of the sphere.
As concerns dynamical fields we are therefore left with the spatial components of the outgoing wave solutions (19):
h i j = κ 4π r Sr d 3 x T i j (x , t − r).(21)
In empty space far from the sources the expression on the righthand side actually represents an exact formal solution of the wave equation. Now this solution was obtained by imposing the De Donder condition (15); in addition, as argued after (17), in this region one can always find a local gauge transformation of the fields that makes them traceless. For the solution at hand this implies that after such a gauge transformation
∂ i h i j = 0 ⇒r i h i j = 0.(22)
and
h j j = h j j = 0.(23)
A detailed discussion of the necessary gauge transformations is presented in appendix A. Tensor fields obeying these conditions are called transverse and traceless (T T) and satisfy h̄ TT i j = h TT i j . We will take these properties for granted in what follows and omit the T T in the notation. Combining the above requirements the outgoing wave fields far from the source must then be represented in the T T-gauge by an expression of the form
h i j (x, t) = h i j (x, t) = κ 4π r (δ ik −r irk ) δ jl −r jrl I kl + 1 2 δ klr · I ·r ,(24)
where the spatial symmetric 3-tensor I is traceless: I kk = 0. Writing u ≡ t − r, agreement of this expression with the result (21) up to gauge transformations is obtained by taking
I i j (u) = Sr d 3 x T i j − 1 3 δ i j T kk x , u .(25)
With the help of energy-momentum conservation the integral can be rewritten in terms of the quadrupole moment of the total energy density T 00 of the sources:
I i j (u) = 1 2 ∂ 2 0 Sr d 3 x x i x j − 1 3 δ i j x 2 T 00 (x , u).(26)
The proof is easier in backward fashion; first notice that as
∂ 0 = ∂ u ∂ 2 0 T 00 (x , u) = ∂ 0 ∂ i T i0 = ∂ i ∂ j T i j (x , u);(27)
then perform two partial integrations with respect to x to reobtain (25), observing that the full energy-momentum tensor is supposed to vanish at the boundary ∂ S r . Finally considering non-relativistic sources in the center-of-mass frame, the energy density is dominated by the mass-density ρ(x, t), which allows us to replace the integral in (26) by the components of the mass quadrupole moment and write explicitly:
I i j = 1 2 d 2 Q i j dt 2 , Q i j (u) = Sr d 3 x x i x j − 1 3 δ i j x 2 ρ(x , u).(28)
Thus we get the final expression for the wave field h i j for nonrelativistic sources in the T T-gauge:
h i j (x, t) = κ 8π r (δ ik −r irk ) δ jl −r jrl × d 2 dt 2 Q kl + 1 2 δ klr · Q ·r u=t−r .(29)
For the dynamical (non-Newtonian) metric fluctuations δg μν = g μν − η μν , recalling Equations (1) and (2) this result implies that
δg 00 = δg 0i = 0; δg i j = 2G r (δ ik −r irk ) δ jl −r jrl × d 2 dt 2 Q kl + 1 2 δ klr · Q ·r u=t−r .(30)
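Equations (29)-(30) amount to a purely algebraic projection once the second time derivative of the quadrupole is known. The following minimal Python sketch carries out that projection; the function name, the explicitly restored factor c⁻⁴ (the text suppresses factors of c until Section 4) and the test tensor are illustrative choices rather than part of the original presentation:

import numpy as np

G = 6.674e-11      # Newton constant [m^3 kg^-1 s^-2]
c = 2.998e8        # speed of light [m/s]

def delta_g_TT(Qdd, n_hat, r):
    # Transverse-traceless projection of Eqs. (29)-(30): Qdd is the second time
    # derivative of the traceless mass quadrupole (3x3), n_hat the unit vector
    # towards the observer, r the distance to the source.
    P = np.eye(3) - np.outer(n_hat, n_hat)                  # delta_ik - r_i r_k
    core = Qdd + 0.5 * np.eye(3) * (n_hat @ Qdd @ n_hat)    # Q_kl + (1/2) delta_kl r.Q.r
    return (2.0 * G / (c**4 * r)) * P @ core @ P

# self-check on an arbitrary traceless symmetric tensor: the projected field
# is transverse (h.n = 0) and traceless, as required in the TT gauge
Qdd = np.array([[1.0, 0.3, 0.0], [0.3, -0.4, 0.2], [0.0, 0.2, -0.6]])
n = np.array([0.0, 0.6, 0.8])
h = delta_g_TT(Qdd, n, r=1.0e20)
print(np.allclose(h @ n, 0.0), np.isclose(np.trace(h), 0.0))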
Conservation Laws and Gravitational-Wave Fluxes
Free radiation fields (always taken in the T T-gauge) define conserved currents of energy, momentum and angular momentum; [15,16] in the conventions of the previous sections
E = 1 2 ∂ 0 h i j 2 + 1 2 ∂ k h i j 2 , P k = ∂ 0 h i j ∂ k h i j , M k = ∂ 0 h i j 2ε kmi h mj − ε kmn x m ∂ n h i j .(31)
Subject to the field equations and gauge conditions these quantities satisfy the continuity equations
∂E ∂t = ∂ j P j , ∂P k ∂t = ∂ j S j k , ∂M k ∂t = ∂ j J j k ,(32)
where
S j k = ∂ j h mn ∂ k h mn + 1 2 δ j k (∂ 0 h mn ) 2 − (∂ l h mn ) 2 , J j k = 2ε kmn h ml ∂ j h nl − 1 2 ε j kl x l (∂ 0 h mn ) 2 − (∂ l h mn ) 2 .(33)
Applying them to the free fields (29) these expressions determine the flux of energy, momentum and angular momentum carried by outgoing gravitational waves far from the source region. First, integration over a large sphere around the center of mass of the source and using Gauss' theorem gives the change in total energy, momentum and angular momentum of gravitational waves in terms of surface integrals
d E dt = ∂ Sr d 2 σr i P i , d P k dt = ∂ Sr d 2 σr i S ik , d M k dt = ∂ Sr d 2 σr i J ik .(34)
Next, on the spherical surface ∂ S r the surface element of integration taken in polar co-ordinates (r, θ, ϕ) is
d 2 σ = r 2 sin θ dθ dϕ ≡ r 2 d 2 .(35)
Evaluating the integrands on the right-hand side in Equations (34) while restoring factors of c then results in differential fluxes
d E d 2 dt = − G 8π c 5 Tr ··· Q 2 − 2r · ··· Q 2 ·r + 1 2 (r· ··· Q ·r) 2 u=t−r , d P k d 2 dt = − d E d 2 cdtr k = G 8π c 6r k Tr ··· Q 2 − 2r · ··· Q 2 ·r + 1 2 (r· ··· Q ·r) 2 u=t−r , d M k d 2 dt = − G 4π c 5 ε ki j Q · ··· Q i j − Q ·r i ··· Q ·r j +r i Q · ··· Q ·r − 1 2Q ·rr · ··· Q ·r j u=t−r .(36)
As usual overdots denote derivatives with respect to time t. The integrands themselves represent the anisotropic angular distribution of fluxes. The spherical surface integrals can be performed taking note that the quadrupole moments depend only on retarded time u = t − r , and that the angular integrals can be evaluated using the averaging procedure
X ≡ 1 4π d 2 X(θ, ϕ) ⇒ r i = r i 1r i 2r i 3 = · · · = r i 1 · · ·r i 2n+1 = 0,(37)
whilst
r ir j = 1 3 δ i j , r ir jrkrl = 1 15 δ i j δ kl + δ ik δ jl + δ il δ j k .(38)
This results in [14][15][16][17]
d E dt = − G 5c 5 Tr ··· Q 2 , d P k dt = 0, d M k dt = − 2G 5c 5 ε ki j Q · ··· Q i j .(39)
Note that the total flux of linear momentum vanishes by symmetry (in the present approximation) as it involves only products of odd numbers ofr i integrated over a full spherical surface, whereas the integrands of the energy and angular momentum contain even numbers of outward spherical unit vectors.
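The angle-averaged fluxes (39) translate directly into a few lines of linear algebra. The sketch below assumes the second and third time derivatives of the quadrupole are supplied as 3×3 arrays; the function names are illustrative:

import numpy as np

G, c = 6.674e-11, 2.998e8

def energy_flux(Qddd):
    # first of Eqs. (39): dE/dt = -(G/5c^5) Tr(Q''' Q''')
    return -G / (5.0 * c**5) * np.trace(Qddd @ Qddd)

def angular_momentum_flux(Qdd, Qddd):
    # third of Eqs. (39): dM_k/dt = -(2G/5c^5) eps_kij (Q'' . Q''')_ij ;
    # the total linear momentum flux vanishes at this order (second of Eqs. (39))
    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
    return -2.0 * G / (5.0 * c**5) * np.einsum('kij,ij->k', eps, Qdd @ Qddd)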
Generalized Newtonian 2-Body Forces
In the following we will apply the results to systems of masses moving under the influence of mutual newtonian forces, considering two-body systems interacting via a central potential. The classical description of such systems simplifies greatly, first as one can effectively reduce it to a single-body system by separating off the center-of-mass (CM) motion; second as angular momentum conservation implies the relative motion to be confined to a two-dimensional plane. Of course, the emission of gravitational radiation introduces limitations to these simplifications, but as long as the rate of energy and angular-momentum loss by the system is small the orbits will change only gradually and one can evaluate the effect of gravitational-wave emission in terms of adiabatic changes in the orbital parameters. In this section we first discuss non-dissipative motion; the effects of gravitational wave emission will be analysed afterwards. Let the bodies have masses m 1 and m 2 and positions r 1 and r 2 . To make maximal use of the simplifications we work in the CM frame in which
m 1 r 1 + m 2 r 2 = 0.
In terms of the relative separation vector r = r 2 − r 1 the positions w.r.t. the CM are
r 1 = − m 2 M r, r 2 = m 1 M r,
and Newton's third law of motion implies that
m 1r1 = −m 2r2 = μr = F (r )r,(40)
where μ is the reduced mass
μ = m 1 m 2 m 1 + m 2 ,
and F (r ) is the magnitude of the central force acting on the masses. As usual r andr represent the modulus and unit direction vector of the separation. In the absence of dissipation the energy and angular momentum of the system are conserved. In the CM frame these quantities can be written as
E = 1 2 μṙ 2 + V (r ), such that F (r ) = − d V dr ,(41)
and L = μr ×ṙ.
Angular momentum being a conserved vector, the relative motion takes place in the plane perpendicular to L, which we take to be the equatorial plane θ = π/2. Then
r = rr = r (cos ϕ, sin ϕ, 0) ,(43)
and
L = (0, 0, μℓ) ,   ℓ = r² φ̇ . (44)
In the following we will always orient the orbit such that the motion is counter-clockwise and therefore ℓ ≥ 0. The orbit is represented by the parametrized curve r (ϕ) such that
ṙ = r′ φ̇ = ℓ r′/r² , (45)
the prime denoting a derivative w.r.t. ϕ. Newton's law of central force (40) then takes the form
F (r ) = (μℓ²/r³) [ r″/r − 2r′²/r² − 1 ] = − (μℓ²/r²) [ (1/r)″ + 1/r ] . (46)
This result is tailored to suit Newton's original program of finding the law of force corresponding to a given orbit. [18] We will demonstrate it for the particular case of precessing conic sections: ellipses, parabolae and hyperbolae; these orbits are parametrized by
r = ρ / (1 − e cos nϕ) . (47)
Here ρ is known as the semi-latus rectum; e is the eccentricity: e = 0 for circles, 0 < e < 1 for precessing ellipses, e = 1 for similar parabolae and e > 1 for hyperbolae. Finally the number n determines the rate of precession. For circles this is of course irrelevant. For precessing ellipses the apastra occur for
ϕ = 2π k n ,(48)
where k is an integer; thus the apastron shift is Δϕ = 2π (1 − n)/n per turn. For precessing parabolae n determines the angle over which the directrix turns during the passage of the two bodies, i.e. the asymptotic scattering angle due to precession, also measuring
Δϕ = 2π (1 − n)/n . (49)
Similarly for hyperbolae it determines the angle between the incoming and outgoing asymptotes:
Δϕ = ϕ out − ϕ in = (2/n) [ π − arccos(1/e) ] . (50)
Substitution of the expression (47) into Equation (46) leads to the result
F (r ) = − (μ n²ℓ²/ρ) (1/r²) − μ(1 − n²)ℓ² (1/r³) , (51)
the sum of an inverse square and an inverse cube force. Identifying the inverse square term with newtonian gravity and introducing an inverse cubic force with strength βμ:
F (r ) = − GMμ r 2 − βμ r 3 ,(52)
we find
n²ℓ² = GMρ ,    n² = GMρ / (GMρ + β) . (53)
with M = m 1 + m 2 the total mass of the two-body system. Such a force follows from a potential
V (r ) = − GμM r − βμ 2r 2 .(54)
The eccentricity is determined by the radial velocity when the system is at the semi-latus rectum ϕ = π/2n, r = ρ:
r | ϕ=π/2n = − en ρ = −e GM ρ .(55)
Evaluating the total energy at the semi-latus rectum and observing it is a constant of motion then tells us that
E = GMμ 2ρ e 2 − 1 .(56)
This confirms that for e 2 < 1 the orbits are bound, whilst for e 2 ≥ 1 the orbits are open. Obviously the total angular momentum is by definition
L z = μℓ = μ √(GMρ + β) . (57)
Note that taking the first-order result for relativistic precession in Schwarzschild space-time with innermost circular orbit R isco = 6GM/c 2 one gets
n² ≈ 1 − 6GM/(c²ρ)   ⇒   β = 6G²M²/c² = GM R isco . (58)
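Before turning to the radiation, it is useful to visualise the orbit family (47) itself. The short sketch below tabulates a precessing ellipse and its apastron shift (48)-(49); the parameter values are purely illustrative:

import numpy as np

def orbit_radius(phi, rho, e, n):
    # Eq. (47): generalized conic section with precession parameter n
    return rho / (1.0 - e * np.cos(n * phi))

rho, e, n = 1.0, 0.25, 0.9                       # illustrative values, prograde precession
phi = np.linspace(0.0, 4.0 * np.pi / n, 2001)    # two successive apastron passages
r = orbit_radius(phi, rho, e, n)
x, y = r * np.cos(phi), r * np.sin(phi)          # Eq. (43), equatorial plane
print("apastron shift per turn [rad]:", 2.0 * np.pi * (1.0 - n) / n)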
Gravitational Waves from Two-Body Systems
In this section and the following we address the emission of gravitational radiation by the two-body systems described in section 5.
As announced we treat this as a form of adiabatic dissipation changing the orbital parameters (ρ, e, n) of the system. This applies only to systems in which no head-on collisions or mergers involving strong gravity effects take place; these require more powerful methods of computation. [6] To compute the amplitude h i j from Equation (29) for point masses on the quasi-newtonian orbits (47) we must first determine the components of the quadrupole moment and their derivatives. For a two-body system in the CM frame they read
Q i j = m 1 r 1i r 1 j − 1 3 δ i j r 2 1 + m 2 r 2i r 2 j − 1 3 δ i j r 2 2 = μr 2 r ir j − 1 3 δ i j ≡ μr 2R i j ,(59)
where r̂ is the orbital unit vector in the equatorial plane defined in (43). We explicitly factor out the three-tensor array R̂ with components R̂ i j describing the angular dependence of the orbits used in computing the quadrupole moments:
R̂ = ½ [ [ cos 2ϕ + 1/3 ,  sin 2ϕ ,  0 ] , [ sin 2ϕ ,  −cos 2ϕ + 1/3 ,  0 ] , [ 0 ,  0 ,  −2/3 ] ] . (60)
Next we want to compute the time derivatives of the quadrupole moment Q. For ease of computation it is convenient to introduce a set of basic three-tensors in which all our results can be expressed: the unit tensor I, the constant traceless tensor E = diag(1/3, 1/3, −2/3), the antisymmetric tensor J with only non-zero components J xy = −J yx = 1, and the orbital-phase dependent tensors M and N with non-zero components M xx = −M yy = cos 2ϕ, M xy = M yx = sin 2ϕ and N xx = −N yy = −sin 2ϕ, N xy = N yx = cos 2ϕ. They have simple algebraic properties
E 2 = 2 9 I − 1 3 E, M 2 = N 2 = −J 2 = 2 3 I + E, E · M = M · E = 1 3 M, E · N = N · E = 1 3 N, M · N = −N · M = J.(63)
In addition their derivatives are
dM dt = 2 r 2 N, dN dt = − 2 r 2 M, dE dt = dI dt = dJ dt = 0.(64)
It follows that
R = 1 2 (E + M) .(65)
Using these results and the ones in appendix B it is now straightforward to establish expressions for the quadrupole moment and its derivatives:
Q = μr 2 2 (E + M) ,Q = μ r r E + r r M + N , Q = μ 2 r 2 r r − r 2 r 2 E + r r − r 2 r 2 − 2 M + 2r r N , ··· Q = μ 3 r 4 r r − 5r r r 2 + 4r 3 r 3 (E + M) + 4 r r − 2r 2 r 2 − 1 N .(66)
More generally we can write for the n-th derivative
Q (n) = μ n r 2(n−1) Q (n) E E + Q (n) M M + Q (n) N N , n = 0, 1, 2, 3, . . . ,(67)
where the coefficients Q (n) E ,M,N can be read off from the expressions (66) or computed by taking still higher derivatives. These results can now be used to evaluate the amplitude h i j (x, t); the expression (29) for the amplitude is equivalent to
h i j (x, t) = κ 8π r Q i j −r i (Q ·r) j −r j (Q ·r) i + 1 2 δ i j +r ir j r ·Q ·r u=t−r .(68)
Note that the direction of the observer is given by the polar unit vector
r = (sin θ cos φ, sin θ sin φ, cos θ),(69)
which is distinct from the orbital unit vectorr; then the amplitude in three-tensor notation takes the form
h = (κ/8πr) (μℓ²/r²) { Q (2) E E + Q (2) M M + Q (2) N N − r̂ [ Q (2) E E·r̂ + Q (2) M M·r̂ + Q (2) N N·r̂ ]ᵀ − [ Q (2) E E·r̂ + Q (2) M M·r̂ + Q (2) N N·r̂ ] r̂ᵀ + ½ ( I + r̂ r̂ᵀ ) [ Q (2) E r̂·E·r̂ + Q (2) M r̂·M·r̂ + Q (2) N r̂·N·r̂ ] } . (70)
To evaluate this expression use E ·r = 1 3 (sin θ cos φ, sin θ sin φ, −2 cos θ) ,
M ·r = sin θ cos(2ϕ − φ), sin(2ϕ − φ), 0 , N ·r = sin θ − sin(2ϕ − φ), cos(2ϕ − φ), 0 ,(71)
and
r · E ·r = sin 2 θ − 2 3 ,r · M ·r = sin 2 θ cos 2(φ − ϕ),
r · N ·r = sin 2 θ sin 2(φ − ϕ).(72)
The simplest case is that of circular orbits with ṙ = 0 and ℓ = ωr² , where ω is the constant angular velocity such that ϕ(t) = ωt. Then
Q (2) E = Q (2) N = 0, Q(2)M = −2,(73)
Note that the frequency of the gravitational waves is twice that of the orbital motion, which is a direct consequence of their quadrupole nature.
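For a circular orbit these results reduce to simple scaling relations: the wave frequency is twice the orbital frequency, and the strain amplitude implied by Eqs. (29)-(30) is of order 4G²Mμ/(c⁴ρD) at a distance D, up to orientation factors of order unity. The helper below evaluates these scalings; the input numbers are illustrative only:

import numpy as np

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30

def circular_binary_signal(m1, m2, rho, D):
    # rho: orbital separation, D: distance to the observer (both in metres)
    M, mu = m1 + m2, m1 * m2 / (m1 + m2)
    omega = np.sqrt(G * M / rho**3)             # Kepler angular velocity of the relative orbit
    f_gw = omega / np.pi                        # gravitational-wave frequency = 2 x orbital frequency
    h = 4.0 * G**2 * M * mu / (c**4 * rho * D)  # order-of-magnitude strain amplitude
    return f_gw, h

print(circular_binary_signal(1.4 * Msun, 1.4 * Msun, 1.0e6, 1.0e22))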
Radiative Energy Loss
The first Equation (36) describes the energy flux of gravitational waves per unit of spherical angle as a function of the direction specified by the unit vector r̂. Equations (66) specify the quadrupole moments and their derivatives for two-body systems in generalized newtonian orbits (47). To evaluate the differential energy flux these quadrupole moments are to be substituted into the energy flux equation. First we compute
Q (3) 2 = μ 2 6 r 8 2 3 1 3 Q (3) 2 E + Q (3) 2 M + Q (3) 2 N I + − 1 3 Q (3) 2 E + Q (3) 2 M + Q (3) 2 N E + 2 3 Q (3) E Q (3) M M + 2 3 Q (3) E Q (3) N N .(77)
It follows that
Tr Q (3) 2 = 2μ 2 6 r 8 1 3 Q (3) 2 E + Q (3) 2 M + Q (3) 2 N ,(78)
and r · Q (3) 2 ·r = μ 2 6 r 8
4 9 Q (3) 2 E + sin 2 θ − 1 3 Q (3) 2 E + Q (3) 2 M + Q (3) 2 N + 2 3 cos 2(φ − ϕ) Q (3) E Q (3) M + 2 3 sin 2(φ − ϕ) Q (3) E Q (3) N .(79)
Finally
r · Q (3) ·r = μ 3 r 4 − 2 3 Q (3) E + sin 2 θ Q (3) E + cos 2(φ − ϕ) Q (3) M + sin 2(φ − ϕ) Q (3) N .(80)
Inserting the coefficients taken from Equation (66):
Q (3) E = Q (3) M = r r − 5r r r 2 + 4r 3 r 3 ≡ A, Q(3)N = 4 r r − 2r 2 r 2 − 1 ≡ B,(81)
the general result is
d E d 2 dt = − Gμ 2 6 8π c 5 r 8 2 A 2 + B 2 cos 2 θ − 2 A 2 sin 2 θ cos 2(φ − ϕ) − 2 AB sin 2 θ sin 2(φ − ϕ) + 1 2 sin 4 θ A 2 + B 2 + 2 A 2 cos 2(φ − ϕ) + 2 AB sin 2(φ − ϕ) + A 2 − B 2 cos 2 2(φ − ϕ + 2 AB sin 2(φ − ϕ) cos 2(φ − ϕ) .(82)
For purely Keplerian orbits this result was derived in [20]. Using the results from appendix B for the generalized newtonian orbits (47) the expressions for the quantities A and B take the form
A = n 3 r ρ (e 2 − 1) r 2 ρ 2 + 2r ρ − 1, B = − 4n 2 r ρ + 4 n 2 − 1 .(83)
The intensity distribution of gravitational radiation emitted by a bound binary system in elliptical orbit, precessing and non-precessing, is illustrated for a particular choice of parameters in appendix C. After integrating the result (82) over all angles the standard result (39) for the total energy loss becomes
d E dt = − 2Gμ 2 6 15c 5 r 8 4A 2 + 3B 2 .(84)
Substitution of the expressions (83) then results in
d E dt = − 8G 4 M 3 μ 2 15c 5 n 6 ρ 5 n 6 e 2 − 1 ρ 4 r 4 + 2n 6 ρ 5 r 5 − n 4 n 2 − 12 ρ 6 r 6 − 24n 2 n 2 − 1 ρ 7 r 7 + 12(n 2 − 1) 2 ρ 8 r 8 .(85)
In the simplest case, that of a circular orbit with e = 0, n = 1, r = ρ and with angular velocity given by
ℓ² = r⁴ω² = GMρ , (86)
this result reduces to the well-known expression
d E dt = − 32G 4 M 3 μ 2 5c 5 ρ 5 = − 2 5 2GM c 2 ρ 4 μ 2 c 3 Mρ .(87)
The last result has been cast in terms of the dimensionless compactness parameter 2GM/c 2 ρ, defined as the ratio of the Schwarzschild radius for the combined system and the actual orbital scale characterized by ρ. For non-precessing orbits for which n = 1, 2 = GMρ, the rate of energy loss is
d E dt = − 1 30 2GM c 2 ρ 4 μ 2 c 3 Mρ e 2 − 1 ρ 4 r 4 + 2 ρ 5 r 5 + 11 ρ 6 r 6 .(88)
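Equation (87) is straightforward to evaluate numerically. A minimal helper, with illustrative input values:

import numpy as np

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30

def circular_power(m1, m2, rho):
    # Eq. (87): dE/dt = -32 G^4 M^3 mu^2 / (5 c^5 rho^5) for e = 0, n = 1
    M, mu = m1 + m2, m1 * m2 / (m1 + m2)
    return -32.0 * G**4 * M**3 * mu**2 / (5.0 * c**5 * rho**5)

print(circular_power(1.4 * Msun, 1.4 * Msun, 2.0e9), "W")   # illustrative separation of 2e9 m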
The expression (85) can also be used to compute the total energy lost by the two-body system in a definite period between times t 1 and t 2 , e.g. between two periastra for bound orbits, or during the total passage of two objects in an open orbit:
ΔE = ∫_{t 1}^{t 2} dt (dE/dt) = (ρ²/ℓ) ∫_{ϕ 1}^{ϕ 2} dϕ (r²/ρ²) (dE/dt) = (ρ²/nℓ) ∫_{ψ 1}^{ψ 2} dψ (r²/ρ²) (dE/dt) , (89)
where we have introduced the integration variable ψ = nϕ. Now substitute (84) for the energy change and use
ρ r = 1 − e cos ψ.
Recalling that n 2 2 = GMρ and expanding the integrand transforms the expression to
ΔE = − (√2/30n⁶) (2GM/c²ρ)^{7/2} (μ²c²/M) ∫_{ψ 1}^{ψ 2} dψ { 12 + n⁶e² + e cos ψ (24n² − 72 − 2n⁶e²) + e² cos²ψ (−n⁶ + 12n⁴ − 120n² + 180 + n⁶e²) + e³ cos³ψ (2n⁶ − 48n⁴ + 240n² − 240) + e⁴ cos⁴ψ (−n⁶ + 72n⁴ − 240n² + 180) + e⁵ cos⁵ψ (−48n⁴ + 120n² − 72) + 12(n² − 1)² e⁶ cos⁶ψ } . (90)
The adiabatic approximation implies that we treat the parameters e and n in this interval as constants; then it is straightforward to perform the integrations. For a bound orbit with succesive periastra at ψ 1 = 0 and ψ 2 = 2π the total energy lost per period to gravitational waves is
ΔE = − (4π√2/5n⁶) (2GM/c²ρ)^{7/2} (μ²c²/M) [ 1 + (e²/24)(n⁶ + 12n⁴ − 120n² + 180) + (e⁴/96)(n⁶ + 216n⁴ − 720n² + 540) + (5e⁶/16)(n² − 1)² ] . (91)
In particular for non-precessing orbits with n = 1:
E = − 4π √ 2 5 2GM c 2 ρ 7/2 μ 2 c 2 M 1 + 73 24 e 2 + 37 96 e 4 .(92)
For the simplest case, a circular orbit with e = 0:
E = − 4π √ 2 5 2GM c 2 ρ 7/2 μ 2 c 2 M .(93)
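Numerically, the per-period loss (92) is the circular result (93) multiplied by the eccentricity enhancement factor 1 + 73e²/24 + 37e⁴/96. A short sketch, with illustrative input values:

import numpy as np

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30

def energy_loss_per_period(m1, m2, rho, e):
    # Eq. (92): energy radiated between successive periastra of a
    # non-precessing (n = 1) bound orbit with semi-latus rectum rho
    M, mu = m1 + m2, m1 * m2 / (m1 + m2)
    compactness = 2.0 * G * M / (c**2 * rho)
    enhancement = 1.0 + 73.0 / 24.0 * e**2 + 37.0 / 96.0 * e**4
    return -4.0 * np.pi * np.sqrt(2.0) / 5.0 * compactness**3.5 * mu**2 * c**2 / M * enhancement

print(energy_loss_per_period(1.4 * Msun, 1.4 * Msun, 2.0e9, 0.6), "J per orbit")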
On the other hand, for open orbits with e ≥ 1 and asymptotic values of the azimuth (ψ 1 , ψ 2 ) satisfying cos ψ 1 = cos nϕ 1 = 1 e , sin ψ 1 = 1 e e 2 − 1,
ψ 2 = 2π − ψ 1 ,(94)
the result of the integral (90) in a somewhat hybrid notation is
E = − √ 2 15n 6 2GM c 2 ρ 7/2 μ 2 c 2 M 6 k=0 I k (n, ψ 1 ) e k ,(95)
with coefficients I 0 = 12 (π − ψ 1 ) , I 1 = −24n 2 + 72 sin ψ 1 ,
I 2 = 1 2 3n 6 + 12n 4 − 120n 2 + 180 (π − ψ 1 ) + 1 2 n 6 −
For non-precessing orbits with n = 1 the expression simplifies as I 5 = I 6 = 0. The simplest case is the parabolic orbit with e = 1, n = 1 and ψ 1 = 0, resulting in
ΔE = − (425π√2/120) (2GM/c²ρ)^{7/2} μ²c²/M . (97)
These results are based on the generalized newtonian approximation. Results for scattering in the Effective One-Body formalism to all orders in v/c have been obtained in ref. [19].
Radiative Loss of Angular Momentum
The gravitational waves emitted by a system of masses in motion not only carry away energy, they also change the system's angular momentum. The last Equation (36) quantifies the directional angular momentum loss per unit of time of a non-relativistic system in terms of the change in the mass quadrupole. In this section we compute the angular momentum lost by a quasi-newtonian twobody system as we did for the energy in the previous section.
After substitution of Equations (66), (67) in the expression (36) for the differential flux of angular momentum we get
d M k d 2 dt = − G 4π c 5 μ 2 5 r 6 ε ki j × Q (2) E E + Q (2) M M + Q (2) N N · Q (3) E E + Q (3) M M + Q (3) N N i j − Q (2) E E ·r + Q (2) M M ·r + Q (2) N N ·r i × Q (3) E E ·r + Q (3) M M ·r + Q (3) N N ·r j +r i Q (2) E E + Q (2) M M + Q (2) N N jl × Q (3) E E ·r + Q (3) M M ·r + Q (3) N N ·r l − 1 2r i Q (2) E E ·r + Q (2) M M ·r + Q (2) N N ·r j × Q(3)
Er · E ·r + Q
Mr · M ·r + Q (3) Nr · N ·r(3)
The total loss of angular momentum obtained by integration over all angles as given by the result (39) is
d M k dt = − 2G 5c 5 ε ki j [Q (2) · Q (3) ] i j .
According to the expansion (67) and the multiplication rules (63) the only antisymmetric contribution to the product of Q (2) and Q (3) comes from
M · N = −N · M = J,
which has only a non-vanishing J xy = −J yx = 1 component. As the only non-trivial component of orbital angular momentum is M z this is as expected. Using the results of appendix B it follows that
d M z dt = − 4Gμ 2 5 5c 5 r 6 Q (2) M Q (3) N − Q (2) N Q (3) M = − 8Gμ 2 5 5c 5 r 6 n 4 (1 − e 2 ) r 3 ρ 3 − 2n 2 (n 2 − 1)(1 − e 2 ) r 2 ρ 2 + n 2 (n 2 + 2) r ρ − 4(n 2 − 1) .(99)
For circular orbits with r = ρ, e = 0 and n = 1 this reduces to
d M z dt = − 32G 3 μ 2 M 2 5c 5 ρ 3 GM ρ = − 2 √ 2 5 2GM c 2 ρ 7/2 μ 2 c 2 M ,(100)
and for other non-precessing orbits
d M z dt = − √ 2 10 2GM c 2 ρ 7/2 μ 2 c 2 M (1 − e 2 ) ρ 3 r 3 + 3 ρ 5 r 5 .(101)
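For a circular orbit the fluxes (87) and (100) are related by dE/dt = ω dM_z/dt, with ω the orbital angular velocity; the small check below confirms this numerically (the input values are illustrative):

import numpy as np

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30

def circular_Lz_loss(m1, m2, rho):
    # Eq. (100): dM_z/dt = -(32 G^3 mu^2 M^2 / 5 c^5 rho^3) sqrt(GM/rho)
    M, mu = m1 + m2, m1 * m2 / (m1 + m2)
    return -32.0 * G**3 * mu**2 * M**2 / (5.0 * c**5 * rho**3) * np.sqrt(G * M / rho)

m1, m2, rho = 1.4 * Msun, 1.4 * Msun, 2.0e9
M, mu = m1 + m2, m1 * m2 / (m1 + m2)
omega = np.sqrt(G * M / rho**3)
dEdt = -32.0 * G**4 * M**3 * mu**2 / (5.0 * c**5 * rho**5)      # Eq. (87)
print(np.isclose(dEdt, omega * circular_Lz_loss(m1, m2, rho)))  # True: dE/dt = omega dM_z/dt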
Following a procedure similar to the treatment of energy we can compute the change in angular momentum in a fixed period of time between precessing angles ψ 1,2 :
ΔM z = (ρ²/nℓ) ∫_{ψ 1}^{ψ 2} dψ (r²/ρ²) (dM z /dt) = − (1/5n⁵) (2GM/c²ρ)³ (μ²ρc/M) ∫_{ψ 1}^{ψ 2} dψ { 4 + e²n²(n² − 2) + e cos ψ [ 6n² − 16 − e²n²(3n² − 4) ] + e² cos²ψ [ n⁴ − 16n² + 24 + 2e²n²(n² − 1) ] + e³ cos³ψ [ −n⁴ + 14n² − 16 ] − 4(n² − 1) e⁴ cos⁴ψ } . (102)
It follows that for a bound state the angular momentum lost per period between successive periastra ψ 1 = 0 and ψ 2 = 2π is
M z = − 8π 5n 5 2GM c 2 ρ 3 μ 2 ρc M 1 + e 2 8 3n 4 − 20n 2 + 24 + e 4 8 2n 2 − 3 n 2 − 1 .(103)
For n = 1 this becomes:
M z = − 8π 5n 5 2GM c 2 ρ 3 μ 2 ρc M 1 + 7e 2 8 ;(104)
for circular motion just take e = 0. Next considering open orbits with asymptotic directions as in (94) Equation (102) takes the form
M z = − 2 5n 5 2GM c 2 ρ 3 μ 2 ρc M 4 k=0 m k (n, ψ 1 ) e k ,(105)
with coefficients
m 0 = 4 (π − ψ 1 ) , m 1 = −6n 2 + 16 sin ψ 1 , m 2 = 3 2 n 4 − 10n 2 + 12 (π − ψ 1 )m 4 = n 2 − 1 n 2 − 3 2 (π − ψ 1 )
− n 2 − 5 2 sin ψ 1 cos ψ 1 − sin 3 ψ 1 cos ψ 1 .
In particular for parabolic orbits with e = n = 1 and ψ 1 = 0:
M z = −3π 2GM c 2 ρ 3 μ 2 ρc M .(107)
In ref. [13] a similar result was derived for small-angle scattering in purely newtonian gravity with β = 0.
Evolution of Orbits
The flux of energy and angular momentum carried by gravitational waves as expressed by Equations (34) can be determined only if all components of the wave signal are known. With present interferometric detectors this is barely possible by combining the signals received by at least three instruments at different locations. However, the loss of energy and angular momentum by sources such as binary star systems is observable and allows the gravitational-wave flux to be reconstructed as in the well-known case of the binary pulsar systems. Therefore it is of some practical use to evaluate the orbital changes due to the emission of gravitational radiation by such systems. Here as in the previous sections we consider non-relativistic two-body systems, either in bound orbit or on scattering trajectories. In the adiabatic approximation on which our calculations are based the orbits of two-body systems in the CM frame are parametrized by the expression (47). We take the orbital parameters (ρ, e, n) to be slowly changing functions of time; they would be constant in the absence of gravitational radiation. According to Equations (56) and (57) the orbital energy and angular momentum are expressed in terms of these parameters by
E = GMμ 2ρ e 2 − 1 , L z = μ GMρ + β.(108)
For comparison with observational data of bound orbits it is sometimes convenient to consider the (possibly precessing) semi-major axis of the orbit related to the semi-latus rectum by
a = ρ / (1 − e²)   ⇒   E = − GMμ/(2a) . (109)
This quantity is also related to the precession parameter by
1 n 2 = 1 + β GMρ ⇒ L z = μ n GMρ.(110)
It follows that for bound orbits the orbital parameter changes are related to change in orbital energy and angular momentum by
d E dt = GMμ 2a 2 da dt , d L z dt = nμ 2 GM ρ dρ dt .(111)
As these parameters are related by (109) the changes in ρ and in eccentricy e are related as well:
1 ρ dρ dt = 1 a da dt − 1 1 − e 2 de 2 dt .(112)
Also for constant β:
1 ρ dρ dt = 2 n(1 − n 2 ) dn dt .(113)
Now by equating the change in energy and orbital angular momentum to the amount of energy ΔE and angular momentum ΔM z carried away by gravitational waves we can relate the change in orbital parameters to these parameters themselves. In particular according to Equations (91) and (103) during a period between two successive periastra the orbital parameters change by
a a = − E E = − 16π √ 2 5n 6 μ M 2GM c 2 ρ 5/2 1 1 − e 2 × 1 + e 2 24 n 6 + 12n 4 − 120n 2 + 180 + e 4 96 n 6 + 216n 4 − 720n 2 + 540 + 5e 6 16 n 2 − 1 2 , ρ ρ = 2 nμ √ GMρ M z = − 16π √ 2 5n 6 μ M 2GM c 2 ρ 5/2 × 1 + e 2 8 3n 4 − 20n 2 + 24 + e 4 8 2n 2 − 3 n 2 − 1 ,(114)
Furthermore from these results we can determine the period of the orbit between periastra and its evolution. The period itself is
T = ∫_0^{2π/n} (dt/dϕ) dϕ = (ρ²/nℓ) ∫_0^{2π} dψ (1 − e cos ψ)⁻² = [ 2π/(1 − e²)^{3/2} ] ρ²/(nℓ) = 2π a^{3/2} / √(GM) . (115)
This is the appropriate generalization of Kepler's third law for precessing orbits, which holds provided the period T is taken to be that between two periastra. From this it follows that the rate of change of the period is
dT dt = 3π a GM da dt ,(116)
and the relative change per turn is
T T = 3 2 a a .(117)
This amounts to a generalization of the Peters-Mathews equation [20] dT dt
T T = − 192π 5c 5 G 5/3 M 2/3 μ (1 − e 2 ) 7/2 T 2π −5/3 × 1 n 6 +
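Combining Eqs. (115)-(117) with the per-period energy loss (92) gives the period and its fractional decay per revolution for non-precessing orbits. A sketch, with illustrative inputs:

import numpy as np

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30

def period_and_decay(m1, m2, rho, e):
    # valid for n = 1; rho is the semi-latus rectum, e the eccentricity
    M, mu = m1 + m2, m1 * m2 / (m1 + m2)
    a = rho / (1.0 - e**2)                           # Eq. (109)
    T = 2.0 * np.pi * a**1.5 / np.sqrt(G * M)        # Eq. (115)
    E_orb = -G * M * mu / (2.0 * a)                  # Eq. (109)
    dE = (-4.0 * np.pi * np.sqrt(2.0) / 5.0 * (2.0 * G * M / (c**2 * rho))**3.5
          * mu**2 * c**2 / M
          * (1.0 + 73.0 / 24.0 * e**2 + 37.0 / 96.0 * e**4))    # Eq. (92)
    da_over_a = -dE / E_orb                          # first of Eqs. (114) with n = 1
    dT_over_T = 1.5 * da_over_a                      # Eq. (117)
    return T, dT_over_T

print(period_and_decay(1.4 * Msun, 1.4 * Msun, 2.0e9, 0.6))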
Next we consider open orbits. These we will characterize in terms of ρ and e directly with rates of change determined by (108) and (111):
( 1/(e² − 1) ) de²/dt = (1/ρ) dρ/dt + (1/E) dE/dt . (119)
This results in
dρ dt = − 2 5n 6 μc M 2GM c 2 ρ 3 n 4 (1 − e 2 ) ρ 3 r 3
−2n 2 (n 2 − 1)(1 − e 2 ) ρ 4 r 4 + n 2 (n 2 + 2) ρ 5 r 5 − 4(n 2 − 1)
ρ 6 r 6 ,(120)de 2 dt = 1 60n 6 μc Mρ 2GM c 2 ρ 3 24n 4 e 2 − 1 2 ρ 3 r 3 −n 2 (e 2 − 1) n 4 + 48(n 2 − 1)(e 2 − 1) ρ 4 r 4
−2n 2 n 4 + 12(n 2 + 2)(e 2 − 1) ρ 5 r 5 + n 2 (n 2 − 12) + 96(n 2 − 1)(e 2 − 1) ρ 6 r 6
+24n 2 n 2 − 1 ρ 7 r 7 − 12 n 2 − 1 2 ρ 8 r 8 .(121)
The corresponding changes over the complete orbit are
ρ ρ = − 4 √ 2 5n 6 μ M 2GM c 2 ρ 5/2 4 k=0 m k (n, ψ 1 )e k ,(122)
and
e 2 = e 2 − 1 ρ ρ − 4 √ 2 15n 6 μ M 2GM c 2 ρ 5/2 6 k=0 I k (n, ψ 1 )e k .(123)
The total energy change in such an open orbit is given by
E E = − 4 √ 2 15n 6 μ M 2GM c 2 ρ 5/2 6 k=0 (I k e k ) e 2 − 1 .(124)
Finally one can determine for which open orbits the loss of energy by gravitational radiation results in a bound orbit, at least in lowest-order approximation. Such a capture process happens when the initial energy is positive and the final energy is negative:
| ΔE | > E . From (124) this requires ( 4√2 / (15n⁶ (e² − 1)) ) (μ/M) Σ_{k=0}^{6} I k (n, ψ 1 ) e^k > ( c²ρ/2GM )^{5/2} .
As the semi-latus rectum ρ must be greater than the Schwarzschild radius of the system, the quantity on the left-hand side must be definitely larger than one, and as μ < M it follows that e 2 − 1 must be small, i.e. the orbit must be close to parabolic.
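The capture condition can be evaluated explicitly by comparing the initial orbital energy (56) with the energy radiated during the passage, for which the parabolic value (97) sets the scale for nearly parabolic encounters. The sketch below does this for a non-precessing orbit; using the parabolic loss for e slightly above 1 is an approximation, and all input numbers are illustrative:

import numpy as np

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30

def is_captured(m1, m2, rho, e):
    # nearly parabolic, non-precessing encounter (n = 1, e slightly above 1)
    M, mu = m1 + m2, m1 * m2 / (m1 + m2)
    E_orbit = G * M * mu / (2.0 * rho) * (e**2 - 1.0)       # Eq. (56), positive for open orbits
    dE_rad = (425.0 * np.pi * np.sqrt(2.0) / 120.0
              * (2.0 * G * M / (c**2 * rho))**3.5 * mu**2 * c**2 / M)   # parabolic loss, Eq. (97)
    return dE_rad > E_orbit                                 # |Delta E| > E : capture

print(is_captured(10.0 * Msun, 10.0 * Msun, 1.0e7, 1.00001))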
Appendix A: The Transverse Traceless Gauge
In this appendix we explain in more detail how starting from an arbitrary solution of the field Equations (3) for the massless tensor field one can reach the T T-gauge (24) in the far-field region. We will do this in the hamiltonian formulation in which space and time components of the fields are considered separately. In this formulation the space-components h i j and their conjugate momentum fields π i j satisfy field equations which are first-order in time derivatives. In contrast the time components represent auxiliary fields N = −h 00 and N i = h 0i acting as Lagrange multipliers to impose constraints: time-independent field equations restricting the allowed field configurations of the space components. The full set of dynamical equations for these fields read
π i j =ḣ i j − δ i jḣkk + 2δ i j ∂ k N k − ∂ i N j − ∂ j N i , π i j = h i j − ∂ i ∂ k h kj − ∂ j ∂ k h ki + ∂ i ∂ j h kk − δ i j ( h kk − ∂ k ∂ l h kl ) − δ i j N + ∂ i ∂ j N + κ T i j .(125)
The constraints imposed by the auxiliary fields are
h j j − ∂ i ∂ j h i j = −κ T 00 , ∂ j π j i = κ T i0 .(126)
Together these equations are fully equivalent to the covariant field Equations (3). Our analysis will show that the split in dynamical space- and non-dynamical time components is in full agreement with the properties of the causal solutions (18)-(21). As expected the full set of Equations (125), (126) is invariant under local gauge transformations which in this formulation take the form
h i j = h i j + ∂ i ξ j + ∂ j ξ i , N i = N i +ξ i + ∂ i ξ, π i j = π i j + 2δ i j ξ − 2 ∂ i ∂ j ξ, N = N − 2ξ,(127)
Observe that h i j changes only by terms depending on ξ i , whilst the change of π i j is determined only by ξ . Clearly the transformations of the auxiliary fields (N, N i ) suffice to remove these components by taking
ξ = 1 2 N,ξ i = N i − ∂ i ξ.(128)
This results in N = N i = 0 and π i j =ḣ i j − δ i jḣ kk ,
π i j = h i j − ∂ i ∂ k h kj − ∂ j ∂ k h ki + ∂ i ∂ j h kk − δ i j h kk − ∂ k ∂ l h kl + κ T i j ,(129)
constrained by
h j j − ∂ i ∂ j h i j = −κ T 00 , ∂ j π j i = κ T i0(130)
Now note that the choice of gauge parameters (128) does not fix these transformations completely: one can still make residual gauge transformations with parameters (ξ , ξ i ) subject to the conditions
ξ = 0,ξ i = −∂ i ξ ,ξ i = 0.(131)
To see how these can be used, first note that combining the second field Equation (129) with the first constraint (130) results in π̇ j j = κ ( T j j + T 00 ) .
This condition is invariant under the residual gauge transformations, and therefore in empty space where T j j = T 00 = 0 the trace π j j is seen to be constant in time and can be removed by a time-independent gauge transformation: ξ = − 1 4 π j j t=0 ⇒ π j j = π j j + 4 ξ = 0.
In view of the first Equation (129) this also implies that at all timesḣ j j = 0 and therefore h j j is time-independent. In empty space the first constraint (130) then asserts that also ∂ i ∂ j h i j is time-independent. Next the residual gauge parameters ξ i can be used to restrict the field combination
∂ j h j i − 1 2 ∂ i h j j = ∂ j h j i − 1 2 ∂ i h j j + ξ i .(134)
First it can be removed from the initial configuration by taking
ξ i = − ∂ j h j i − 1 2 ∂ i h j j t=0 ⇒ ∂ j h j i − 1 2 ∂ i h j j t=0 = 0.(135)
In combination with the first constraint (130), and knowing that h j j and ∂ i ∂ j h i j themselves are constant in time, this implies that in empty space
h j j t=0 = ∂ i ∂ j h i j t=0 = 0 ⇒ h j j = ∂ i ∂ j h i j = 0(136)
at all times. Finally one can still make one more residual gauge transformation, with harmonic parameters (ξ , ξ i ) satisfying
ξ i = 0, ξ = −∂ iξ i = 0.(137)
These transformations can be used to remove the trace of the field at t = 0 and therefore at all times:
∂ i ξ i = − 1 2 h j j t=0 ⇒ h j j = h j j t=0 = h j j + 2∂ i ξ i t=0 = 0.(138)
Finally as the second constraint (130) in empty space requires
∂ jḣ j i = 0,(139)
we also find that by combining with (135) and (138) ∂ j h j i = ∂ j h j i t=0 = 0.
In conclusion, we have proved that we can find local gauge transformations such that in empty space any solution of the field equation can be transformed to the T T-gauge ∂ j h j i = h j j = 0, by the gauge transformations specified in (128), (133), (135) and (138). The vanishing of the trace also implies that in the T Tgauge h i j = h i j . We close this section by noting that the hamiltonian field Equations (125), (126) follow directly from the action
S = d 4 x ḣ i j π i j − H ,(141)
with hamiltonian density
H = 1 2 π 2 i j − 1 4 π 2 j j + 1 2 ∂ k h i j 2 − ∂ j h j i − 1 2 ∂ i h j j 2 − 1 4 ∂ i h j j 2 − κh i j T i j − 2N i ∂ j π j i − κ T i0 + N h j j − ∂ i ∂ j h i j + κ T 00 .(142)
As is to be expected, in the T T-gauge this hamiltonian reduces to the energy density (31).
Figure B1. Intensity patterns of gravitational radiation emitted by a binary system in (quasi-)elliptical orbits (characterized by the value of n) with eccentricity e = 0.25 at three different points in the orbit at orientations ϕ = (0, π/2, π), and as emitted in three different directions w.r.t. the polar axis: θ = 90° (blue inner contour), θ = 60° (red middle contour) and θ = 30° (green outer contour). Note that the scales agree in vertical columns, but differ from left to right in proportion 10 : 65 : 200.
+ 2r (M ·r) T + 2(M ·r)r T −r · M ·r I +rr T .(74)In particular in the equatorial plane θ = π/2 and h = κμ ω 2 r 2 16π r cos 2(φ − ωt)
12n 4 +n 6 −
46120n 2 − 180 sin ψ 1 cos ψ 1 , I 3 = 48n 4 − 240n 2 + 240 sin ψ 360n 4 + 1200n 2 − 900 sin ψ 1 cos ψ 1 − 1 4 n 6 − 72n 4 + 240n 2 − 180 sin 3 ψ 1 cos ψ 1 , I 5 = 48n 4 − 120n 2 + 72 sin ψ
n 4 −
48n 2 + 12 sin ψ 1 cos ψ 1 , m 3 = 4n 4 − 18n 2 + 16 sin ψ 1 (106) − 1 3 n 4 − 14n 2 + 16 sin 3 ψ 1 ,
1 As is well-known, requiring this universality to encompass the gravitational field itself leads to the non-linear structure of the full theory of General Relativity.
Acknowledgement
This paper grew out of a series of lectures by the author at Leiden University in the spring of 2018. The support of the Lorentz Foundation through the Leiden University Fund (LUF) is gratefully acknowledged.

Appendix B: Generalized Newtonian Orbits
The generalized newtonian orbits (47) are parametrized by r = ρ/(1 − e cos nϕ). In our computations we also need the derivatives of this expression, up to the third derivative. Taking anti-clockwise motion they read (143)

Appendix C: Intensity of Emission from a Binary System
In this appendix we show an example of the intensity distribution of gravitational-wave emission in various directions produced by generalized newtonian binary systems in elliptic orbit with eccentricity e = 0.25 and precession rates n = 1 (newtonian, non-precessing), n = 0.9 (prograde precession) and n = 1.1 (retrograde precession). The intensity distribution is represented by a dimensionless quantity, plotted as a function of azimuth φ for three different polar angles θ: in the equatorial plane θ = 90°, and in the directions θ = 60° and θ = 30° with respect to the axis of angular momentum, at three different instants during the orbit where the relative orientation of the two masses is ϕ = 0, ϕ = 90° and ϕ = 180°, corresponding in the non-precessing case with n = 1 to apastron, semi-latus rectum and periastron. The same distributions for the same polar angles are also plotted for the case of prograde precession with n = 0.9, and for retrograde precession with n = 1.1.
. J H Taylor, J M Weisberg, Astrophys. J. 345434J. H. Taylor, J. M. Weisberg, Astrophys. J. 1989, 345, 434.
. B P Abbott, Phys. Rev. Lett. 61102B. P. Abbott et al. Phys. Rev. Lett. 2016, 116, 061102.
. B P Abbott, Phys. Rev. Lett. 141101B. P. Abbott et al. Phys. Rev. Lett. 2017, 119, 141101.
. B P Abbott, Phys. Rev. Lett. 161101B. P. Abbott et al. Phys. Rev. Lett. 2017, 119, 161101.
. G Nelemans, arXiv:1807.01060astro-ph.SRG. Nelemans, arXiv:1807.01060 [astro-ph.SR].
. L Blanchet, Living Rev. Relativity. 172L. Blanchet, Living Rev. Relativity 2014, 17, 2.
. C O Lousto, R H Price, Phys. Rev. D. 2124C. O. Lousto, R. H. Price, Phys. Rev. D 1997, 55, 2124.
. K Martel, E Poisson, Phys. Rev. D. 84001K. Martel, E. Poisson, Phys. Rev. D 2005, 71, 084001.
. R Fujita, W Hikida, H Tagoshi, Prog Theor Phys, 843R. Fujita, W. Hikida, H. Tagoshi, Prog. Theor Phys. 2009, 121, 843.
. G Koekoek, J W Van Holten, Class. Quantum Grav. G. Koekoek, J. W. van Holten, Class. Quantum Grav. 2011, 28, 225022.
. G Ambrosi, J W Van Holten, Class. Quantum Grav. 3215012G. d'Ambrosi, J. W. van Holten, Class. Quantum Grav. 2015, 32, 015012.
. B P Abbott, Phys. Rev. Lett. 221101B. P. Abbott et al. Phys. Rev. Lett. 2016, 116, 221101.
. T Damour, N Deruelle, Phys. Lett. A. 8781T. Damour, N. Deruelle, Phys. Lett. A 1981, 87, 81.
A Einstein, Sitzungsber. K. Preuss. Akad. Wiss. 1918, I. 154A. Einstein, Sitzungsber. K. Preuss. Akad. Wiss. 1918, I, 154.
C W Misner, K S Thorne, J A Wheeler, Gravitation. San FranciscoFreemanC. W. Misner, K. S. Thorne, J. A. Wheeler, Gravitation, Freeman, San Francisco, 1970.
M Maggiore, Gravitational Waves. Oxford Univ. PressM. Maggiore, Gravitational Waves, Oxford Univ. Press, 2008.
. P C Peters, Phys. Rev. B. 1224P. C. Peters, Phys. Rev. B 1964, 136, 1224.
. I Newton, Principia Mathematica, Royal Society1687LondonI. Newton, Principia Mathematica, Royal Society, London, 1687.
. T Damour, Phys. Rev. D. 94T. Damour, Phys. Rev. D 2016, 94, 104015.
. P C Peters, J Mathews, Phys. Rev. 131435P. C. Peters, J. Mathews, Phys. Rev. 1963, 131, 435.
| []
|
[
"arXiv:physics/0205058v1 [physics.acc-ph] An Accurate, Simplified Model of Intrabeam Scattering",
"arXiv:physics/0205058v1 [physics.acc-ph] An Accurate, Simplified Model of Intrabeam Scattering"
]
| [
"Karl L F Bane \nStanford Linear Accelerator Center\nStanford University\n94309StanfordCA\n"
]
| [
"Stanford Linear Accelerator Center\nStanford University\n94309StanfordCA"
]
| []
| Beginning with the general Bjorken-Mtingwa solution for intrabeam scattering (IBS) we derive an accurate, greatly simplified model of IBS, valid for high energy beams in normal storage ring lattices. In addition, we show that, under the same conditions, a modified version of Piwinski's IBS formulation (where η 2x,y /β x,y has been replaced by H x,y ) asymptotically approaches the result of Bjorken-Mtingwa. * | 10.2172/799047 | [
"https://export.arxiv.org/pdf/physics/0205058v1.pdf"
]
| 118,723,081 | physics/0205058 | e7c836a0ff422a1584823fff91ffb78adf696b3a |
arXiv:physics/0205058v1 [physics.acc-ph] An Accurate, Simplified Model of Intrabeam Scattering
21 May 2002 May 2002
Karl L F Bane
Stanford Linear Accelerator Center
Stanford University
94309StanfordCA
arXiv:physics/0205058v1 [physics.acc-ph] An Accurate, Simplified Model of Intrabeam Scattering
21 May 2002 May 2002
Beginning with the general Bjorken-Mtingwa solution for intrabeam scattering (IBS) we derive an accurate, greatly simplified model of IBS, valid for high energy beams in normal storage ring lattices. In addition, we show that, under the same conditions, a modified version of Piwinski's IBS formulation (where η 2x,y /β x,y has been replaced by H x,y ) asymptotically approaches the result of Bjorken-Mtingwa. *
INTRODUCTION
Intrabeam scattering (IBS), an effect that tends to increase the beam emittance, is important in hadronic [1] and heavy ion [2] circular machines, as well as in low emittance electron storage rings [3]. In the former type of machines it results in emittances that continually increase with time; in the latter type, in steady-state emittances that are larger than those given by quantum excitation/synchrotron radiation alone.
The theory of intrabeam scattering for accelerators was first developed by Piwinski [4], a result that was extended by Martini [5], to give a formulation that we call here the standard Piwinski (P) method [6]; this was followed by the equally detailed Bjorken and Mtingwa (B-M) result [7]. Both approaches solve the local, two-particle Coulomb scattering problem for (six-dimensional) Gaussian, uncoupled beams, but the two results appear to be different; of the two, the B-M result is thought to be the more general [8].
For both the P and the B-M methods solving for the IBS growth rates is time consuming, involving, at each time (or iteration) step, a numerical integration at every lattice element. Therefore, simpler, more approximate formulations of IBS have been developed over the years: there are approximate solutions of Parzen [9], Le Duff [10], Raubenheimer [11], and Wei [12]. In the present report we derive, starting with the general B-M formalism, another approximation, one accurate and valid for high energy beams in normal storage ring lattices. We, in addition, demonstrate that under these same conditions a modified version of Piwinski's IBS formulation asymptotically becomes equal to this result.
HIGH ENERGY APPROXIMATION TO BJORKEN-MTINGWA
The General B-M Solution [7] Let us consider first machines with bunched beams that are uncoupled and have vertical dispersion due to e.g. orbit errors. Let the intrabeam scattering growth rates be defined as
1 T p = 1 σ p dσ p dt , 1 T x = 1 ǫ 1/2 x dǫ 1/2 x dt , 1 T y = 1 ǫ 1/2 y dǫ 1/2 y dt ,(1)
with σ p the relative energy spread, ǫ x the horizontal emittance, and ǫ y the vertical emittance.
The growth rates according to Bjorken-Mtingwa (including a √ 2 correction factor [13], and including vertical dispersion) are
1 T i = 4πA(log) ∞ 0 dλ λ 1/2 [det(L + λI)] 1/2 T rL (i) T r 1 L + λI − 3T rL (i) 1 L + λI(2)
where i represents p, x, or y;
A = r 2 0 cN 64π 2β3 γ 4 ǫ x ǫ y σ s σ p ,(3)
with r 0 = 2.82 × 10 −15 m, the classical electron radius, c the speed of light, N the bunch population,β the velocity over c, γ the Lorentz energy factor, and σ s the bunch length; (log)
represents the Coulomb log factor, ⟨ ⟩ means that the enclosed quantities, combinations of beam parameters and lattice properties, are averaged around the entire ring; det and T r signify, respectively, the determinant and the trace of a matrix, and I is the unit matrix.
Auxiliary matrices are defined as
L = L (p) + L (x) + L (y) ,(4)L (p) = γ 2 σ 2 p 0 0 0 0 1 0 0 0 0 ,(5)L (x) = β x ǫ x 1 −γφ x 0 −γφ x γ 2 H x /β x 0 0 0 0 ,(6)L (y) = β y ǫ y 0 0 0 0 γ 2 H y /β y −γφ y 0 −γφ y 1 .(7)
The dispersion invariant is H = [η 2 + (βη ′ − 1 2 β ′ η) 2 ]/β, and φ = η ′ − 1 2 β ′ η/β,
where β and η are the beta and dispersion lattice functions.
For unbunched beams σ s in Eq. 2 is replaced by C/(2 √ 2π), with C the circumference of the machine.
The Bjorken-Mtingwa Solution at High Energies
Let us first consider 1/T p as given by Eq. 2. We first notice that, for normal storage ring lattices (where H x,y /β x,y ≪ 1), the off-diagonal elements in L, −γφ, are small and can be set to zero. Then all matrices are diagonal. Let us also limit consideration to high energies,
i.e. let us assume a,b ≪ 1, with
a = (σ H /γ) √(β x /ǫ x ) ,   b = (σ H /γ) √(β y /ǫ y ) , (8)   with   1/σ²_H = 1/σ²_p + H x /ǫ x + H y /ǫ y . (9)
Note that if a,b ≪ 1, then the beam is cooler longitudinally than transversely. If we consider, for example, KEK's ATF, a 1.4 GeV, low emittance electron damping ring, ǫ y /ǫ x ∼ 0.01,
a ∼ 0.01, b ∼ 0.1[3].
If the high energy conditions are met then the 2nd term in the braces of Eq. 2 is small compared to the first term, and can be dropped. Now note that L 2,2 can be written as γ 2 /σ 2 H . For high energy beams a factor in the denominator of the integrand of Eq. 2, γ 2 /σ 2 H + λ, can be approximated by γ/σ H ; also, the (2,2) contribution to T r[(L + λI) −1 ] becomes small, and can be set to 0. Finally, the first of Eqs. 2 becomes
1 T p ≈ r 2 0 cN(log) 32γ 3 ǫ 3/4 x ǫ 3/4 y σ s σ 3 p σ H g(a/b) (β x β y ) −1/4 ,(10)
with
g(α) = 4 √ α π ∞ 0 dy y 2 (1 + y 2 )(α 2 + y 2 ) × × 1 1 + y 2 + 1 α 2 + y 2 .(11)
A plot of g(α) over the interval [0 < α < 1] is given in Fig. 1; to obtain the results for α > 1, note that g(α) = g(1/α). A fit to g,
g(α) ≈ 2α (0.021−0.044 ln α) [for 0.01 < α < 1] ,(12)
is given by the dashes in Fig. 1. The fit has a maximum error of 1.5% over [0.02 ≤ α ≤ 1].
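In practice only the fit (12), together with the symmetry g(α) = g(1/α) noted above, is needed. A small helper (the function name is an arbitrary choice):

import numpy as np

def g_fit(alpha):
    # Eq. (12): fit to g(alpha), accurate to ~1.5% for 0.02 <= alpha <= 1;
    # for alpha > 1 use the symmetry g(alpha) = g(1/alpha)
    alpha = np.minimum(alpha, 1.0 / alpha)
    return 2.0 * alpha**(0.021 - 0.044 * np.log(alpha))

print(g_fit(0.1), g_fit(10.0))   # equal by symmetry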
Similarly, beginning with the 2nd and 3rd of Eqs. 2, we obtain
1 T x,y ≈ σ 2 p H x,y ǫ x,y 1 T p .(13)
FIG. 1:
The auxiliary function g(α) (solid curve) and an analytical approximation, g = 2α (0.021−0.044 ln α) (dashes).
Our approximate IBS solution is Eqs. 10,13. Note that Parzen's high energy formula is a similar, though more approximate, result to that given here [9]; Raubenheimer's approximation consists of a formula similar to, though less accurate than, Eq. 10, together with formulas identical to Eqs. 13 [11].
Note that the beam properties in Eqs. 10,13, need to be the self-consistent values. Thus, for example, to find the steady-state growth rates in electron machines, iteration will be required. Note also that these equations assume that the zero-current vertical emittance is due mainly to vertical dispersion caused by orbit errors; if it is due mainly to (weak) x-y coupling we let H y = 0, drop the 1/T y equation, and simply let ǫ y = κǫ x , with κ the coupling factor [3].
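Equations 10 and 13 reduce the IBS calculation to a single ring average per iteration step. The sketch below shows the structure of such an implementation; the argument names, and the assumption that the ring average ⟨σ_H g(a/b)(β_x β_y)^{-1/4}⟩ and the averages ⟨H_x⟩, ⟨H_y⟩ are supplied precomputed from the lattice, are implementation choices rather than part of the paper:

r0 = 2.82e-15          # classical electron radius [m]
c_light = 2.998e8      # speed of light [m/s]

def ibs_growth_rates(N, gamma, eps_x, eps_y, sigma_s, sigma_p,
                     ring_avg, Hx_avg, Hy_avg, log_factor):
    # Eq. 10: 1/T_p; ring_avg = < sigma_H g(a/b) (beta_x beta_y)^(-1/4) > over the ring
    one_over_Tp = (r0**2 * c_light * N * log_factor
                   / (32.0 * gamma**3 * eps_x**0.75 * eps_y**0.75 * sigma_s * sigma_p**3)
                   ) * ring_avg
    # Eqs. 13: transverse rates driven by dispersion (the H functions)
    one_over_Tx = sigma_p**2 * Hx_avg / eps_x * one_over_Tp
    one_over_Ty = sigma_p**2 * Hy_avg / eps_y * one_over_Tp
    return one_over_Tp, one_over_Tx, one_over_Ty

# for steady-state emittances in electron rings these rates must be iterated
# together with radiation damping until the beam parameters are self-consistent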
COMPARISON TO THE PIWINSKI SOLUTION
The Standard Piwinski Solution [6] The standard Piwinski solution is
1 T p = A σ 2 h σ 2 p f (ã,b, q) 1 T x = A f ( 1 a ,b a , q a ) + η 2 x σ 2 h β x ǫ x f (ã,b, q) 1 T y = A f ( 1 b ,ã b , q b ) + η 2 y σ 2 h β y ǫ y f (ã,b, q) .(14)
1/σ²_h = 1/σ²_p + η²_x /(β x ǫ x ) + η²_y /(β y ǫ y ) , (15)   ã = (σ h /γ) √(β x /ǫ x ) ,  b̃ = (σ h /γ) √(β y /ǫ y ) ,  q = σ hβ √(2d/r 0 ) , (16)
The function f is given by:
f (ã,b, q) = 8π 1 0 du 1 − 3u 2 P Q × × 2 ln q 2 1 P + 1 Q − 0.577 . . .(17)
where
P 2 =ã 2 + (1 −ã 2 )u 2 , Q 2 =b 2 + (1 −b 2 )u 2 .(18)
The parameter d functions as a maximum impact parameter, and is normally taken as the vertical beam size.

Comparison of Modified Piwinski to the B-M Solution at High Energies

To compare with the B-M solution, let us consider a slightly changed version of Piwinski that we call the modified Piwinski solution. It is the standard version of Piwinski, but with η 2 /β replaced by H (i.e. ã, b̃, σ h become a, b, σ H , respectively). Let us also assume high energy beams, i.e. let a,b ≪ 1. Let us sketch the derivation. First, notice that in the integral of the auxiliary function f (Eq. 17): the −0.577 can be replaced by 0; the −3u 2 in the numerator can be set to 0; P (Q) can be replaced by √(a 2 + u 2 ) (√(b 2 + u 2 )). The first term in the braces can be approximated by a constant and then be pulled out of the integral; it becomes the effective Coulomb log factor. Note that for the proper choice of the Piwinski parameter d, the effective Coulomb log can be made the same as the B-M parameter (log). For flat beams (a ≪ b), the Coulomb log of Piwinski becomes (log) = ln [dσ 2 H /(4r 0 a 2 )]. We finally obtain
f (a, b) ≈ 8π(log) 1 0 du √ a 2 + u 2 √ b 2 + u 2 .(19)
The integral is an elliptic integral. The first of Eqs. 14 then becomes
1 T p ≈ r 2 0 cN(log) 32γ 3 ǫ 3/4 x ǫ 3/4 y σ s σ 3 p σ H h(a, b) (β x β y ) −1/4 ,(20)with h(a, b) = 4 √ ab π 1 0 du √ a 2 + u 2 √ b 2 + u 2 .(21)
We see that the the approximate equation for 1/T p for high energy beams according to modified Piwinski is the same as that for B-M, except that h(a, b) replaces g(a/b).
We can now show that, for high energy beams, h(a, b) ≈ g(a/b): Consider the function h̃(a, b, ζ), which is the same as h(a, b) except that the upper limit of integration is infinity, and the u 2 in the denominator are replaced by ζu 2 . It is simple to show that ∂ ζ h̃(a, b, ζ)| ζ=1 = g(a/b) = h̃(a, b, 1). Now for high energies (a,b small), reducing the upper limit in the integral of h̃(a, b, 1) to 1 does not significantly change the result, and h(a, b) ≈ g(a/b). To demonstrate this, we plot in Fig. 2 the ratio h(a, b)/g(a/b) for several values of a. We see, for example, for the ATF with ǫ y /ǫ x ∼ 0.01, a ∼ 0.01, a/b ∼ 0.1, and therefore h(a, b)/g(a/b) = 0.97; the agreement is quite good. Finally, for the relation between the transverse to longitudinal growth rates according to modified Piwinski: note that for non-zero vertical dispersion the second term in the brackets of Eqs. 14 (but with η 2 x,y /β x,y replaced by H x,y ), will tend to dominate over the first term, and the results become the same as for the B-M method.
In summary, we have shown that for high energy beams (a,b ≪ 1), in rings with a standard type of storage ring lattice: if the parameter d in P is chosen to give the same equivalent Coulomb log as in B-M, then the modified Piwinski solution agrees with the Bjorken-Mtingwa solution.
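The asymptotic agreement h(a, b) ≈ g(a/b) is easy to confirm numerically from the elliptic integral (21) and the fit (12); the quadrature routine and the sample values below are arbitrary choices:

import numpy as np
from scipy.integrate import quad

def h_exact(a, b):
    # Eq. (21): h(a,b) = (4 sqrt(ab)/pi) int_0^1 du / (sqrt(a^2+u^2) sqrt(b^2+u^2))
    val, _ = quad(lambda u: 1.0 / (np.sqrt(a**2 + u**2) * np.sqrt(b**2 + u**2)), 0.0, 1.0)
    return 4.0 * np.sqrt(a * b) / np.pi * val

def g_fit(alpha):
    # Eq. (12), extended by the symmetry g(alpha) = g(1/alpha)
    alpha = min(alpha, 1.0 / alpha)
    return 2.0 * alpha**(0.021 - 0.044 * np.log(alpha))

a, b = 0.01, 0.1                       # ATF-like values quoted in the text
print(h_exact(a, b) / g_fit(a / b))    # close to unity for a, b << 1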
NUMERICAL COMPARISON [3]
We consider a numerical comparison between results of the general B-M method, the modified Piwinski method, and Eqs. 10, 13. The example is the ATF ring with no coupling and vertical dispersion due to random orbit errors. For our example H_y = 17 µm, yielding a zero-current emittance ratio of 0.7%; the beam current is 3.1 mA. The steady-state growth rates according to the 3 methods are given in Table I. We note that the Piwinski results are 4.5% low, and the results of Eqs. 10, 13 agree very well with those of B-M. Finally note that not only the growth rates, but even the differential growth rates (i.e. the growth rates as a function of position along the ring) agree well for the three cases.
Comparison of Modified Piwinski to the B-M Solution at High Energies
To compare with the B-M solution, let us consider a slightly changed version of Piwinski that we call the modified Piwinski solution. It is the standard version of Piwinski, but with η²/β replaced by H (i.e. ã, b̃, σ_h become a, b, σ_H, respectively). Let us also assume high energy beams, i.e. let a, b ≪ 1. Let us sketch the derivation. First, notice that in the integral of the auxiliary function f (Eq. 17): the -0.577 can be replaced by 0; the -3u² in the numerator can be set to 0; P (Q) can be replaced by √(a² + u²) (√(b² + u²)). The first term in the braces can be approximated by a constant and then be pulled out of the integral; it becomes the effective Coulomb log factor. Note that for the proper choice of the Piwinski parameter d, the effective Coulomb log can be made the same as the B-M parameter (log). For flat beams (a ≪ b), the Coulomb log of Piwinski becomes (log) = ln[d σ_H²/(4 r_0 a²)].
FIG. 2: The ratio h(a, b)/g(a/b) as a function of a/b, for three values of a.
TABLE I: Steady-state IBS growth rates for an ATF example including vertical dispersion due to random errors.

Method               1/T_p [s^-1]   1/T_x [s^-1]   1/T_y [s^-1]
Modified Piwinski    25.9           24.7           18.5
Bjorken-Mtingwa      27.0           26.0           19.4
Eqs. 10, 13          27.4           26.0           19.4
Acknowledgments
The author thanks K. Kubo and A. Piwinski for help in understanding IBS theory.
[1] C. Bhat et al., in 1999 Particle Accelerator Conference (PAC 1999) (New York, 1999), p. 3155.
[2] W. Fischer et al., in 2001 Particle Accelerator Conference (PAC 2001) (Chicago, 2001), p. 2857.
[3] K. Bane et al., Intrabeam scattering analysis of measurements at KEK's ATF damping ring, report in preparation.
[4] A. Piwinski, Tech. Rep. HEAC 74, Stanford (1974).
[5] M. Martini, Tech. Rep. PS/84-9 (AA), CERN (1984).
[6] A. Piwinski, in Handbook of Accelerator Physics and Engineering, edited by A. W. Chao and M. Tigner (World Scientific, 1999), p. 125.
[7] J. D. Bjorken and S. K. Mtingwa, Particle Accelerators 13, 115 (1983).
[8] A. Piwinski, private communication.
[9] G. Parzen, Nuclear Instruments and Methods A256, 231 (1987).
[10] J. L. Duff, in Proceedings of the CERN Accelerator School: Second Advanced Accelerator Physics Course (CERN, Geneva, 1989).
[11] T. Raubenheimer, Ph.D. thesis, Stanford University (1991), SLAC-R-387, Sec. 2.3.1.
[12] J. Wei, in 1993 Particle Accelerator Conference (PAC 93) (Washington D.C., 1993), p. 3651.
[13] K. Kubo and K. Oide, Physical Review Special Topics - Accelerators and Beams 4, 124401 (2001).
| []
|
[
"Dynamics of ion cloud in a linear Paul trap",
"Dynamics of ion cloud in a linear Paul trap"
]
| [
"Pintu Mandal ",
"Manas Mukherjee \nPresent address: Centre for Quantum Technologies\nNational University of Singapore\n117543Singapore\n",
"\nRaman Center for Atomic, Molecular and Optical Sciences\nIndian Association for the Cultivation of Science 2A & 2B Raja S. C. Mullick Road\n700 032Kolkata\n"
]
| [
"Present address: Centre for Quantum Technologies\nNational University of Singapore\n117543Singapore",
"Raman Center for Atomic, Molecular and Optical Sciences\nIndian Association for the Cultivation of Science 2A & 2B Raja S. C. Mullick Road\n700 032Kolkata"
]
| []
| A linear ion trap setup has been developed for studying the dynamics of trapped ion cloud and thereby realizing possible systematics of a high precision measurement on a single ion within it. The dynamics of molecular nitrogen ion cloud has been investigated to extract the characteristics of the trap setup. The stability of trap operation has been studied with observation of narrow nonlinear resonances pointing out the region of instabilities within the broad stability region. The secular frequency has been measured and the motional spectra of trapped ion oscillation have been obtained by using electric dipole excitation. It is applied to study the space charge effect and the axial coupling in the radial plane. | null | [
"https://arxiv.org/pdf/1306.5582v1.pdf"
]
| 119,180,955 | 1306.5582 | a6ded926333c6285ab721175d8a920ed4bff1f30 |
Dynamics of ion cloud in a linear Paul trap
24 Jun 2013 May 7, 2014
Pintu Mandal
Manas Mukherjee
Present address: Centre for Quantum Technologies
National University of Singapore
117543Singapore
Raman Center for Atomic, Molecular and Optical Sciences
Indian Association for the Cultivation of Science 2A & 2B Raja S. C. Mullick Road
700 032Kolkata
Dynamics of ion cloud in a linear Paul trap
24 Jun 2013, May 7, 2014. arXiv:1306.5582v1 [physics.atom-ph]
A linear ion trap setup has been developed for studying the dynamics of trapped ion cloud and thereby realizing possible systematics of a high precision measurement on a single ion within it. The dynamics of molecular nitrogen ion cloud has been investigated to extract the characteristics of the trap setup. The stability of trap operation has been studied with observation of narrow nonlinear resonances pointing out the region of instabilities within the broad stability region. The secular frequency has been measured and the motional spectra of trapped ion oscillation have been obtained by using electric dipole excitation. It is applied to study the space charge effect and the axial coupling in the radial plane.
1 Introduction
"A single atomic particle forever floating at rest in free space" [1] is an ideal system for precision measurement, and a single trapped ion provides the closest realization of this ideal. A single or few ions can be trapped within a small region of space in an ion trap, where they are free from external perturbations. Such a system has been used for the precision measurement of the electron's g-factor [2] and of various atomic properties like the lifetime of atomic states [3], the quadrupole moment [4,5,6], etc. Precision table-top experiments of fundamental physics in the low energy sector, like the atomic parity violation measurement, nuclear anapole moment measurement, and electron electric dipole moment measurement, are either in progress in different laboratories worldwide or proposed [7,8,9,10]. Any high-precision experiment comes with systematics which are required to be tracked or removed, and hence a systematic investigation of the system itself is essential at the initial stage. In order to prepare for measuring atomic parity violation with trapped ions, a series of experiments has been performed in a linear ion trap to fully understand its behaviour and associated systematics. In this colloquium, the results of some experiments will be presented that are of preeminent interest to an audience coming from a variety of physics disciplines. The paper is organised with a brief overview of the physics of ion trapping in a linear Paul trap and a description of the experimental setup, followed by results.
2 Physics of ion trapping
An electrostatic field cannot produce a potential minimum in three-dimensional space, as is required for trapping charged particles. Therefore, either a combination of a static magnetic field and an electric field is used (Penning trap), or a combination of a time-varying and an electrostatic field is used (Paul trap). In a Paul trap a radio-frequency (rf) potential superposed with a dc potential is applied on electrodes of hyperbolic geometry to develop a quadrupolar potential in space. The geometry of the electrodes evolved over the decades for ease in machining, smooth optical access to the trapped ions, etc. Figure 1 shows one of the most frequently used trap geometries, with four three-segmented rods placed symmetrically at the four corners of a square; it is commonly called a linear Paul trap. The four rods at each end are connected together and a common dc potential (V_e) is applied so as to produce an axial trapping potential. The diagonally opposite rods at the middle are connected, and an rf potential (V_0 cos Ωt) in addition to a dc potential (U) is applied on one pair with respect to the other pair to provide dynamic radial confinement. The radial potential inside the trap is
$$\Phi(x, y, t) = (U - V_0\cos\Omega t)\,\frac{x^2 - y^2}{2r_0^2}\,,\qquad (1)$$
where 2r_0 is the separation between the surfaces of the diagonal electrodes, as depicted in figure 1(b). The equipotential lines are rectangular hyperbolae in the xy plane, having four-fold symmetry about the z axis. The equation of motion of an ion of charge e and mass m under the potential Φ(x, y, t) (eqn. 1) can be represented as
$$\frac{d^2x}{dt^2} = -\frac{e}{m r_0^2}(U - V_0\cos\Omega t)\,x\,,\qquad \frac{d^2y}{dt^2} = \frac{e}{m r_0^2}(U - V_0\cos\Omega t)\,y\,.\qquad (2)$$
These equations (eqn. 2) can be rewritten as
$$\frac{d^2u}{d\zeta^2} + (a_u - 2q_u\cos 2\zeta)\,u = 0\,,\qquad (3)$$
with u = x, y, where
$$a_x = -a_y = \frac{4eU}{m r_0^2\Omega^2}\,,\qquad q_x = -q_y = \frac{2eV_0}{m r_0^2\Omega^2}\,,\qquad (4)$$
and ζ = Ωt/2. Eqn. 3 is the standard Mathieu differential equation, and its solution determines the stability or instability of the ion motion [11], depending on the values of the parameters a and q as defined in eqn. 4. There exists a region in the a vs. q diagram for which the ion motion is stable along a particular direction, for example along x. A similar stability region exists for the motion along y. An intersection between these two stability regions thus signifies stable motion in the xy plane. For stable ion motion the trap should be operated at q < 0.908. The stable solutions of the Mathieu differential equation show that the trapped ion oscillates with different frequencies given by [12]
$$\omega_n = \frac{(2n \pm \beta)\,\Omega}{2}\,,\qquad n = 0, 1, 2, 3\ldots\qquad (5)$$
Here β is a function of the trap operating parameters a, q; for small values of these, β = √(a + q²/2). The fundamental frequency ω_0 (corresponding to n = 0) of the secular motion and the other micromotion frequencies are given by
$$\omega_0 = \frac{\beta\Omega}{2}\,,\qquad (6)$$
ω_{1±} = Ω ± ω_0, ω_{2±} = Ω ± 2ω_0, and so on. A large spectrum of motional frequencies has been obtained in our experiment by using the electric dipole excitation technique. Though in the ideal case the trap potential is quadrupolar, real traps come with misalignments, defects in machining, truncation, and holes in the electrodes for optical access. In addition, there is space charge developed by the trapped ions themselves. All these result in deviations from the pure quadrupole trap potential, contributing higher order terms, and make the ion motion unstable for certain values of the trapping parameters for which stability exists in the ideal case. The ions gain energy from the rf trapping field and their motional amplitudes get enhanced, resulting in loss from the trap. The condition for such nonlinear resonances is given by [13]
$$n_x\,\omega_{0x} + n_y\,\omega_{0y} = \Omega\,,\qquad n_x, n_y = 0, 1, 2, 3\ldots\qquad (7)$$
where ω_{0x} and ω_{0y} are the secular frequencies for the motion along x and y, respectively.
Here n_x + n_y = k is the order of the multipole. If one of the trap parameters is varied, a parametric resonance appears at a definite value subject to the condition defined by eqn. 7, and it gives rise to instabilities called "black canyons" [14] within the stability diagram. The middle electrode is separated from the end electrodes by a gap of 2 mm. The molecular nitrogen ions (N_2^+) are created by electron impact ionization. The ions are dynamically trapped for a few hundred ms before they are extracted by lowering the axial potential in one direction. The extracted ions are detected by a channel electron multiplier (CEM). The CEM produces one pulse corresponding to each ion, and the pulse is successively processed through an amplifier, a discriminator and a TTL converter before it is fed into a multichannel scaler (MCS) card, which ultimately counts the number of ions reaching the CEM. This time-of-flight (TOF) technique provides a detection efficiency of around 10%. The time sequences are generated by National Instruments' Data Acquisition (DAQ) hardware, which is controlled by Labview and monitored by a personal computer (PC).
The trap is operated at an rf frequency of 1.415 MHz and no dc potential is applied to the middle electrodes (U = 0, a_u = 0). The end electrodes are kept at +20 V while trapping. At the time of extraction, the end electrodes on the ion-exit side are switched rapidly (within 75 ns) from +20 V to -45 V.
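As an illustration of eqns. 4-6, the trap parameters quoted above (2r_0 = 8 mm, Ω = 2π × 1.415 MHz, U = 0) can be turned into a q value and a secular frequency with a few lines of Python. This is our own sketch, not code from the paper; the rf amplitude V_0 is an assumed example value, and the ideal-quadrupole formulas neglect the geometric correction of real rod electrodes.

```python
import numpy as np

e = 1.602176634e-19          # elementary charge (C)
m = 28 * 1.66053906660e-27   # N2+ mass (kg)
r0 = 4.0e-3                  # m (2*r0 = 8 mm)
Omega = 2 * np.pi * 1.415e6  # rad/s
U, V0 = 0.0, 70.0            # V; U = 0 as in the experiment, V0 is an assumed value

# Mathieu parameters, eqn. 4 (ideal quadrupole geometry assumed)
a_x = 4 * e * U / (m * r0**2 * Omega**2)
q_x = 2 * e * V0 / (m * r0**2 * Omega**2)

# Secular frequency in the adiabatic (small a, q) approximation, eqns 5-6
beta = np.sqrt(a_x + q_x**2 / 2)
omega0 = beta * Omega / 2
print(f"q = {q_x:.3f}, omega0/2pi = {omega0 / (2 * np.pi) / 1e3:.0f} kHz")
```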
Stability characteristics
The stability behaviour of the trap is studied by varying the trap operating parameter q while keeping the other parameter a at zero. The parameter q is varied in steps of 0.0008 by changing the rf amplitude in small intervals of 0.35 V, while the number of trapped ions (N) is plotted in figure 3 as a function of q. It shows that N grows with q initially but decreases above q ≈ 0.5. It remains almost constant and shows a plateau for 0.3 < q < 0.5. The q scan is restricted to 0.6 due to the presence of heavier masses which cannot be resolved in the TOF spectra. One of the significant observations within this stability diagram is the appearance of narrow nonlinear resonances for specific values of q. These are due to the existence of higher order multipoles within the trap potential, as explained in section 2. The resonances appearing at q = 0.3461, 0.4073 and 0.4885 are assigned to the 8th, 7th and 6th order multipoles, respectively. The 7th order multipole is unlikely, as the symmetry of the trap setup forbids non-zero perturbations due to odd order multipoles. However, such a nonlinear observation has been reported previously [15]. It could result from a misalignment of the setup that partially breaks the radial symmetry, or from electrical connection wires near the trap center. The nonlinear resonance at q = 0.5163 in our experiment could not be assigned. It may result from other atmospheric species, or from molecules produced by charge-transfer reactions inside the trap. As can be seen from figure 3, the depth of the resonance appearing at q = 0.4885 is maximum, and hence it can be concluded that the 6th order multipole is the strongest one in our trap setup. While operating the trap for a single ion, these regions of instability should be avoided, as the ion gains energy from the time-varying trapping field at these operating points and its motional amplitude increases. This can add to the systematics of a precision measurement on the ion.
Dipole excitation of trapped ions
Electric dipole excitation of the trapped ions has been employed to measure their secular frequency and to obtain motional spectra. An electric dipole field has been applied on one of the middle electrodes, as shown schematically in figure 4. The amplitude of the excitation potential (v_i) is kept fixed while its frequency is tuned so as to match the secular frequency of the trapped ions. The trap operating parameters are kept fixed during the experiment. After the ions are loaded into the trap, the dipole excitation field is applied for a few hundred ms. After a short waiting time, the ions are released and detected. The frequency of the excitation signal (ω) is varied and the total number of ions is detected in each step.
Measurement of secular frequency
The experimentally obtained ion counts (N) have been normalised by dividing by the maximum ion count (N_max) during a particular experiment. The normalised ion count (N_n = N/N_max), with associated uncertainty, has been plotted as a function of the frequency (ω/2π) of the dipole excitation signal. Figure 5 shows such a dipole excitation resonance plot obtained with an excitation amplitude v_i = 50 mV. The frequency is scanned from 165 kHz to 205 kHz in steps of 500 Hz. The excitation signal is applied for 150 ms in each step. The experimental data points have been fitted with the following function,
$$N_n = N_0 + A\exp\left[-\exp(-\omega') - \omega' + 1\right]\,,\qquad (8)$$
with ω' = (ω - ω_0)/σ. Here ω_0 is the resonant frequency and is equal to the secular frequency of the trapped ions, N_0 is an offset, A is a scaling factor and σ is the full width at half maximum (FWHM) of the resonance. The secular frequency of the trapped ions obtained from the fit is 182.730(76) kHz, in good agreement with the theoretically calculated value.
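The model function of eqn. 8 can be fitted with a standard least-squares routine. The snippet below is illustrative only (it uses synthetic data rather than the measured counts) and assumes scipy's curve_fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def resonance(omega, N0, A, omega0, sigma):
    # Model function of eqn. 8: N_n = N0 + A*exp[-exp(-w') - w' + 1], w' = (omega - omega0)/sigma
    w = (omega - omega0) / sigma
    return N0 + A * np.exp(-np.exp(-w) - w + 1.0)

# Synthetic example data around 183 kHz (assumed values, for demonstration only)
freq = np.linspace(165e3, 205e3, 81)
rng = np.random.default_rng(0)
counts = resonance(freq, 0.9, -0.6, 182.7e3, 2.0e3) + 0.02 * rng.normal(size=freq.size)

popt, pcov = curve_fit(resonance, freq, counts, p0=[0.9, -0.5, 183e3, 2e3])
print("fitted secular frequency: %.2f kHz" % (popt[2] / 1e3))
```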
Motional spectra
The motional spectra of the trapped ions, as described in section 2, have been measured by varying the dipole excitation signal frequency over a long range. Figure 6 shows the motional spectra in the radial plane. The fundamental or first harmonic frequency of oscillation is observed at ω_0 = 2π × 184 kHz, which corresponds to the trap operating parameters a = 0, q = 0.39, and it is the strongest one. The second and third harmonics are observed at 386 kHz and 577 kHz, respectively. The other motional spectra, as described in eqn. 6, are observed at ω_{2-} = 2π × 915 kHz, ω_{1-} = 2π × 1.109 MHz, ω_{1+} = 2π × 1.492 MHz and ω_{2+} = 2π × 1.685 MHz.
Application
The accurate measurement of the motional frequency of the trapped ions is essential for different studies on them [16]. In a real linear Paul trap the radial motion is coupled with the axial motion and hence a variation in the axial potential affects the secular frequency of the ions [15]. The motional frequency of the trapped ions for different axial potentials has been measured with the technique described in section 4.2.1 and from this measurement the geometrical radial-axial coupling constant has been determined. This is important for any precision spectroscopic study on a single ion confined in this setup. The dipole excitation technique is also applied to study the shift in the motional frequency due to space charge created by the trapped ions. It is observed that the frequency decreases while they oscillate collectively with increasing space charge [17]. Detailed discussion on these topics can be found elsewhere [16,17].
Conclusions
This colloquium paper describes the development of an ion trap facility at IACS and the results of some experiments fundamentally based on the dynamics of a trapped ion cloud. It presents a demonstration of some first principles of ion-trap physics that are of common interest to an audience coming from a wide variety of physics disciplines and participating in this colloquium. The results are also significant inputs to precision measurements based on a single ion in a linear Paul trap.
Figure 1: (a) Schematic of the linear ion trap used in the experiment. (b) End view of the four middle electrodes with relevant electrical connections. Various dimensions, marked by l_e, l_m, l, r_e and r_0, are described in section 3.
Figure 2: Schematic of the experimental setup. The trap, filament and the CEM with other ion optics (extraction cylinder) are housed in a vacuum chamber. The functioning and control of the signal processing devices are explained in the text.
A schematic of the whole experimental setup is presented in figure 2. It consists of a linear Paul trap as shown in figure 1, an ionization setup, and an extraction and detection setup. The linear trap is assembled from four three-segmented electrodes, each placed at one of the four corners of a square of side (l) 12.73 mm [figure 1(b)]. Each of the twelve rods has a diameter (2r_e) of 10 mm. The four middle rods are of length (l_m) 25 mm while all others are 15 mm long (l_e) [figure 1(a)]. The separation between the surfaces of the diagonally opposite rods (2r_0) is 8 mm.
Figure 3: Number of trapped ions (N) as a function of q (a = 0). The sudden fall of N at some specific values of q corresponds to nonlinear resonances, as explained in the text. The numbers 6, 7, 8 denote the order of the multipoles to which the resonances are assigned.
Figure 4: Schematic of the circuit used for dipole excitation of trapped ions. The dipole excitation signal v_i cos ωt is applied between the electrodes marked as I and III.
Figure 5: Dipole excitation resonance of trapped ions. The solid line shows a fit to the data with the model function described in eqn. 8.
Figure 6: The normalized ion count N_n plotted as a function of the dipole excitation frequency (in kHz), presenting the motional spectra of the trapped ion cloud. The amplitude of the excitation voltage is v_i = 100 mV and the trap operating parameters are set at a = 0, q = 0.39 for N_2^+. The frequency of the trap supply voltage is Ω = 2π × 1.3 MHz.
Acknowledgement
The authors thank S. Das, D. De Munshi and T. Dutta, presently at the Centre for Quantum Technologies, National University of Singapore, for their support in developing the experimental setup at IACS, and beyond it. The machining support from the Max-Planck Institute, Germany is gratefully acknowledged.
[1] H. Dehmelt, Physica Scripta T22, 102 (1988).
[2] R. S. Van Dyck, P. B. Schwinberg and H. G. Dehmelt, Phys. Rev. Lett. 59, 26 (1987).
[3] N. Yu, W. Nagourney and H. Dehmelt, Phys. Rev. Lett. 78, 4898 (1997).
[4] G. P. Barwood et al., Phys. Rev. Lett. 93, 133001 (2004).
[5] W. H. Oskay et al., Phys. Rev. Lett. 94, 163001 (2005).
[6] C. F. Roos et al., Nature 443, 316 (2006).
[7] N. Fortson, Phys. Rev. Lett. 70, 2383 (1993).
[8] P. Mandal and M. Mukherjee, Phys. Rev. A 82, 050101(R) (2010).
[9] B. K. Sahoo, P. Mandal and M. Mukherjee, Phys. Rev. A 83, 030502(R) (2011).
[10] O. O. Versolato et al., Phys. Rev. A 82, 010501(R) (2010).
[11] P. H. Dawson, Quadrupole Mass Spectrometry and Its Applications, Elsevier (1976).
[12] P. K. Ghosh, Ion Traps, Oxford University Press (1995).
[13] P. H. Dawson and N. R. Whetten, Int. J. Mass Spectrom. Ion Phys. 2, 45 (1969).
[14] R. E. March and J. F. J. Todd, Quadrupole Ion Trap Mass Spectrometry, John Wiley & Sons (2005).
[15] A. Drakoudis, M. Söllner and G. Werth, Int. J. Mass Spectrom. 252, 61 (2006).
[16] P. Mandal, PhD Thesis, University of Calcutta, submitted (2013).
[17] P. Mandal et al., arXiv:1305.7081v1 [physics.atom-ph] (2013).
| []
|
[
"Anisotropic electron transport in the nuclear pasta phase",
"Anisotropic electron transport in the nuclear pasta phase"
]
| [
"M R Pelicer \nDepto de Física -CFM\nUniversidade Federal de Santa Catarina Florianópolis -SC -CP\n476 -CEP 88.040 -900Brazil\n",
"M Antonelli \nLaboratoire de Physique Corpusculaire\nCNRS\nENSICAEN\n14050CaenFrance\n",
"† ",
"D P Menezes \nDepto de Física -CFM\nUniversidade Federal de Santa Catarina Florianópolis -SC -CP\n476 -CEP 88.040 -900Brazil\n",
"‡ ",
"F Gulminelli \nLaboratoire de Physique Corpusculaire\nCNRS\nENSICAEN\n14050CaenFrance\n"
]
| [
"Depto de Física -CFM\nUniversidade Federal de Santa Catarina Florianópolis -SC -CP\n476 -CEP 88.040 -900Brazil",
"Laboratoire de Physique Corpusculaire\nCNRS\nENSICAEN\n14050CaenFrance",
"Depto de Física -CFM\nUniversidade Federal de Santa Catarina Florianópolis -SC -CP\n476 -CEP 88.040 -900Brazil",
"Laboratoire de Physique Corpusculaire\nCNRS\nENSICAEN\n14050CaenFrance"
]
| [
"MNRAS"
]
| The presence of nuclear pasta is expected to modify the transport properties in the mantle of neutron stars. The non-spherical geometry of the pasta nuclear clusters leads to anisotropies in the collision frequencies, impacting the thermal and electrical conductivity. We derive analytical expressions for the anisotropic collision frequencies using the Boltzmann equation in the relaxation time approximation. The average parallel, perpendicular and Hall electrical conductivities are computed in the high-temperature regime above crustal melting, considering incoherent elastic electron-pasta scattering and randomly oriented pasta structures. Numerical values are obtained at different densities and temperatures by using the IUFSU parametrization of the non-linear Walecka model to determine the crustal structure. We find that the anisotropy of the collision frequencies grows with the length of the pasta structures and, independently of the magnetic field, the presence of rod and slab phases decreases the conductivity by more than one order of magnitude. Our numerical results indicate that, even if the pasta structures might survive above the crustal melting point, no strong anisotropies are to be expected in the conduction properties in this temperature regime, even in the presence of a very high magnetic field. due to degenerate electron-electron Coulomb scattering dominates over the contribution due to electron-phonon scattering(Shternin & Yakovlev 2006)and becomes competitive with the electron conductivity due to the scattering of electrons by impurity ions(Chamel & Haensel 2008). | 10.1093/mnras/stad562 | [
"https://export.arxiv.org/pdf/2212.11817v3.pdf"
]
| 254,973,922 | 2212.11817 | ecc8982880304049c084cbf0bf587ac7fabcc339 |
Anisotropic electron transport in the nuclear pasta phase
2022
M R Pelicer
Depto de Física -CFM
Universidade Federal de Santa Catarina Florianópolis -SC -CP
476 -CEP 88.040 -900Brazil
M Antonelli
Laboratoire de Physique Corpusculaire
CNRS
ENSICAEN
14050CaenFrance
†
D P Menezes
Depto de Física -CFM
Universidade Federal de Santa Catarina Florianópolis -SC -CP
476 -CEP 88.040 -900Brazil
‡
F Gulminelli
Laboratoire de Physique Corpusculaire
CNRS
ENSICAEN
14050CaenFrance
Anisotropic electron transport in the nuclear pasta phase
MNRAS
2022. Preprint 4 May 2023, compiled using the MNRAS LaTeX style file v3.0. Key words: dense matter - conduction - stars: neutron
The presence of nuclear pasta is expected to modify the transport properties in the mantle of neutron stars. The non-spherical geometry of the pasta nuclear clusters leads to anisotropies in the collision frequencies, impacting the thermal and electrical conductivity. We derive analytical expressions for the anisotropic collision frequencies using the Boltzmann equation in the relaxation time approximation. The average parallel, perpendicular and Hall electrical conductivities are computed in the high-temperature regime above crustal melting, considering incoherent elastic electron-pasta scattering and randomly oriented pasta structures. Numerical values are obtained at different densities and temperatures by using the IUFSU parametrization of the non-linear Walecka model to determine the crustal structure. We find that the anisotropy of the collision frequencies grows with the length of the pasta structures and, independently of the magnetic field, the presence of rod and slab phases decreases the conductivity by more than one order of magnitude. Our numerical results indicate that, even if the pasta structures might survive above the crustal melting point, no strong anisotropies are to be expected in the conduction properties in this temperature regime, even in the presence of a very high magnetic field. due to degenerate electron-electron Coulomb scattering dominates over the contribution due to electron-phonon scattering(Shternin & Yakovlev 2006)and becomes competitive with the electron conductivity due to the scattering of electrons by impurity ions(Chamel & Haensel 2008).
INTRODUCTION
Observations related to the thermal, magnetic and spin evolution of neutron stars can provide us with indirect information on the transport properties of ultra-dense matter, e.g. Horowitz et al. (2015); Montoli et al. (2020); Potekhin & Chabrier (2021). In principle, the observations must be compared with simulations by properly modelling the coupled magneto-thermal evolution. Hence, models are necessary for the microscopic processes that give rise to the thermal and electric conductivities and viscosity throughout the star (Page & Reddy 2012;Chamel & Haensel 2008;Schmitt & Shternin 2018), which are then used as inputs to the macroscopic simulations, see Bransgrove et al. (2018); Pons & Viganò (2019); Camelio et al. (2022).
In the crust, transport properties are determined by the scattering of electrons by other electrons, ionic impurities and phonons in the crystal lattice. Electron-ion scattering dominates at the lowest densities and has been extensively studied (Flowers & Itoh 1976; Yakovlev & Urpin 1980; Nandkumar & Pethick 1984; Baiko et al. 1998; Potekhin et al. 1999; Chugunov & Yakovlev 2005; Aguilera et al. 2009). In the inner crust at temperatures T < 10^7 K, the thermal conductivity due to degenerate electron-electron Coulomb scattering dominates over the contribution due to electron-phonon scattering (Shternin & Yakovlev 2006) and becomes competitive with the electron conductivity due to the scattering of electrons by impurity ions (Chamel & Haensel 2008).
The situation gets more complicated in the innermost part of the crust, where it might be energetically favourable for the ions composing the crystal lattice to deform in complex structures known as "pasta" (Ravenhall et al. 1983;Hashimoto et al. 1984;Oyamatsu 1993). Classical molecular dynamics simulations suggest that this matter is disordered and amorphous and that different shapes might coexist at a given depth of the star, due to the small energy barriers between them (Schneider et al. 2014;Horowitz et al. 2015;Caplan et al. 2021;Newton et al. 2022). This shape coexistence has been validated by relativistic mean field (RMF) calculations (Pelicer et al. 2021). In the case of a disordered and amorphous inner crust with randomly distributed nuclear clusters of different sizes (Carreau et al. 2020;Potekhin & Chabrier 2021) and geometries (Pelicer et al. 2021), the main mechanism of charge and heat transport is given by uncorrelated scattering processes between the electrons and the clusters, which play a role similar to one of the lattice impurities in a crystallized phase.
Regarding the possible astrophysical consequences, a high impurity parameter in the inner crust raises the electrical resistivity of the star, decreasing steeply the magnetic field after a certain age and thus the spin-down. This may explain the very small number of isolated X-ray pulsars with spin periods larger than 12 s (Pons et al. 2013; Newton 2013; Hambaryan et al. 2017; Tan et al. 2018). The high impurity also lowers the thermal conductivity, leading to a better fit of the late-time cooling of the binary MXB 1659-29 (Horowitz et al. 2015; Deibel et al. 2017). Furthermore, the presence of the pasta layers modifies the so-called mutual friction force between the nuclear clusters and the neutron superfluid, with consequences on the pulsar glitch phenomenon (Antonelli & Haskell 2020). Gravitational waves (Horowitz 2010), quasi-periodic oscillations (Sotani 2012), quasi-persistent sources of SXRTs and giant flares due to the relaxation of the crust after heat deposition and neutrino emissivity (Alloy & Menezes 2011; Horowitz et al. 2004; Lin et al. 2020) are also influenced by the presence of an amorphous layer in the inner crust.
In the presence of a strong B field, electron transport is anisotropic, as the field bends the electron trajectories in the orthogonal plane and suppresses electron transport across the direction of B, e.g. (Chamel & Haensel 2008). This argument considers that the only source of anisotropy is given by the B direction. However, the spherical symmetry of nuclear clusters is spontaneously broken in the pasta layers, leading to additional anisotropies already at the level of the microscopic scattering process: in particular, Yakovlev (2015) has shown that, even in the case of random orientation of the pasta structures, anisotropic scattering can modify the transport properties.
In the analysis of Yakovlev (2015), the scattering rates along and across the pasta symmetry axis were taken as free parameters. While molecular dynamics has been able to provide estimates of the transport properties in the inner crust, by taking the angular average of the effective structure factor of the charge distribution (Horowitz & Berry 2008;Horowitz et al. 2015;Nandi & Schramm 2018), to our knowledge no estimation of the different collision frequencies that arise due to the pasta anisotropic shapes has been performed to date.
The existing microscopic simulations of the finite temperature pasta (Schneider et al. 2014; Horowitz et al. 2015; Caplan et al. 2021; Newton et al. 2022; Nandi & Schramm 2018) are typically done at fixed proton fraction and high temperatures T ≥ 10^10 K, thermodynamic conditions that are especially aimed at the description of proto-neutron stars formed in supernova events. In these conditions, it appears from those calculations that the distribution of baryonic matter is strongly disordered, and one might expect that anisotropies should not have a strong effect on the transport properties. On the other hand, in the case of neutron star binaries and soft X-ray transients, the inner crust is close to β-equilibrium and temperatures are one or two orders of magnitude lower, which might preserve both the peculiar pasta geometrical shapes and the lattice quasi-long range order, potentially leading to a strong anisotropy of the scattering rates, as assumed by Yakovlev (2015).
In this paper, we show how the anisotropic collision frequencies can be calculated from the Boltzmann equation in the relaxation time approximation, in the case of elastic scattering of ultra-relativistic degenerate electrons off pasta structures. We limit ourselves to the hypothesis of incoherent scattering sources following the Matthiessen rule (Schmitt & Shternin 2018; Heiselberg & Pethick 1993; Shternin & Yakovlev 2006). Based on the behaviour of the static structure factor, we argue that this hypothesis should be valid in the high-temperature regime above crustal melting.
The paper is organized as follows. In Sec. 2 we calculate the general anisotropic collision frequencies. The collision integral and the transition matrix elements are first expanded in the spherical harmonics basis in Sec. 2.1. Then, to extract the physical real collision frequencies, in Sec. 2.2 we consider the lowest order (dipole) deviation from equilibrium and take advantage of the axial symmetry of the pasta phase. The contribution of the collision integral to the conductivity is given in terms of axial and perpendicular collision frequencies, in agreement with the analysis of Yakovlev (2015). Analytical expressions for the conductivity matrix are given in Sec. 3 for the case of a liquid, disordered, pasta phase. In Sec. 3.1, the transition matrix is numerically evaluated in the temperature domain of validity of our approximations. The conductivity tensor with and without magnetic field is finally obtained in Sec. 3.2. To illustrate the formalism and give quantitative estimations of the transport coefficients, in Sec. 4 we present numerical calculations for the collision frequencies and conductivity for different densities and B values in the high-temperature regime. Conclusions are presented in Sec. 5.
All the numerical estimates reported in this paper are obtained using the IUFSU parametrization of the RMF approach for the crustal composition, see Fattoyev et al. 2010;Avancini et al. 2012, but our expressions can be employed with any nuclear physics model that gives the static structure of the crust. In particular, while our quantitative numerical results might be model dependent, the qualitative conclusions remain valid for any other realistic equation of state model.
We use natural units ℏ = c = k_B = 1 throughout the paper.
RELAXATION TIME APPROXIMATION FOR ANISOTROPIC ELASTIC COLLISIONS
The thermal and electrical electron conductivities due to electron-ion scattering have been calculated in a wide range of temperatures T and electron densities n_e, see e.g. Potekhin (1999): for homogeneous media, and in the absence of a magnetic field, they are expressed in terms of the effective collision frequencies ν_{σ,κ} as
$$\sigma = \frac{e^2 n_e}{m_e^*\,\nu_\sigma}\,,\qquad \kappa = \frac{\pi^2 T n_e}{3 m_e^*\,\nu_\kappa}\,,\qquad (1)$$
where m_e^* is the effective electron mass, and in the liquid regime ν_σ = ν_κ ≡ ν, with the collision frequency ν defined as the inverse of the relaxation time, ν = 1/τ. Because of the isotropy assumption, the collision frequencies only depend on the modulus of the momentum transfer according to the general expression (Flowers & Itoh 1976; Yakovlev & Urpin 1980; Nandkumar & Pethick 1984):
$$\nu = \frac{4\pi n_i e^4 Z^2}{v_F\, p_F^2}\int_0^{2p_F}\frac{dq}{q}\left(1 - \frac{q^2}{4\epsilon_F^2}\right)\frac{F^2(q)}{\varepsilon^2(q)}\,S(q)\,,\qquad (2)$$
where F(q) is the ion form factor, ε(q) is the dielectric function, S(q) is the effective structure factor that accounts for ion correlations, and v_F, p_F, and ε_F are the Fermi velocity, momentum and energy, respectively. Unfortunately, eqs (1) and (2) cannot be straightforwardly generalized to the case of anisotropic scatterings. To derive the anisotropic collision frequencies, we consider a multipole expansion of the Boltzmann equation in the relaxation time approximation, as we detail below.
Anisotropic case: expansion in spherical harmonics
We consider a strongly degenerate relativistic electron gas with position-dependent temperature and chemical potential fields T(r) and µ(r) in a constant external magnetic field B and a weak electric field E. Assuming that the gas is only slightly out of equilibrium, we can write its distribution function as f(r, p, t) = f_0(r, p) + δf(p), where r, v and p are the electron position, velocity and momentum, respectively, with the latter given by p = p v. The Fermi-Dirac function f_0 is given by
$$f_0(\mathbf{r}, \mathbf{p}) = \left[1 + \exp\left(\frac{\epsilon_p - \mu(\mathbf{r})}{T(\mathbf{r})}\right)\right]^{-1}.\qquad (3)$$
The deviation from equilibrium can be found by solving the linearized Boltzmann equation (Heiselberg & Pethick 1993;Shternin & Yakovlev 2006)
$$-\frac{\partial f_0}{\partial \epsilon_p}\,\mathbf{v}\cdot\left[\nabla\mu + e\mathbf{E} + \frac{\epsilon_p - \mu}{T}\,\nabla T\right] - e(\mathbf{v}\times\mathbf{B})\cdot\frac{\partial\,\delta f}{\partial \mathbf{p}} = I[f]\,,\qquad (4)$$
where I[f ] is the collision integral that can be written as
$$I[f] = \int\frac{d^3p'}{(2\pi)^3}\left[\Gamma_{p'\to p}\,f(\mathbf{p}')\big(1 - f(\mathbf{p})\big) - \Gamma_{p\to p'}\,f(\mathbf{p})\big(1 - f(\mathbf{p}')\big)\right].\qquad (5)$$
Here, Γ_{p→p'} is the transition rate from an initial momentum p to a final momentum p', introduced to account for electron scattering with any generic potential, and we have omitted the position dependencies as they do not affect the calculation. We restrict ourselves to elastic scatterings with a localized source for the potential, such that the following simplification applies:
$$\Gamma_{p\to p'} = \Gamma_{p'\to p} = 2\pi\,\delta(\epsilon_p - \epsilon_{p'})\,W_{pp'}\,,\qquad (6)$$
where W_{pp'} is the transition matrix element, which we will write explicitly for the case of electron-pasta scattering in the next section. Taking into account that deviations from equilibrium are small, we can rewrite the collision integral, eq. (5), as
$$I[f] = -2\pi\int\frac{d^3p'}{(2\pi)^3}\,\delta(\epsilon_p - \epsilon_{p'})\,W_{pp'}\left[\delta f(\mathbf{p}) - \delta f(\mathbf{p}')\right],\qquad (7)$$
where the Fermi-Dirac terms coming from the different momenta have cancelled out due to the elasticity assumption in eq. (6). In isotropic scatterings, W_{pp'} is a function of q = |p - p'| and of the electron energy only. Since in this work we are dealing with general anisotropic scatterings, we will assume the matrix elements to be functions of the solid angles of both incoming and outgoing electron momenta (p and p'), as well as the energy ε_p, so there is no assumption of symmetry for the source of the potential. The transition matrix can be expanded in the basis of spherical harmonics as
$$W_{p'p}(\Omega_p, \Omega_{p'}, \epsilon_p) = \sum_{lm}\sum_{l'm'} W_{lm\,l'm'}(\epsilon_p)\,Y_l^m(\Omega_p)\,Y_{l'}^{m'}(\Omega_{p'})\,,\qquad (8)$$
whereas the assumption of elasticity implies that Ω_p and Ω_{p'} are interchangeable, such that
$$W_{pp'} = W_{p'p} \;\Longrightarrow\; W_{lm\,l'm'} = W_{l'm'\,lm}\,.\qquad (9)$$
This is a generalization of the Legendre expansion used for scattering with isotropic targets -see Sec. 3 in Pines & Nozières (2018). In App. A we show how the isotropic limit can be recovered from our calculation.
The deviation from equilibrium of the electron distribution is expanded as:
$$\delta f(\mathbf{p}) = \sum_{lm}\delta f_{lm}(\epsilon_p)\,Y_l^m(\Omega_p)\,.\qquad (10)$$
Substitution of eqs (8) and (10) into eq. (7) allows us to use the orthogonality of spherical harmonics and the contraction rule
$$Y_l^m(\Omega)\,Y_{l'}^{m'}(\Omega) = \sum_{LM}(-1)^M\sqrt{\frac{(2L+1)(2l+1)(2l'+1)}{4\pi}}\begin{pmatrix} l & l' & L\\ 0 & 0 & 0\end{pmatrix}\begin{pmatrix} l & l' & L\\ m & m' & -M\end{pmatrix} Y_L^M(\Omega)\qquad (11)$$
to rewrite the collision integral as
$$I[f] = -\frac{\epsilon_p^2}{4\pi^2 v}\sum_{lm,\,l'm'}\delta f_{lm}\left[ W_{l'm'\,00}\sum_{LM}(-1)^M\sqrt{(2l+1)(2l'+1)(2L+1)}\begin{pmatrix} l & l' & L\\ 0 & 0 & 0\end{pmatrix}\begin{pmatrix} l & l' & L\\ m & m' & -M\end{pmatrix} Y_L^M(\Omega_p) - (-1)^{m} W_{l'm'\,l-m}\,Y_{l'}^{m'}(\Omega_p)\right].\qquad (12)$$
The Wigner 3-j symbols $\begin{pmatrix} l_1 & l_2 & l_3\\ m_1 & m_2 & m_3\end{pmatrix}$ are invariant under even permutations of the columns and non-zero only if m_1 + m_2 + m_3 = 0, |l_1 - l_2| ≤ l_3 ≤ l_1 + l_2 and l_1 + l_2 + l_3 is an integer (Brink & Satchler 1968; Edmonds 2016).
Figure 1. Cylindrical (rod and tube) and planar (slab) geometries of the nuclear pasta with z as the symmetry axis. The transferred momentum vector q is drawn arbitrarily and the magnetic field B lies in the xz-plane.
We define the anisotropic collision frequencies by expanding the collision integral linearly in δf,
$$I[f] = -\sum_{lm,\,l'm'}\delta f_{lm}(\epsilon_p)\,[\nu(\epsilon_p)]^{l'm'}_{lm}\,Y_{l'}^{m'}(\Omega_p)\,.\qquad (13)$$
Integration of eqs (12) and (13) in Ω_p yields
$$[\nu]^{l'm'}_{lm} = \frac{\epsilon_p^2}{4\pi^2 v}\left[(-1)^{m}\sqrt{(2l+1)(2l'+1)}\sum_{LM}\frac{W_{LM\,00}}{\sqrt{2L+1}}\begin{pmatrix} l & l' & L\\ 0 & 0 & 0\end{pmatrix}\begin{pmatrix} L & l' & l\\ M & m' & -m\end{pmatrix} - (-1)^m W_{l'm'\,l-m}\right].\qquad (14)$$
We can obtain a more compact form of this expression by using the Wigner-Eckart theorem and the spherical harmonics representation of irreducible tensor operators of rank l (Racah 1942a,b),
$$C^l_m = \sqrt{\frac{4\pi}{2l+1}}\,Y_{lm}(\Omega)\,,\qquad (15)$$
such that eq. (14) becomes
$$[\nu]^{l'm'}_{lm} = \frac{\epsilon_p^2}{4\pi^2 v}\left[\sum_{LM}\frac{W_{LM\,00}}{\sqrt{2L+1}}\,\langle l'm'|C^L_M|lm\rangle - (-1)^m W_{l'm'\,l-m}\right].\qquad (16)$$
Derivation of the collision frequencies
To evaluate the collision frequencies in the pasta phase, we consider idealized rod and slab-like geometries, as expected in the basic liquid-drop modelling of the inner crust (Ravenhall et al. 1983; Hashimoto et al. 1984). These geometries and the definitions entering the calculations are sketched in Fig. 1. Equation (16) is not yet a multipole expansion of the collision rates because the different expansion coefficients of the collision integral [ν]^{l'm'}_{lm} are complex numbers. This is due to the fact that both the electron distribution function and the collision integral are written on the basis of complex spherical harmonics. To relate eq. (16) to the physical quantities, we must rewrite eqs (10) and (13) in terms of real coefficients.
To do so, we notice that the coefficients W_{l'm' lm} in eq. (8) are constrained by the symmetries of rods and slabs. Both geometries are invariant under inversion of the z-axis (z → -z), implying that the only non-zero W_{l'm' lm} are those for which the sum l + l' is even. The sum m + m' is constrained by the xy-plane symmetries: cylinders are invariant under arbitrary rotations, so the non-zero W_{l'm' lm} are only those with m + m' = 0; slabs are invariant under π/2 rotations, so the sum m + m' must be a multiple of 4. To summarize, the W_{lm l'm'} are non-zero if and only if:
$$\text{Rods:}\;\; l + l' = 2k\,,\;\; m + m' = 0\,;\qquad \text{Slabs:}\;\; l + l' = 2k\,,\;\; m + m' = 4k'\,,\qquad (17)$$
with k, k' ∈ Z. To progress further, we restrict ourselves to the case of electric and thermal conductivities. Accounting for spin degeneracy, the electric current and heat flow are given by
$$\mathbf{j} = -2e\int\frac{d^3p}{(2\pi)^3}\,\mathbf{v}\,\delta f\,,\qquad \mathbf{q} = 2\int\frac{d^3p}{(2\pi)^3}\,\mathbf{v}\,(\epsilon-\mu)\,\delta f\,,\qquad (18)$$
so that only the odd l terms in the expansion (10) contribute to the integrals in eq. (18). Moreover, in the relaxation time approximation, the collision integral, eq. (7), is linear in δf. The left-hand side of eq. (4) is linear in p, implying that only the coefficient l = 1 in the expansion of the collision integral, eq. (13), contributes to the currents. This is also discussed in depth in the case of isotropic scattering in Sykes & Brooker (1970), and mentioned in the case of pasta in Schmitt & Shternin (2018).
In the isotropic case, there is no mixing between the different terms of I[f] in eq. (13) and those of δf in eq. (10). However, in the anisotropic case, the collision frequencies can mix the l = 1 contributions in eq. (13) with the l = 2, 3, ... components of δf_{lm} in eq. (13). For simplicity, we neglect such mixing and restrict ourselves to the most important contribution (see also Schmitt & Shternin 2018) by writing ν^{l'm'}_{lm} = ν^{1m'}_{1m} δ_{l1} δ_{l'1} in eq. (13). This approximate approach is probably good in the case of pasta, due to the symmetry rules in (17).
We will show that the axial symmetry of the problem limits the number of physical collision frequencies to two: an axial frequency ν_a, and a perpendicular one ν_p. To do so, we need to rewrite the expansions eqs (10) and (13) in terms of real coefficients. We introduce the real spherical harmonics:
$$Y_{lm} = \begin{cases} \dfrac{i}{\sqrt{2}}\left[Y_l^m - (-1)^m Y_l^{-m}\right], & m < 0\\[4pt] Y_l^0, & m = 0\\[4pt] \dfrac{1}{\sqrt{2}}\left[Y_l^{-m} + (-1)^m Y_l^{m}\right], & m > 0 \end{cases}\qquad (19)$$
and rewrite the l = 1 term of eq. (10), δf_{1m}, as:
$$\delta f(\mathbf{p})\big|_{l=1} = Y_{11}\,\delta f_x + Y_{1-1}\,\delta f_y + Y_{10}\,\delta f_z\,,\qquad (20)$$
where the coefficients are given by
$$\delta f_x = \frac{\delta f_{1-1} - \delta f_{11}}{\sqrt{2}}\,,\qquad \delta f_y = \frac{\delta f_{1-1} + \delta f_{11}}{\sqrt{2}\,i}\,,\qquad \delta f_z = \delta f_{10}\,.\qquad (21)$$
Since the electron distribution function is real, so are the coefficients defined above. Substituting eq. (19) and eq. (21) in the collision integral eq. (13), we get
$$I[f] = -\begin{pmatrix} Y_{11} & Y_{1-1} & Y_{10}\end{pmatrix}\begin{pmatrix} \nu_{xx} & \nu_{xy} & \nu_{xz}\\ \nu_{yx} & \nu_{yy} & \nu_{yz}\\ \nu_{zx} & \nu_{zy} & \nu_{zz}\end{pmatrix}\begin{pmatrix} \delta f_x\\ \delta f_y\\ \delta f_z\end{pmatrix}\qquad (22)$$
with the physical collision frequencies given by:
$$\nu = \begin{pmatrix} \frac{1}{2}\left(\nu^{11}_{11} + \nu^{1-1}_{1-1} - \nu^{1-1}_{11} - \nu^{11}_{1-1}\right) & \frac{i}{2}\left(\nu^{11}_{11} - \nu^{1-1}_{1-1} + \nu^{1-1}_{11} - \nu^{11}_{1-1}\right) & \frac{1}{\sqrt{2}}\left(\nu^{10}_{1-1} - \nu^{10}_{11}\right)\\[4pt] \frac{i}{2}\left(-\nu^{11}_{11} + \nu^{1-1}_{1-1} + \nu^{1-1}_{11} + \nu^{11}_{1-1}\right) & \frac{i}{2}\left(\nu^{11}_{11} + \nu^{1-1}_{1-1} + \nu^{1-1}_{11} + \nu^{11}_{1-1}\right) & \frac{i}{\sqrt{2}}\left(\nu^{10}_{1-1} + \nu^{10}_{11}\right)\\[4pt] \frac{1}{\sqrt{2}}\left(\nu^{1-1}_{10} - \nu^{11}_{10}\right) & \frac{-i}{\sqrt{2}}\left(\nu^{11}_{10} + \nu^{1-1}_{10}\right) & \nu^{10}_{10} \end{pmatrix}\qquad (23)$$
The constraint of elasticity, eq. (9), implies that the collision frequency matrix is symmetric, ν_{ij} = ν_{ji}. Moreover, we can see from the pasta symmetries in eq. (17) that the off-diagonal terms vanish and that the xx and yy terms are equal. This is valid for slabs because L_{1x} = L_{1y} = L_1. Thus,
$$\nu = \begin{pmatrix} \nu_{xx} & \nu_{xy} & \nu_{xz}\\ \nu_{yx} & \nu_{yy} & \nu_{yz}\\ \nu_{zx} & \nu_{zy} & \nu_{zz}\end{pmatrix} = \begin{pmatrix} \nu_p & 0 & 0\\ 0 & \nu_p & 0\\ 0 & 0 & \nu_a\end{pmatrix}\qquad (24)$$
where ν_p = ν^{11}_{11} and ν_a = ν^{10}_{10}. Writing, without any loss of generality, δf_{1m} = √(4π/3) Φ_{1m}(ε_p)|v|, the collision integral expansion, eq. (13), can be simply rewritten as
$$I[f] = -v_z\,\Phi_z\,\nu_a - \mathbf{v}_p\cdot\boldsymbol{\Phi}_p\,\nu_p\,,\qquad (25)$$
where Φ is a vector that can depend on ε_p, and the collision frequencies ν_a and ν_p are defined parallel and perpendicular to the pasta symmetry axis. This result exactly coincides with the generalization of the relaxation time approximation proposed by Yakovlev (2015) on symmetry arguments to include the effect of the anisotropic medium. The axial and perpendicular collision frequencies can be calculated from eq. (14):
$$\nu_a(\epsilon_p) = \frac{\epsilon_p^2}{4\pi^2 v}\left[W_{00,00} - W_{10,10} + \frac{1}{\sqrt{5}}\left(W_{20,00} + W_{00,20}\right)\right]\qquad (26)$$
$$\nu_p(\epsilon_p) = \frac{\epsilon_p^2}{4\pi^2 v}\left[W_{00,00} - \frac{1}{2\sqrt{5}}\left(W_{20,00} + W_{00,20}\right) + \frac{1}{2}\left(W_{11,1-1} + W_{1-1,11}\right)\right].\qquad (27)$$
To rewrite ν_a and ν_p in terms of the transition matrix W_{pp'}, we invert eq. (8) using the orthogonality of spherical harmonics:
$$W_{lm\,l'm'} = \int d\Omega_p\, d\Omega_{p'}\, W_{pp'}\, Y_l^{m\,*}(\Omega_p)\, Y_{l'}^{m'\,*}(\Omega_{p'})\,.\qquad (28)$$
This leads to the final expression of the collision rates, for an arbitrary interaction preserving axial symmetry, and assuming a dipole-like deviation from equilibrium of the electron distribution function:
$$\nu_a(\epsilon_p) = \frac{3}{32\pi^3 v}\int d\Omega_p\, d\Omega_{p'}\, W_{pp'}\, q^2\cos^2\theta_q\qquad (29)$$
$$\nu_p(\epsilon_p) = \frac{3}{32\pi^3 v}\int d\Omega_p\, d\Omega_{p'}\, W_{pp'}\, q^2\,\tfrac{1}{2}\sin^2\theta_q\,.\qquad (30)$$
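Equations (29)-(30) are double angular integrals over the incoming and outgoing momentum directions at fixed |p| = |p'|. A simple way to evaluate them for an arbitrary axially symmetric transition matrix is a Monte Carlo average over the two solid angles, as in the sketch below. This is our own illustration, not code from the paper; W_of_q is a user-supplied placeholder that could, for instance, be built from eqs (42)-(45).

```python
import numpy as np

def collision_frequencies(W_of_q, pF, v, n_samples=200_000, seed=1):
    """Monte Carlo estimate of eqs (29)-(30).

    W_of_q : callable mapping an (N, 3) array of transferred momenta q to W_{pp'}(q).
    pF, v  : Fermi momentum and velocity entering the 3/(32 pi^3 v) prefactor.
    """
    rng = np.random.default_rng(seed)

    def random_directions(n):
        cos_t = rng.uniform(-1.0, 1.0, n)
        phi = rng.uniform(0.0, 2.0 * np.pi, n)
        sin_t = np.sqrt(1.0 - cos_t**2)
        return np.stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t], axis=1)

    p = pF * random_directions(n_samples)        # incoming momenta on the Fermi surface
    p_prime = pF * random_directions(n_samples)  # outgoing momenta
    q = p - p_prime
    q2 = np.einsum('ij,ij->i', q, q)
    cos2_theta_q = np.where(q2 > 0.0, q[:, 2]**2 / q2, 0.0)

    W = W_of_q(q)
    prefactor = 3.0 / (32.0 * np.pi**3 * v) * (4.0 * np.pi)**2  # (4 pi)^2 converts means into solid-angle integrals
    nu_a = prefactor * np.mean(W * q2 * cos2_theta_q)
    nu_p = prefactor * np.mean(W * q2 * 0.5 * (1.0 - cos2_theta_q))
    return nu_a, nu_p

# Example with a dummy isotropic W (for which nu_a and nu_p coincide):
print(collision_frequencies(lambda q: np.ones(len(q)), pF=1.0, v=1.0))
```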
To get the generalization of eqs (1) and (2) to the physical problem of electron-pasta scattering, we now turn to evaluate the transition matrix W_{pp'}.
[Figure 2 of this paper: linear cluster radius R_d and Wigner-Seitz radius R_W (in fm) as functions of the baryon density n_B (fm^-3) at T = 0.5, 1, 3 and 5 MeV (panels a-d), for the droplet, rod, slab, tube and bubble geometries.]
CONDUCTIVITY TENSOR FOR RODS AND SLABS
3.1 Elastic scattering matrix in the incoherent scattering limit
In the case of isotropic scattering, the transition matrix (8) depends solely on the absolute value of the transferred momentum q = |p - p'|. This is equivalent to a dependence on the angle between the incoming and outgoing electron momenta, because one can write q² = 2p²(1 - p̂·p̂'). In the case of anisotropic scattering, the transition matrix can depend separately on the angles of both incoming and outgoing momenta. However, because of the axial symmetry of the pasta structures, it can only depend on the projections of the transferred momentum on the axes perpendicular and parallel to the symmetry axis of the pasta. We can assume without loss of generality that the pasta symmetry axis coincides with the z axis, such that the transition matrix becomes a function of the vector q itself, see Fig. 1. The transition matrix W_{pp'} for the elastic scattering in the Born approximation is given by (see, e.g., equation 81.5 of Berestetskii et al. 1971)
$$W_{pp'}(\mathbf{q}, \epsilon_p) = \frac{1}{2}\sum_{s,s'}\left|\frac{e}{2\epsilon_p}\,\bar{u}_{p',s'}\,\gamma^0 u_{p,s}\int d^3x\, A_0(\mathbf{x})\,e^{-i\mathbf{q}\cdot\mathbf{x}}\right|^2 = e^2\left(1 - \frac{q^2}{4\epsilon_p^2}\right)|U(\mathbf{q})|^2\, S(\mathbf{q})\qquad (31)$$
where in the first line u_{p,s} is the electron spinor, A_0(x) is the electric potential generated by the nuclei and γ^0 is a Dirac matrix. In the second line, the Fourier transform of the potential U(q) is introduced, and the static structure factor is defined as (Flowers & Itoh 1976):
$$S(\mathbf{q}) = \langle n_p(\mathbf{q})\,n_p(-\mathbf{q})\rangle_T = \frac{1}{V}\int d^3r\, d^3r'\, e^{i\mathbf{q}\cdot(\mathbf{r}-\mathbf{r}')}\,\langle n_p(\mathbf{r})\,n_p(\mathbf{r}')\rangle_T\,,\qquad (32)$$
where n_p(q) is the charge density of the scatterer in momentum space, ⟨...⟩_T is the thermal average that accounts for correlations between protons, and the integral covers the entire thermodynamic system of volume V. The structure factor carries the whole information regarding the anisotropy of the system, both through the anisotropic shape of the pasta and through the lattice arrangement. In principle, S(q) also carries information about thermal excitations. The contribution of single-nucleon thermal excitations to S(q) has been calculated by Schuetrumpf et al. (2020), within the framework of density functional theory. Since this is not a main source of anisotropy, we do not consider it here. On the other hand, larger-scale collective thermal vibrations of pasta structures and deviations from lattice periodicity are likely important to the anisotropic behaviour of transport coefficients. To the best of our knowledge, these have not been calculated yet and will be addressed in future work. Still, variational theory in Wigner-Seitz (WS) cells of different geometries, with energy densities obtained from the RMF approach, is routinely used by nuclear physicists to obtain a microscopically founded estimation of the optimal average charge distribution ⟨n_p(r)⟩_T, see e.g. Avancini et al. (2008, 2009, 2012), and Haensel et al. (2007); Chamel & Haensel (2008) for reviews. In the simplest liquid-drop modelling of Fig. 1, the pasta structures are characterized by constant density profiles. We can write for rods (d = 2) and slabs (d = 1), respectively:
$$\langle n_p(\mathbf{r})\rangle_T^{\,d=2} = n_p\sum_m\Theta\big(R_2 - |r_\perp - m R_{W2}|\big)\,,\qquad \langle n_p(\mathbf{r})\rangle_T^{\,d=1} = n_p\sum_m\Theta\big(R_1 - |z - m R_{W1}|\big)\,,\qquad (33)$$
where the average linear size of the cluster R_d, its internal proton (neutron) density n_p (n_n) and the average WS cell radius R_{Wd} are variationally obtained for any given temperature T and baryon density n_B, as well as the (uniform) electron density and the density of the dripped neutrons. The sum in (33) runs over the parallel structures in the lattice. For this application, we use the relativistic mean-field approach of Avancini et al. (2012). The mean field Lagrangian is given by the non-linear Walecka model, parametrized by the IUFSU force (Fattoyev et al. 2010) with a surface tension fitted to reproduce a Thomas-Fermi simulation, see Avancini et al. (2012) for details. In Fig. 2 we show the linear and WS cell radii as a function of the density, for some representative temperatures that will be considered below. We can see that non-spherical geometries are expected in the innermost part of the inner crust, and are found to persist even at high temperatures well above crustal melting, which occurs around T_m ≈ 1 MeV in this density region (Carreau et al. 2020). The different colours correspond to the geometries that are associated, for a given baryon density, with the minimal free energy density. It is known that the pasta properties are model dependent (Dinh Thi et al. 2021), mainly the densities at which the different geometries appear, but the qualitative behaviour shown in Fig. 2 is obtained in all realistic nuclear models found in the literature (Dinh Thi et al. 2021; Parmar et al. 2022).
In the case of a perfect lattice, electron band structures suppress the scattering rates and charge transport occurs only through electron-phonon interactions. However, thermal fluctuations disturb the lattice periodicity and destroy the long-range order. In particular, in the disordered limit expected to dominate with increasing temperature, the correlation function drops to zero on a length scale of the linear size of the cluster, and the different pasta structures are fully uncorrelated and act as incoherent impurity scatterers. These fluctuations have been calculated for the pasta by Pethick & Potekhin (1998) within the classical approach of the Landau-de Gennes model of liquid crystals (de Gennes & Prost 1993; Chandrasekhar 1992). For slabs, the thermal displacement presents a logarithmic divergence with the linear dimension of the sample, reflecting the well-known Landau-Peierls instability (Landau & Lifshitz 1969). Concerning the rod phase, the thermal displacement is finite and the long-range order in the transverse plane is in principle preserved. A critical temperature for the long (or quasi-long) range order was estimated by Watanabe et al. (2000) as the temperature at which the thermal displacement becomes comparable to the cell radius. Such a temperature was shown to strongly decrease with increasing baryonic density and, for fiducial values of the elastic constants, to be of the order of a few MeV both for slabs and for rods (Watanabe et al. 2000). Above such temperatures, it is reasonable to expect that the conventional pasta phase is fully destroyed by thermal fluctuations, even if complex deformed disordered cluster structures may still be present, as suggested by molecular dynamics simulations (Horowitz et al. 2015; Schneider et al. 2014; Newton et al. 2022; Nandi & Schramm 2018). Below the critical temperature, the consequence of the reduced dimensionality of the pasta phase is that the long-range order (or quasi-long in the case of slabs) is only preserved in the directions corresponding to the lattice periodicity (that is, along u_z for the slab phase and u_⊥ for the rod one), potentially leading to strong anisotropies in the collision frequencies.
Interestingly, limiting behaviours can be obtained for the density correlation of slabs (Poniewierski et al. 1998; de Gennes & Prost 1993), ⟨n_p(r) n_p(r')⟩_T ≡ ⟨δn²(r - r')⟩_T, showing the power law behaviour characteristic of the quasi-long range order of the smectic (slab) phase:
$$\langle\delta n^2(\mathbf{r})\rangle_T \propto |z|^{-\eta}\,,\quad |z|\to\infty\qquad (34)$$
$$\langle\delta n^2(\mathbf{r})\rangle_T \propto r_\perp^{-2\eta}\,,\quad r_\perp\to\infty\,,\qquad (35)$$
where
$$\eta = \frac{q_0^2\, T}{8\pi\,\lambda\, C_0}\,,\qquad (36)$$
with q_0 = π/R_{W1}, λ² = R_{W1}²(1 + 2f - 2f²)/45, where f = R_1/R_{W1} is the average volume fraction of the cluster, and C_0 = 6E_C, where E_C is the average Coulomb energy density in the cell (Pethick & Potekhin 1998). More recently, the calculations of elastic constants were improved by Pethick et al. (2020), but for our estimates we stick to the simpler prescription of Pethick & Potekhin (1998). The numerical values of the η parameter as a function of the density, as numerically obtained from the average pasta configuration predicted by the RMF model, are displayed in the left part of Fig. 3 for different temperatures. As expected, the correlation decreases with temperature and density.
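As a rough numerical illustration of eq. (36), the exponent η and the correlation-reduction factor 2^{-η} used below in eq. (37) can be computed directly from these definitions. The input values in the sketch are invented placeholders, not the RMF ones behind Fig. 3.

```python
import numpy as np

# Assumed illustrative slab-cell parameters (NOT the model values behind Fig. 3):
RW1 = 20.0   # Wigner-Seitz cell size R_W1 (fm)
f = 0.5      # volume fraction R_1 / R_W1
T = 3.0      # temperature (MeV)
EC = 0.005   # average Coulomb energy density in the cell (MeV fm^-3), assumed

q0 = np.pi / RW1                                          # fm^-1
lam = RW1 * np.sqrt((1.0 + 2.0 * f - 2.0 * f**2) / 45.0)  # fm, from the lambda^2 definition above
C0 = 6.0 * EC                                             # MeV fm^-3
eta = q0**2 * T / (8.0 * np.pi * lam * C0)                # eq. (36), dimensionless
print(eta, 2.0 ** (-eta))                                 # 2^(-eta) enters the estimate of eq. (37)
```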
In the absence of a complete calculation of the correlation function, we limit ourselves in this paper to temperatures high enough for the hypothesis of uncorrelated scatterers to be realistic. To this aim, we plot in the center and right parts of Fig. 3 the quantity 2 −η as an estimation of the ratio between the correlation function at z = 2RW 1, corresponding to a distance containing two different slabs, and the same quantity at z = RW 1, such that a single slab is accounted for,
2 −η ≈ δn 2 (2RW 1) δn 2 (RW 1) .(37)
Even if these distances might be small to justify the use of the asymptotic behaviour given by eq. (34), the quantity 2 −η can be taken as an estimation of the correlation reduction due to thermal effects. From Fig.3 we can see that only at very high temperatures above 1 MeV the hypothesis of incoherent scattering appears justified. For the following numerical applications, we will focus on T = 3 MeV as a representative temperature value. Since the correlation asymptotically follows the same power law in the transverse as in the longitudinal plane see eq. (35), we define the effective length of the slab L1 from the same order-of-magnitude consideration:
$$\frac{\langle\delta n^2(L_1^{\rm eff})\rangle}{\langle\delta n^2(L_{W1})\rangle} \approx 2^{-\eta}, \qquad (38)$$
where $L_{W1}$ is defined by normalizing the slab WS volume to the droplet volume at identical thermodynamic conditions. Comparing eqs (37) and (38), we consider that the effective length of the slabs is
$$L_1^{\rm eff} = \sqrt{2}\, L_{W1}. \qquad (39)$$
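To make the origin of the $\sqrt{2}$ factor explicit (this short step is not spelled out in the text), the transverse power law of eq. (35) turns eq. (38) into
$$\left(\frac{L_1^{\rm eff}}{L_{W1}}\right)^{-2\eta} \approx 2^{-\eta} \;\Longrightarrow\; L_1^{\rm eff} \approx 2^{1/2}\, L_{W1},$$
i.e. the transverse correlation decays with twice the exponent of the longitudinal one, so the same degree of decorrelation is reached over a shorter relative distance.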
For the rods, in the absence of an exact calculation of $S(\mathbf{q})$, we assume the length of interest to be equal to that of the slabs if they were dominant at the same density, $L_2^{\rm eff} = L_1^{\rm eff}$. The resulting numerical values of the pasta length and proton number at T = 3 MeV are shown in Fig. 4.
Within the hypothesis of incoherent scatterings, the structure factor integral is limited to a single cell. Charge fluctuations within the cell being negligible, we can write $\langle n_p(\mathbf{r})\,n_p(\mathbf{r}')\rangle_T = \langle n_p(\mathbf{r})\rangle_T\,\langle n_p(\mathbf{r}')\rangle_T$, leading to:
$$S(\mathbf{q}) \approx Z^2\, n_i\, |F(\mathbf{q})|^2, \qquad (40)$$
with the form factor F (q) defined as the Fourier transform of the charge density normalized by the number of protons composing the cluster (Z):
$$F(\mathbf{q}) = \frac{1}{Z}\int_{WS} d^3r\; e^{i\mathbf{q}\cdot\mathbf{r}}\, n_p(\mathbf{r}), \qquad (41)$$
and the number density of targets within the medium as $n_i = 1/V$. Analytic expressions can be found for the form factor by direct integration of eq. (41) for spherical, cylindrical, and planar geometries (labelled d = 3, 2, 1, respectively):
$$F_d(\mathbf{q}) = \begin{cases} \dfrac{3}{(qR_3)^3}\left[\sin(qR_3) - qR_3\cos(qR_3)\right], & d=3 \\[2ex] \dfrac{2}{q_z L_2}\sin\!\left(\dfrac{q_z L_2}{2}\right)\,\dfrac{2}{q_\perp R_2}\,J_1(q_\perp R_2), & d=2 \\[2ex] \dfrac{2}{L_1 q_x}\sin\!\left(\dfrac{L_1 q_x}{2}\right)\,\dfrac{2}{L_1 q_y}\sin\!\left(\dfrac{L_1 q_y}{2}\right)\,\dfrac{1}{R_1 q_z}\sin(R_1 q_z), & d=1 \end{cases} \qquad (42)$$
Here, $q_x = q\sin\theta_q\cos\phi_q$, $q_y = q\sin\theta_q\sin\phi_q$, $q_z = q\cos\theta_q$, $q_\perp = \sqrt{q_x^2 + q_y^2}$, and $J_1$ is the cylindrical Bessel function,
$$J_1(x) = \frac{1}{i\pi}\int_0^\pi d\phi\; e^{ix\cos(\phi)}\cos(\phi). \qquad (43)$$
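For readers who want to evaluate eq. (42) numerically, the following minimal Go sketch implements the three form factors. The geometry parameters and momenta used in main are placeholder values chosen only for illustration; they are not taken from the paper.

```go
package main

import (
	"fmt"
	"math"
)

// sinc(x) = sin(x)/x with the x -> 0 limit handled.
func sinc(x float64) float64 {
	if math.Abs(x) < 1e-8 {
		return 1.0
	}
	return math.Sin(x) / x
}

// formFactorSphere is the d=3 case of eq. (42): a uniform sphere of radius R3.
func formFactorSphere(q, R3 float64) float64 {
	x := q * R3
	if x < 1e-8 {
		return 1.0
	}
	return 3.0 / (x * x * x) * (math.Sin(x) - x*math.Cos(x))
}

// formFactorRod is the d=2 case: a cylinder of radius R2 and length L2,
// with qz along the symmetry axis and qperp transverse to it.
func formFactorRod(qz, qperp, R2, L2 float64) float64 {
	axial := sinc(qz * L2 / 2.0)
	transverse := 1.0
	if qperp*R2 > 1e-8 {
		transverse = 2.0 * math.J1(qperp*R2) / (qperp * R2)
	}
	return axial * transverse
}

// formFactorSlab is the d=1 case: a slab of half-thickness R1 (along z)
// and transverse size L1 (along x and y).
func formFactorSlab(qx, qy, qz, R1, L1 float64) float64 {
	return sinc(L1*qx/2.0) * sinc(L1*qy/2.0) * sinc(R1*qz)
}

func main() {
	// Placeholder geometry (fm) and momentum (fm^-1), for illustration only.
	q, theta := 0.5, math.Pi/3
	qz, qperp := q*math.Cos(theta), q*math.Sin(theta)
	fmt.Printf("sphere: %.4f\n", formFactorSphere(q, 10.0))
	fmt.Printf("rod:    %.4f\n", formFactorRod(qz, qperp, 6.0, 40.0))
	fmt.Printf("slab:   %.4f\n", formFactorSlab(qperp, 0.0, qz, 4.0, 40.0))
}
```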
Finally, using the Fourier transform of the electric potential
$$U(\mathbf{q}) = \frac{4\pi e}{q^2\,\varepsilon(q)}, \qquad (44)$$
the transition matrix element is written as:
$$W_{pp'}(\mathbf{q}, \epsilon_p) = n_i\, e^2\left(1 - \frac{q^2}{4\epsilon_p^2}\right)\left|\frac{4\pi e\, Z\, F_d(\mathbf{q})}{q^2\,\varepsilon(q)}\right|^2, \qquad (45)$$
where the dielectric function ε(q) is introduced, regularizing the divergence at q = 0, to account for electron screening. A complete calculation within relativistic quantum mechanics, in the random phase approximation, gives (Jancovici 1962;Haensel et al. 2007):
$$\varepsilon(q) = 1 + \frac{k_{TF}^2}{q^2}\Bigg[\frac{2}{3} - \frac{2}{3}\,y^2\,\frac{x_r}{\gamma_r}\ln(x_r+\gamma_r) + \frac{x_r^2 + 1 - 3x_r^2 y^2}{6\,y\,x_r^2}\ln\left|\frac{1+y}{1-y}\right| + \frac{2y^2 x_r^2 - 1}{6\,y\,x_r^2}\,\frac{\sqrt{1+x_r^2 y^2}}{\gamma_r}\ln\left|\frac{y\gamma_r + \sqrt{1+x_r^2 y^2}}{y\gamma_r - \sqrt{1+x_r^2 y^2}}\right|\Bigg], \qquad (46)$$
Figure 5. Form factor squared of rods at the representative density n_B = 0.06 fm^-3 (left, yellow) and slabs at n_B = 0.08 fm^-3 (middle and right, blue) as a function of the azimuthal angle, for different transferred momenta q = k_F/2, k_F and 2k_F, shown as continuous, dashed and dotted curves, respectively. For slabs we fix φ_q = 0 (middle) and π/4 (right). The effective lengths L are taken from Figure 4. The temperature is fixed to T = 3 MeV.

where $y = q/(2p_F)$, $x_r = p_F/m_e$, $\gamma_r = \sqrt{1+x_r^2}$, and $k_{TF}$ is the Thomas-Fermi momentum, defined as:
$$k_{TF} = \sqrt{4\pi e^2\,\partial n_e/\partial\mu_e} = 2\sqrt{\alpha_{em}\,\gamma_r/(\pi x_r)}\; k_F, \qquad (47)$$
and the second equality supposes a strongly degenerate electron gas (Haensel et al. 2007). Though in this work we assume that the screening is isotropic, it is important to observe that strong magnetic fields lead to anisotropic behaviour and can produce Friedel oscillations (Horing 1969;Sharma & Reddy 2011). We leave the consideration of this source of further anisotropies to a future study.
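As an illustrative order of magnitude (not quoted in the text), in the ultrarelativistic regime $x_r \gg 1$ one has $\gamma_r \approx x_r$, and eq. (47) reduces to
$$\frac{k_{TF}}{k_F} \simeq 2\sqrt{\frac{\alpha_{em}}{\pi}} \approx 0.096,$$
so the screening momentum is roughly one tenth of the electron Fermi momentum, essentially independent of density.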
Collision frequencies and conductivities
The expression of the transition matrix in eq. (45) allows us to get the final result for the collision rates. We substitute this expression into eqs (29) and (30) and make the change of variables
$$d\Omega_p\, d\Omega_{p'} = \frac{2\pi}{p^2}\,\frac{d^3q}{q}, \qquad (48)$$
to get the collision frequencies
$$\nu_a(\epsilon_p) = \frac{12\pi\, n_i e^4 Z^2}{v\, p^2}\int_0^{2p}\! dq\; \frac{1}{q\,\varepsilon^2(q)}\left(1-\frac{q^2}{4\epsilon_p^2}\right)\int\frac{d\Omega_q}{4\pi}\,|F_d(\mathbf{q})|^2\cos^2\theta_q, \qquad (49)$$
$$\nu_p(\epsilon_p) = \frac{12\pi\, n_i e^4 Z^2}{v\, p^2}\int_0^{2p}\! dq\; \frac{1}{q\,\varepsilon^2(q)}\left(1-\frac{q^2}{4\epsilon_p^2}\right)\int\frac{d\Omega_q}{4\pi}\,|F_d(\mathbf{q})|^2\,\frac{1}{2}\sin^2\theta_q. \qquad (50)$$
Eqs (49) and (50) are the main results of the paper. Under the assumption of incoherent scatterings among the different pasta structures, and using the analytical expressions eq. (42) for the form factors, they allow calculating the thermal and electric conductivity of the pasta phase (see eq. (55) below) at any arbitrary temperature, density, proton fraction and magnetic field value from a given nuclear physics model providing the composition of the matter, namely the values of L, R and Z for the dominant pasta geometry. Some representative results will be given in the next section.
To compute the transport coefficients, the collision frequencies eqs (49) and (50) must be calculated at the Fermi energy $\epsilon_p = \epsilon_F$, since in the strongly degenerate electron gas transport occurs close to the Fermi surface. They can be written compactly as
$$\nu_K = \frac{12\pi\, n_i e^4 Z^2}{v_F\, p_F^2}\,\Lambda_K, \qquad (51)$$
where pF (vF ) is the Fermi momentum (velocity). We have also defined the axial (K = a) and perpendicular (K = p)
Coulomb logarithms as
$$\Lambda_K = \int_0^{2p_F}\frac{dq}{q}\,\frac{1}{\varepsilon^2(q)}\left(1-\frac{q^2}{4\epsilon_F^2}\right)\langle F^2\rangle_K, \qquad (52)$$
and the averages $\langle F^2\rangle_K$ as
$$\langle F^2\rangle_a = \int\frac{d\Omega_q}{4\pi}\,|F_d(\mathbf{q})|^2\cos^2\theta_q, \qquad (53)$$
and
$$\langle F^2\rangle_p = \int\frac{d\Omega_q}{4\pi}\,|F_d(\mathbf{q})|^2\,\frac{1}{2}\sin^2\theta_q. \qquad (54)$$
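The sketch below illustrates the structure of eqs (52)-(54) numerically for the slab geometry. It is not the paper's code: it uses the simple Thomas-Fermi screening $\varepsilon(q) = 1 + k_{TF}^2/q^2$ in place of the full eq. (46), midpoint quadrature for both integrals, natural units ($\hbar = c = 1$, momenta in fm^-1, lengths in fm), and placeholder values for $p_F$, $\epsilon_F$, $k_{TF}$, $R_1$ and $L_1$.

```go
package main

import (
	"fmt"
	"math"
)

// sinc(x) = sin(x)/x with the x -> 0 limit handled.
func sinc(x float64) float64 {
	if math.Abs(x) < 1e-8 {
		return 1.0
	}
	return math.Sin(x) / x
}

// slabFormFactor is the d=1 case of eq. (42) for a slab of half-thickness R1 and transverse size L1.
func slabFormFactor(qx, qy, qz, R1, L1 float64) float64 {
	return sinc(L1*qx/2) * sinc(L1*qy/2) * sinc(R1*qz)
}

// angularAverages returns <F^2>_a and <F^2>_p of eqs (53)-(54) by midpoint quadrature over the solid angle.
func angularAverages(q, R1, L1 float64, nTheta, nPhi int) (fa2, fp2 float64) {
	dTheta := math.Pi / float64(nTheta)
	dPhi := 2 * math.Pi / float64(nPhi)
	for i := 0; i < nTheta; i++ {
		theta := (float64(i) + 0.5) * dTheta
		st, ct := math.Sin(theta), math.Cos(theta)
		for j := 0; j < nPhi; j++ {
			phi := (float64(j) + 0.5) * dPhi
			qx, qy, qz := q*st*math.Cos(phi), q*st*math.Sin(phi), q*ct
			F2 := math.Pow(slabFormFactor(qx, qy, qz, R1, L1), 2)
			w := st * dTheta * dPhi / (4 * math.Pi) // dOmega / (4 pi)
			fa2 += w * F2 * ct * ct
			fp2 += w * F2 * 0.5 * st * st
		}
	}
	return fa2, fp2
}

// coulombLogs evaluates eq. (52) with eps(q) = 1 + kTF^2/q^2 as a stand-in for eq. (46).
func coulombLogs(pF, epsF, kTF, R1, L1 float64, nq int) (la, lp float64) {
	dq := 2 * pF / float64(nq)
	for i := 0; i < nq; i++ {
		q := (float64(i) + 0.5) * dq
		eps := 1 + kTF*kTF/(q*q)
		kin := 1 - q*q/(4*epsF*epsF)
		fa2, fp2 := angularAverages(q, R1, L1, 40, 40)
		common := dq / q / (eps * eps) * kin
		la += common * fa2
		lp += common * fp2
	}
	return la, lp
}

func main() {
	// Placeholder inputs, for illustration only.
	pF, epsF, kTF := 0.35, 0.36, 0.035
	R1, L1 := 4.0, 40.0
	la, lp := coulombLogs(pF, epsF, kTF, R1, L1, 200)
	fmt.Printf("Lambda_a = %.4e, Lambda_p = %.4e\n", la, lp)
}
```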
The calculation of the conductivities with the anisotropic collision frequencies has been worked out in Yakovlev (2015), so here we only report the main equations. The magnetic field is assumed, without loss of generality, to lie in the xz-plane. By defining the unit vector $\mathbf{b} = \mathbf{B}/B = b_x\hat{\mathbf{x}} + b_z\hat{\mathbf{z}}$ and the cyclotron frequency for electrons $\omega = eB/\epsilon_F$, the electric conductivity tensor can be written as

$$\hat{\sigma} = \frac{e^2 n_e}{m_e^*\,\Delta}\begin{pmatrix} \nu_a\nu_p + \omega^2 b_x^2 & -\omega b_z\nu_a & \omega^2 b_x b_z \\ \omega b_z\nu_a & \nu_a\nu_p & -\omega b_x\nu_p \\ \omega^2 b_x b_z & \omega b_x\nu_p & \nu_p^2 + \omega^2 b_z^2 \end{pmatrix}, \qquad (55)$$

where $\Delta = \nu_p^2\nu_a + \omega^2 b_x^2\nu_p + \omega^2 b_z^2\nu_a$. The thermal conductivity can be obtained by the Wiedemann-Franz law $\kappa_{ij} = \sigma_{ij}\,(\pi^2 T/3e^2)$ which is valid for strongly degenerate electrons (Ziman 2001). We do not consider neutron superfluidity, which results in corrections to the thermal conductivity, but not to the electrical one (Aguilera et al. 2009). For B = 0, the conductivity becomes

$$\hat{\sigma}_0 = \frac{e^2 n_e}{m_e^*}\begin{pmatrix} \nu_p^{-1} & 0 & 0 \\ 0 & \nu_p^{-1} & 0 \\ 0 & 0 & \nu_a^{-1} \end{pmatrix}. \qquad (56)$$
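To make the tensor structure of eq. (55) concrete, here is a minimal Go sketch that builds $\hat{\sigma}$ (in units of $e^2 n_e/m_e^*$) from given collision frequencies, cyclotron frequency and field direction; the numbers in main are arbitrary illustrative values.

```go
package main

import "fmt"

// conductivityTensor returns the 3x3 matrix of eq. (55) in units of e^2 n_e / m_e*,
// for axial and perpendicular collision frequencies nuA and nuP, cyclotron
// frequency w = eB/epsilon_F, and unit field vector b = (bx, 0, bz) in the xz-plane.
func conductivityTensor(nuA, nuP, w, bx, bz float64) [3][3]float64 {
	delta := nuP*nuP*nuA + w*w*bx*bx*nuP + w*w*bz*bz*nuA
	m := [3][3]float64{
		{nuA*nuP + w*w*bx*bx, -w * bz * nuA, w * w * bx * bz},
		{w * bz * nuA, nuA * nuP, -w * bx * nuP},
		{w * w * bx * bz, w * bx * nuP, nuP*nuP + w*w*bz*bz},
	}
	for i := range m {
		for j := range m[i] {
			m[i][j] /= delta
		}
	}
	return m
}

func main() {
	// Illustrative values only; field along z.
	fmt.Println(conductivityTensor(1.0, 2.0, 0.5, 0.0, 1.0))
	// As w -> 0 the result approaches diag(1/nuP, 1/nuP, 1/nuA), i.e. eq. (56).
	fmt.Println(conductivityTensor(1.0, 2.0, 1e-9, 0.0, 1.0))
}
```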
NUMERICAL RESULTS
The source of anisotropy entering the collision frequencies eq. (3) lies in the angular dependence of the form factors $F_d(\mathbf{q})$. The latter is displayed for rods and slabs in Fig. 5, as a function of the azimuthal angle $\theta_q$ (see Fig. 1) for different values of $q = k_F/2,\ k_F,\ 2k_F$. In these figures, the temperature is fixed to T = 3 MeV and the two representative densities $n_B = 0.06,\ 0.08$ fm$^{-3}$ are chosen, where rods (slabs) are expected to be dominant according to the results presented in Fig. 2. We can see from Fig. 5 that for the rods the form factor is strongly peaked at $\theta_q = \pi/2$, while for slabs it is peaked at $\theta_q = 0$ and $\pi$. Such behaviour is expected from their geometries, as the form factors are peaked in the elongated direction. However, this dependence is smoothed out by the angular average implied by eqs (53) and (54). This is shown in Fig. 6, which displays the averaged form factors $\langle F^2\rangle_{a,p}$ defined in eqs (53), (54). The average value, given by $2\langle F^2\rangle_p + \langle F^2\rangle_a$, is also plotted, in the same thermodynamic conditions as in Fig. 5. As we can expect from Fig. 1 and eq. (42), the form factor is maximum (minimum) in the symmetry axis direction in the case of slabs (rods). This difference is pronounced at momentum transfers smaller than $q = k_F$, as afterwards both axial and perpendicular components tend to zero. When comparing the form factors to the one of the equivalent spherical droplet (right side of Fig. 6), we can see that the difference is essentially seen at low momentum as well, where the form factor is systematically smaller within a deformed shape than for the equivalent spherical geometry.
In the previous figures, we have estimated the effective pasta length $L_d$ based on the asymptotic behaviour of the thermal density correlation function of the smectic phase in the perpendicular plane (see Figure 3 and associated discussion). Though qualitatively the physical origin of the electron-pasta scattering is certainly the breaking of the long-range order due to thermal fluctuations, our estimations are very rough and would deserve to be confronted with microscopic molecular dynamics simulations (Horowitz et al. 2015; Caplan et al. 2021; Newton et al. 2022; Pelicer et al. 2021; Nandi & Schramm 2018). To evaluate the effect of the uncertainty on the estimation of $L_d$, in Fig. 7 we show the axial and perpendicular Coulomb logarithms eq. (52) and the ratio of perpendicular to axial collision frequency eq. (51) as a function of the ratio of the pasta length $L_d$ to the WS radius $R_{Wd}$ for rods and slabs.
For both geometries, the Coulomb logarithm decreases with the increasing length of the pasta, and its value tends to zero as $L_d$ becomes sufficiently large. This is consistent with the expectation that the lattice order should suppress the electron-ion scattering and increase the conductivity of matter.
For rods (slabs), the perpendicular component is larger (smaller) than the axial one, and the difference between them increases with the growing length of the pasta, varying up to 100 (0.01) when $L_d \approx 150\, R_W$. We can see that a precise estimation of the length of the structures is important for the quantitative determination of the collision frequencies, as it affects in a considerable way the difference between the two scattering directions. In particular, the deviation from an isotropic scattering is small only for small values of $L_d/R_W$, corresponding to the high-temperature regime. At smaller temperatures, as correlations become more important, a larger transverse length will contribute to the scattering, so the difference in the anisotropic frequencies will be more pronounced, likely reaching those expected in Yakovlev (2015).
In Fig. 8 the Coulomb logarithms eq. (52) are shown as a function of the density for T = 1 and 3 MeV. In both cases, we can see that the abrupt change of favoured geometry leads to slight discontinuities in the Coulomb logarithms, and both overall decrease with density. This can be understood from the increase in length, shown in Fig. 4, and from Fig. 7.
The ratio of perpendicular to axial collision frequency eq. (51) and the average conductivity are shown in Fig. 9 for the same temperatures. The slight increase in the ratio with density is due to the increasing length. It is important to note that, at the temperatures and lengths we are considering, the different collision frequencies differ by a factor smaller than two, so there is only a small deviation from isotropic scattering at high temperatures. In the average conductivity, on the other hand, there is a larger discontinuity when the abrupt change of geometry happens. This difference is mainly due to the associated discontinuity in proton number, which can be seen on the right side of Fig. 4. All in all, the anomalously high resistivity of the pasta layer reported in the literature (Caplan et al. 2018) is nicely reproduced by our calculations, and it is seen to be essentially due to the high Z value of the clusters close to the crust-core transition, more than to the specific geometry of the pasta phases.
We now turn to the effect of the magnetic field on the conductivities. When including the magnetic field, we show all components of the conductivity in units of the dominant conductivity at zero magnetic field, i.e. the perpendicular (axial) conductivity for rods (slabs). In Figs 10 (11) we show the conductivity components when the magnetic field forms an angle $\theta_b = 0$, 45º, and 90º with the symmetry axis of the pasta. Different off-diagonal components appear depending on the angle of the magnetic field: if it lies in the symmetry axis of the pasta, only the perpendicular xy component is not zero, the zz component depends only on $\nu_a$, and the perpendicular xx and yy components are determined by both $\nu_p$ and B. If it lies perpendicularly to the symmetry axis, only zy is not zero, the zz component is determined by $\nu_a$ and B, and xx = yy only by $\nu_p$. At magnetic fields $B < 10^{18}$ G, the zz component is not drastically modified, transverse components increase (decrease) for rods (slabs), and the off-diagonal terms increase steadily. At $\sim 10^{18}$ G, the diagonal components parallel to the magnetic field are unaffected, but the perpendicular and off-diagonal components start to decrease. A magnetic field of this order is not far from the one expected at the very bottom of the inner crust of magnetars, which is about 20% of the field in the core (Chatterjee et al. 2019; Fujisawa & Kisaka 2014).
For the average conductivity, we follow Yakovlev (2015) once more, and assume the pasta takes random orientations with respect to the magnetic field since up to date there is no information regarding its orientation or prevalence of domains. To calculate the average parallel, perpendicular, and Hall terms we define a plane orthogonal to the magnetic field with the vectors $\mathbf{e}_1$, $\mathbf{e}_2 = \mathbf{e}_1\times\mathbf{b}$ and make the projections: $\sigma_\parallel = \mathbf{b}\cdot\hat{\sigma}\cdot\mathbf{b}$, $\sigma_\perp = \mathbf{e}_1\cdot\hat{\sigma}\cdot\mathbf{e}_1$ and $\sigma_H = \mathbf{e}_1\cdot\hat{\sigma}\cdot\mathbf{e}_2$, such that averaging the coefficients over all directions leads to:
$$\begin{pmatrix}\sigma_\parallel \\ \sigma_\perp \\ \sigma_H\end{pmatrix} = \frac{e^2 n_e}{m_e^*\,\omega^2}\begin{pmatrix}(\omega^2+\nu_p\nu_a)(\nu_p^2+\omega^2)\,H - \nu_p \\[0.7ex] \tfrac{1}{2}\left[\nu_a\nu_p(\omega^2-\nu_p^2)\,H + \nu_p\right] \\[0.7ex] \omega\left(1 - \nu_a\nu_p^2\,H\right)\end{pmatrix}, \qquad H = \begin{cases}(sr)^{-1}\arctan(s/r) & \nu_a > \nu_p \\ (sr)^{-1}\,\mathrm{arctanh}(s/r) & \nu_a < \nu_p \\ (\nu_a^3 + \omega^2\nu_a)^{-1} & \nu_a = \nu_p\end{cases} \qquad (57)$$
where $s = \omega\sqrt{|\nu_a - \nu_p|}$ and $r = \sqrt{\nu_p(\omega^2 + \nu_a\nu_p)}$. For $B \to 0$ we get
$$\sigma_\perp = \sigma_\parallel = \frac{e^2 n_e}{m_e^*}\,\langle\nu^{-1}\rangle, \qquad \langle\nu^{-1}\rangle = \frac{1}{3}\left(\frac{2}{\nu_p} + \frac{1}{\nu_a}\right), \qquad (58)$$
and the Hall parameter is zero, $\sigma_H \to 0$. One must notice that the average conductivity is proportional to the average of the inverse of $\nu$, and not $\nu$ itself, therefore its calculation does not amount to averaging the matrix element over $\Omega_q$.
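As a consistency check (not carried out explicitly in the text), setting $\nu_a = \nu_p \equiv \nu$ in eq. (57), so that $H = [\nu(\nu^2+\omega^2)]^{-1}$, the three components reduce to
$$\frac{e^2 n_e}{m_e^*}\left(\frac{1}{\nu},\ \frac{\nu}{\nu^2+\omega^2},\ \frac{\omega}{\nu^2+\omega^2}\right),$$
i.e. the familiar relaxation-time results for a magnetized electron gas, and eq. (58) is indeed recovered in the limit $\omega \to 0$.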
In Fig. 12 we show the average conductivities for rods and slabs, respectively, in units of the average conductivity with B = 0. This figure can be compared with Fig. 3 of Yakovlev (2015), though the parallel component in our calculation is not so different from the perpendicular one due to the small difference we found between the anisotropic frequencies, unlike the assumption of Yakovlev (2015). To conclude this discussion, it is important to note that for all the calculations reported in this paper, the inner crust structure was computed without accounting for the magnetic field. Numerous studies exist in the literature addressing this point, using CLD or Thomas-Fermi techniques with different nuclear models, see e.g. Nandi et al. (2011); de Lima et al. (2013); Bao et al. (2021); Wang et al. (2022). The general result of these works is that only extreme magnetic fields of the order of $B = 10^{18}$ G affect the density profiles of the Wigner-Seitz cells, with an increased average proton fraction, particularly in the outer part of the inner crust dominated by spherical nuclei, and an increase of the charge of the pasta structures, that however does not exceed ≈ 10-20%. These modifications would not affect the results presented in Figs 11 and 12, and would lead to an extra decrease of the conductivity in Fig. 9, since $\sigma \propto Z^{-2}$, see eq. (51).
CONCLUSIONS
In this paper, we studied the collision frequencies for elastic scattering between electrons and two different pasta phase structures. To do so, we performed an expansion of the collision integral in spherical harmonics, which allowed us to treat also the scattering with non-spherical targets. We applied this framework to calculate the electrical conductivity tensor. The form factor of the pasta structures was evaluated by direct integration, although we neglected contributions from the structure factor, which is equivalent to assuming that electron scatterings with different pasta targets are completely incoherent. This is a reasonable first approximation at high temperatures of the order of the MeV or above. More work is needed to evaluate the anisotropy of the transport coefficients at lower temperatures where the lattice long-range order along the pasta symmetry axis is likely to be preserved.
We find that anisotropic collision frequencies are highly dependent on the length of the pasta structures. In the high-temperature regime, where the effective length that participates in the scattering is comparable to the WS length, the anisotropy is small and affects mainly the components of the conductivity perpendicular to the pasta symmetry axis. It should be emphasized that neutron star properties are expected to be significantly impacted by the presence of different and mixed pasta geometries (Caplan et al. 2021; Schneider et al. 2016; Newton et al. 2022), and their (possibly disordered) mesoscopic arrangement. Unfortunately, information is still lacking on how pasta domains, defects, and impurities appear at larger scales, but the presence of this kind of disorder is considered to be a likely feature of the pasta layers (Caplan et al. 2021; Schneider et al. 2016; Newton et al. 2022; Pelicer et al. 2021). Our treatment, at the moment, does not include precise corrections coming from scattering with domain boundaries and mixed geometries. Future investigations of these matters must be incorporated within the present framework to improve it. Our numerical results are based on the IUFSU force, a simplified modelling of the pasta phase using a one-component liquid drop approach, and an estimation of the pasta sizes based on the values of the correlation between neighbouring WS cells. However, the analytical results are general and can be used to calculate the transport properties of the inner crust, with and without a magnetic field, by using any microscopic estimation of the pasta linear dimensions and proton number from microscopic mean-field or molecular dynamics simulations.

Figure 10. Components of the electric conductivity in units of the perpendicular conductivity at B = 0, as a function of the magnetic field for rods at n_B = 0.06 fm^-3. The angle between the pasta symmetry axis and the magnetic field is fixed at 0 (left), 45º (center) and 90º (right). In the top axis, we show the variable x_p = eB/(ε_F ν_p).

Figure 11. Components of the electron conductivity in units of the axial conductivity at B = 0, as a function of the magnetic field for slabs at n_B = 0.08 fm^-3. The angle between the pasta symmetry axis and the magnetic field is fixed at 0 (left), 45º (center) and 90º (right). In the top axis, we show the variable x_a = ω/ν_a.

Figure 12. Average parallel, perpendicular and Hall conductivities for randomly oriented rods (left) and slabs (right) at n_B = 0.06 fm^-3 and n_B = 0.08 fm^-3, respectively. In the top axis, we show the variable x_i = ω/ν_i, with i = p (a) on the left (right).
Figure 2. Linear and Wigner-Seitz radius of the pasta as a function of density in dashed and full curves for T = 0.5, 1, 3 and 5 MeV (panels a, b, c and d, respectively). The different pasta geometries are indicated with different colours. The results for T = 0 and T = 0.5 MeV are indistinguishable.

Figure 3. Left panel: correlation function decay parameter η for slabs. Middle (right) panel: estimation of the correlation between neighbouring slab structures as a function of density (temperature), see text for details. Curves for T = 0.1, 0.5, 1, 3, and 5 MeV are shown on the left and center, and for n_B = 0.07, 0.075 and 0.08 fm^-3 on the right.

Figure 4. Left: Effective length of the pasta transverse to the symmetry axis, normalized to the Wigner-Seitz radius (see text). Right: Proton number of the pasta as a function of density. The different pasta geometries are indicated with different colours. The temperature is fixed to T = 3 MeV.

Figure 6. Left panels (a,b): axial (full) and perpendicular (dashed) square form factor of rods (yellow) and slabs (blue) at n_B = 0.06 (a) and 0.08 (b) fm^-3. Right panels (c,d): average square form factors of rods (yellow, c panel) and slabs (blue, d panel) are compared to the ones of droplets (magenta) at the same density. The representative temperature T = 3 MeV is chosen.

Figure 7. The Coulomb logarithms (upper part) and the ratio of perpendicular to axial collision frequency (lower part) are shown as a function of the pasta length normalized to the Wigner-Seitz radius, L_d/R_W. Quantities for rods (slabs) are calculated at n_B = 0.06 fm^-3 (n_B = 0.08 fm^-3) and plotted in yellow (blue). In the upper panel, the perpendicular (axial) components are displayed as dashed (continuous) lines. The points in the lower plot indicate the estimated effective length L_1^eff = √2 L_{W1} (see text). The representative temperature T = 3 MeV is chosen.

Figure 8. Axial (continuous) and perpendicular (dashed) Coulomb logarithms as a function of baryon density, for T = 1 MeV (left panel) and T = 3 MeV (right panel). The different pasta geometries are indicated with different colours.

Figure 9. Top: ratio of perpendicular to axial relaxation time. Bottom: Average electric conductivity. Curves are shown as a function of density for the representative temperatures of T = 1 (dashed) and 3 (continuous) MeV. The different pasta geometries are indicated with different colours.
The multiplicity l of the spherical harmonics coincides with the power of p in an equivalent expansion in homogeneous harmonic polynomials since they are isomorphic (Gallier 2013; Freire 2022).
This last quantity does not play any role in the calculations concerned by the present paper.
ACKNOWLEDGEMENTS

This work is a part of the project INCT-FNA Proc. No. 464898/2014-5, and of the Master project In2p3 NewMAC. D.P.M. is partially supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq/Brazil) under grant 303490/2021-7, and M.R.P. is supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico - Brasil (CNPq) and with a doctorate scholarship by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (Capes/Brazil). M.R.P. also acknowledges partial support from LPC Caen. MA acknowledges partial support from PHAROS, COST Action CA16214.

DATA AVAILABILITY

No new data were generated or analysed in support of this research.

APPENDIX A: ISOTROPIC LIMIT

In this appendix, we show that the isotropic limit is obtained from eq. (14) when $W_{pp'}$ is a function only of $|\mathbf{q}|$. The equation obtained is equivalent to eq. (3.135) of Pines & Nozières (2018). In the isotropic case, there is no change in the l index of spherical harmonics during the collision, and the sum of the m and m' indexes is zero, since $W_{pp'}$ is a function of the relative angle between $\mathbf{p}$ and $\mathbf{p}'$, therefore:

This can be understood from the expansion in eq. (8), where in the isotropic case the pair of spherical harmonics must be replaced by the Legendre polynomial. Eq. (14) can be rewritten as

We simplify this expression by utilizing the following property of the 3j-Wigner symbols:

and defining $W_{lm} = W_{lm\,l-m}$, such that:

To recover the usual integral equation with the transition matrix element we use eq. (28), with the aid of eq. (48), into eq. (A4), and average over the m index:

where $\cos\xi = \mathbf{p}\cdot\mathbf{p}'/(|\mathbf{p}||\mathbf{p}'|)$ and, changing variables as $2q\,dq = -p^2\, d(\cos\xi)$, we obtain

which is equivalent to eq. (3.135) of Pines & Nozières (2018) for electron scattering with isotropic targets. We use $q^2 = p_F^2(1-\cos\xi)$ to change variables and write, for l = 1:

Using eq. (31), we recover eq. (2). Likewise, the viscosity can be obtained with the l = 2 term, see eqs (2) and (3) of Chugunov & Yakovlev (2005):
REFERENCES

Aguilera D. N., Cirigliano V., Pons J. A., Reddy S., Sharma R., 2009, Phys. Rev. Lett., 102, 091101
Alloy M. D., Menezes D. P., 2011, Phys. Rev. C, 83, 035803
Antonelli M., Haskell B., 2020, MNRAS, 499, 3690
Avancini S. S., Menezes D. P., Alloy M. D., Marinelli J. R., Moraes M. M. W., Providência C., 2008, Phys. Rev. C, 78, 015802
Avancini S. S., Brito L., Marinelli J. R., Menezes D. P., de Moraes M. M. W., Providencia C., Santos A. M., 2009, Phys. Rev. C, 79, 035804
Avancini S. S., Barros C. C., Brito L., Chiacchiera S., Menezes D. P., Providência C., 2012, Phys. Rev. C, 85, 035806
Baiko D. A., Kaminker A. D., Potekhin A. Y., Yakovlev D. G., 1998, Phys. Rev. Lett., 81, 5556
Bao S. S., Hu J. N., Shen H., 2021, Phys. Rev. C, 103, 015804
Berestetskii V., Lifshitz E., Pitaevskii V., 1971, Course of Theoretical Physics. Pergamon International Library of Science
Bransgrove A., Levin Y., Beloborodov A., 2018, MNRAS, 473, 2771
Brink D., Satchler G., 1968, Theory of Angular Momentum
Camelio G., Gavassino L., Antonelli M., Bernuzzi S., Haskell B., 2022, arXiv e-prints, p. arXiv:2204.11809
Caplan M. E., Schneider A. S., Horowitz C. J., 2018, Phys. Rev. Lett., 121, 132701
Caplan M. E., Forsman C. R., Schneider A. S., 2021, Phys. Rev. C, 103, 055810
Carreau T., Fantina A. F., Gulminelli F., 2020, A&A, 640, A77
Chamel N., Haensel P., 2008, Living Reviews in Relativity, 11, 10
Chandrasekhar S., 1992, Liquid Crystals. Cambridge University Press
Chatterjee D., Novak J., Oertel M., 2019, Phys. Rev. C, 99, 055811
Chugunov A. I., Yakovlev D. G., 2005, Astron. Rep., 49, 724
Deibel A., Cumming A., Brown E. F., Reddy S., 2017, ApJ, 839, 95
Dinh Thi H., Fantina A. F., Gulminelli F., 2021, Eur. Phys. J. A, 57, 296
Edmonds A. R., 2016, Angular Momentum in Quantum Mechanics. Princeton University Press
Fattoyev F. J., Horowitz C. J., Piekarewicz J., Shen G., 2010, Phys. Rev. C, 82, 055803
Flowers E., Itoh N., 1976, ApJ, 206, 218
Freire A., 2022, Partial Differential Equations. University of Tennessee
Fujisawa K., Kisaka S., 2014, MNRAS, 445, 2777
Gallier J., 2013, Notes on Spherical Harmonics and Linear Representations of Lie Groups. University of Pennsylvania
Haensel P., Potekhin A. Y., Yakovlev D. G., 2007, Neutron Stars 1: Equation of State and Structure. Astrophysics and Space Science Library Vol. 326, Springer Science & Business Media
Hambaryan V., Suleimanov V., Haberl F., Schwope A. D., Neuhäuser R., Hohle M., Werner K., 2017, A&A, 601, A108
Hashimoto M., Seki H., Yamada M., 1984, Progress of Theoretical Physics, 71, 320
Heiselberg H., Pethick C. J., 1993, Phys. Rev. D, 48, 2916
Horing N. J., 1969, Phys. Rev., 186, 434
Horowitz C. J., 2010, Phys. Rev. D, 81, 103001
Horowitz C. J., Berry D. K., 2008, Phys. Rev. C, 78, 035806
Horowitz C. J., Pérez-García M. A., Carriere J., Berry D. K., Piekarewicz J., 2004, Phys. Rev. C, 70, 065806
Horowitz C. J., Berry D. K., Briggs C. M., Caplan M. E., Cumming A., Schneider A. S., 2015, Phys. Rev. Lett., 114, 031102
Jancovici B., 1962, Il Nuovo Cimento (1955-1965), 25, 428
Landau L. D., Lifshitz E. M., 1969, Statistical Physics. Pergamon Press, Oxford
Lin Z., Caplan M. E., Horowitz C. J., Lunardini C., 2020, Phys. Rev. C, 102, 045801
Montoli A., Antonelli M., Magistrelli F., Pizzochero P. M., 2020, A&A, 642, A223
Nandi R., Schramm S., 2018, ApJ, 852, 135
Nandi R., Bandyopadhyay D., Mishustin I. N., Greiner W., 2011, ApJ, 736, 156
Nandkumar R., Pethick C. J., 1984, MNRAS, 209, 511
Newton W. G., 2013, Nature Physics, 9, 396
Newton W. G., Cantu S., Wang S., Stinson A., Kaltenborn M. A., Stone J. R., 2022, Phys. Rev. C, 105, 025806
Oyamatsu K., 1993, Nucl. Phys. A, 561, 431
Page D., Reddy S., 2012, arXiv e-prints, p. arXiv:1201.5602
Parmar V., Das H. C., Kumar A., Kumar A., Sharma M. K., Arumugam P., Patra S. K., 2022, Phys. Rev. D, 106, 023031
Pelicer M. R., Menezes D. P., Barros C. C., Gulminelli F., 2021, Phys. Rev. C, 104, L022801
Pethick C., Potekhin A., 1998, Physics Letters B, 427, 7
Pethick C. J., Zhang Z., Kobyakov D. N., 2020, Phys. Rev. C, 101, 055802
Pines D., Nozières P., 2018, Theory of Quantum Liquids: Normal Fermi Liquids. CRC Press
Poniewierski A., Holyst R., Price A., Sorensen L., Kevan S., Toner J., 1998, Phys. Rev. E, 58, 2027
Pons J. A., Viganò D., 2019, Living Reviews in Computational Astrophysics, 5
Pons J., Viganò D., Rea N., 2013, Nature Physics, pp 431-434
Potekhin A. Y., 1999, A&A, 351, 787
Potekhin A. Y., Chabrier G., 2021, A&A, 645, A102
Potekhin A. Y., Baiko D. A., Haensel P., Yakovlev D. G., 1999, A&A, 346, 345
Racah G., 1942a, Phys. Rev., 61, 186
Racah G., 1942b, Phys. Rev., 62, 438
Ravenhall D. G., Pethick C. J., Wilson J. R., 1983, Phys. Rev. Lett., 50, 2066
Schmitt A., Shternin P., 2018, in Rezzolla L., Pizzochero P., Jones D., Rea N., Vidaña I., eds, Astrophysics and Space Science Library Vol. 457, The Physics and Astrophysics of Neutron Stars. Springer
Schneider A. S., Berry D. K., Briggs C. M., Caplan M. E., Horowitz C. J., 2014, Phys. Rev. C, 90, 055805
Schneider A. S., Berry D. K., Caplan M. E., Horowitz C. J., Lin Z., 2016, Phys. Rev. C, 93, 065806
Schuetrumpf B., Martínez-Pinedo G., Reinhard P.-G., 2020, Phys. Rev. C, 101, 055804
Sharma R., Reddy S., 2011, Phys. Rev. C, 83, 025803
Shternin P. S., Yakovlev D. G., 2006, Phys. Rev. D, 74, 043004
Sotani H., 2012, in 21st Workshop on General Relativity and Gravitation in Japan. pp 100-103
Sykes J., Brooker G., 1970, Annals of Physics, 56, 1
Tan C. M., et al., 2018, ApJ, 866, 54
Wang X., Li J., Fang J., Pais H., Providência C., 2022, Phys. Rev. D, 105, 063004
Watanabe G., Iida K., Sato K., 2000, Nuclear Physics A, 676, 455
Yakovlev D. G., 2015, MNRAS, 453, 581
Yakovlev D. G., Urpin V. A., 1980, Soviet Astronomy, 24, 303
Ziman J. M., 2001, Electrons and Phonons: The Theory of Transport Phenomena in Solids. Oxford University Press
de Gennes P. G., Prost J., 1993, The Physics of Liquid Crystals. Clarendon, Oxford
de Lima R. C. R., Avancini S. S., Providência C., 2013, Phys. Rev. C, 88, 035804
STATUS OF THE NSCL CYCLOTRON GAS STOPPER*

N. Joshi, G. Bollen, M. Brodeur, D. J. Morrissey, S. Schwarz
NSCL/MSU, East Lansing, MI 48823, USA

Abstract

A gas-filled reverse cyclotron for the thermalisation of energetic beams is under construction at NSCL/MSU. Rare isotopes produced via projectile fragmentation after in-flight separation will be injected into the device and converted into low-energy beams through buffer gas interactions as they spiral towards the centre of the device. The extracted thermal beams will be used for low energy experiments such as precision mass measurements with traps or laser spectroscopy, and further transport for reacceleration. Detailed calculations have been performed to optimize the magnetic field design as well as the transport and stopping of ions inside the gas. An RF-carpet will be used to transport the thermal ions to the axial extraction point. The calculations indicate that the cyclotron gas stopper will be much more efficient for the thermalisation of light and medium mass ions compared to linear gas cells. In this contribution we will discuss simulations of the overall performance and acceptance of machine, the beam matching calculations to the fragment separator emittance, and the construction status.
INTRODUCTION
The fragmentation of fast heavy-ion projectiles enables fast, chemistry-independent production, separation and delivery of exotic isotopes. The resulting beams of exotic nuclei have high energies (>50 MeV/u) and large emittances due to the production process. The high energy ions are passed through isotope separators followed by a momentum compressor and wedge degrader. These fast ions are slowed down by solid degraders before injection into the gas cell [1]. The thermalisation and extraction processes in linear gas cell are limited due to large stopping range for light ions and space charge created during the slowing down process. An alternate approach consists of applying a strong axial magnetic field that forces the ions to follow spiral trajectory in the gas. Thermalised ions are then transported using an RF-carpet to the central extraction orifice. Afterwards these ions are transformed into low energy beams using a differentially pumped ion guide and transported for either low energy experiments or for reacceleration. Initial calculations for stopping in such device were presented in [2]. Here, we present simulation results with improved models for stopping, optimisation of the parameters and report the current status of the project.
BEAM STOPPING AND CYCSTOP CODE
The energetic ions are injected on the outer radius of the cyclotron. Due to the interaction with the buffer gas, the ions lose energy and follow a spiral path inward. The radius ρ of a certain ion is given by the expression:
$$\rho = \frac{p}{Bq} = \frac{\sqrt{2mE}}{Bq}, \qquad (1)$$
where p is the momentum, B the magnetic field, q the charge state, m the mass, and E the energy of the ion. The ion dynamics was simulated using a code named CycSTOP. In addition to particle tracking in the magnetic field, the code includes:
• ATIMA module: for the interaction of the ion beam with solids.
• SRIM routine: for energy loss of ions in the buffer gas.
• CX package: for the charge exchange process taking place in gases and solids.
• SAS package: for small angle multiple scattering in gases.

Fig. 2 shows a model of the cyclotron gas stopper including a magnetic field map and calculated beam envelopes. The energetic beam (magnetic rigidity Bρ = 2.6 Tm) after injection passes through a vacuum window, made of a metal foil, e.g. Al or Be, before it hits the glass degrader which brings the energy of the ion down to match a magnetic rigidity of 1.6 Tm. The ions spiral inward towards the centre and are considered as stopped once the energy falls below the cut-off value of 1 keV. The ions are regarded as lost if they hit the injection channel, the axial or radial wall, or stay in the degrader.
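To illustrate the rigidity relation of eq. (1), the short Go sketch below converts kinetic energy per nucleon into a non-relativistic magnetic rigidity Bρ. The mass number, charge state and energies used in main are purely illustrative assumptions (a fully stripped 79Br ion), not parameters quoted in the paper.

```go
package main

import (
	"fmt"
	"math"
)

// rigidity returns the magnetic rigidity B*rho in T*m for an ion of mass number a,
// charge state q (in units of e) and kinetic energy eMeVPerU in MeV/u, using the
// non-relativistic relation p = sqrt(2 m E) of eq. (1).
func rigidity(a, q, eMeVPerU float64) float64 {
	const amuMeV = 931.494       // atomic mass unit in MeV/c^2
	m := a * amuMeV              // ion mass in MeV/c^2
	e := a * eMeVPerU            // total kinetic energy in MeV
	p := math.Sqrt(2 * m * e)    // momentum in MeV/c
	return p / (299.792458 * q)  // B*rho [T m] = p [MeV/c] / (299.792458 * q)
}

func main() {
	// Illustrative only: a fully stripped 79Br ion (q = 35) at a few energies.
	for _, e := range []float64{10.0, 25.0, 50.0} {
		fmt.Printf("E = %.0f MeV/u -> Brho = %.2f T m\n", e, rigidity(79, 35, e))
	}
}
```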
____________________________________________
* Work supported by NSF
# [email protected]

The CycSTOP code has been upgraded to describe the slowing down process more realistically since the original work described in Ref. [2], with the following improvements:
• The tables of charge exchange cross sections have been rebuilt. At high energies the ETACHA code [3] was used to calculate the single electron capture and loss cross sections. At low energies Schlachter's empirical scaling rule [4] was used to calculate the single electron capture, and the equilibrium charge state was calculated using Schiwietz's formula [5]. The electron loss cross sections are based on Franzke's method [6]. The high and low energy results were joined by interpolation.
• Small angle multiple scattering [7] has been added and tested against data available in the literature. The CycSTOP code was modified to allow gas targets with arbitrary thickness and atomic number.
• The particle mover has been changed to account for relativistic motion of ions with the implementation of the Boris method [8]. The relativistic γ factor typically lies in the range of 1.05-1.07 for the injection energies. The relativistic and non-relativistic calculations were found to differ in stopping efficiency in the range of 3-5%.

The sectored magnet consists of a hill and valley profile with 3-fold symmetry to provide axial focussing. The magnetic field with a maximum strength of about 2.7 T will be generated by a superconducting coil excited by currents up to 320 kA. The magnetic field configuration was optimized in order to achieve maximum stopping efficiency. The magnet discussed in [9] (version S13) required further modification to gain a higher acceptance for lighter ions.
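The Boris method mentioned above advances the velocity through the magnetic rotation in half-steps. The sketch below shows only the field-only rotation (no electric field), with unit constants and arbitrary illustrative values; it is a generic illustration of the algorithm, not the actual CycSTOP particle mover.

```go
package main

import "fmt"

// Vec3 is a simple 3-vector.
type Vec3 struct{ X, Y, Z float64 }

func (a Vec3) Add(b Vec3) Vec3 { return Vec3{a.X + b.X, a.Y + b.Y, a.Z + b.Z} }
func (a Vec3) Cross(b Vec3) Vec3 {
	return Vec3{a.Y*b.Z - a.Z*b.Y, a.Z*b.X - a.X*b.Z, a.X*b.Y - a.Y*b.X}
}
func (a Vec3) Scale(s float64) Vec3 { return Vec3{a.X * s, a.Y * s, a.Z * s} }
func (a Vec3) Dot(b Vec3) float64   { return a.X*b.X + a.Y*b.Y + a.Z*b.Z }

// borisRotate advances a charged particle's velocity through a pure magnetic-field
// rotation over one time step dt (no electric field). qOverMGamma is q/(m*gamma);
// since a magnetic field alone conserves |v| (and hence gamma), treating gamma as
// constant during the step is consistent for this rotation.
func borisRotate(v, b Vec3, qOverMGamma, dt float64) Vec3 {
	t := b.Scale(qOverMGamma * dt / 2) // half-step rotation vector
	s := t.Scale(2 / (1 + t.Dot(t)))   // full-step rotation vector
	vPrime := v.Add(v.Cross(t))
	return v.Add(vPrime.Cross(s))
}

func main() {
	// Illustrative values only: uniform field along z, initial velocity along x.
	v := Vec3{1, 0, 0}
	b := Vec3{0, 0, 1}
	for i := 0; i < 5; i++ {
		v = borisRotate(v, b, 1.0, 0.1)
		fmt.Printf("step %d: v = %+v, |v|^2 = %.6f\n", i+1, v, v.Dot(v))
	}
}
```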
To calculate the acceptance, the ions were distributed at the injection plane (about 2 m outside) with all possible positions and angles that correspond to a homogeneous distribution in the phase-space. The ions which successfully stopped were sampled to get the acceptance for a particular configuration. Fig. 3 shows an example of such an acceptance plot for 79 Br ions calculated at a gas pressure of 100 mbar. Table 1 compares the results for the two magnet designs, the S13 with a pole gap of 75 mm and the S17 with a gap of 90 mm, in terms of acceptances at a gas pressure of 100 mbar. It can be seen that the axial acceptance (ε_z) is considerably increased for 24 O, by about 23%. No significant change was found in the radial phase-space (ε_r). A few more iterations of the magnet design were performed for the pole profile to reduce axial losses contributed by the Walkinshaw resonance. The magnet design was finalised with a version called S20.
The beam optics calculation for the beam passing through the A1900 separator was done using a combination of two well-known codes, TRANSPORT and LISE++ [10,11]. The beam emittances from LISE++ were matched to the calculated acceptances. Table 2 gives the axial and radial acceptances for the S20 magnet and emittances calculated by LISE++ for selected ion species. The calculations show promising efficiencies even for lighter ions like 24 O at a low gas pressure of 100 mbar. The energy spread acceptance for the cyclotron gas stopper is dominated by the interaction with the degrader. Ions moving too slowly are lost inside the degrader and ions with too high energy are lost radially due to a rigidity mismatch. The energy spread acceptance is asymmetric, ranging typically from -10% to +5%. The expected energy spread calculated with LISE++ is symmetric with a Gaussian distribution. Table 3 compares the typical values for energy acceptance and values obtained with LISE++ after the momentum compressor. Fig. 4 shows the variation in the stopping efficiency as a function of gas pressure. It can be seen that the cyclotron gas stopper can even be operated at low gas pressure in the case of heavier ions, whereas in the case of lighter ions at a nominal pressure of 100 mbar it exhibits an efficiency of more than 65%, which can be further increased by increasing the gas pressure to 150 mbar.
RF-CARPET
The ions, once thermalised, need to be transported radially from the outer peripheral region towards a central aperture for extraction. An RF-carpet will be employed for this task. The carpet will be driven in so-called ion-surfing mode [12], which replaces the drag field with a travelling wave. Experiments were performed using two different isotopes. 85 Rb + ions were successfully transported over distances up to 40 cm with gas pressures reaching 240 mbar. For a pressure of 120 mbar, 85 Rb + ions were shown to sustain axial push fields of over 45 V/cm, and reached transport velocities of 75 m/s. Efficient transport of 39 K + ions was also achieved at a pressure of 80 mbar with an axial push field of 20 V/cm and velocities of 50 m/s.
CURRENT STATUS
Detailed ion stopping simulations have been performed to optimise the acceptance of the cyclotron gas stopper. The magnet design is finalized. The project has entered into the design and manufacturing phase. The pole pieces were ordered and are being machined, and the yoke steel is being manufactured. A considerable amount of effort is being devoted to designing the vacuum vessel, cryostat, and mountings. The whole assembly is scheduled to be tested around the middle of 2013. The scaled-up version of the RF-carpet and the extraction system using an ion guide are being designed. Both of these will be tested separately before final installation into the gas stopper. The final commissioning of the cyclotron gas stopper is expected late in 2013.
Figure 1: Layout of the beam line depicting the cyclotron gas stopper in the N4 vault of the NSCL experimental hall.

Figure 2: The cut-view model of the cyclotron gas stopper. The magnetic field map is overlaid along with beam envelopes for the two ion species 24 O and 79 Br.

Figure 3: Example of an acceptance plot for 79 Br ions at a gas pressure of 100 mbar. The phase-space is colour coded according to the fate of ions.

Figure 4: Stopping efficiency as a function of buffer gas pressure for different ions.
Table 1: Acceptance comparison for two versions of the magnet for different ion species. All values have units of π-mm-mrad.

Ion     ε_r (S13)   ε_z (S13)   ε_r (S17)   ε_z (S17)
Table 2: Acceptance for the S20 magnet, input emittances for different ions calculated from LISE++, and the corresponding efficiency. All values have units of π-mm-mrad.

Ion     ε_r (S20)   ε_z (S20)   ε_r (LISE)   ε_z (LISE)   Efficiency (%)
79 Br   897         1190        227          424          98.10
56 Fe   740         1165        153          419          96.85
40 Si   853         1187        336          1098         86.50
24 O    707         1179        1550         1038         64.90
Table 3: Longitudinal energy acceptance for the S20 magnet and energy spread calculated from LISE++.

Ion            79 Br            56 Fe             40 Si             24 O
ΔE/E (S20)     -10.6% ~ +5%     -10.1% ~ +2.1%    -10.1% ~ +2.8%    -9.5% ~ +0.9%
ΔE/E (LISE)    ±1.3%            ±1.3%             ±1.4%             ±4.6%
ACKNOWLEDGMENT

The project is supported by the National Science Foundation under grant PHY-09-58726.
REFERENCES

[1] L. Weissman et al., NIM A 531 (2004) 416.
[2] G. Bollen et al., NIM A 550 (2005) 27-38.
[3] J. P. Rozet et al., NIM B 107 (1996) 67-70.
[4] A. S. Schlachter et al., Phys. Rev. A 27 (1983) 27.
[5] G. Schiwietz et al., NIM B 175-177 (2001) 125.
[6] B. Franzke, CERN Yellow Report 92-01 (1992) 100.
[7] G. Amsel et al., NIM B 201 (2003) 325-388.
[8] J. P. Boris, Proc. Fourth Conf. Num. Sim. Plasmas, pp. 3-67.
[9] G. K. Pang et al., Proc. PAC07, THPAS040.
[10] Graphic Transport framework, http://pc532.psi.ch/trans.htm
[11] Simulation of fragment separators, http://lise.nscl.msu.edu
[12] G. Bollen, Int. J. Mass Spectrum 299 (2011) 131.
Eventually Consistent Configuration Management in Fog Systems with CRDTs

Nick Stender, Tobias Pfandzelter, David Bermbach
TU Berlin & ECDF, Berlin, Germany

Abstract

Current fog systems rely on centralized and strongly consistent services for configuration management originally designed for cloud systems. In the geo-distributed fog, such systems can exhibit high communication latency or become unavailable in case of network partition. In this paper, we examine the drawbacks of strong consistency for fog configuration management and propose an alternative based on CRDTs. We prototypically implement our approach for the FReD fog data management platform. Early results show reductions of server response times of up to 50%.
INTRODUCTION
Fog computing combines geo-distributed servers at the edge, in the cloud, and in the core network to support novel application domains such as the IoT and autonomous driving [6,8,30,31]. Fog platforms, e.g., FogStore [16] and FReD [18,19,25], use centralized configuration management systems with strong consistency. While desirable for easier configuration of replicas and availability clusters, this comes with an inherent performance penalty [26,29] that is exacerbated in fog systems, which are highly geo-distributed with connections over the unreliable Internet [10].
Eventual consistency could enable distributed configuration management with low latency and highly available access to global configuration data [26]. In this paper, we explore the potential QoS benefits of such an approach and show the drawbacks of eventual consistency in fog configuration management. Specifically, we develop an alternative distributed configuration management system with eventual consistency for the fog data management platform FReD. We convert existing methods and data fields in the configuration management service to use conflict-free replicated data types (CRDTs) that allow resolving consistency conflicts after they occur due to network partitions or delay [21,27].
We make the following contributions:
• We design an eventually consistent configuration management architecture for the FReD fog data management platform based on CRDTs (§3).
• We prototypically implement this design and evaluate it experimentally (§4).
BACKGROUND
Before we introduce the specific architecture of our system, we will give some background information about the technologies and theoretical concepts used in this paper.
Fog Computing. Fog computing extends cloud computing past the confines of a centralized data center by incorporating compute and storage resources in the core network and the edge [6,8,23]. Fog systems are deployed across geo-distributed heterogeneous nodes close to end users and devices in order to provide application services with low latency, decrease network strain, and increase data protection.
Managing applications within such an environment is more complex than in the cloud given heterogeneity and geo-distribution. Researchers have proposed compute [24], messaging [17], and data management [16,18,19,25] abstractions to make adopting fog computing easier.
The FReD fog data management platform [25] provides the abstraction of keygroups, logically coherent data tables that can be accessed by applications with a key/value interface. For each keygroup, applications can specify geographically diverse replication locations. As a central source of truth about the available FReD locations, replication instructions, and user authentication, the FReD naming service runs a centralized etcd [12] cluster in the cloud. This is a similar design to Apache ZooKeeper [20].
Consistency in Distributed Systems. In distributed systems, there are different levels of data consistency. In strong consistency, two copies of a data item are identical at all times of valid system state [28]. Eventual consistency describes the promise that data will be identical or consistent at some point in the future. In strong eventual consistency two data copies that receive the same updates, albeit not necessarily in the same order, will end up in the same state eventually [27].
The PACELC theorem states that distributed computing systems have to choose between consistency and latency in normal operation. In case of a network partition, a choice between consistency and availability has to be made [3,14]. In a cloud context, most strongly consistent configuration management systems choose consistency in both cases, as low communication latency and high network availability can be assumed in a data center. In fog computing, which applications use to decrease communication latency and to rely less on unstable Internet connections, emphasis should instead be put on latency in normal operation and availability during network partitions [6,22].
CRDTs. Conflict-free replicated data types come in state-based and operation-based variants [27]. While operation-based CRDTs are more efficient in communication, they require exactly-once message delivery. We thus focus on state-based CRDTs that are more compatible with gossip dissemination in distributed fog systems [5].
A state-based CRDT is a tuple of data type and merge function. This function takes two data items and produces a combined output item, so that the states of two nodes can be combined without communication between them. A Last-Write-Wins element set (LWW) is a state-based CRDT based on an add and a remove set [4,27]. Inserted elements are added to the add set, and removed elements are added to the remove set. Both set additions include timestamps. An element is considered an element of the LWW if a) it is only present in the add set OR b) present in both sets, but the timestamp of the entry in the add set is newer. Unique replica identifiers may be added to per-replica counters to ensure timestamp uniqueness.
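To make the LWW semantics concrete, here is a minimal, self-contained Go sketch of a last-write-wins element set with a state-based merge. It is a generic illustration of the data type rather than the actual FReD naming-service code, and the key name used in main is made up.

```go
package main

import (
	"fmt"
	"time"
)

// entry stores the latest timestamp at which an element was added or removed.
type entry struct{ ts time.Time }

// LWWSet is a last-write-wins element set: a state-based CRDT built from an
// add set and a remove set, each tagging elements with timestamps.
type LWWSet struct {
	adds    map[string]entry
	removes map[string]entry
}

func NewLWWSet() *LWWSet {
	return &LWWSet{adds: map[string]entry{}, removes: map[string]entry{}}
}

func (s *LWWSet) Add(e string, ts time.Time) {
	if cur, ok := s.adds[e]; !ok || ts.After(cur.ts) {
		s.adds[e] = entry{ts}
	}
}

func (s *LWWSet) Remove(e string, ts time.Time) {
	if cur, ok := s.removes[e]; !ok || ts.After(cur.ts) {
		s.removes[e] = entry{ts}
	}
}

// Contains reports whether e is in the set: present in the add set and either
// absent from the remove set or added more recently than it was removed.
func (s *LWWSet) Contains(e string) bool {
	a, added := s.adds[e]
	if !added {
		return false
	}
	r, removed := s.removes[e]
	return !removed || a.ts.After(r.ts)
}

// Merge folds another replica's state into this one; keeping the element-wise
// maximum timestamp makes the merge commutative, associative and idempotent,
// which is what yields strong eventual consistency.
func (s *LWWSet) Merge(other *LWWSet) {
	for e, o := range other.adds {
		s.Add(e, o.ts)
	}
	for e, o := range other.removes {
		s.Remove(e, o.ts)
	}
}

func main() {
	a, b := NewLWWSet(), NewLWWSet()
	t0 := time.Now()
	a.Add("keygroup/sensor-data", t0)
	b.Remove("keygroup/sensor-data", t0.Add(time.Second)) // concurrent removal, later timestamp
	a.Merge(b)
	fmt.Println(a.Contains("keygroup/sensor-data")) // false: the later remove wins
}
```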
CRDT-BASED CONFIGURATION MANAGEMENT IN FRED
We propose replacing the centralized etcd naming service in FReD with a decentralized CRDT-based approach with the goal of improving client access latency and network partition tolerance. This is especially relevant as reading configuration data is on the hot path of a client request to FReD: When a client reads data from a FReD node, the node has to check that the client is allowed to perform this read. Similarly, when an update request occurs, the FReD node has to read keygroup configuration that specifies to which other nodes in the fog network data should be replicated. We use LWW element sets to hold configuration data in our eventually consistent configuration service. Specifically, we use one set each for node information, keygroup configuration, system permission, and FReD node organization. As there is no longer any central instance, we use a distributed bootstrapping approach where new nodes are informed of one existing node to create a decentralized overlay network. We use a gossip-style message dissemination where nodes periodically call other nodes to update their view of the network and discover unavailable nodes [7]. We convert the following functionality of the FReD naming service:
Node Registration: Instead of registering a new node with a central orchestrator, the node identifier and address are sent to the provided bootstrap node. As node creation happens infrequently and identifiers can easily be made unique, this is unlikely to lead to incorrect behavior. In case of a restart after failure, LWW ensures that outdated information about a node is overwritten.
User Permission Changes: When an administrator makes permission changes for a user at the user's node, this node will immediately apply those changes. If message dissemination is slower than user movement, data staleness could lead to user permissions being outdated when switching nodes. The correlation between physical locations of users and nodes, and the data dissemination latency, however, makes this unlikely. Partitioned nodes are a challenge, as updated permission information cannot reach them. The only alternative to stale information is unavailability of the node, e.g., by disabling access for users when the partition is detected.
Keygroup Modification: When creating keygroups, identifier uniqueness is paramount. Concurrent creation of two keygroups with identical names at two different nodes will lead to conflicts in LWW. However, the large identifier space makes this unlikely.
Keygroup Membership: Administrators and applications can add and remove nodes from keygroups to specify data replication. Conflicts in an eventually consistent configuration management could occur only for changes made to the same keygroup, as membership in different keygroups is independent. If keygroup membership for a single node is modified concurrently, one of these changes is overwritten by LWW. Such a situation is unlikely, however, as, logically, each keygroup is managed by a single application.
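To illustrate the gossip-style dissemination used by the decentralized naming service, the following Python sketch runs periodic anti-entropy rounds over a last-write-wins map. It is a toy model only: peer sampling, failure detection, and the actual FReD data structures are omitted, and all names are our own:

```python
import random

def lww_merge(local, remote):
    """Merge two maps of key -> (timestamp, value), keeping the newer entry."""
    for key, entry in remote.items():
        if key not in local or local[key] < entry:
            local[key] = entry

def gossip_round(states, peers):
    """One gossip round: every node pulls the state of one random peer."""
    for node, peer_list in peers.items():
        peer = random.choice(peer_list)
        lww_merge(states[node], states[peer])

# Three configuration-service nodes bootstrapped off each other.
states = {"n1": {"kg/demo": (1, ["n1"])}, "n2": {}, "n3": {}}
peers = {"n1": ["n2", "n3"], "n2": ["n1", "n3"], "n3": ["n1", "n2"]}
for _ in range(5):
    gossip_round(states, peers)
print(states["n3"])  # with high probability already contains the keygroup created at n1
```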
EVALUATION
We implement the alternative naming service using Go [2] and gRPC [15], making it compatible with the open-source FReD implementation. In our experiments, we start FReD nodes as Docker containers and connect them to a FReD naming service, either the original etcd implementation or our new CRDT-based system. Each naming service is distributed over at least three machines. We inject an artificial network delay between containers using tc-netem [9]. We connect a load generator to a FReD node that measures completion times of requests. Our experiment topology is shown in Fig. 1.
Figure 2: FReD response times without network delays between configuration service machines using etcd (Fig. 2a) and CRDTs (Fig. 2b).
Baseline. As a baseline, we compare both configuration management approaches without network delay. To invoke write access to the naming service, we call the createKeygroup API of FReD from our load generator to create keygroups. The results in Fig. 2 show a higher delay for the etcd naming service. Although we expect this improvement to be caused mainly by the switch to a CRDT-based approach, we cannot rule out that our prototypical implementation is otherwise more efficient than production-ready etcd.
We measure the message dissemination delay in the CRDT-based system by logging the number of keygroups each configuration machine knows about. As shown in Fig. 3, the distributed CRDT-based system converges quickly.
With Network Delays. Using an artificial network delay of 10ms, we evaluate the impact of communication delay between naming service machines. As shown in Fig. 4, this small communication delay increases FReD response times for both implementations. However, the total impact is more noticeable for the etcd naming service.
Figure 4: FReD response times with 10ms network delays between configuration service machines using etcd (Fig. 4a) and CRDTs (Fig. 4b).
Figure 6: FReD response times with configuration service machines using etcd (Fig. 6a) and CRDTs (Fig. 6b). After 45s experiment time, we partition the network between configuration service machines.
As shown in Fig. 5, there is a slight impact on data dissemination in our CRDT-based configuration management service.
Network Partitions. Finally, we introduce a network partition between the naming service machine used by our FReD node and the two others. This partition is introduced after running the experiment for 45 seconds. We re-enable the network link after a further 35 seconds. As shown in Fig. 6, the partition impacts only the strongly consistent etcd implementation, where all requests fail during the partition (shown as a 0ms response time). Note also that it takes an additional 20 seconds after the network links are re-enabled for the system to recover. The CRDT-based implementation remains unaffected by this partition.
The partition still impacts data dissemination as Fig. 7 shows: During the partition, the other two machines of the naming service do not receive updated keygroup information. As soon as the network connection is reinstated data is updated again.
RELATED WORK
To the best of our knowledge, we are the first to implement a CRDT-based configuration management for fog systems. We first suggested such an approach in prior work [26]. In related domains, Fördős and Cesarini [13] propose CRDT-based configuration management for distributed Erlang systems. They find that their approach improves response times and system reliability. Jeffery et al. [21] outline a CRDT-based replacement for etcd in distributed Kubernetes. Although they do not provide an experimental evaluation of this approach, it supports the general idea we follow in this paper. Serf [1] is a distributed cluster management tool based on the SWIM gossip protocol [11]. Serf is decentralized, available during network partitions, and provides weak consistency guarantees. It primarily targets cloud and cluster deployments, and its applicability to geo-distributed fog computing systems is unclear.
CONCLUSION & FUTURE WORK
In this paper, we have shown that eventually consistent configuration management systems based on CRDTs are a promising alternative to strictly consistent centralized solutions for fog systems. Our evaluation of a CRDT-based distributed naming service for the FReD fog data management platform has shown reduced response times for clients, especially with network delay between machines. Future work will include a more comprehensive evaluation of the drawbacks of using eventually consistent configuration management in the fog. We also plan to explore the combination of strong consistency for some configuration data and eventual consistency for others. While complex, such a hybrid approach would allow for more efficient data dissemination without impacting application logic.
Figure 1: Overview of the different components.
Figure 3: Number of keygroups present in each CRDT-based configuration service instance without added network delays.
Figure 5: Number of keygroups present in each CRDT-based configuration service instance with 10ms added network delays.
Figure 7: Number of keygroups present in each CRDT-based configuration service instance with network partition after 45s experiment time.
ACKNOWLEDGMENTS
Serf -Decentralized Cluster Membership, Failure Detection, and Orchestration. HashiCorp 2017. HashiCorp 2017. Serf -Decentralized Cluster Membership, Failure Detection, and Orchestration. HashiCorp. Retrieved June 2, 2023 from https://serf.io/
The Go Programming Language. Google 2023. Google 2023. The Go Programming Language. Google. Retrieved June 2, 2023 from https://go.dev/
Consistency Tradeoffs in Modern Distributed Database System Design: CAP is Only Part of the Story. Daniel Abadi, 10.1109/MC.2012.33Computer. 45Daniel Abadi. 2012. Consistency Tradeoffs in Modern Distributed Database System Design: CAP is Only Part of the Story. Computer 45, 2 (Jan. 2012), 37-42. https://doi.org/10.1109/MC.2012.33
Edge Applications: Just Right Consistency. Anshul Ahuja, Geetesh Gupta, Subhajit Sidhanta, 10.1109/SRDS47363.2019.00047Proceedings of the 2019 38th Symposium on Reliable Distributed Systems. the 2019 38th Symposium on Reliable Distributed SystemsLyon, France; New York, NY, USAIEEESRDS '19Anshul Ahuja, Geetesh Gupta, and Subhajit Sidhanta. 2019. Edge Ap- plications: Just Right Consistency. In Proceedings of the 2019 38th Sym- posium on Reliable Distributed Systems (Lyon, France) (SRDS '19). IEEE, New York, NY, USA, 351-3512. https://doi.org/10.1109/SRDS47363. 2019.00047
Making operation-based CRDTs operation-based. Carlos Baquero, Paulo Sérgio Almeida, Ali Shoker, 10.1145/2596631.2596632Proceedings of the First Workshop on Principles and Practice of Eventual Consistency. the First Workshop on Principles and Practice of Eventual ConsistencyNew York, NY, USA, 1-2Association for Computing MachineryThe Netherlands) (PaPEC '14)Carlos Baquero, Paulo Sérgio Almeida, and Ali Shoker. 2014. Making operation-based CRDTs operation-based. In Proceedings of the First Workshop on Principles and Practice of Eventual Consistency (Ams- terdam, The Netherlands) (PaPEC '14). Association for Computing Machinery, New York, NY, USA, 1-2. https://doi.org/10.1145/2596631. 2596632
A Research Perspective on Fog Computing. David Bermbach, Frank Pallas, David García Pérez, Pierluigi Plebani, Maya Anderson, Ronen Kat, Stefan Tai, 10.1007/978-3-319-91764-1_16Proceedings of the 2nd Workshop on IoT Systems Provisioning & Management for Context-Aware Smart Cities. the 2nd Workshop on IoT Systems Provisioning & Management for Context-Aware Smart CitiesMalaga, Spain; Cham, SwitzerlandSpringerDavid Bermbach, Frank Pallas, David García Pérez, Pierluigi Plebani, Maya Anderson, Ronen Kat, and Stefan Tai. 2017. A Research Per- spective on Fog Computing. In Proceedings of the 2nd Workshop on IoT Systems Provisioning & Management for Context-Aware Smart Cities (Malaga, Spain) (ISYCC 2017). Springer, Cham, Switzerland, 198-210. https://doi.org/10.1007/978-3-319-91764-1_16
The promise, and limitations, of gossip protocols. Ken Birman, 10.1145/1317379.1317382ACM SIGOPS Operating Systems Review. 41Ken Birman. 2007. The promise, and limitations, of gossip protocols. ACM SIGOPS Operating Systems Review 41, 5 (Oct. 2007), 8-13. https: //doi.org/10.1145/1317379.1317382
Fog computing and its role in the internet of things. Flavio Bonomi, Rodolfo Milito, Jiang Zhu, Sateesh Addepalli, 10.1145/2342509.2342513Proceedings of the first edition of the MCC workshop on Mobile cloud computing. the first edition of the MCC workshop on Mobile cloud computingHelsinki, Finland; New York, NY, USAAssociation for Computing MachineryMCC '12)Flavio Bonomi, Rodolfo Milito, Jiang Zhu, and Sateesh Addepalli. 2012. Fog computing and its role in the internet of things. In Proceedings of the first edition of the MCC workshop on Mobile cloud computing (Helsinki, Finland) (MCC '12). Association for Computing Machinery, New York, NY, USA, 13-16. https://doi.org/10.1145/2342509.2342513
Traffic Control HOWTO. Martin A Brown, . linux-ip.netTechnical ReportMartin A. Brown. 2006. Traffic Control HOWTO. Technical Report. linux-ip.net.
Fog computing at industrial level, architecture, latency, energy, and security: A review. Heliyon 6, 4, Article e03706. Gustavo Caiza, Morelva Saeteros, William Oñate, Marcelo V Garcia, 10.1016/j.heliyon.2020.e03706Gustavo Caiza, Morelva Saeteros, William Oñate, and Marcelo V. Gar- cia. 2020. Fog computing at industrial level, architecture, latency, energy, and security: A review. Heliyon 6, 4, Article e03706 (April 2020). https://doi.org/10.1016/j.heliyon.2020.e03706
SWIM: scalable weakly-consistent infection-style process group membership protocol. Abhinandan Das, Indranil Gupta, Ashish Motivala, 10.1109/DSN.2002.1028914Proceedings of the International Conference on Dependable Systems and Networks. the International Conference on Dependable Systems and NetworksWashington, DC, USA; New York, NY, USAIEEEDSN '02)Abhinandan Das, Indranil Gupta, and Ashish Motivala. 2002. SWIM: scalable weakly-consistent infection-style process group membership protocol. In Proceedings of the International Conference on Dependable Systems and Networks (Washington, DC, USA) (DSN '02). IEEE, New York, NY, USA, 303-312. https://doi.org/10.1109/DSN.2002.1028914
etcd: A distributed, reliable key-value store for the most critical data of a distributed system. etcd Authors. 2023.. Retrievedetcd Authors. 2023. etcd: A distributed, reliable key-value store for the most critical data of a distributed system. Retrieved June 1, 2023 from https://etcd.io/
CRDTs for the configuration of distributed Erlang systems. Viktória Fördős, Francesco Cesarini, 10.1145/2975969.2975974Proceedings of the 15th International Workshop on Erlang. the 15th International Workshop on ErlangNara, Japan; New York, NY, USAErlang '16). Association for Computing MachineryViktória Fördős and Francesco Cesarini. 2016. CRDTs for the con- figuration of distributed Erlang systems. In Proceedings of the 15th International Workshop on Erlang (Nara, Japan) (Erlang '16). Associ- ation for Computing Machinery, New York, NY, USA, 42-53. https: //doi.org/10.1145/2975969.2975974
Proving PACELC. Wojciech Golab, 10.1145/3197406.3197420ACM SIGACT News. 491Wojciech Golab. 2018. Proving PACELC. ACM SIGACT News 49, 1 (March 2018), 73-81. https://doi.org/10.1145/3197406.3197420
grpc: A high performance, open source universal RPC framework. gRPC Authors. 2023. RetrievedgRPC Authors. 2023. grpc: A high performance, open source universal RPC framework. Retrieved June 2, 2023 from https://grpc.io/
FogStore: A Geo-Distributed Key-Value Store Guaranteeing Low Latency for Strongly Consistent Access. Harshit Gupta, Umakishore Ramachandran, 10.1145/3210284.3210297Proceedings of the 12th ACM International Conference on Distributed and Event-based Systems. the 12th ACM International Conference on Distributed and Event-based SystemsHamilton, New Zealand; New York, NY, USAAssociation for Computing MachineryDEBS '18)Harshit Gupta and Umakishore Ramachandran. 2018. FogStore: A Geo- Distributed Key-Value Store Guaranteeing Low Latency for Strongly Consistent Access. In Proceedings of the 12th ACM International Confer- ence on Distributed and Event-based Systems (Hamilton, New Zealand) (DEBS '18). Association for Computing Machinery, New York, NY, USA, 148-159. https://doi.org/10.1145/3210284.3210297
DisGB: Using Geo-Context Information for Efficient Routing in Geo-Distributed Pub/Sub Systems. Jonathan Hasenburg, David Bermbach, 10.1109/UCC48980.2020.00026Proceedings of the 13th IEEE/ACM International Conference on Utility and Cloud Computing. the 13th IEEE/ACM International Conference on Utility and Cloud ComputingLeicester, United Kingdom; New York, NY, USAIEEEUCC 2020Jonathan Hasenburg and David Bermbach. 2020. DisGB: Using Geo- Context Information for Efficient Routing in Geo-Distributed Pub/Sub Systems. In Proceedings of the 13th IEEE/ACM International Conference on Utility and Cloud Computing (Leicester, United Kingdom) (UCC 2020). IEEE, New York, NY, USA, 67-78. https://doi.org/10.1109/ UCC48980.2020.00026
FBase: A Replication Service for Data-Intensive Fog Applications. Jonathan Hasenburg, Martin Grambow, David Bermbach, Berlin, GermanyTU Berlin & ECDF, Mobile Cloud Computing Research GroupTechnical ReportJonathan Hasenburg, Martin Grambow, and David Bermbach. 2019. FBase: A Replication Service for Data-Intensive Fog Applications. Tech- nical Report. TU Berlin & ECDF, Mobile Cloud Computing Research Group, Berlin, Germany.
Towards A Replication Service for Data-Intensive Fog Applications. Jonathan Hasenburg, Martin Grambow, David Bermbach, 10.1145/3341105.3374060Proceedings of the 35th ACM Symposium on Applied Computing, Posters Track. the 35th ACM Symposium on Applied Computing, Posters TrackBrno, Czech Republic; New York, NY, USAACMSAC '20Jonathan Hasenburg, Martin Grambow, and David Bermbach. 2020. Towards A Replication Service for Data-Intensive Fog Applications. In Proceedings of the 35th ACM Symposium on Applied Computing, Posters Track (Brno, Czech Republic) (SAC '20). ACM, New York, NY, USA, 267-270. https://doi.org/10.1145/3341105.3374060
ZooKeeper: Wait-free Coordination for Internet-scale Systems. Patrick Hunt, Mahadev Konar, Benjamin Reed Flavio Paiva Junqueira, Proceedings of the USENIX Annual Technical Conference. the USENIX Annual Technical ConferenceBoston, MA, USA; Berkeley, CA, USAATC '10). USENIX AssociationPatrick Hunt, Mahadev Konar, Flavio Paiva Junqueira, and Benjamin Reed. 2010. ZooKeeper: Wait-free Coordination for Internet-scale Systems. In Proceedings of the USENIX Annual Technical Conference (Boston, MA, USA) (ATC '10). USENIX Association, Berkeley, CA, USA.
Rearchitecting Kubernetes for the Edge. Andrew Jeffery, Heidi Howard, Richard Mortier, 10.1145/3434770.3459730Proceedings of the 4th International Workshop on Edge Systems, Analytics and Networking (EdgeSys '21). the 4th International Workshop on Edge Systems, Analytics and Networking (EdgeSys '21)New York, NY, USAAssociation for Computing MachineryAndrew Jeffery, Heidi Howard, and Richard Mortier. 2021. Rearchi- tecting Kubernetes for the Edge. In Proceedings of the 4th International Workshop on Edge Systems, Analytics and Networking (EdgeSys '21). Association for Computing Machinery, New York, NY, USA, 7-12. https://doi.org/10.1145/3434770.3459730
Reliability in the utility computing era: Towards reliable Fog computing. Henrik Madsen, Bernard Burtschy, Grigore Albeanu, Florin Popenţiu-Vlădicescu, 10.1109/IWSSIP.2013.6623445Proceedings of the 2013 20th International Conference on Systems, Signals and Image Processing. the 2013 20th International Conference on Systems, Signals and Image ProcessingBucharest, Romania; New York, NY, USAIEEEIWSSIP '13)Henrik Madsen, Bernard Burtschy, Grigore Albeanu, and Florin Popenţiu-Vlădicescu. 2013. Reliability in the utility computing era: Towards reliable Fog computing. In Proceedings of the 2013 20th International Conference on Systems, Signals and Image Processing (Bucharest, Romania) (IWSSIP '13). IEEE, New York, NY, USA, 43-46. https://doi.org/10.1109/IWSSIP.2013.6623445
From Cloud to Fog Computing: A Review and a Conceptual Live VM Migration Framework. Opeyemi Osanaiye, Shuo Chen, Zheng Yan, Rongxing Lu, Kim-Kwang Raymond Choo, Mqhele Dlodlo, 10.1109/ACCESS.2017.2692960IEEE Access. 5Opeyemi Osanaiye, Shuo Chen, Zheng Yan, Rongxing Lu, Kim- Kwang Raymond Choo, and Mqhele Dlodlo. 2017. From Cloud to Fog Computing: A Review and a Conceptual Live VM Migration Frame- work. IEEE Access 5 (April 2017), 8284-8300. https://doi.org/10.1109/ ACCESS.2017.2692960
2020. tinyFaaS: A Lightweight FaaS Platform for Edge Environments. Tobias Pfandzelter, David Bermbach, 10.1109/ICFC49376.2020.00011Proceedings of the Second IEEE International Conference on Fog Computing. the Second IEEE International Conference on Fog ComputingSydney, NSW, Australia; New York, NY, USAIEEEICFC 2020Tobias Pfandzelter and David Bermbach. 2020. tinyFaaS: A Lightweight FaaS Platform for Edge Environments. In Proceedings of the Second IEEE International Conference on Fog Computing (Sydney, NSW, Australia) (ICFC 2020). IEEE, New York, NY, USA, 17-24. https://doi.org/10.1109/ ICFC49376.2020.00011
Managing Data Replication and Distribution in the Fog with FReD. Tobias Pfandzelter, Nils Japke, Trever Schirmer, Jonathan Hasenburg, David Bermbach, arXiv:2303.05256Tobias Pfandzelter, Nils Japke, Trever Schirmer, Jonathan Hasenburg, and David Bermbach. 2023. Managing Data Replication and Distribu- tion in the Fog with FReD. (March 2023). arXiv:2303.05256
Towards Distributed Coordination for Fog Platforms. Tobias Pfandzelter, Trever Schirmer, David Bermbach, 10.1109/CCGrid54584.2022.00087Proceedings of the 22nd IEEE/ACM international Symposium on Cluster, Cloud and Internet Computing. the 22nd IEEE/ACM international Symposium on Cluster, Cloud and Internet ComputingTaormina, Italy; New York, NY, USAIEEETobias Pfandzelter, Trever Schirmer, and David Bermbach. 2022. To- wards Distributed Coordination for Fog Platforms. In Proceedings of the 22nd IEEE/ACM international Symposium on Cluster, Cloud and Internet Computing, Posters (Taormina, Italy) (CCGrid 2021). IEEE, New York, NY, USA, 760-762. https://doi.org/10.1109/CCGrid54584.2022.00087
A comprehensive study of Convergent and Commutative Replicated Data Types. Marc Shapiro, Nuno Preguiça, Carlos Baquero, Marek Zawirski, Paris, FranceTechnical ReportInstitute National de Recherche en Informatique et en Automatique (INRIA)Marc Shapiro, Nuno Preguiça, Carlos Baquero, and Marek Zawirski. 2011. A comprehensive study of Convergent and Commutative Replicated Data Types. Technical Report. Institute National de Recherche en Informatique et en Automatique (INRIA), Paris, France. https://inria. hal.science/inria-00555588
Replicated data consistency explained through baseball. Doug Terry, 10.1145/2500500Commun. ACM. 56Doug Terry. 2013. Replicated data consistency explained through baseball. Commun. ACM 56, 12 (Dec. 2013), 82-89. https://doi.org/10. 1145/2500500
Eventually consistent. Werner Vogels, 10.1145/1435417.1435432Commun. ACM. 52Werner Vogels. 2009. Eventually consistent. Commun. ACM 52, 1 (Jan. 2009), 40-44. https://doi.org/10.1145/1435417.1435432
Key ingredients in an IoT recipe: Fog Computing, Cloud computing, and more Fog Computing. Marcelo Yannuzzi, Rodolfo Milito, René Serral-Gracià, Diego Montero, Mario Nemirovsky, 10.1109/CAMAD.2014.7033259Proceedings of the 2014 IEEE 19th International Workshop on Computer Aided Modeling and Design of Communication Links and Networks. the 2014 IEEE 19th International Workshop on Computer Aided Modeling and Design of Communication Links and NetworksAthens, Greece; New York, NY, USAIEEECAMAD '14)Marcelo Yannuzzi, Rodolfo Milito, René Serral-Gracià, Diego Mon- tero, and Mario Nemirovsky. 2014. Key ingredients in an IoT recipe: Fog Computing, Cloud computing, and more Fog Computing. In Pro- ceedings of the 2014 IEEE 19th International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (Athens, Greece) (CAMAD '14). IEEE, New York, NY, USA, 325-329. https://doi.org/10.1109/CAMAD.2014.7033259
A Survey of Fog Computing: Concepts, Applications and Issues. Shanhe Yi, Cheng Li, Qun Li, 10.1145/2757384.2757397Proceedings of the 2015 Workshop on Mobile Big Data. the 2015 Workshop on Mobile Big DataHangzhou, China; New York, NY, USAAssociation for Computing MachineryMobidata '15)Shanhe Yi, Cheng Li, and Qun Li. 2015. A Survey of Fog Computing: Concepts, Applications and Issues. In Proceedings of the 2015 Workshop on Mobile Big Data (Hangzhou, China) (Mobidata '15). Association for Computing Machinery, New York, NY, USA, 37-42. https://doi.org/10. 1145/2757384.2757397
| []
|
[
"Strong tractability for multivariate integration in a subspace of the Wiener algebra *",
"Strong tractability for multivariate integration in a subspace of the Wiener algebra *"
]
| [
"Takashi Goda "
]
| []
| []
| Building upon recent work by the author, we prove that multivariate integration in the following subspace of the Wiener algebra over [0, 1) d is strongly polynomially tractable: | null | [
"https://export.arxiv.org/pdf/2306.01541v1.pdf"
]
| 259,063,814 | 2306.01541 | 06746a674735b6e72e683c63bc95931bf7d33ab4 |
Strong tractability for multivariate integration in a subspace of the Wiener algebra *
2 Jun 2023 June 5, 2023
Takashi Goda
Building upon recent work by the author, we prove that multivariate integration in the following subspace of the Wiener algebra over [0, 1) d is strongly polynomially tractable:
Introduction and main results
This paper concerns numerical integration for multivariate functions defined over the d-dimensional unit cube. For a Riemann integrable function f : [0, 1)^d → R, we approximate its integral
$$I_d(f) = \int_{[0,1)^d} f(x)\, \mathrm{d}x$$
by algorithms of the form
$$Q_{d,n}(f) = \sum_{h=0}^{n-1} w_h f(x_h)$$
with sets of n sampling points {x_0, . . . , x_{n−1}} ⊂ [0, 1)^d and associated weights {w_0, . . . , w_{n−1}}. A quasi-Monte Carlo (QMC) rule denotes the special case of Q_{d,n} where all the weights w_h are equal to 1/n. The worst-case error of an algorithm Q_{d,n} in a Banach space F with norm ∥·∥ is defined by
$$e^{\mathrm{wor}}(F, Q_{d,n}) := \sup_{f \in F,\ \|f\| \le 1} \left| I_d(f) - Q_{d,n}(f) \right|.$$
In the field of information-based complexity [11,12,16], we are interested in how the information complexity n(ε, d, F ) grows in the reciprocal of the error tolerance ε ∈ (0, 1) and the dimension d.
Here, the information complexity is defined as the minimum number of function values, among all possible Q d,n , needed to make the worst-case error in F no greater than ε, that is,
n(ε, d, F ) := min{n ∈ N | ∃Q d,n : e wor (F, Q d,n ) ≤ ε}.
In a recent work by the author [6], it has been proven that the information complexity for the following unweighted subspace of the Wiener algebra grows only polynomially both in ε −1 and d:
$$F^1_d := \left\{ f \in C([0,1)^d) \;:\; \|f\| := \sum_{k \in \mathbb{Z}^d} |\hat{f}(k)| \max\left(1, \min_{j \in \mathrm{supp}(k)} \log |k_j|\right) < \infty \right\},$$
with f̂(k) being the k-th Fourier coefficient of f, i.e., $\hat{f}(k) = \int_{[0,1)^d} f(x) \exp(-2\pi i\, k \cdot x)\, \mathrm{d}x$,
and supp(k) := {j ∈ {1, . . . , d} | k_j ≠ 0}. More precisely, it has been shown that an upper bound n(ε, d, F^1_d) ≤ C_1 ε^{−3} d^3 holds for a positive constant C_1, concluding that the problem of multivariate integration in F^1_d is polynomially tractable. We refer to [9,10] for more recent progress on this line of research. In this context, an unweighted function space F refers to a space where all variables and groups of variables play an equal role. Therefore, for any permutation matrix π and f ∈ F, it holds that f ∘ π ∈ F and ∥f ∘ π∥ = ∥f∥. The result presented in [6] builds upon the work of Dick [1], who established polynomial tractability for multivariate integration in the intersection of the Wiener algebra and an unweighted space of Hölder continuous functions.
As a continuation of [6], we prove the following result in this paper: Theorem 1. Let F 2 d be a subspace of the Wiener algebra defined by
$$F^2_d := \left\{ f \in C([0,1)^d) \;:\; \|f\| := \sum_{k \in \mathbb{Z}^d} |\hat{f}(k)| \max\left(\mathrm{width}(\mathrm{supp}(k)), \min_{j \in \mathrm{supp}(k)} \log |k_j|\right) < \infty \right\},$$
where width : 2 {1,...,d} → {1, . . . , d} is defined by
$$\mathrm{width}(u) := \max_{j \in u} j - \min_{j \in u} j + 1,$$
for non-empty subset u ⊆ {1, . . . , d}, and width(∅) = 1. Then, there exists a positive constant C 2 such that, for any d ∈ N and ε ∈ (0, 1), we have
$$n(\varepsilon, d, F^2_d) \le C_2\, \varepsilon^{-3} / \log(\varepsilon^{-1}).$$
In comparison to the result of [6] for F 1 d , by replacing 1 (the first argument in taking the maximum for each k) with width(supp(k)) in the definition of norms, the polynomial dependence of the information complexity on the dimension d does not show up anymore, meaning that the problem of multivariate integration in F 2 d is strongly polynomially tractable. This result is strengthened by the following theorem on the former space F 1 d .
Theorem 2.
For any linear algorithm Q d,n using n function values, we have e wor (F 1 d , Q d,n ) ≥ d/(2n 2 ) for any d ∈ N and n > 2d.
Note that there is a significant gap between the lower bound on the worst-case error obtained above and the upper bound of order dn −1/3 shown in [6]. Nevertheless, this result implies that a dependence of the information complexity on the dimension d cannot be eliminated for F 1 d . Therefore, the problem of multivariate integration in F 1 d is polynomially tractable but not strongly polynomially tractable. As a future research direction, it would be interesting to study whether an intermediate space between F 1 d and F 2 d still exhibits strong polynomial tractability for multivariate integration. As we have 1 ≤ | supp(k)| ≤ width(supp(k)) for all k ∈ Z d when defining | supp(0)| = 1, one of the most natural spaces we can consider is an unweighted space
$$F^3_d := \left\{ f \in C([0,1)^d) \;:\; \|f\| := \sum_{k \in \mathbb{Z}^d} |\hat{f}(k)| \max\left(|\mathrm{supp}(k)|, \min_{j \in \mathrm{supp}(k)} \log |k_j|\right) < \infty \right\}.$$
Note that, although the space F^2_d is weighted, it remains invariant under the reversion of the variables, i.e., if f ∈ F^2_d, then we have g ∈ F^2_d and ∥f∥ = ∥g∥, where g(x_1, . . . , x_d) = f(x_d, . . . , x_1). This is in contrast to many existing results on strong polynomial tractability for multivariate integration in the worst-case setting, where weight parameters are introduced to model the relative importance of each group of variables, and variables are typically assumed ordered in decreasing importance order. See [2,5,11,12,15] among many others. In fact, it seems not possible to characterize the space F^2_d in such a way. The author believes that further tractability studies in subspaces of the Wiener algebra will offer new insights into the field of information-based complexity, particularly regarding (strong) polynomial tractability in (un)weighted spaces.
Proof of Theorem 1
This section is devoted to proving Theorem 1 by providing an explicit QMC rule that attains the desired worst-case error bound. The QMC rule considered here is exactly the same as the one discussed in [6]. For an integer m ≥ 2, let
P m := {⌈m/2⌉ < p ≤ m | p is prime}.
It is known that there exist constants c P and C P with 0 < c P < min(1, C P ) such that
$$c_P\, \frac{m}{\log m} \le |P_m| \le C_P\, \frac{m}{\log m}, \qquad (1)$$
for all m ≥ 2; see [13]. Now, given an integer m ≥ 2, we define two different point sets as multiset unions:
$$P^1_{d,m} = \bigcup_{p \in P_m} S_{d,p} \quad\text{and}\quad P^2_{d,m} = \bigcup_{p \in P_m} T_{d,p},$$
where S_{d,p} = {x^{(p)}_h | 0 ≤ h < p^2} and T_{d,p} = {x^{(p)}_{h,ℓ} | 0 ≤ h, ℓ < p} are sets with p^2 points known as Korobov's p-sets [1,5,7,8]. These point sets are defined as follows:
$$x^{(p)}_h = \left( \left\{\frac{h}{p^2}\right\}, \left\{\frac{h^2}{p^2}\right\}, \ldots, \left\{\frac{h^d}{p^2}\right\} \right) \quad\text{and}\quad x^{(p)}_{h,\ell} = \left( \left\{\frac{h\ell}{p}\right\}, \left\{\frac{h\ell^2}{p}\right\}, \ldots, \left\{\frac{h\ell^d}{p}\right\} \right),$$
respectively, where we write {x} = x − ⌊x⌋ to denote the fractional part of a non-negative real number x. It is important to note that taking the multiset unions of Korobov's p-sets with different primes p is crucial in our error analysis. Trivially we have
$$|P^1_{d,m}| = |P^2_{d,m}| = \sum_{p \in P_m} p^2.$$
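For illustration, the point sets are straightforward to generate. The following short Python sketch (ours, not part of the construction in [6]) builds the primes in P_m and the points of P^1_{d,m}:

```python
from math import ceil

def primes_in_range(lo, hi):
    """Primes p with lo < p <= hi (trial division, sufficient for small m)."""
    return [p for p in range(max(2, lo + 1), hi + 1)
            if all(p % q for q in range(2, int(p ** 0.5) + 1))]

def korobov_p_set(p, d):
    """S_{d,p}: points x_h^{(p)} = ({h/p^2}, {h^2/p^2}, ..., {h^d/p^2})."""
    return [[pow(h, j, p * p) / (p * p) for j in range(1, d + 1)]
            for h in range(p * p)]

def qmc_point_set(m, d):
    """Multiset union P^1_{d,m} over all primes in P_m = (ceil(m/2), m]."""
    pts = []
    for p in primes_in_range(ceil(m / 2), m):
        pts.extend(korobov_p_set(p, d))
    return pts

points = qmc_point_set(16, 3)   # primes 11 and 13 -> 11^2 + 13^2 = 290 points
print(len(points))
```

Note that the fractional part {h^j / p^2} equals (h^j mod p^2)/p^2, which is why modular exponentiation suffices.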
The following result on the exponential sums refines the known results from [7, Lemmas 4.5 & 4.6] as well as [5,Lemmas 4.4 & 4.5].
Lemma 3. Let d ∈ N and p be a prime with p ≥ d. For any k ∈ Z d \ {0} such that there exists at least one index j * ∈ {1, . . . , d} where k j * is not divisible by p, i.e., p ∤ k, the following bounds hold:
$$\left| \frac{1}{p^2} \sum_{h=0}^{p^2-1} \exp\left(2\pi i\, k \cdot x^{(p)}_h\right) \right| \le \frac{\mathrm{width}(\mathrm{supp}(k))}{p}, \quad\text{and}\quad \left| \frac{1}{p^2} \sum_{h,\ell=0}^{p-1} \exp\left(2\pi i\, k \cdot x^{(p)}_{h,\ell}\right) \right| \le \frac{\mathrm{width}(\mathrm{supp}(k))}{p}.$$
Proof. Let us consider the first bound. As we have {0, . . . , p^2 − 1} = {h_0 + h_1 p | 0 ≤ h_0, h_1 < p} and, for each pair of h_0, h_1 ∈ {0, . . . , p − 1}, it holds that
$$\exp\left(2\pi i\, k \cdot x^{(p)}_{h_0 + h_1 p}\right) = \exp\left( \frac{2\pi i}{p^2} \sum_{j \in \mathrm{supp}(k)} k_j (h_0 + h_1 p)^j \right) = \exp\left( \frac{2\pi i}{p^2} \sum_{j \in \mathrm{supp}(k)} k_j \sum_{a=0}^{j} \binom{j}{a} h_0^a (h_1 p)^{j-a} \right) = \exp\left( \frac{2\pi i}{p^2} \sum_{j \in \mathrm{supp}(k)} k_j \left(h_0^j + j h_0^{j-1} h_1 p\right) \right),$$
we obtain
$$\left| \frac{1}{p^2} \sum_{h=0}^{p^2-1} \exp\left(2\pi i\, k \cdot x^{(p)}_h\right) \right| = \left| \frac{1}{p^2} \sum_{h_0, h_1 = 0}^{p-1} \exp\left( \frac{2\pi i}{p^2} \sum_{j \in \mathrm{supp}(k)} k_j \left(h_0^j + j h_0^{j-1} h_1 p\right) \right) \right| = \left| \frac{1}{p} \sum_{h_0=0}^{p-1} \exp\left( \frac{2\pi i}{p^2} \sum_{j \in \mathrm{supp}(k)} k_j h_0^j \right) \frac{1}{p} \sum_{h_1=0}^{p-1} \exp\left( \frac{2\pi i h_1}{p} \sum_{j \in \mathrm{supp}(k)} k_j j h_0^{j-1} \right) \right| \le \frac{1}{p} \sum_{h_0=0}^{p-1} \left| \frac{1}{p} \sum_{h_1=0}^{p-1} \exp\left( \frac{2\pi i h_1}{p} \sum_{j \in \mathrm{supp}(k)} k_j j h_0^{j-1} \right) \right| = \frac{1}{p} \sum_{\substack{0 \le h_0 < p \\ \sum_{j \in \mathrm{supp}(k)} k_j j h_0^{j-1} \equiv 0 \ (\mathrm{mod}\ p)}} 1,$$
where the last equality follows from the well-known character property of the trigonometric functions [3, Lemma 4.3]. Here, we denote j_min = min_{j∈supp(k)} j and j_max = max_{j∈supp(k)} j. As the last sum over j, Σ_{j∈supp(k)} k_j j h_0^{j−1}, is a polynomial in h_0 with degree j_max − j_min, the number of solutions of the congruence Σ_{j∈supp(k)} k_j j h_0^{j−1} ≡ 0 (mod p) is at most j_max − j_min + 1 = width(supp(k)). Thus the result follows. Since the second bound can be proven in the same manner, we omit the details.
Note that, if k j is divisible by p for all j, i.e., p | k, then we only have a trivial bound on the exponential sum, which is 1. Using this refined result, we obtain the following bounds on the exponential sums for our point sets P 1 d,m and P 2 d,m .
Corollary 4. Let d ∈ N and m ≥ 2 with min_{p∈P_m} p ≥ d. For any k ∈ Z^d \ {0}, it holds that
$$\left| \frac{1}{|P^1_{d,m}|} \sum_{p \in P_m} \sum_{h=0}^{p^2-1} \exp\left(2\pi i\, k \cdot x^{(p)}_h\right) \right| \le \frac{1}{m}\left( 4\,\mathrm{width}(\mathrm{supp}(k)) + \frac{8}{c_P} \min_{j \in \mathrm{supp}(k)} \log |k_j| \right),$$
and
$$\left| \frac{1}{|P^2_{d,m}|} \sum_{p \in P_m} \sum_{h,\ell=0}^{p-1} \exp\left(2\pi i\, k \cdot x^{(p)}_{h,\ell}\right) \right| \le \frac{1}{m}\left( 4\,\mathrm{width}(\mathrm{supp}(k)) + \frac{8}{c_P} \min_{j \in \mathrm{supp}(k)} \log |k_j| \right).$$
Proof. The following proof for the first bound is similar to that of [6,Corollary 2.3], and the second bound can be proven in a similar way, so we omit the details. Using Lemma 3, we have
$$\left| \frac{1}{|P^1_{d,m}|} \sum_{p \in P_m} \sum_{h=0}^{p^2-1} \exp\left(2\pi i\, k \cdot x^{(p)}_h\right) \right| \le \frac{1}{|P^1_{d,m}|} \sum_{p \in P_m} \left| \sum_{h=0}^{p^2-1} \exp\left(2\pi i\, k \cdot x^{(p)}_h\right) \right| \le \frac{1}{|P^1_{d,m}|} \sum_{\substack{p \in P_m \\ p \nmid k}} p\, \mathrm{width}(\mathrm{supp}(k)) + \frac{1}{|P^1_{d,m}|} \sum_{\substack{p \in P_m \\ p \mid k}} p^2 \le \frac{m |P_m|}{|P^1_{d,m}|}\, \mathrm{width}(\mathrm{supp}(k)) + \frac{m^2}{|P^1_{d,m}|} \sum_{\substack{p \in P_m \\ p \mid k}} 1 \le \frac{m |P_m|}{(m/2)^2 |P_m|}\, \mathrm{width}(\mathrm{supp}(k)) + \frac{m^2}{(m/2)^2 |P_m|} \sum_{\substack{p \in P_m \\ p \mid k}} 1 \le \frac{4}{m}\, \mathrm{width}(\mathrm{supp}(k)) + \frac{4 \log m}{c_P\, m} \sum_{\substack{p \in P_m \\ p \mid k}} 1,$$
where the last inequality follows from (1). To give a bound on the last sum over p ∈ P_m which divides k, we use the fact that, for any integers k, n ∈ N, k has at most log_n k prime divisors larger than or equal to n. With I(·) denoting the indicator function, for any index j* ∈ supp(k), we get
$$\sum_{\substack{p \in P_m \\ p \mid k}} 1 = \sum_{p \in P_m} \prod_{j \in \mathrm{supp}(k)} I(p \mid k_j) \le \sum_{p \in P_m} I(p \mid k_{j^*}) \le \log_{\lceil m/2 \rceil + 1} |k_{j^*}| \le \frac{2 \log |k_{j^*}|}{\log m}.$$
Since this inequality applies to any index j* ∈ supp(k), it holds that
$$\sum_{\substack{p \in P_m \\ p \mid k}} 1 \le \frac{2}{\log m} \min_{j \in \mathrm{supp}(k)} \log |k_j|.$$
This completes the proof. Now we are ready to prove Theorem 1.
Proof of Theorem 1.
Since any function f ∈ F^2_d has an absolutely convergent Fourier series, by letting Q_{d,n} be the QMC rule using P^1_{d,m} (or P^2_{d,m}) for some m ≥ 2 with min_{p∈P_m} p ≥ d, it follows from Corollary 4 that, with n equal to Σ_{p∈P_m} p^2,
$$|I_d(f) - Q_{d,n}(f)| = \left| I_d(f) - \frac{1}{|P^1_{d,m}|} \sum_{p \in P_m} \sum_{h=0}^{p^2-1} f\left(x^{(p)}_h\right) \right| = \left| \hat{f}(0) - \frac{1}{|P^1_{d,m}|} \sum_{p \in P_m} \sum_{h=0}^{p^2-1} \sum_{k \in \mathbb{Z}^d} \hat{f}(k) \exp\left(2\pi i\, k \cdot x^{(p)}_h\right) \right| = \left| \sum_{k \in \mathbb{Z}^d \setminus \{0\}} \hat{f}(k)\, \frac{1}{|P^1_{d,m}|} \sum_{p \in P_m} \sum_{h=0}^{p^2-1} \exp\left(2\pi i\, k \cdot x^{(p)}_h\right) \right| \le \sum_{k \in \mathbb{Z}^d \setminus \{0\}} |\hat{f}(k)| \left| \frac{1}{|P^1_{d,m}|} \sum_{p \in P_m} \sum_{h=0}^{p^2-1} \exp\left(2\pi i\, k \cdot x^{(p)}_h\right) \right| \le \frac{1}{m} \sum_{k \in \mathbb{Z}^d \setminus \{0\}} |\hat{f}(k)| \left( 4\,\mathrm{width}(\mathrm{supp}(k)) + \frac{8}{c_P} \min_{j \in \mathrm{supp}(k)} \log |k_j| \right) \le \frac{16}{c_P\, m} \sum_{k \in \mathbb{Z}^d \setminus \{0\}} |\hat{f}(k)| \max\left( \mathrm{width}(\mathrm{supp}(k)), \min_{j \in \mathrm{supp}(k)} \log |k_j| \right) \le \frac{16}{c_P\, m}\, \|f\|.$$
This leads to an upper bound on the worst-case error as
$$e^{\mathrm{wor}}(F^2_d, Q_{d,n}) \le \frac{16}{c_P\, m}.$$
Therefore, in order to make e wor (F 2 d , Q d,n ) less than or equal to ε ∈ (0, 1), it suffices to choose m = ⌈16c −1 P ε −1 ⌉ and we have
$$n(\varepsilon, d, F^2_d) \le \sum_{p \in P_{\lceil 16 c_P^{-1} \varepsilon^{-1} \rceil}} p^2 \le C_P\, \frac{\lceil 16 c_P^{-1} \varepsilon^{-1} \rceil}{\log \lceil 16 c_P^{-1} \varepsilon^{-1} \rceil} \times \left( \lceil 16 c_P^{-1} \varepsilon^{-1} \rceil \right)^2,$$
from which the result follows immediately.
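As a purely numerical illustration, the resulting number of function evaluations can be computed directly. The constant c_P from (1) is not specified explicitly, so the value used below is only an assumed placeholder:

```python
from math import ceil

def n_points(eps, c_P=1/3):
    """Points used by the QMC rule for tolerance eps: sum of p^2 over P_m.

    Note: the rule additionally requires min(P_m) >= d, so for a given d the
    tolerance eps must be small enough that the primes in P_m exceed d.
    """
    m = ceil(16 / (c_P * eps))
    total = 0
    for p in range(ceil(m / 2) + 1, m + 1):
        if all(p % q for q in range(2, int(p ** 0.5) + 1)):
            total += p * p
    return m, total

for eps in (0.5, 0.1, 0.05):
    print(eps, n_points(eps))
```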
Proof of Theorem 2
Proof of Theorem 2. We adopt a similar approach as in the proofs of [14, Theorem 1] and [4, Theorem 1]. Consider an arbitrary linear algorithm Q_{d,n}(f) = Σ_{h=0}^{n−1} w_h f(x_h). For a set A ⊂ Z^d with cardinality |A| > n, we define a function g : [0, 1)^d → C by
$$g(x) = \sum_{k \in A} c_k \exp(2\pi i\, k \cdot x)$$
with c k ∈ C, which satisfies g(x h ) = 0 for all h = 0, . . . , n − 1. In fact, there exists a non-zero vector of (c k ) k∈A , as the condition that g(x h ) = 0 for all h = 0, . . . , n − 1 forms n homogeneous linear equations with |A| > n unknowns c k . Let us normalize these coefficients in such a way that max k∈A |c k | = c ℓ = 1 for some ℓ ∈ A.
With this ℓ and a positive constant C, we define another function g̃ : [0, 1)^d → C as follows:
$$\tilde{g}(x) = C \exp(-2\pi i\, \ell \cdot x)\, g(x) = C \sum_{k \in A} c_k \exp\left(2\pi i\, (k - \ell) \cdot x\right).$$
Then we construct a real-valued function g^⋆ defined on [0, 1)^d by taking the average of g̃ and its complex conjugate: g^⋆(x) = (g̃(x) + conj(g̃(x)))/2. Regarding the norm of g^⋆ in F^1_d, we have
$$\|g^\star\| \le \frac{\|\tilde{g}\| + \|\overline{\tilde{g}}\|}{2} = \|\tilde{g}\| = C \sum_{k \in A} |c_k| \max\left(1, \min_{j \in \mathrm{supp}(k-\ell)} \log |k_j - \ell_j|\right) \le C \sum_{k \in A} \max\left(1, \min_{j \in \mathrm{supp}(k-\ell)} \log |k_j - \ell_j|\right) \le C \max_{\ell \in A} \sum_{k \in A} \max\left(1, \min_{j \in \mathrm{supp}(k-\ell)} \log |k_j - \ell_j|\right).$$
To ensure ∥g^⋆∥ ≤ 1, we set
$$C = \left( \max_{\ell \in A} \sum_{k \in A} \max\left(1, \min_{j \in \mathrm{supp}(k-\ell)} \log |k_j - \ell_j|\right) \right)^{-1}.$$
By construction, we have g^⋆(x_h) = 0 for all h = 0, . . . , n − 1, which implies Q_{d,n}(g^⋆) = 0. On the other hand, the exact integral is given by I_d(g^⋆) = (C c_ℓ + C conj(c_ℓ))/2 = C, using c_ℓ = 1. Since g^⋆ ∈ F^1_d with ∥g^⋆∥ ≤ 1, the worst-case error of any linear algorithm Q_{d,n} is bounded below by
$$e^{\mathrm{wor}}(F^1_d, Q_{d,n}) \ge |I_d(g^\star) - Q_{d,n}(g^\star)| = \left( \max_{\ell \in A} \sum_{k \in A} \max\left(1, \min_{j \in \mathrm{supp}(k-\ell)} \log |k_j - \ell_j|\right) \right)^{-1}.$$
In what follows, let
$$A = \{0\} \cup \left\{ k \in \mathbb{Z}^d \;\middle|\; d-1 \text{ of the } k_j \text{ are } 0 \text{ and the one non-zero } k_j \text{ is from } \{1, \ldots, \lceil n/d \rceil\} \right\}.$$
It is easy to verify that |A| = 1 + d⌈n/d⌉ > n. For this choice of A, we can restrict ourselves to ℓ = (ℓ, 0, . . . , 0) for some ℓ ∈ {0, . . . , ⌈n/d⌉}. By utilizing the assumption n > 2d and the well-known inequality log x ≤ x − 1, we have
$$\sum_{k \in A} \max\left(1, \min_{j \in \mathrm{supp}(k-\ell)} \log |k_j - \ell_j|\right) \le \log \lceil n/d \rceil + d \lceil n/d \rceil \log \lceil n/d \rceil \le (\lceil n/d \rceil - 1) \cdot (1 + d \lceil n/d \rceil) \le \frac{2n^2}{d}.$$
Since the last bound is independent of ℓ, we obtain
$$e^{\mathrm{wor}}(F^1_d, Q_{d,n}) \ge \left( \max_{\ell \in A} \sum_{k \in A} \max\left(1, \min_{j \in \mathrm{supp}(k-\ell)} \log |k_j - \ell_j|\right) \right)^{-1} \ge \frac{d}{2n^2}.$$
This completes the proof.
* The work of the author is supported by JSPS KAKENHI Grant Number 23K03210. † School of Engineering, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan ([email protected])
Numerical integration of Hölder continuous, absolutely convergent Fourier, Fourier cosine, and Walsh series. J Dick, Journal of Approximation Theory. 183J. Dick. Numerical integration of Hölder continuous, absolutely convergent Fourier, Fourier cosine, and Walsh series. Journal of Approximation Theory, 183:14-30, 2014.
Digital inversive vectors can achieve polynomial tractability for the weighted star discrepancy and for multivariate integration. J Dick, D Gomez-Perez, F Pillichshammer, A Winterhof, Proceedings of the American Mathematical Society. 145J. Dick, D. Gomez-Perez, F. Pillichshammer, and A. Winterhof. Digital inversive vectors can achieve polynomial tractability for the weighted star discrepancy and for multivariate integration. Proceedings of the American Mathematical Society, 145:3297-3310, 2017.
Proof techniques in quasi-Monte Carlo theory. J Dick, A Hinrichs, F Pillichshammer, Journal of Complexity. 313J. Dick, A. Hinrichs, and F. Pillichshammer. Proof techniques in quasi-Monte Carlo theory. Journal of Complexity, 31(3):327-371, 2015.
Exponential convergence and tractability of multivariate integration for Korobov spaces. J Dick, G Larcher, F Pillichshammer, H Woźniakowski, Mathematics of computation. 80274J. Dick, G. Larcher, F. Pillichshammer, and H. Woźniakowski. Exponential convergence and tractability of multivariate integration for Korobov spaces. Mathematics of computation, 80(274):905-930, 2011.
The weighted star discrepancy of Korobov's p-sets. J Dick, F Pillichshammer, Proceedings of the. theAmerican Mathematical Society143J. Dick and F. Pillichshammer. The weighted star discrepancy of Korobov's p-sets. Proceedings of the American Mathematical Society, 143:5043-5057, 2015.
Polynomial tractability for integration in an unweighted function space with absolutely convergent Fourier series. T Goda, Proceedings of the. theAmerican Mathematical SocietyT. Goda. Polynomial tractability for integration in an unweighted function space with absolutely convergent Fourier series. Proceedings of the American Mathematical Society, 2023.
Applications of Number Theory to Numerical Analysis. L K Hua, Y Wang, Springer-VerlagBerlinL. K. Hua and Y. Wang. Applications of Number Theory to Numerical Analysis. Springer-Verlag, Berlin, 1981.
Number-Theoretic Methods in Approximate Analysis. N M Korobov, FizmatgizMoscowN. M. Korobov. Number-Theoretic Methods in Approximate Analysis. Fizmatgiz, Moscow, 1963.
D Krieg, arXiv:2304.14169Tractability of sampling recovery on unweighted function classes. arXiv preprintD. Krieg. Tractability of sampling recovery on unweighted function classes. arXiv preprint arXiv:2304.14169, 2023.
New lower bounds for the integration of periodic functions. D Krieg, J Vybiral, arXiv:2302.02639arXiv preprintD. Krieg and J. Vybiral. New lower bounds for the integration of periodic functions. arXiv preprint arXiv:2302.02639, 2023.
Tractability of Multivariate Problems, Volume I: Linear Information. E Novak, H Woźniakowski, EMS PressZürichE. Novak and H. Woźniakowski. Tractability of Multivariate Problems, Volume I: Linear Infor- mation. EMS Press, Zürich, 2008.
Tractability of Multivariate Problems. E Novak, H Woźniakowski, Standard Information for Functionals. ZürichEMS PressIIE. Novak and H. Woźniakowski. Tractability of Multivariate Problems, Volume II: Standard Information for Functionals. EMS Press, Zürich, 2010.
Approximate formulas for some functions of prime numbers. J B Rosser, L Schoenfeld, Illinois Journal of Mathematics. 6J. B. Rosser and L. Schoenfeld. Approximate formulas for some functions of prime numbers. Illinois Journal of Mathematics, 6:64-94, 1962.
An intractability result for multiple integration. I H Sloan, H Woźniakowski, Mathematics of Computation. 66I. H. Sloan and H. Woźniakowski. An intractability result for multiple integration. Mathematics of Computation, 66:1119-1124, 1997.
When are quasi-Monte Carlo algorithms efficient for high dimensional integrals. I H Sloan, H Woźniakowski, Journal of Complexity. 141I. H. Sloan and H. Woźniakowski. When are quasi-Monte Carlo algorithms efficient for high dimensional integrals? Journal of Complexity, 14(1):1-33, 1998.
Information-Based Complexity. J F Traub, G W Wasilkowski, H Woźniakowski, Academic PressNew YorkJ. F. Traub, G. W. Wasilkowski, and H. Woźniakowski. Information-Based Complexity. Academic Press, New York, 1988.
| []
|
[
"A network-of-networks model for physical networks",
"A network-of-networks model for physical networks"
]
| [
"Gábor Pete \nAlfréd Rényi Institute of Mathematics\nBudapestHungary\n\nBudapest University of Technology and Economics\nBudapestHungary\n",
"Ádám Timár \nAlfréd Rényi Institute of Mathematics\nBudapestHungary\n\nUniversity of Iceland\nReykjavíkIceland\n",
"Sigurdur Örn Stefánsson \nUniversity of Iceland\nReykjavíkIceland\n",
"Ivan Bonamassa \nDepartment of Network and Data Science\nCentral European University\nViennaAustria\n",
"Márton Pósfai \nDepartment of Network and Data Science\nCentral European University\nViennaAustria\n"
]
| [
"Alfréd Rényi Institute of Mathematics\nBudapestHungary",
"Budapest University of Technology and Economics\nBudapestHungary",
"Alfréd Rényi Institute of Mathematics\nBudapestHungary",
"University of Iceland\nReykjavíkIceland",
"University of Iceland\nReykjavíkIceland",
"Department of Network and Data Science\nCentral European University\nViennaAustria",
"Department of Network and Data Science\nCentral European University\nViennaAustria"
]
| []
| Physical networks are made of nodes and links that are physical objects embedded in a geometric space. Understanding how the mutual volume exclusion between these elements affects the structure and function of physical networks calls for a suitable generalization of network theory. Here, we introduce a network-of-networks framework where we describe the shape of each extended physical node as a network embedded in space and these networks are bound together by physical links. Relying on this representation, we model the growth of physical networks, showing for a general class of systems that volume exclusion induces heterogeneity in both node volume and degree, with the two becoming correlated. These emergent structural properties strongly affect the dynamics on physical networks: by calculating their Laplacian spectrum as a function of the coupling strength between the nodes we show that volume-degree correlations suppress the tail of the spectrum. Finally, we apply our theoretical framework to a large-scale three-dimensional map of a fruit fly brain, finding analog behavior with the networks generated by our growth model. arXiv:2306.01583v1 [cond-mat.stat-mech] | null | [
"https://export.arxiv.org/pdf/2306.01583v1.pdf"
]
| 259,063,896 | 2306.01583 | 6101c8a98cbdf4e6c02d9958e868c33f0e82c47f |
A network-of-networks model for physical networks
Gábor Pete
Alfréd Rényi Institute of Mathematics
BudapestHungary
Budapest University of Technology and Economics
BudapestHungary
Ádám Timár
Alfréd Rényi Institute of Mathematics
BudapestHungary
University of Iceland
ReykjavíkIceland
Sigurdur Örn Stefánsson
University of Iceland
ReykjavíkIceland
Ivan Bonamassa
Department of Network and Data Science
Central European University
ViennaAustria
Márton Pósfai
Department of Network and Data Science
Central European University
ViennaAustria
A network-of-networks model for physical networks
(Dated: June 5, 2023)
Physical networks are made of nodes and links that are physical objects embedded in a geometric space. Understanding how the mutual volume exclusion between these elements affects the structure and function of physical networks calls for a suitable generalization of network theory. Here, we introduce a network-of-networks framework where we describe the shape of each extended physical node as a network embedded in space and these networks are bound together by physical links. Relying on this representation, we model the growth of physical networks, showing for a general class of systems that volume exclusion induces heterogeneity in both node volume and degree, with the two becoming correlated. These emergent structural properties strongly affect the dynamics on physical networks: by calculating their Laplacian spectrum as a function of the coupling strength between the nodes we show that volume-degree correlations suppress the tail of the spectrum. Finally, we apply our theoretical framework to a large-scale three-dimensional map of a fruit fly brain, finding analog behavior with the networks generated by our growth model. arXiv:2306.01583v1 [cond-mat.stat-mech]
INTRODUCTION
The building blocks of physical networks are extended objects that do not intersect each other, resulting in non-trivial geometric layouts [1], link entanglement [2] and emergent correlations between physical and network structure [3]. Yet, these works model nodes as localized spheres connected by extended tube-like links, an assumption that does not necessarily reflect the structure of most real-world physical networks. In the connectome, for example, nodes represent neurons with non-trivial dendritic shapes and links are point-like synapses [4]. A similar picture emerges for molecular networks such as the cytoskeleton, mitochondrial networks or fiber materials, where nodes are extended molecular strands and bonds between them are localized [5][6][7], as well as in the wood-wide-web, where extended tree roots and mycelia connect to form a complex underground network [8,9]. Therefore, the sphere-tube paradigm often falls short at describing physical networks, calling for a more general framework to cope with the complex shape of nodes and links.
In this work we develop a network-of-networks representation of physical networks that is able to capture arbitrary node shapes [10,11] and allows us to characterize both structural and dynamical properties of networks. Relying on the network-of-networks framework, we introduce a model that grows physical networks from fractal segments. Analytically solving the model, we show that physicality induces heterogeneity in both the physical and the network properties of the nodes and that the two become strongly correlated. These emergent structural properties strongly affect the dynamics on physical networks: generalizing the combinatorial Laplacian to physical networks [12][13][14], we show that fast dynamical modes associated to hubs (and corresponding to the tail of the Laplacian spectra) are suppressed by the emergent correlations between node volume and degree. The usefulness of the mathematical tools we develop in this paper goes beyond the specifics of the model, and we demonstrate this by applying our framework to a recently collected data set describing more than ∼ 20,000 neurons of a fruit fly's brain [15]. In doing so, we identify structural correlations similar to correlations in our physical network growth model, and we show that these have an analog effect on the Laplacian spectrum of the connectome.
Figure 1: (a) V_i is a subgraph of the substrate S. Physical nodes cannot overlap, i.e., V_i ∩ V_j = ∅ for i ≠ j. The physical layout P (dashed area) is a network-of-networks: it is the union of physical nodes V_i together with the bonds connecting them. (b) The combinatorial network G is a coarse-grained representation of P capturing the connections between the nodes without the physical structure.
NETWORK-OF-NETWORKS REPRESENTATION
We aim to describe physical networks embedded in some substrate or medium. In its most general form, the substrate is represented by a graph S, and each physical node i is an extended object occupying a subgraph V i ⊂ S. To capture volume exclusion, we do not allow nodes to overlap, i.e., V i ∩ V j = ∅ for i ̸ = j. Two nodes i and j may form a link (i, j) if they occupy adjacent sites in S. The physical layout P of the network is a network-of-networks, i.e., it is the union of physical nodes, where each node is a network itself, together with the bonds forming the connections between the nodes (Fig. 1a). The layout P is a physical realization of the combinatorial network, G, where node i of G corresponds to the physical node V i , and nodes i and j are connected if there is a bond between V i and V j in P (Fig. 1b). Though the substrate S can represent any available space, here we focus on substrates that are d dimensional cubic lattices with linear size L and periodic boundary conditions. Note that network representations of this kind are employed in the graph drawing literature with the focus on algorithms that embed a given combinatorial network into S [16]. Here, we are interested in physical networks P growing in S, the emergent relation between P and G, and its consequences on the dynamics on the network.
NETWORK GROWTH
To study the effect of physicality on network evolution, we define a model of network growth relying on the network-of-networks representation. We start with an empty S and we place a single node V 0 occupying a subset of the sites. We add the rest of the nodes iteratively: At time step t we add a new node V t that is seeded at a random unoccupied site and grows until it hits an already existing node V s and a link (t − s) is formed. The growth of node V t can be driven by any random or deterministic process; all that we assume is that each physical node is characterized by a fractal dimension d f ∈ [1, d] [17,18]. We add N physical nodes or until all of S is occupied; in the latter case we call the physical network saturated.
Since the total volume of the network increases over time, later nodes hit the network at higher rates and the typical size of nodes decreases. Hence, we expect that nodes added early have higher degree than nodes added in the final stages of the network evolution, both because they are larger and they have more time to collect connections. This suggests that to analytically characterize the evolution of the physical network two ingredients have to be considered: (i) network growth, i.e., nodes are added sequentially to the system and (ii) externally limited node growth, i.e., the nodes grow until they hit the already existing network. We show that these two ingredients lead to the emergence of power law combinatorial networks with degree exponents γ ≤ 3.
We start the analytical treatment of the model by estimating the probability p ij that two randomly placed physical nodes V i and V j intersect. If the two boxes containing the physical nodes have side length l i ≫ l j , respectively, and the larger node V i intersects the box containing the smaller node V j , then, by dimension count, the two nodes overlap with positive probability if d f ≥ d/2. We can tile the lattice with (L/l j ) d boxes with side length l j , and the number of such boxes intersected by V i is ∼ (l i /l j ) d f . Therefore the intersection probability is
$$p_{ij} \sim \frac{l_i^{d_f}\, l_j^{d-d_f}}{L^d} \sim \frac{v_i\, v_j^{d/d_f - 1}}{L^d}, \qquad (1)$$
where v_i = |V_i| ∼ l_i^{d_f} is the volume of node i. If, however, d_f < d/2 and l_j ≤ l_i ≪ L, then the nodes avoid each other with high probability. In this case, the intersection probability will have the mean-field behavior, well approximated by the probability of selecting the sites of V_i and V_j uniformly from S, i.e., p_{ij}^{MF} ∼ v_i v_j / L^d, which is independent of d_f and agrees with Eq. (1) for d_f = d/2.
Using the same box-counting argument that led to Eq. (1), the probability that a node added at time t intersects any existing node s < t is approximately given
by $$\sum_{s<t} p_{st} = \frac{v_t^{d/d_f - 1}\, V_{t-1}}{L^d},$$ where V_{t−1} is the total volume of nodes s < t.
A key observation is that the size of node t increases until it hits the existing network, meaning that v t increases until s<t p st ≈ 1, allowing us to estimate the volume of node t as
$$v_t \approx \left( \frac{V_{t-1}}{L^d} \right)^{-\frac{d_f}{d - d_f}}. \qquad (2)$$
Equation (2) allows us to express the evolution of the expected total volume via the recursion V t+1 = v t+1 + V t with initial condition V 0 = v 0 . Using a continuous time approximation, we obtain
$$V_t \approx L^d \left( \frac{d}{d - d_f}\, \frac{t + c}{L^d} \right)^{\frac{d - d_f}{d}} \sim L^d \left( \frac{t}{L^d} \right)^{1 - d_f/d}, \qquad (3)$$
where c is a constant depending on v_0. A natural choice for the latter is that the first node spans the entire available space, i.e., v_0 ∼ L^{d_f}, in which case c is independent of L. Equation (3) predicts that N_sat, the number of nodes when the network saturates, scales as N_sat ∼ L^d. As a consequence, for saturated networks, the physical layout P is an asymptotically optimal embedding of the combinatorial network G in the sense that there is no physical representation of G that fits into smaller volume than ∼ L^d.
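As a quick sanity check of this growth law, the recursion for v_t and V_t can be iterated directly. The following Python sketch uses arbitrary example parameters (the fractal dimension value is the d = 3 LERW estimate quoted in the next section):

```python
def total_volume(num_nodes, L, d, d_f, v0=None):
    """Iterate v_t = (V_{t-1}/L^d)^(-d_f/(d-d_f)), V_t = V_{t-1} + v_t."""
    V = v0 if v0 is not None else L ** d_f   # first node spans the available space
    history = [V]
    for _ in range(num_nodes):
        v = (V / L ** d) ** (-d_f / (d - d_f))
        V += v
        history.append(V)
    return history

hist = total_volume(num_nodes=10_000, L=100, d=3, d_f=1.62)
t = len(hist) - 1
# Compare with the predicted scaling V_t ~ L^d (t / L^d)^(1 - d_f/d)
print(hist[-1], 100 ** 3 * (t / 100 ** 3) ** (1 - 1.62 / 3))
```

The two numbers agree up to a constant prefactor, as expected from the continuous-time approximation.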
In light of Eq. (3), we can now calculate the expected degree of the physical nodes in the combinatorial network G. The time derivative of V_t shows that the volume of newly added nodes decreases as v_t ∼ (t/L^d)^{−d_f/d}. Hence, following Eq. (1), the expected degree of node t after the addition of N nodes is
$$k_t(N) = 1 + \frac{v_t}{L^d} \sum_{s=t+1}^{N} v_s^{d/d_f - 1} \sim v_t \cdot \left( \frac{N}{L^d} \right)^{d_f/d}, \qquad (4)$$
where the proportionality is valid for t ≪ N. This means that the volume occupied by large nodes (i.e., nodes that were added early) in the physical layout is proportional to their degree in the combinatorial network. We finally calculate the complementary cumulative degree distribution $P(k) = \frac{1}{N} \sum_{t:\, k_t \ge k} 1$, finding that
$$P(k) \sim k^{-(\gamma - 1)} \quad\text{with exponent}\quad \gamma = 1 + \frac{d}{d_f}. \qquad (5)$$
For d f ≤ d < 2d f the degree exponent falls in the range 2 ≤ γ < 3. In the mean-field regime the degree exponent can be obtained by substituting d/d f with 2, yielding γ MF = 3. Note that the upper critical dimension of the physical network depends on the kinetic growth of the nodes. For example, growing nodes along a straight trajectory in a random direction generates nodes with d f = 1; therefore the networks fall in the mean-field regime d f ≤ d/2 even for embedding dimension d = 2.
NUMERICAL SIMULATIONS
To test our analytical predictions, we numerically generate physical networks where nodes grow according to random walk trajectories. Specifically, we generate nodes using loop-erased random walks (LERWs), i.e., a trajectory that evolves as a simple random walk in which any loop is erased as soon as it is formed [19][20][21][22]. The LERW represents a tractable model of non-self-intersecting random trajectories. Its critical properties are well studied both in the mathematics and physics literature [23][24][25][26]; for example, their fractal dimension in d = 2 is d_f = 5/4 [22], in d = 3 it is d_f ≃ 1.6236(4) [27, 28], while its upper critical dimension is d_u = 4 where d_f = 2 with a logarithmic correction [29]. See Methods for further details.
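A minimal generator for such trajectories, included here only for illustration (in our model the walk would stop upon hitting the already existing network rather than after a fixed number of steps), can be written as:

```python
import random

def lerw(L, d, steps, start=None):
    """Loop-erased random walk on a d-dimensional periodic lattice of side L."""
    site = start if start is not None else tuple(random.randrange(L) for _ in range(d))
    path, index = [site], {site: 0}
    for _ in range(steps):
        axis, sign = random.randrange(d), random.choice((-1, 1))
        nxt = list(path[-1])
        nxt[axis] = (nxt[axis] + sign) % L
        nxt = tuple(nxt)
        if nxt in index:                 # loop closed: erase it immediately
            cut = index[nxt]
            for erased in path[cut + 1:]:
                del index[erased]
            path = path[:cut + 1]
        else:
            index[nxt] = len(path)
            path.append(nxt)
    return path

trajectory = lerw(L=64, d=3, steps=500)
print(len(trajectory), "sites occupied")
```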
Knowing the fractal dimensions of LERWs allows us to directly verify the predictions of Eqs. (3)-(5):
Volume evolution. Equation (3) predicts that the total volume of the physical network evolves as V_t ∼ t^{1−d_f/d}. Figure 2a shows the excellent agreement between the theoretical predictions and numerical simulations. Note that, as expected, in the mean-field regime d ≥ 4 the network volume follows the classic diffusion growth V_t ∼ t^{1/2}. Figure 2b further corroborates the predicted scaling of the number of nodes at saturation, i.e., N_sat ∼ L^d.
Degree-volume correlations. A second prediction is the emergence of volume-degree correlations, capturing the interplay between the physical layout P and the combinatorial network G. In particular, Eq. (4) predicts a linear proportionality between the node volume v i and degree k i , and we again find excellent agreement with simulations for all the tested dimensions (Fig. 2c).
Power law emergence. As a final test, we verify the emergence of power law scaling in the degree distribution of the combinatorial networks G. As anticipated in Eq. (5), the degree exponent depends on both the dimensionality of the embedding substrate, d, and the fractal dimension of the LERW, d f . Figure 2d shows that numerical simulations confirm the predicted degree exponent
γ = 1 + d/d f for dimensions d < 4, while the mean-field exponent γ MF = 3 is found for d ≥ 4.
In traditional models of combinatorial networks, heterogeneity typically arises from preferential attachment or some other optimization process. Our model is based on random growth without any explicit preference to create highly connected nodes; therefore, the uniform attachment tree may be considered as the non-physical counterpart of our model. Uniform attachment yields exponential degree distribution, hence the power law distribution observed here is a direct consequence of volume exclusion, which, together with the dynamic network growth, induces effective preferential attachment.
PHYSICAL NETWORK LAPLACIAN
The layout P is a physical realization of the combinatorial network G. Traditional studies of dynamics on physical networks ignore the layout P and focus only on the role of G, thus prompting the question: does modeling dynamics on G accurately capture dynamics on physical networks? To explore this, we study the spectral properties of P and show that physical nodes emerge as functional units through timescale separation, yet even in this limit the structure of P continues to affect the dynamics. We focus on the Laplacian spectrum [12] since it influences the behavior of several dynamical processes on networks [30] including diffusion [31,32], synchronization [33] and it underlies the definition of several information-theoretic tools to analyze the multiscale functioning of networks [10,14,[34][35][36][37].
We study the problem by invoking, once again, the network-of-networks representation. In our setup, we assign a weight to each connection in P such that links within physical nodes have weight 1 and links connecting two physical nodes have weight w, capturing that in real physical networks bonds between nodes are often qualitatively different than those within nodes. The weighted Laplacian matrix of P occupying V sites is then $Q_P = D_P - A_P$, where $A_P$ is the V × V weighted adjacency matrix and $D_P$ is a diagonal matrix such that $[D_P]_{ss} = \sum_u [A_P]_{su}$ is the sum of the weights of the links adjacent to site s in P. If we now set w = 0, the network-of-networks falls apart and each physical node becomes a separate connected component, resulting in a block-diagonal Laplacian $Q_P(0) = \mathrm{diag}(Q_{V_1}, Q_{V_2}, \ldots, Q_{V_N})$, where $Q_{V_i}$ is the Laplacian of the physical node $V_i$. The Laplacian $Q_P(0)$ has N zero eigenvalues corresponding to the N blocks (i.e., the physical nodes), hence we can assign an eigenvector $u_i(w = 0)$ to the i-th node such that $[u_i(0)]_s = 1/\sqrt{v_i}$ if site s is within node i, otherwise $[u_i(0)]_s = 0$, where $v_i = |V_i|$ is the volume of node i. Since linear combinations of these vectors are also eigenvectors, we can write the zero eigenvectors of $Q_P$ as $u(0) = M\tilde{u}$, where M is the N × V membership matrix such that $[M]_{si} = 1/\sqrt{v_i}$ if site s is part of node i, otherwise $[M]_{si} = 0$, and $\tilde{u}$ is any normalized N-dimensional vector.

We can gain insights about the spectral properties of $Q_P$ by working in the weak coupling regime w ≪ 1 and relying on perturbation theory. Following a treatment similar to the one adopted to study diffusion in multilayer networks [38][39][40], we consider w a small perturbation and write $Q_P(w) = Q_P(0) + wQ'_P$, where $Q'_P$ is the Laplacian matrix of the subnetwork of P formed by the bonds between physical nodes. The characteristic equation, up to first order in w, becomes then
$$\left[Q_P(0) + wQ'_P\right]\left[u(0) + wu'\right] \approx \left[\lambda(0) + w\lambda'\right]\left[u(0) + wu'\right]. \qquad (6)$$
Perturbations around λ(0) = 0 lead to N eigenvalues that are O(w), while the rest of the eigenvalues are constant in leading order (Fig. 3a). This means that on the 1/w timescale, diffusion-like dynamics on the physical network are captured by the N slow eigenmodes. We obtain these from Eq. (6) (see Methods), yielding
$$V^{-1/2} Q_G V^{-1/2}\,\tilde{u} = \lambda'\,\tilde{u}, \qquad (7)$$
where Q G is the N × N Laplacian matrix of the combinatorial network G and V is an N × N diagonal matrix such that its diagonal elements are [V] ii = v i . Equation (7) is a key relation for understanding the dynamics on physical networks since it allows to characterize the dynamics on P on the timescale 1/w in a coarse-grained way: after integrating out the fast modes corresponding to eigenvalues λ(w) ≫ w, the state of each physical node V i is given by a single variable, while the coupling between the nodes is provided by the combinatorial network G. However, the combinatorial Laplacian Q G is not sufficient to capture the dynamics, and we must normalize Q G by the volume of the nodes, as shown in Eq. (7). This means that physical networks with the same combinatorial network but different layout can have drastically different dynamical properties. For example, if nodes have approximately the same size, i.e., v i ≈ V /N , then the physical layout only affects the overall timescale, otherwise the Laplacian spectrum is determined by Q G . If, however, node sizes are heterogeneously distributed, normalizing by volume will also have a heterogeneous effect on the eigenvalues.
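To make Eq. (7) concrete, the toy sketch below (illustrative only; the star graph and the degree-proportional hub volume are placeholder choices mimicking the correlation of Fig. 2c) compares the spectrum of $Q_G$ with that of the volume-normalized operator:

```python
# Compare the spectrum of the combinatorial Laplacian Q_G with the
# volume-normalized operator V^{-1/2} Q_G V^{-1/2} of Eq. (7).
# Toy example: a star graph, whose hub mimics a high-degree, high-volume node.
import numpy as np

N = 50
A = np.zeros((N, N))
A[0, 1:] = A[1:, 0] = 1.0                  # star: node 0 is the hub
Q_G = np.diag(A.sum(axis=1)) - A           # combinatorial Laplacian

volumes = np.ones(N)
volumes[0] = A[0].sum()                    # hub volume proportional to its degree
V_inv_sqrt = np.diag(1.0 / np.sqrt(volumes))

lam_G = np.sort(np.linalg.eigvalsh(Q_G))
lam_phys = np.sort(np.linalg.eigvalsh(V_inv_sqrt @ Q_G @ V_inv_sqrt))

print("largest eigenvalue, combinatorial:", lam_G[-1])         # = N for the star
print("largest eigenvalue, volume-normalized:", lam_phys[-1])  # strongly reduced
```

For this toy graph, normalizing by the hub's volume lowers the largest eigenvalue from N to 2, the same qualitative effect reported for the LERW networks in Fig. 3.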
Application to the physical network growth model. We showed above that physical networks generated by our network growth model are characterized by heterogeneous node-volume distribution and a proportionality between the degree and the volume of nodes (Fig. 2). To probe the effect of this emergent correlation, we shuffle the volume of the nodes of a LERW physical network to remove the correlation between network and physical structure. We then compare the spectrum of the volume-normalized Laplacian $V^{-1/2} Q_G V^{-1/2}$ to its randomized version $V_{\mathrm{rand}}^{-1/2} Q_G V_{\mathrm{rand}}^{-1/2}$ and to the Laplacian spectrum of the combinatorial network $Q_G$. Figure 3b shows that the spectrum of $Q_G$ has a heavy tail characterized by the same γ exponent of Eq. (5), as expected for combinatorial networks with power-law degree distributions [12]. Adding heterogeneous but uncorrelated node sizes does not influence the tail, while taking into account the degree-volume correlation of nodes removes the heavy tail and leads to a rapidly decaying spectrum. In power law networks, the eigenvector corresponding to the largest eigenvalue $\lambda_N$ of $Q_G$ is typically concentrated on the node with the largest degree [41,42]. In our model, the largest degree node also has the largest volume; therefore normalizing by node volume $V^{-1/2} Q_G V^{-1/2}$ significantly lowers $\lambda_N$. Since node sizes are heterogeneously distributed, with high probability we associate volume ∼ 1 to the highest degree node after randomization. Hence, the eigenvalue $\lambda_N$ of $Q_G$ is largely unaffected by the randomized normalization (Fig. 3c). At the other end of the spectrum, controlling the long-time mixing of the dynamics, the eigenvector associated to the algebraic connectivity $\lambda_2$ typically spans the entire network. Figure 3d shows that taking node volumes into account slows the dynamics down; however, degree-volume correlations do not significantly affect $\lambda_2$.
Note that the number of connections a physical node can establish is bound by its volume up to a constant factor; therefore the total volume V of P is bound by the number of links in G from below. The saturated networks generated by our model are optimal in the sense that the network's volume reaches this lower bound asymptotically in the large L d limit. More generally, in any optimal physical realization P that reaches this lower bound, the volume of the nodes must be proportional to their degree. Therefore the Laplacian spectrum of any optimally embedded physical network has a similar volume-degree correlation as our model, hence its Laplacian spectrum is similarly affected by physicality as our model.
DISCUSSION
Physical networks, such as neural networks or three-dimensional integrated circuits, are complex networks that have a complex three-dimensional layout. Our model vividly demonstrates that traditional methods of network science focusing on combinatorial networks cannot fully describe such systems and that physical structure must be accounted for. We identified node degree and volume heterogeneity and the correlations between them as important quantities describing physical networks. These quantities are (i) shaped by volume exclusion during network evolution and (ii) have a strong effect on their functioning as captured by the physical Laplacian. The usefulness of the mathematical tools that we introduced, such as the network-of-networks representation or the physical Laplacian matrix, goes beyond the specifics of the model and allows us to explore the effect of physicality on real networks. Take, for example, a recently published data set describing the three-dimensional layout of more than 20,000 neurons of a fruit fly's brain and the location of more than 13 million synapses connecting them (Fig. 4a) [15]. Although we focused on model networks that are trees, the fruit fly brain is in many ways qualitatively similar to the model networks. The tail of the degree distribution can be approximated by a power law [43,44] and we find a strong positive correlation between the degree and the volume of the nodes (Fig. 4b,c). These structural properties affect the dynamics on the brain network similarly to what we observed for model networks: the Laplacian spectrum of the combinatorial network inherits the heterogeneity of the degree distribution. If, however, we take into account physicality, i.e., we normalize the Laplacian matrix by the node volumes, the power law tail of the spectrum is suppressed due to the correlation between node degree and volume. There are differences, however, that the model is unable to capture; for example, the degree exponent of the brain network is γ ff ≈ 3.6 and the relationship between node volume and degree is sub-linear, while the degree exponent of the model is always γ ≤ 3 and the relation between degree and size is linear k ∼ v. Future work may aim to understand the origin of these features, for example, by allowing nodes to branch and create multiple connections or introducing additional interactions between nodes.
MATERIALS AND METHODS.
Loop-erased random walks. In our network growth model, we can generate physical nodes with any stochastic or deterministic process that produces a growing fractal embedded in Z d . Standard self-avoiding walks are traditionally used to model polymers obeying volume exclusion, and therefore represent a natural choice to model node growth [17]. However, the naïve kinetic version of the self-avoiding walk traps itself in two and three dimensions at finite length [21], making it a poor candidate to construct large physical networks. Instead, we focus on loop-erased random walks (LERW): a LERW evolves as a simple random walk, except when it intersects itself, we delete the loop that it created and continue the walk [20]. This guarantees that the final physical node does not intersect itself and that the walk never gets trapped. Alternatively, the LERW can be defined as special case of Laplacian-random walks, where transition probabilities are defined by a harmonic function [45,46]. This alternative construction does not require deleting loops, hence is more realistic as a growth model. The LERW has attractive mathematical properties making it amenable to analytical treatment. For example, Wilson's algorithm uses iterative LERWs to construct a uniform spanning tree (UST) of any graph [24]. In fact, the physical network our algorithm constructs is a UST of the S substrate together with a partition identifying the nodes. Future work may exploit this connection between USTs and LERW physical networks, together with known results in dimensions d = 2 and d > 4 dimensions [47,48], to rigorously prove some of the results presented here.
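The loop-erasure rule described above is simple to implement. The following minimal sketch is our own illustration (with periodic boundaries and placeholder parameters), not the code behind the reported simulations; it grows a single LERW node on Z^d until it hits a set of already-occupied sites or reaches a length cap:

```python
# Minimal loop-erased random walk (LERW) on Z^d.  The walk proceeds as a simple
# random walk; whenever it revisits one of its own sites, the loop created since
# the first visit is erased.  Growth stops when the path hits `occupied`
# (the existing network) or reaches `max_len`.
import random

def lerw(start, occupied, d=2, L=100, max_len=10_000):
    path = [start]
    index = {start: 0}                      # site -> position along the current path
    while len(path) < max_len:
        x = path[-1]
        axis, sign = divmod(random.randrange(2 * d), 2)
        nxt = tuple((x[i] + (1 if sign else -1) * (i == axis)) % L for i in range(d))
        if nxt in occupied:                 # hit the existing network: stop growing
            return path                     # the contact at `nxt` defines the new link
        if nxt in index:                    # loop: erase everything after the first visit
            for site in path[index[nxt] + 1:]:
                del index[site]
            del path[index[nxt] + 1:]
        else:
            index[nxt] = len(path)
            path.append(nxt)
    return path

# Example: grow one node from the origin in an otherwise empty box.
node = lerw(start=(0, 0), occupied=set(), d=2, L=100, max_len=1000)
print(len(node), "sites")
```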
Perturbation of the physical Laplacian. To obtain the slow eigenmodes, we match the first order terms of Eq. (6) and substitute $u(0) = M\tilde{u}$, so that

$$Q_P(0)u' + Q'_P M\tilde{u} = \lambda' M\tilde{u}. \qquad (8)$$
Multiplying from the left by the transpose of the membership matrix M we get
$$M^T Q_P(0)u' + M^T Q'_P M\tilde{u} = \lambda' M^T M\tilde{u}. \qquad (9)$$
The ith row of $M^T$ is the trivial eigenvector $u_i(w = 0)$ corresponding to physical node i; therefore $M^T Q_P(0)$ is all zeros and $M^T M$ is the N × N identity matrix, leading to Eq. (7) in the text.

The fruit fly connectome. This data set describes a portion of the central brain of the fruit fly, Drosophila melanogaster [15]. The physical layout of the connectome is provided by the detailed three-dimensional shape of each neuron and the location of the synapses between them. The corresponding combinatorial network contains 21,662 nodes representing neurons and 13,603,750 links representing synapses. Synaptic partners are connected through approximately 5 synapses on average, and the maximum number of synapses between two neurons is 6039. In our calculations, we treat the combinatorial network as a weighted and undirected network. Note that we only require the combinatorial network and the volume of each node for our calculations; therefore the detailed physical layout of the connectome is in fact not needed.
We find that the weighted degree distribution has a heavy tail, which can be approximated by a power law with γ ff ≈ 3.6 for degrees ≥ 448; the power law fit, however, cannot be distinguished from a lognormal fit on the same range [43,44]. To calculate the spectrum of the physical Laplacian, we normalize the node volume by its minimum value.
FIG. 1. Network-of-networks representation. (a) Each physical node
FIG. 2. Evolution of LERW physical networks. (a) The temporal evolution of the total volume $V_t$ of physical networks; dashed lines represent the theoretical prediction Eq. (3). Networks built from LERWs in dimensions d ≥ 4 fall into the mean-field regime. (b) The number of physical nodes in the saturated networks is proportional to the volume of the substrate |S| irrespective of $d_f$ and d. (c) Node degree is proportional to node volume independently from $d_f$ and d. (d) The complementary cumulative degree distribution function (CCDF) of the physical networks. Dashed lines indicate the predicted degree exponent γ = 1 + d/d_f ≤ 3, where γ = 3 corresponds to the mean-field behavior. Plots (a), (c) and (d) represent measurements of single networks with initial condition $v_0 = L^{d_f}$ and $|S| = 10^6$. In (b) markers are an average of 10 independent networks; error bars represent the standard error of the mean. Lines corresponding to different slopes are shifted to increase readability.
FIG. 3. Laplacian of LERW physical networks. (a) For decreasing weight w the eigenvalues of $Q_P$ separate into two groups: eigenvalues corresponding to the zero eigenmodes of $Q_P(w = 0)$ decay as ∼ w (blue), while the rest converge to a constant value (teal). (b-d) Comparing the spectrum of the volume-normalized Laplacian $V^{-1/2} Q_G V^{-1/2}$, the randomized Laplacian $V_{\mathrm{rand}}^{-1/2} Q_G V_{\mathrm{rand}}^{-1/2}$, and the Laplacian of the combinatorial network $Q_G$. (b,c) The heterogeneous node volume distribution and the correlation between node degree and volume significantly reduce the largest eigenvalues of the spectra. (d) Heterogeneous node volumes alone explain the reduction of the algebraic connectivity $\lambda_2$. Eigenvalues are calculated for d = 2 and L = 10 in (a) and L = 100 in (b). In (c) and (d) markers are an average of 10 independent networks; error bars represent the standard error of the mean.
FIG. 4. Fruit fly brain network. (a) The neurons have complex three-dimensional shapes. Two intertwined neurons (teal and yellow) are connected by synapses (red circles). (b) The tail of the degree distribution can be approximated with the power law with γ ff ≈ 3.6. (c) Similar to the network growth model, the volume of the nodes v is strongly correlated with their degree k; however, the relationship between v and k is sub-linear. Circles indicate binned averages, shading represents a kernel density estimate of the joint v-k distribution. (d) The effect of the node volume-degree correlation on the Laplacian spectrum in the brain network is analogous to the effect of correlations in the network growth model.
[1] N. Dehmamy, S. Milanlouei, and A.-L. Barabási. A structural transition in physical networks. Nature, 563(7733):676-680, 2018.
[2] Y. Liu, N. Dehmamy, and A.-L. Barabási. Isotopy and energy of physical networks. Nature Physics, 17(2):216-222, 2021.
[3] M. Pósfai, B. Szegedy, I. Bačić, L. Blagojević, M. Abért, J. Kertész, L. Lovász, and A.-L. Barabási. Understanding the impact of physicality on network structure. arXiv preprint arXiv:2211.13265, 2022.
[4] E. Bullmore and O. Sporns. Complex brain networks: graph theoretical analysis of structural and functional systems. Nature Reviews Neuroscience, 10(3):186-198, 2009.
[5] M. P. Viana, A. I. Brown, I. A. Mueller, C. Goul, E. F. Koslover, and S. M. Rafelski. Mitochondrial fission and fusion dynamics generate efficient, robust, and evenly distributed network topologies in budding yeast cells. Cell Systems, 10(3):287-297, 2020.
[6] D. A. Fletcher and R. D. Mullins. Cell mechanics and the cytoskeleton. Nature, 463(7280):485-492, 2010.
[7] C. R. Picu. Network Materials: Structure and Properties. 2022.
[8] S. W. Simard, D. A. Perry, M. D. Jones, D. D. Myrold, D. M. Durall, and R. Molina. Net transfer of carbon between ectomycorrhizal tree species in the field. Nature, 388(6642):579-582, 1997.
[9] B. S. Steidinger, T. W. Crowther, J. Liang, M. E. Van Nuland, G. D. A. Werner, P. B. Reich, G.-J. Nabuurs, S. de Miguel, M. Zhou, N. Picard, et al. Climatic controls of decomposition drive the global biogeography of forest-tree symbioses. Nature, 569(7756):404-408, 2019.
[10] M. De Domenico, A. Solé-Ribalta, E. Cozzo, M. Kivelä, Y. Moreno, M. A. Porter, S. Gómez, and A. Arenas. Mathematical formulation of multilayer networks. Physical Review X, 3(4):041022, 2013.
[11] G. Bianconi. Multilayer Networks: Structure and Function. Oxford University Press, 2018.
[12] P. Van Mieghem. Graph Spectra for Complex Networks. Cambridge University Press, 2010.
[13] C. Castellano and R. Pastor-Satorras. Relating topological determinants of complex networks to their spectral properties: Structural and dynamical effects. Physical Review X, 7(4):041024, 2017.
[14] P. Villegas, T. Gili, G. Caldarelli, and A. Gabrielli. Laplacian renormalization group for heterogeneous networks. Nature Physics, pages 1-6, 2023.
[15] L. K. Scheffer et al. A connectome and analysis of the adult Drosophila central brain. eLife, 9:e57443, 2020.
[16] R. Tamassia. Handbook of Graph Drawing and Visualization. CRC Press, 2013.
[17] T. Vicsek. Fractal Growth Phenomena. World Scientific, 1992.
[18] A. Bunde and S. Havlin. Fractals and Disordered Systems. Springer Science & Business Media, 2012.
[19] P.-G. de Gennes. Exponents for the excluded volume problem as derived by the Wilson method. Phys. Lett. A, 38(5):339-340, 1972.
[20] G. F. Lawler. A self-avoiding random walk. 1980.
[21] L. Pietronero. Survival probability for kinetic self-avoiding walks. Phys. Rev. Lett., 55(19):2025, 1985.
[22] O. Schramm. Scaling limits of loop-erased random walks and uniform spanning trees. In Selected Works of Oded Schramm, pages 791-858. Springer, 2011.
[23] L. Niemeyer, L. Pietronero, and H. J. Wiesmann. Fractal dimension of dielectric breakdown. Phys. Rev. Lett., 52(12):1033, 1984.
[24] D. B. Wilson. Generating random spanning trees more quickly than the cover time. In Proceedings of the 28th ACM Symposium on Theory of Computing, pages 296-303, 1996.
[25] G. F. Lawler, O. Schramm, and W. Werner. Conformal invariance of planar loop-erased random walks and uniform spanning trees. In Selected Works of Oded Schramm, pages 931-987. Springer, 2011.
[26] K. Wiese and A. A. Fedorenko. Field theories for loop-erased random walks. Nuclear Physics B, 946:114696, 2019.
[27] H. Agrawal and D. Dhar. Distribution of sizes of erased loops of loop-erased random walks in two and three dimensions. Phys. Rev. E, 63(5):056115, 2001.
[28] P. Grassberger. Scaling of loop-erased walks in 2 to 4 dimensions. Journal of Statistical Physics, 136:399-404, 2009.
[29] D. B. Wilson. Dimension of the loop-erased random walk in three dimensions. Phys. Rev. E, 82(6):062102, 2010.
[30] A. Barrat, M. Barthelemy, and A. Vespignani. Dynamical Processes on Complex Networks. Cambridge University Press, 2008.
[31] N. Masuda, M. A. Porter, and R. Lambiotte. Random walks and diffusion on networks. Phys. Rep., 716:1-58, 2017.
[32] M. De Domenico and J. Biamonte. Spectral entropies as information-theoretic tools for complex network comparison. Phys. Rev. X, 6(4):041062, 2016.
[33] A. Arenas, A. Díaz-Guilera, J. Kurths, Y. Moreno, and C. Zhou. Synchronization in complex networks. Phys. Rep., 469(3):93-153, 2008.
[34] M. Boguna, I. Bonamassa, M. De Domenico, S. Havlin, D. Krioukov, and M. Á. Serrano. Network geometry. Nature Reviews Physics, 3(2):114-135, 2021.
[35] A. Ghavasieh, M. Stella, J. Biamonte, and M. De Domenico. Unraveling the effects of multiscale network entanglement on empirical systems. Communications Physics, 4(1):129, 2021.
[36] P. Villegas, A. Gabrielli, F. Santucci, G. Caldarelli, and T. Gili. Laplacian paths in complex networks: Information core emerges from entropic transitions. Phys. Rev. Res., 4(3):033196, 2022.
[37] A. Ghavasieh and M. De Domenico. Generalized network density matrices for analysis of multiscale functional diversity. Physical Review E, 107(4):044304, 2023.
[38] S. Gomez, A. Diaz-Guilera, J. Gomez-Gardenes, C. J. Perez-Vicente, Y. Moreno, and A. Arenas. Diffusion dynamics on multiplex networks. Phys. Rev. Lett., 110(2):028701, 2013.
[39] A. Sole-Ribalta, M. De Domenico, N. E. Kouvaris, A. Diaz-Guilera, S. Gomez, and A. Arenas. Spectral properties of the Laplacian of multiplex networks. Physical Review E, 88(3):032807, 2013.
[40] F. Radicchi and A. Arenas. Abrupt transition in the structural formation of interconnected networks. Nature Physics, 9(11):717-720, 2013.
[41] R. Pastor-Satorras and C. Castellano. Distinct types of eigenvector localization in networks. Sci. Rep., 6(1):18847, 2016.
[42] S. Hata and H. Nakao. Localization of Laplacian eigenvectors on random networks. Sci. Rep., 7(1):1-11, 2017.
[43] A. Clauset, C. R. Shalizi, and M. E. J. Newman. Power-law distributions in empirical data. SIAM Review, 51(4):661-703, 2009.
[44] J. Alstott, E. Bullmore, and D. Plenz. powerlaw: a Python package for analysis of heavy-tailed distributions. PLoS ONE, 9(1):e85777, 2014.
[45] J. W. Lyklema, C. Evertsz, and L. Pietronero. The Laplacian random walk. EPL (Europhysics Letters), 2(2):77, 1986.
[46] G. F. Lawler. Loop-erased self-avoiding random walk and the Laplacian random walk. Journal of Physics A: Mathematical and General, 20(13):4565, 1987.
[47] O. Schramm. Scaling limits of loop-erased random walks and uniform spanning trees. Israel Journal of Mathematics, 118(1):221-288, 2000.
[48] S. Bhupatiraju, J. Hanson, and A. A. Járai. Inequalities for critical exponents in d-dimensional sandpiles. 2017.
Complexity of Motion Planning of Arbitrarily Many Robots: Gadgets, Petri Nets, and Counter Machines
Joshua Ani
Michael Coulombe
Erik D Demaine
Yevhenii Diomidov
Timothy Gomez
Dylan Hendrickson
Jayson Lynch
We extend the motion-planning-through-gadgets framework to several new scenarios involving various numbers of robots/agents, and analyze the complexity of the resulting motion-planning problems. While past work considers just one robot or one robot per player, most of our models allow for one or more locations to spawn new robots in each time step, leading to arbitrarily many robots. In the 0-player context, where all motion is deterministically forced, we prove that deciding whether any robot ever reaches a specified location is undecidable, by representing a counter machine. In the 1-player context, where the player can choose how to move the robots, we prove equivalence to Petri nets, EXPSPACE-completeness for reaching a specified location, PSPACE-completeness for reconfiguration, and ACKERMANN-completeness for reconfiguration when robots can be destroyed in addition to spawned. Finally, we consider a variation on the standard 2-player context where, instead of one robot per player, we have one robot shared by the players, along with a ko rule to prevent immediately undoing the previous move. We prove this impartial 2-player game EXPTIME-complete.
Introduction
Intuitively, motion planning is harder with more agents/robots. This paper formalizes this intuition by studying the effects of varying the number of robots in a recent combinatorial model for combinatorial motion planning and the resulting computational complexity.
Specifically, the motion-planning-through-gadgets framework was introduced in 2018 [DGLR18] and has had significant study since [DHL20, ABD + 20, ADHL22, ADD + 22, DHHL22, ACD + 22, Lyn20, Hen21]. In the original one-player setting, the framework considers a single agent/robot traversing a dynamic network of "gadgets", where each gadget has finite state and a finite set of traversals that the robot can make depending on the state, and each traversal potentially changes the state (and thus which future traversals are possible). The goal is for the robot to traverse from one specified location to another (reachability), or for the system of gadgets to reach a desired state (reconfiguration) [ADD + 22]. Existing results characterize in many settings which gadgets (in many cases, one extremely simple gadget) result in NP-complete or PSPACE-complete motion-planning problems, and which gadgets are simple enough to admit polynomial-time motion planning. This framework has already proved useful for analyzing the computational complexity of motion-planning problems involving modular robots [ADG + 21], swarm robots [BMLC + 19, CCG + 20], and chemical reaction networks [AFG + 22]. These applications all involve naturally multi-agent systems, so it is natural to consider how the complexity of the gadgets framework changes with more than one robot.
1-player with arbitrarily many robots.
In Section 4, we consider a generalization of this 1-player gadget model to an arbitrary number of robots, where the player can move any one robot at a time. By itself, this extension does not lead to additional computational complexity: such motion planning remains in PSPACE, or in NP if each gadget can be traversed a limited number of times. To see the true effect of an arbitrary number of robots, we add one or two additional features: a spawner gadget that can create new robots, and optionally a destroyer gadget that can remove robots. For reachability, only the spawning ability matters -it is equivalent to having one "source" location with infinitely many robots -and we show that the complexity of motion planning grows to EXPSPACE-complete with a simple single gadget called the symmetric self-closing door (previously shown PSPACE-complete without spawners [ABD + 20]). For reconfiguration, we show that motion planning with a spawner and symmetric self-closing door is just PSPACE-complete (just like without a spawner), but when we add a destroyer, the complexity jumps to ACKERMANN-complete (in particular, the running time is not elementary). These results follow from a general equivalence to Petri nets -a much older and well-studied model of dynamic systems -whose complexity has very recently been characterized [Ler22, CO22].

0-player with arbitrarily many robots.

In Section 3, we consider the same concepts in a 0-player setting, where every robot has a forced traversal during its turn, and spawners and robots take turns in a round-robin schedule. 0-player motion planning in the gadget framework with one robot was considered previously [ADHL22, DHHL22], with the complexity naturally maxing out at PSPACE-completeness. With spawners and a handful of simple gadgets, we prove that the computational complexity of motion planning increases all the way to RE-completeness. In particular, the reachability problem becomes undecidable. This is a surprising contrast to the 1-player setting described above, where the problem is decidable.

Impartial 2-player with a shared robot.

In Section 5, we consider changing the number of robots in the downward direction. Past study of 2-player motion planning in the gadget framework [DHL20] considers one robot per player, with each player controlling their own robot. What happens if there is instead only one robot, shared by the two players? This variant results in an impartial game where the possible moves in a given state are the same no matter which player moves next. To prevent one player from always undoing the other player's moves, we introduce a ko rule, which makes it illegal to perform two consecutive transitions in the same gadget. In this model, we show that 2-player motion planning is EXPTIME-complete for a broad family of gadgets called "reversible deterministic interacting k-tunnel gadgets", matching a previous result for 2-player motion planning with one robot per player [DHL20]. In other words, reducing the number of robots in this way does not affect the complexity of the problem (at least for the gadgets understood so far).
Standard Gadget Model
We now define the gadget model of motion planning, introduced in [DGLR18].
In general, a gadget consists of a finite number of locations (entrances/exits) and a finite number of states. Each state S of the gadget defines a labeled directed graph on the locations, where a directed edge (a, b) with label S ′ means that a robot can enter the gadget at location a and exit at location b, changing the state of the gadget from S to S ′ . Equivalently, a gadget is specified by its transition graph, a directed graph whose vertices are state/location pairs, where a directed edge from (S, a) to (S ′ , b) represents that the robot can traverse the gadget from a to b if it is in state S, and that such traversal will change the gadget's state to S ′ . Gadgets are local in the sense that traversing a gadget does not change the state of any other gadgets.
A system of gadgets consists of gadgets, their initial states, and a connection graph on the gadgets' locations. If two locations a and b of two gadgets (possibly the same gadget) are connected by a path in the connection graph, then a robot can traverse freely between a and b (outside the gadgets). (Equivalently, we can think of locations a and b as being identified, effectively contracting connected components of the connection graph.) These are all the ways that the robot can move: exterior to gadgets using the connection graph, and traversing gadgets according to their current states.
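As a concrete illustration of these definitions, a gadget can be stored directly as its transition graph and a system of gadgets as the current state of each copy plus the contracted connection graph. The sketch below is a hypothetical encoding of ours (the "door" gadget and all names are placeholders), restricted to deterministic gadgets:

```python
# A gadget is stored via its transition graph: (state, entry) -> (new_state, exit).
# A system of gadgets keeps the current state of every copy plus a map sending each
# (copy, location) pair to its connected component of the connection graph.
class System:
    def __init__(self, transition_graphs, init_states, component):
        self.graphs = transition_graphs   # one dict per gadget copy
        self.states = list(init_states)   # current state of each copy
        self.component = component        # (copy index, location) -> component id

    def moves(self, robot_component):
        """All single-gadget traversals available to a robot in `robot_component`."""
        out = []
        for i, graph in enumerate(self.graphs):
            for (state, entry), (new_state, exit_) in graph.items():
                if state == self.states[i] and self.component[(i, entry)] == robot_component:
                    out.append((i, entry, exit_, new_state))
        return out

    def apply(self, move):
        i, _entry, exit_, new_state = move
        self.states[i] = new_state
        return self.component[(i, exit_)]  # component the robot ends up in

# Placeholder example: a single "door" that can be crossed once from A to B.
door = {("open", "A"): ("closed", "B")}
system = System([door], ["open"], {(0, "A"): 0, (0, "B"): 1})
print(system.moves(robot_component=0))     # [(0, 'A', 'B', 'closed')]
```

Reachability for a single robot then amounts to a search over pairs (tuple of gadget states, robot component) using `moves` and `apply`.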
Previous work has focused on the robot reachability problem [DGLR18, DHL20]:
Definition 2.1. For a gadget G, robot reachability for G is the following decision problem. Given a system of gadgets consisting of copies of G, the starting location(s), and a win location, is there a path a robot can take from the starting location to the win location?
Gadget reconfiguration, in which the gadgets have target states to reach, was considered in [ADD + 22] and [Hen21]. We additionally investigate a problem where we have target states and multiple locations which require specific numbers of robots.
Definition 2.2. For a gadget G, the multi-robot targeted reconfiguration problem for G is the following decision problem. Given a system of gadgets consisting of copies of G, the starting location(s), and a target configuration of gadgets and robots, is there a sequence of moves the robots can take to reach the target configuration?
[DHL20] also defines 2-player and team analogues of this problem. In this case, each player has their own starting and win locations, and the players take turns making a single transition across a gadget (and any movement in the connection graph). The winner is the player who reaches their win location first. The decision problem is whether a particular player or team can force a win. When there are multiple robots, we are asking whether any of them can reach the win location.
We will consider several specific classes of gadgets.
Definition 2.3. A k-tunnel gadget has 2k locations, which are partitioned into k pairs called tunnels, such that every transition is between two locations in the same tunnel.
Most of the gadgets we consider are k-tunnel.
Definition 2.4. The state-transition graph of a gadget is the directed graph which has a vertex for each state, and an edge S → S ′ for each transition from state S to S ′ . A DAG gadget is a gadget whose state-transition graph is acyclic.
DAG gadgets naturally lead to bounded problems, since they can be traversed a bounded number of times. The complexity of the reachability problem for DAG k-tunnel gadgets, as well as the 2-player and team games, is characterized in [DHL20].
Definition 2.5. A gadget is deterministic if every traversal can put it in only one state and every location has at most 1 traversal from it. More precisely, its transition graph has maximum out-degree 1.
Definition 2.6. A gadget is reversible if every transition can be reversed. More precisely, its transition graph is undirected. Reversible deterministic gadgets are gadgets whose transition graphs are partial matchings, and they naturally lead to unbounded problems. [DHL20] characterizes the complexity of reachability for reversible deterministic k-tunnel gadgets and partially characterizes the complexity of the 2-player and team games.
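Determinism and reversibility can be checked mechanically from the transition graph; below is a minimal sketch using a set-of-edges encoding of the transition graph (our own convention; the example gadget is a placeholder):

```python
# Transition graph as a set of directed edges ((state, location), (new_state, new_location)).
def is_deterministic(edges):
    # at most one outgoing transition per (state, location) pair
    sources = [src for src, _ in edges]
    return len(sources) == len(set(sources))

def is_reversible(edges):
    # every transition can be undone by some transition of the gadget
    return all((dst, src) in edges for src, dst in edges)

# Example: a placeholder reversible deterministic 2-tunnel gadget whose traversals flip its state.
toggle = {(("s1", "A"), ("s2", "B")), (("s2", "B"), ("s1", "A")),
          (("s1", "C"), ("s2", "D")), (("s2", "D"), ("s1", "C"))}
print(is_deterministic(toggle), is_reversible(toggle))   # True True
```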
We define the decision problems we consider in their corresponding sections.
0-Player Motion Planning with Spawners
In this section, we describe a model of 0-player motion planning, introduce the spawner gadget, and show that 0-player motion planning with spawners is RE-complete, implying undecidability. RE-completeness is defined in terms of arbitrary computable many-one reductions; in particular, they don't have to run in polynomial time. We will use the fact that the halting problem for 3-counter machines is RE-complete [Min67].
Model
In 0-player directed-edge motion planning (with one robot), we modify 1-player motion planning by removing the player's ability to control the robot, and specifying directions on the connections between gadget locations. More precisely, the connection graph is now a directed graph such that each gadget location has only incoming edges (meaning that the robot enters the gadget from that location), or only outgoing edges and at most one such edge (meaning that the robot exits the gadget from that location); and all gadgets must be deterministic. Thus the robot moves on its own, moving in the direction of the edge it is on and traversing any gadgets it encounters. The reachability question asks whether the robot reaches a specified target location in finite time.
Because the state of this system can be encoded in a polynomial number of bits (the state for each gadget and the location of the robot), this reachability problem is in PSPACE as in other 0-player models of the gadget framework [ADHL22,DHHL22].
Our extension is to define the spawner gadget: a 1-location gadget that spawns a new robot in each round, appearing at its only location. We now define 0-player directed-edge motion planning to take into account multiple robots and spawners. 0-player directed-edge motion planning with spawners is divided into rounds. In each round, each robot takes a turn in spawn order, and then each spawner spawns a robot (in a predefined spawning order). A robot's turn consists of it moving along the directed edge it is on until it either traverses a gadget or it gets stuck (i.e., reaches a point where all edges are directed to its position). The reachability question asks whether any robot reaches a specified target location in finite time.
Lemma 3.1. Deciding robot reachability in 0-player directed-edge motion planning with spawners with any set of gadgets is in RE.
Proof. After each step of the game, there will still be a finite, if increasing, number of robots. Thus to confirm if at least 1 robot can reach the win location in finite time we can simply simulate the game for the needed finite number of steps.
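The simulation argument above can be phrased as a semi-decision procedure. The sketch below is our own illustration: the gadget mechanics and the directed connection graph are abstracted into a hypothetical `advance` callback that carries out one robot's turn, so only the round-robin schedule and the spawning step are shown explicitly:

```python
# Semi-decision procedure for 0-player reachability with spawners:
# simulate the round-robin schedule (robots in spawn order, then spawners)
# for up to `max_rounds` rounds.  `advance(position, gadget_states)` moves one
# robot until it traverses a gadget or gets stuck, mutating gadget_states.
def zero_player_reaches(advance, spawners, target, gadget_states, max_rounds=10_000):
    robots = []                                   # robot positions, in spawn order
    for _ in range(max_rounds):
        for i, pos in enumerate(robots):          # each robot takes its turn
            robots[i] = advance(pos, gadget_states)
            if robots[i] == target:
                return True
        robots.extend(spawners)                   # each spawner emits one new robot
    return False                                  # inconclusive within the bound
```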
RE-hardness
We show that deciding robot reachability in 0-player directed-edge motion planning with spawners is RE-hard by reduction from the halting problem, by simulating a 3-counter machine. First we introduce the gadgets used in the reduction.
Increment gadget.
The increment gadget is a 4-state 10-location gadget containing a 3-path lock branch and a 3-path path selector (Figure 1). When a robot traverses a path in the path selector, it enables a single path in the lock branch and locks the path selector. When a robot traverses a path in the lock branch, the gadget reverts to the original state.

Register gadget.

The register gadget is a 3-state 10-location gadget containing a path selector, a processing branch, and a response branch (Figure 2). When a robot traverses the top path selector path, the path selector is locked and a path in the processing branch is enabled. When a robot traverses the bottom path selector path, the path selector is locked and the other processing branch path and a path in the response branch are enabled. If a robot traverses any non-path-selector path, the gadget reverts to the original state.

UPDSDS gadget.

For the following theorem, we will also use the UPDSDS gadget. This gadget has two states "up" and "down", a tunnel which sets the state to "up", and two set-up switches which each have one input and two outputs, where the output taken depends on the state and traversing the switch sets the state to "down".
Theorem 3.2. Deciding robot reachability for 0-player directed-edge motion planning with spawners is RE-hard with the spawner, increment, register, and UPDSDS gadgets combined.
Proof. We reduce from the halting problem of the 3-counter machine with INC(r), DEC(r), and JZ(r, z) instructions, which is undecidable ( [Min67]). We will need to implement the INC(r) (increment register r by 1), DEC(r) (decrement r by 1), and JZ(r, z) (jump to instruction z if r is 0) instructions of a counter machine. We will not worry about decrementing a register that is already 0, because all DEC instructions can be preceded by JZ to guard against that. We will also implement the HALT instruction, which should result in a win. First we implement a register, which will store a nonnegative integer, just like a register in a counter machine. This, of course, uses the register gadget, and the implementation is shown in Figure 3. In this implementation, the value of a register gadget is the number of robots stuck at the entrance of the processing branch. If a robot b crosses the decrement in path, a single robot can cross the gadget to the sink, where it is stuck forever, and all other robots stuck at the entrance stay stuck. Robot b goes through the out path on its next turn. This decrements the value of the gadget by 1, thus implementing DEC, taking 1 round to process. If a robot b crosses the jump-zero in path, then if the gadget's value is nonzero, a single robot b ′ crosses the top path of the processing branch, reverting the gadget's state, and forcing b to traverse the top path of the response branch on its next turn, which leads to the out path. b ′ gets stuck back at the entrance on its next turn. However, if the gadget's value is 0, then no robot will traverse the processing branch, which lets b traverse the bottom path of the response branch on its next turn. This does not change the value of the gadget, and changes the path of b iff the value is 0, thus implementing JZ, taking 2 rounds to process. To implement INC, we need a place that robots can come from. For this, we have the setup shown in Figure 4. This setup contains a spawner gadget. Spawned robots go through the US gadget (a set-up switch, simulated by using one switch of the UPDSDS gadget and flipping it) to the entrance of the lock branch of the increment gadget and get stuck. It takes 2 turns for this to happen. The first robot b to get spawned instead takes the bottom path of the US gadget and executes the program. So during the 4th and later rounds, an extra robot gets stuck at the increment gadget. When robot b goes through the increment r i in path, a single robot b ′ at the increment gadget traverses the lock branch, goes to the income entrance of r i , and gets stuck at that register gadget's processing branch on its next turn, incrementing said register gadget's value. In the process, the increment gadget reverts to its original state. This implements INC, taking 2 rounds to process, and we only need to make sure that b does not traverse the path selector of the increment gadget before the 4th round to ensure that there will be a robot b ′ that goes to a register. We also need to implement the program, and we use UPDSDS gadgets for that, as shown in Figure 5. A UPDSDS-gadget instruction contains an execute in entrance, a pass in entrance, a jump in entrance, a jump destination entrance, an execute out exit, an execute next exit, a pass next exit, a jump next exit, and a jump out exit. Only the executor robot is allowed to traverse this gadget.
The execute out exit leads to the proper location of the increment or register gadgets. For an INC(r) instruction, it leads to the increment r in entrance of the increment gadget. For a DEC(r) instruction, it leads to the decrement in entrance of the register gadget for register r. For a JZ(r, z) instruction, it leads to the jump-zero in entrance of the register gadget for register r. For a HALT instruction, it leads directly to the win location.
The execute next exit leads to the execute in entrance of the next instruction. The pass next exit leads to the pass in entrance of the next instruction. The jump out exit leads to the jump destination entrance of instruction z for a JZ(r, z) gadget, and doesn't exist otherwise. The jump next exit leads to the jump in entrance of the next instruction.
This reduction can be done in polynomial time with respect to the number of instructions, because each instruction is simulated with 1 UPDSDS gadget, and there are a constant number of constant-size gadgets other than these.
We now describe the behavior of the entire simulation, with an example shown in Figure 6.
• A robot spawns from the spawner.
• The robot that spawned takes the bottom path of the US gadget, setting it to the up state permanently. This robot is the executor robot. Another robot spawns from the spawner. • The executor robot takes the top path of the UPDSDS gadget representing the first instruction. The newly spawned robot crosses the US gadget. Another robot spawns from the spawner.
• If the executor robot is executing an INC instruction, it traverses the path selector of the increment gadget. This is the 4th (or later) round, so there will be a robot ready to traverse the lock branch of the increment gadget.
• When the executor robot finishes executing an instruction that doesn't lead to a jump, it travels along the upper set-down switches of the UPDSDS gadgets until it finds the one representing the instruction it was executing. It resets that gadget and executes the next instruction, flipping the state of the next UPDSDS gadget.
• If the instruction led to a jump instead, the executor robot travels along the lower set-down switches of the UPDSDS gadgets until it finds the one representing the instruction it was executing. It resets that gadget and takes the jump next path to the destination UPDSDS gadget of the jump, then executes the corresponding instruction.
• If the executor robot reaches the top path of the UPDSDS gadget representing the HALT instruction, it goes to the win location.
So this simulates a 3-counter machine. So if the 3-counter machine halts, then a robot will reach the win location in finite time, and vice versa.
1-Player Motion Planning with Spawners and/or Destroyers
In this section, we investigate 1-player motion planning with multiple robots, where a single player controls a set of robots, with the ability to separately command each, moving any one robot at a time. There is no limit to the number of robots that can be at a given location. We include a spawner gadget (as in Section 3) which the player can use to produce a new robot at a specific location, providing an unlimited source of robots at that location. We optionally also include a destroyer gadget, which deletes any robot that reaches a specified sink location; such removal plays a role when we consider the targeted reconfiguration problem where the goal is to achieve an exact pattern of robots at the locations. If a system of gadgets only has a single spawner gadget we call that gadget the source and if the system only has a single destroyer gadget we call that the sink.
We show an equivalence between this 1-player motion planning problem and corresponding problems on Petri nets. Through these connections, we establish EXPSPACE-completeness for reachability; PSPACE-completeness for reconfiguration with a spawner; and ACKERMANNcompleteness for reconfiguration with a spawner and a destroyer.
Petri Nets
Petri nets are used to model distributed systems using tokens divided into dishes, and rules which define possible interactions between dishes. This is a natural model, since many equivalent models have been defined, such as Vector Addition Systems and Chemical Reaction Networks.

Definition 4.2. A reachable set for a Petri-net configuration, denoted REACH P ({D, R}, t), is the set of configurations of a Petri net reachable starting in configuration t and applying rules from R.
We can view a system of gadgets with multiple robots as a set of gadget states Γ and a vector l indicating the counts of robots at each location. We can define the set of reachable targeted configurations as REACH G (Γ, l), similarly to Petri nets.
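In this token-and-dish formulation, firing a rule and exploring part of the reachable set takes only a few lines of code. The sketch below is illustrative only (the dish names and the dict-based rule encoding are our own), and it explores only configurations reachable within a bounded number of firings, since the full reachable set is generally infinite once token-creating rules are present:

```python
# Petri net with dishes holding token counts and rules (consume, produce), where
# `consume`/`produce` map dish name -> token count.
from collections import Counter

def canon(cfg):
    return frozenset((d, n) for d, n in cfg.items() if n > 0)

def fire(cfg, consume, produce):
    if any(cfg[d] < n for d, n in consume.items()):
        return None                       # rule not enabled
    new = Counter(cfg)
    new.subtract(consume)
    new.update(produce)
    return new

def reach_bounded(initial, rules, depth=5):
    start = Counter(initial)
    seen, frontier = {canon(start)}, [start]
    for _ in range(depth):
        nxt = []
        for cfg in frontier:
            for consume, produce in rules:
                new = fire(cfg, consume, produce)
                if new is not None and canon(new) not in seen:
                    seen.add(canon(new))
                    nxt.append(new)
        frontier = nxt
    return seen

# Toy rule: one token in dish "a" produces two tokens in dish "b".
print(len(reach_bounded({"a": 3}, [({"a": 1}, {"b": 2})], depth=4)))   # 4 configurations
```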
Equivalence between Petri Nets and Gadgets
We present transformations that turn Petri nets into gadgets, and gadgets into Petri nets. We use these simulations to prove the complexity of robot reachability and reconfiguration with arbitrarily many robots.
Gadgets to Petri Nets. We can transform a set of gadgets into a Petri net where each location, besides the source and sink, is represented as a robot dish. Each gadget besides the spawner and destroyer is given a number of state dishes equal to its states, and each transition of the gadget is represented by a rule. The set of dishes D is D STATE ∪ D LOCT , the union of state and robot dish sets, respectively.
A configuration of robots and gadgets is represented by a Petri-net configuration t satisfying the following:
• Each k-state gadget is simulated by k unique dishes in D STATE , one per state. The state of the gadget is represented by a single token which is contained in the corresponding dish, and the other k − 1 dishes are empty.
• Each location in the system of gadgets is simulated by a unique dish in D LOCT . The number of tokens in that dish is equal to the number of robots at that location.
A Petri net {D, R} simulates a system of gadgets G if, for any configuration {Γ, l} of G represented by Petri-net configuration t, each configuration in REACH G (Γ, l) is represented by a configuration in REACH P ({D, R}, t) and each configuration in REACH P ({D, R}, t) represents a configuration in REACH G (Γ, l).
Lemma 4.3. For any set of deterministic gadgets S, any system of multiple copies of gadgets in S with a spawner (and optionally, a destroyer) can be simulated by a Petri net.
Proof. We first explain how to create the rules for gadgets that are not connected to the source or sink locations. Each gadget transition will be represented by a unique rule. For example the 2-tunnel toggle gadget is shown in Figure 8 and has four transitions. It can be traversed:
• from A to B in state 1,
• from C to D in state 1,
• from B to A in state 2, and
• from D to C in state 2.
The four corresponding rules for the gadget are drawn in Figure 8 as well. Each rule takes in one token from a robot dish and one from a state dish, and places one token in a robot dish and one in a state dish. The token being moved between robot dishes models moving one robot across a gadget, and the token being moved between state dishes models the state change of the gadget.
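Written out in a dish-and-rule encoding like the one sketched in the previous subsection, the four rules for one copy of the 2-tunnel toggle would look as follows (our own illustration; the dish names are placeholders, and we assume each traversal flips the toggle's state, as the list of available transitions above suggests):

```python
# The four Petri-net rules simulating one copy of the 2-tunnel toggle.
# State dishes "state1"/"state2" hold the single state token; dishes
# "A", "B", "C", "D" hold robot tokens at the gadget's locations.
toggle_rules = [
    ({"state1": 1, "A": 1}, {"state2": 1, "B": 1}),   # A -> B in state 1
    ({"state1": 1, "C": 1}, {"state2": 1, "D": 1}),   # C -> D in state 1
    ({"state2": 1, "B": 1}, {"state1": 1, "A": 1}),   # B -> A in state 2
    ({"state2": 1, "D": 1}, {"state1": 1, "C": 1}),   # D -> C in state 2
]
```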
If a gadget is connected to the source, any transition from the source is represented by a rule that only takes in a state token, producing two tokens. One token is output to a location dish and one to a state dish. If a transition is connected to the sink then the rule takes in two tokens and outputs only a state token. These special cases are shown in Figure 9. Note that we do not have an actual dish for the source, so the player may spawn multiple robots at the source, but they do not appear in the simulation until they traverse a gadget. For each configuration of a system of gadgets, there exists a configuration of the Petri net with dishes that represent the gadgets and locations. Each rule of the Petri net acts as a traversal of a robot changing the state of a gadget. The rules need the gadget's state token to be in the correct dish, and a robot token in the dish representing the location where the traversal starts.
Petri Nets to Gadgets. We simulate a Petri net with symmetric self-closing doors using a location for each dish, where each rule is represented by multiple gadgets. We also have a single control robot which starts in a location we call the control room. The other robots are token robots, which represent the tokens in each dish. At a high level, our simulation works by "consuming" the input tokens to a rule to open a series of tunnels for the control robot to traverse. The control robot then opens a gadget for each output to allow token robots to traverse into their new dishes. We use the source and sink to handle rules that increase or decrease the volume. Figure 11 gives an overview. Using this simulation, we prove that two problems on Petri nets are polynomial-time reducible to the gadget problems we are interested in. [Esp05] lists many problems, including the ones we describe here. 3 The first is production: given a Petri-net configuration and a target dish, does there exist a reachable configuration which contains at least one token in the target dish? Configuration reachability asks: given an initial and a target configuration, is the target reachable from the initial configuration?
Lemma 4.4. Production in Petri nets is polynomial time reducible to robot reachability with the symmetric self-closing door and a spawner. Configuration reachability in Petri nets is polynomial-time reducible to multi-robot targeted reconfiguration with the symmetric self-closing door and a spawner.
Proof. For a rule (a, b) we include |a| + |b| copies of the gadget. There is a gadget for each input to the rule; these gadgets can be traversed from the location representing an input dish to an intermediate location, opening another tunnel for the control robot to traverse. The control robot must traverse all the input gadgets, then goes through the tunnels of the output gadgets. The control robot opens the doors of these gadgets, allowing the robots on the intermediate wires to traverse to the locations representing the output dishes.
If a rule would increase the volume, the surplus output gadgets will allow traversal from the spawn location instead of an input gadget. If a rule decreases the volume, then the surplus input gadgets send robots to a "sink" location instead of an output gadget. We do not require a true sink in this case because we can add an extra location where robots can be held instead of being deleted. If we do not connect this location to any other gadget, then the robots can never leave and can be thought of as having left the system. Production reduces to robot reachability since a robot can reach a location if and only if a token can reach the corresponding dish. If a token is placed in a dish, the corresponding robot must have moved through a rule gadget, and a robot can only move through a rule gadget if the number of robots in the input locations is at least the number of tokens on the left-hand side of the rule, which is needed to open the tunnels for the control robot to move through.
Configuration reachability in Petri nets reduces to multi-robot targeted reconfiguration. The target and initial states of the gadgets are the same. The only difference between the initial configuration and the target is the number of robots at each location, which equals the counts in the instance of configuration reachability for Petri nets: the number of robots at each location is equal to the number of tokens in the corresponding dish. The target is 0 robots on each intermediate wire and 1 robot in the control room. Thus, it is never beneficial to partially traverse a rule gadget.
Complexity of Reachability
The reachability problem for a single robot is very similar to the well-studied problem in Petri nets called coverage. The input to the coverage problem is a Petri net and a vector of required token amounts in each dish, and the output is yes if and only if there exists a rule application sequence to reach a configuration with at least the required number of tokens in each dish.

Theorem 4.6. Robot reachability is EXPSPACE-complete with symmetric self-closing doors, a spawner, and optionally a destroyer.
Proof. We can solve robot reachability by converting the system of gadgets to a Petri net which simulates it, as in Lemma 4.3. In this simulation, a token can be placed in a location dish if and only if a robot can reach the location represented by that dish. Determining if a single token can be placed in a target dish, the production problem, is a special case of the coverage problem where the target dish is labeled with 1 and all others are labeled with 0. We can use the exponential-space algorithm for Petri-net coverage shown in [Rac78] to solve robot reachability. When simulating the sink, we require rules that decrease the volume of the Petri net; since the algorithm works for general Petri nets, it implies membership even with a sink.
For hardness, we first reduce Petri-net coverage to Petri-net production by adding a target dish T starting with 0 tokens and a new rule. This rule takes as input the number of tokens equal to the goal of the coverage problem and produces one token in the T dish. This token can only be produced if we reach a configuration that has at least the target number of tokens in each dish. We then use Lemma 4.4 to reduce production to robot reachability with the symmetric self-closing door and a spawner. It is relevant to note that the first reduction does not work when exactly the target numbers are required. The reduction works even when not allowing the sink, as described in Lemma 4.4.
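The coverage-to-production step used in this hardness proof is purely syntactic; in the dict-based encoding of the earlier sketches it amounts to one added rule (illustrative Python, with the target dish name "T" chosen arbitrarily):

def coverage_to_production(rules, goal, target_dish="T"):
    """Reduce coverage (reach >= goal[d] tokens in every dish d) to production
    of a single token in target_dish: add one rule that consumes the goal
    counts and emits one token into the fresh target dish."""
    new_rule = (dict(goal), {target_dish: 1})
    return rules + [new_rule], target_dish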
Complexity of Reconfiguration
The reconfiguration problem has been studied in the single-robot case as the problem of moving the robot through the system of gadgets so that each gadget is in a desired final state. Targeted reconfiguration asks not only about the final states of the gadgets, but also about the location of the robot. Here, we study multi-robot targeted reconfiguration, which requires both that all gadgets are in specified final states and that each location contains a target number of robots.
Definition 4.7. For a gadget G, the multi-robot targeted reconfiguration problem for G is the following decision problem. Given a system of gadgets consisting of copies of G, the starting location(s), and a target configuration of gadgets and robots, is there a sequence of moves the robots can take to reach the target configuration?
The complexity of multi-robot targeted reconfiguration depends on whether we allow a destroyer. If we do not allow for a destroyer, the complexity is bounded by polynomial space since we can never have more robots than the total target size. If we allow for the ability to destroy robots, then the reconfiguration problem is the same as the configuration reachability problem in Petri nets from our relations between the models above. This is a fundamental problem about Petri nets and was only recently shown to be ACKERMANN-complete [Ler22,CO22].
Theorem 4.8. Multi-robot targeted reconfiguration is ACKERMANN-complete with symmetric self-closing doors, a spawner, and a destroyer.
Proof. For membership, we can solve multi-robot targeted reconfiguration by converting the gadgets to a Petri net using Lemma 4.3. The target configuration is a state token for each gadget in the dish of its target state, and a number of tokens in each location dish equal to the number of robots at that location in the target configuration. We can then call the ACKERMANN algorithm for configuration reachability in Petri nets shown in [LS19].
For hardness we can reduce from configuration reachability. It was shown in [CO22] that configuration reachability is ACKERMANN-hard.
The reduction presented in [CO22] vitally uses the ability of Petri nets to delete tokens, so we must use a sink in our simulation. Without a sink, we have PSPACE-completeness for multi-robot targeted reconfiguration.

Theorem 4.9. Multi-robot targeted reconfiguration for symmetric self-closing doors and a spawner is PSPACE-complete.
Proof. Consider the input to the reconfiguration problem: two configurations of a system of gadgets, namely the start and end state of all the gadgets, and a start and end integer for each location. Since we can never destroy a robot once it is spawned, it always exists, so the player cannot spawn more robots than the total number of robots in the target configuration. We can then solve this problem in NPSPACE by nondeterministically selecting a robot to move, either from the source or from another location. If we ever increase the total number of robots above the target, we reject. If we ever reach the configuration with the correct gadget states and robots at each location, we accept. Since PSPACE = NPSPACE, we get membership.
We inherit hardness from the 1-player single-robot case by not including the source, or by connecting it to an unreachable location.
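For intuition, the bounded search used in the membership argument can be written out as a brute-force procedure (Python; the configuration encoding and successor function are placeholders we introduce for illustration). Note that this explicit search uses exponential time and memory, whereas the proof only needs nondeterministic polynomial space by guessing one move at a time.

from collections import deque

def reaches_target(start, target, successors, total_target_robots):
    """Explicit search for multi-robot targeted reconfiguration without a
    destroyer. Configurations must be hashable, e.g. (gadget_states, robot_counts)
    as tuples; successors(cfg) yields configurations obtained by spawning one
    robot or moving one robot through one gadget."""
    def robots(cfg):
        return sum(cfg[1])              # total robots currently in the system

    seen = {start}
    queue = deque([start])
    while queue:
        cfg = queue.popleft()
        if cfg == target:
            return True
        for nxt in successors(cfg):
            if robots(nxt) > total_target_robots:
                continue                # never spawn beyond the target total
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False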
Impartial Unbounded 2-Player Motion Planning
In this section, we describe the 2-player impartial motion planning game and show that it is EXPTIME-complete for any reversible deterministic gadget.
Model
In the 2-player impartial motion planning game, two players control the same robot in a system of gadgets. Player 1 moves first, then Player 2 moves, then play repeats. On a given player's turn, they move the robot arbitrarily along the connection graph and through exactly one transition of a gadget. There is also a ko rule: The robot cannot traverse the same gadget on a player's turn as it traversed on their opponent's previous turn. If a player cannot make the robot traverse a gadget without breaking the ko rule, that player loses and the other player wins.
Lemma 5.1. Deciding whether Player 1 has a deterministic winning strategy in the 2-player impartial motion planning game is in EXPTIME for any set of gadgets.
Proof. An alternating Turing machine can solve the problem by using existential states to guess Player 1's moves and universal states to guess Player 2's moves, accepting when Player 1 wins and rejecting when Player 2 wins. This takes only polynomial space because the configuration of the game can be described in polynomial space. The machine can reject after a number of turns equal to the number of configurations, which is at most exponential and can therefore be counted to in polynomial space. Hence the problem is in APSPACE = EXPTIME.
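The same alternation can be phrased as a memoized game-tree search, sketched below in Python. The move generator and configuration encoding are hypothetical, and the explicit memo table can grow exponentially, so this matches the EXPTIME bound rather than the alternating polynomial-space formulation used in the proof.

from functools import lru_cache

def player1_wins(initial, moves):
    """Game-tree search for the 2-player impartial motion planning game.
    moves(cfg, last_gadget) yields (next_cfg, gadget_traversed) pairs for the
    robot, excluding traversals of last_gadget (the ko rule). A player with
    no legal move loses. Configurations must be hashable."""
    @lru_cache(maxsize=None)
    def wins(cfg, last_gadget):
        legal = list(moves(cfg, last_gadget))
        if not legal:
            return False                 # no legal move: the player to move loses
        # The player to move wins iff some move leads to a losing position
        # for the opponent.
        return any(not wins(nxt, gadget) for nxt, gadget in legal)

    return wins(initial, None)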
Hardness
We use the locking 2-toggle, introduced in [DHL20] and shown in Figure 12. States 1 and 3 are leaf states and state 2 is the nonleaf state. If a robot crosses a tunnel in state 2, the tunnel flips direction and the other tunnel locks. Crossing a tunnel again will reverse this effect.

Theorem 5.2. Deciding whether Player 1 has a deterministic winning strategy in the 2-player impartial motion planning game is EXPTIME-hard for the locking 2-toggle.
Proof. We reduce from G 4 as defined in [SC79]. G 4 is a 2-player game involving Boolean variables where the players flip a variable on their turn and try to be the one to satisfy a common DNF Boolean formula with 13 variables per clause (a 13-DNF). Players have their own variables and can't flip their opponent's variables, and a player may flip 1 variable on their turn or pass their turn. There is no ko rule. We start the robot next to a 1-toggle (a single tunnel of a locking 2-toggle) as shown in Figure 13. This 1-toggle is called the alternator. On each side of the alternator is a variable system for each player, which consists of variable branching and variable setting loops. The variable branching, as shown in Figure 14, has 2 locking 2-toggles before each branch. These start in the nonleaf state. At the end of each path is a variable flipping loop, which is shown in Figure 15. The variable flipping loop for variable v contains 2 locking 2-toggles per instance of v or ¬v in the 13-DNF formula of the G 4 instance, as well as a path to the 13-DNF checker with 2 1-toggles on it. The locking 2-toggles representing v start in the nonleaf state iff v starts True in G 4, and the locking 2-toggles representing ¬v start in the leaf state iff v starts True in G 4. One path of the variable branch, on the other hand, leads to a pass loop, which is a variable flipping loop with 2 1-toggles in the loop instead of the locking 2-toggles. The 13-DNF checker contains a path for each clause in the 13-DNF, and each path contains a locking 2-toggle representing v, the same as one of the locking 2-toggles representing v in the variable flipping loop of v, followed by a 1-toggle, for each variable v in the corresponding clause. The paths all lead to a final 1-toggle called the finish line. This reduction can be done in polynomial time, as each variable and clause in G 4 is converted to a polynomial number of constant-size gadgets.
Figure 13: The robot's starting position, and the 1-toggle called the alternator.
Figure 14: The variable branching for Player 1. Player 2's variable branching is on the other side of the alternator. In this example, Player 1 has 3 variables: x, y, and z.
During intended play:
• Player 1 moves the robot through variable branching to select a variable to set. Because the locking 2-toggles are doubled, and because of the ko rule, Player 2 has no choice but to second Player 1's choices. Player 1 could also move the robot to the pass loop.
• Player 1 moves the robot around a variable flipping loop, setting a variable by flipping whether each locking 2-toggle is locked or not. If they're in the pass loop, they just go around the loop. Again, Player 2 has no choice since the number of gadgets in the path is even.
• Player 1 either moves the robot to the 13-DNF checker or back through the variable branching to the alternator. • If Player 1 moves it back, they make it cross the alternator, and Player 2 goes through the same steps, but on the other side of the alternator.
• If a player moves the robot to the 13-DNF checker, they pick a path. If that path's corresponding clause in the 13-DNF is currently satisfied, they cross the finish line and win, since their opponent then has no legal moves. Otherwise, they get blocked by the first variable set to False, making their opponent win.
So Player 1 has the initiative and takes a G 4 turn on one side of the alternator, and Player 2 has the initiative and takes a G 4 turn on the other side. It is correct for a player to move the robot to the 13-DNF checker iff the 13-DNF is currently satisfied. We will now look at ways that the players can try to break the simulation of G 4 :
• Player 1 can make the robot cross the alternator as their first move. However, this lets Player 2 flip a variable or pass first. If Player 1 can win this way, they can also win by passing (moving the robot around the pass loop) first and then giving the initiative to Player 2. So not crossing the alternator first is always a correct move.
• A player can move the robot to a variable flipping loop and cut to the 13-DNF checker. However, if the player can win this way, they can win by passing and moving the robot to the 13-DNF checker.
• A player can try to turn around and flip another variable on the way back to the alternator. However, the ko rule prevents this.
• A player can try to move the robot to some other variable flipping loop from the start of the 13-DNF checker. However, 1-toggles will block the way.
Thus, the players are effectively forced to play G 4 in this game. Therefore, if Player 1 has a deterministic winning strategy in the G 4 instance, then they have one in this game, and if Player 1 has a deterministic winning strategy in this game, then they have one in the G 4 instance as well.
Theorem 5.3. Deciding whether Player 1 has a deterministic winning strategy in the 2-player impartial motion planning game is EXPTIME-hard for any interacting k-tunnel reversible deterministic gadget.
Figure 16: A 13-DNF checker, except that it represents a 3-DNF. This example represents (y ∨ z ∨ x) ∧ (¬w ∨ ¬w ∨ ¬x) ∧ (z ∨ w ∨ ¬x). The dotted paths are part of variable setting loops.
Proof. Figure 17 shows two tunnels that any interacting k-tunnel reversible deterministic gadget must have, as proved in [DHL20, Section 2.1], which further shows that these tunnels can be used to simulate a locking 2-toggle. For 2-player impartial motion planning, however, we must be careful with the simulation. To preserve parity, each traversal in the locking 2-toggle must correspond to an odd number of traversals in the simulation. In addition, if a traversal is not allowed, it must be blocked after an even number of traversals so that the player who started moving the robot along that path loses. And to simulate the gadget ko rule, the gadgets at the ends of the simulation must be in the way of both paths. If all the constraints are met, then if a player makes the robot start a traversal along the simulation, the players must follow through, and in the end, it will be said player's opponent's turn. The opponent would have to make the robot traverse a gadget not in the simulation. Players would be disincentivized to start a traversal along a closed path, because they will be the one stuck with no legal moves. So the simulation would act exactly like a locking 2-toggle in the above reduction, giving us a straightforward reduction from 2-player impartial motion planning with locking 2-toggles to 2-player impartial motion planning with any interacting k-tunnel reversible deterministic gadget.
First we simulate a 1-tunnel reversible deterministic gadget with a directed tunnel, as shown in Figure 18. The robot cannot cross from right to left. If it crosses from left to right, it may cross back (after traversing some other gadget, of course), and the path from left to right may optionally still be open, this time leading to whatever state. Note that it takes two traversals to cross the simulation, and that a closed path in state 1 of the gadget used in the simulation blocks the robot after 0 traversals.
Figure 18: Simulation of a 1-tunnel reversible deterministic gadget with a directed tunnel. We draw double bars crossing the 1-tunnel gadget as a reminder that it takes two traversals to cross.
Now we simulate the locking 2-toggle, as shown in Figure 19. The simulation currently simulates the locking 2-toggle in the nonleaf state. The robot can traverse from top right to top left or from bottom left to bottom right. The robot will get blocked after two traversals in an attempt to traverse from top left to top right or from bottom right to bottom left. If the robot traverses from top right to top left, the robot will be able to traverse from top left to top right (after traversing a different gadget). But an attempt to traverse from bottom left to bottom right gets the robot blocked after 0 traversals, thanks to the tunnel interaction in the left gadget, and an attempt to traverse from bottom right to bottom left or from top right to top left gets blocked after two traversals. So this would simulate a leaf state of the locking 2-toggle. The center gadget never becomes relevant for blocking, so we can argue by symmetry that traversing from bottom left to bottom right results in the other leaf state. Note that each path takes nine traversals to cross, so we have successfully simulated the locking 2-toggle meeting the constraints. This completes the proof.
Figure 19: Simulation of the locking 2-toggle, under the constraints.
By Lemma 5.1 and Theorem 5.3, it is EXPTIME-complete to determine whether Player 1 has a deterministic winning strategy in the 2-player impartial motion planning game with any interacting k-tunnel reversible deterministic gadget.
Open Problems
For 0-player motion planning, we leave as an open problem whether the finite-time reachability problem is undecidable for a smaller set of gadgets. In particular, we used gadgets that can separate one robot from the rest when they are all stuck at the same spot. Is the problem undecidable for gadgets without this ability? What about classes of gadgets that have already been studied such as self-closing doors or reversible, deterministic gadgets?
In the 0-player model with spawners we investigated a synchronous model for the robots where they all took turns making their moves. One could imagine asking about various asynchronous models of robot motion through the gadgets.
For 1-player multi-agent motion planning, we investigated robot reachability and multi-agent targeted reconfiguration. The hardness for both these problems relies on simulating Petri nets with a symmetric self-closing door. Do there exist reversible gadgets for which the problem is the same complexity? How does this relate to reversible Petri nets?
We also did not investigate spawners in the 2-player setting. It seems likely that this problem is undecidable for many gadgets; however, the 0-player and 1-player constructions do not obviously adapt to give this result.
Finally, in the 2-player impartial case, does the complexity change for other gadgets? Are there any gadgets for which finding a winning strategy is provably easier? What about cases where the impartial game is harder than the regular 2-player game?
Figure 1: The increment gadget, shown with state transitions.
Figure 2: The register gadget, shown with state transitions.
Figure 3: Implementation of the register of a counter machine.
Figure 4: The context of the increment gadget, along with the spawner and a US gadget.
Figure 5: Two instructions implemented using UPDSDS gadgets.
Figure 6: A 2-counter machine constructed with the gadgets. 2 counters are shown instead of 3 to save space.
Definition 4.1. A Petri net {D, R} consists of a set of dishes D and rules R. A configuration t is a vector over the elements of D which represents the number of tokens in each dish. Each rule (u, v) ∈ R is a pair of vectors over D. A rule can be applied to a configuration d0 if d0 − u contains no negative integers, changing the configuration to d1 = d0 − u + v. The volume of a configuration, denoted |d|, is the sum of all its elements.
Figure 7: General Petri-net rule (u, v), where u's nonzero dishes are shown on the left side and v's nonzero dishes are shown on the right side.
Figure 8: Petri-net rules which simulate a 2-tunnel toggle gadget.
Figure 9: Left: Rule we include when a gadget can be traversed from the source. Right: Rule we include when a traversal leads to the sink.
Figure 10: Symmetric self-closing door. The symmetric self-closing door is a 2-state 2-tunnel gadget shown in Figure 10. The states are {1, 2} and the traversals are: in state 1 from A to B, changing the state to 2; and in state 2 from C to D, changing the state to 1.
Figure 11: How to simulate a rule which decreases volume (Left) and a rule which increases volume (Right).
Definition 4.5 (Coverage Problem). Input: A Petri net {D, R}, and vectors d0 and dc. Output: Does there exist a reachable configuration d ∈ REACH({D, R}, d0) such that d[k] ≥ dc[k] for all 0 ≤ k < |D|?
Figure 12: The locking 2-toggle.
Figure 15: The variable flipping loop for variable x. This example represents the case where the 13-DNF has 1 instance of x and 1 instance of ¬x. Currently, x is True.
Figure 17: Two tunnels that an interacting k-tunnel reversible deterministic gadget must have. Solid arrows indicate open traversals, hollow arrows with "?" indicate optionally open traversals, and absent arrows indicate closed traversals. State 3 could be any state, including 1 and 2.
In [DGLR18, DHL20], "reachability" refers to whether an agent/robot can reach a target location. Here we refer to it as robot reachability, since for models such as Petri nets the reachability problem refers to whether a full configuration is reachable.
There was no need to apply directions to the connection graph in [ADHL22] because each location acted exclusively as either the start of transitions or the end of transitions. In [DHHL22] the connections were undirected and it was assumed the robot proceeded away from the gadget it had just traversed.
Problem names may differ.
[ABD+20] Joshua Ani, Jeffrey Bosboom, Erik D. Demaine, Yevhenii Diomidov, Dylan Hendrickson, and Jayson Lynch. Walking through doors is hard, even without staircases: Proving PSPACE-hardness via planar assemblies of door gadgets. In Proceedings of the 10th International Conference on Fun with Algorithms (FUN 2020), Favignana, Italy, September 2020.
[ACD+22] Joshua Ani, Lily Chung, Erik D. Demaine, Yevhenii Diomidov, Dylan Hendrickson, and Jayson Lynch. Pushing blocks via checkable gadgets: PSPACE-completeness of Push-1F and Block/Box Dude. In Proceedings of the 11th International Conference on Fun with Algorithms, pages 2:1-2:30, Island of Favignana, Sicily, Italy, May-June 2022.
[ADD+22] Joshua Ani, Erik D. Demaine, Yevhenii Diomidov, Dylan H. Hendrickson, and Jayson Lynch. Traversability, reconfiguration, and reachability in the gadget framework. In Petra Mutzel, Md. Saidur Rahman, and Slamin, editors, Proceedings of the 16th International Conference and Workshops on Algorithms and Computation, volume 13174 of Lecture Notes in Computer Science, pages 47-58, Jember, Indonesia, March 2022.
[ADG+21] Hugo A. Akitaya, Erik D. Demaine, Andrei Gonczi, Dylan H. Hendrickson, Adam Hesterberg, Matias Korman, Oliver Korten, Jayson Lynch, Irene Parada, and Vera Sacristán. Characterizing universal reconfigurability of modular pivoting robots. In Kevin Buchin and Éric Colin de Verdière, editors, Proceedings of the 37th International Symposium on Computational Geometry, LIPIcs, pages 10:1-10:20, 2021.
[ADHL22] Joshua Ani, Erik D. Demaine, Dylan Hendrickson, and Jayson Lynch. Trains, games, and complexity: 0/1/2-player motion planning through input/output gadgets. In Petra Mutzel, Md. Saidur Rahman, and Slamin, editors, Proceedings of the 16th International Conference and Workshops on Algorithms and Computation, volume 13174 of Lecture Notes in Computer Science, pages 187-198, Jember, Indonesia, March 2022.
[AFG+22] Robert M. Alaniz, Bin Fu, Timothy Gomez, Elise Grizzell, Andrew Rodriguez, Robert Schweller, and Tim Wylie. Reachability in restricted chemical reaction networks. arXiv preprint arXiv:2211.12603, 2022.
[BMLC+19] Jose Balanza-Martinez, Austin Luchsinger, David Caballero, Rene Reyes, Angel A. Cantu, Robert Schweller, Luis Angel Garcia, and Tim Wylie. Full tilt: Universal constructors for general shapes with uniform external forces. In Proceedings of the 30th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 2689-2708. SIAM, 2019.
[CCG+20] David Caballero, Angel A. Cantu, Timothy Gomez, Austin Luchsinger, Robert Schweller, and Tim Wylie. Relocating units in robot swarms with uniform control signals is PSPACE-complete. In CCCG 2020, 2020.
[CO22] Wojciech Czerwiński and Łukasz Orlikowski. Reachability in vector addition systems is Ackermann-complete. In Proceedings of the 62nd Annual IEEE Symposium on Foundations of Computer Science, pages 1229-1240, 2022.
[DGLR18] Erik D. Demaine, Isaac Grosof, Jayson Lynch, and Mikhail Rudoy. Computational complexity of motion planning of a robot through simple gadgets. In Proceedings of the 9th International Conference on Fun with Algorithms, pages 18:1-18:21, La Maddalena, Italy, June 2018.
[DHHL22] Erik D. Demaine, Robert A. Hearn, Dylan Hendrickson, and Jayson Lynch. PSPACE-completeness of reversible deterministic systems. In Proceedings of the 9th Conference on Machines, Computations and Universality, pages 91-108, Debrecen, Hungary, August-September 2022.
[DHL20] Erik D. Demaine, Dylan Hendrickson, and Jayson Lynch. Toward a general theory of motion planning complexity: Characterizing which gadgets make games hard. In Proceedings of the 11th Conference on Innovations in Theoretical Computer Science, Seattle, Washington, January 2020.
[Esp05] Javier Esparza. Decidability and complexity of Petri net problems - an introduction. Lectures on Petri Nets I: Basic Models: Advances in Petri Nets, pages 374-428, 2005.
[Hen21] Dylan Hendrickson. Gadgets and gizmos: A formal model of simulation in the gadget framework for motion planning. Master's thesis, Massachusetts Institute of Technology, 2021.
[Ler22] Jérôme Leroux. The reachability problem for Petri nets is not primitive recursive. In Proceedings of the 62nd Annual IEEE Symposium on Foundations of Computer Science, pages 1241-1252, 2022.
[LS19] Jérôme Leroux and Sylvain Schmitz. Reachability in vector addition systems is primitive-recursive in fixed dimension. In Proceedings of the 34th Annual ACM/IEEE Symposium on Logic in Computer Science, pages 1-13. IEEE, 2019.
[Lyn20] Jayson Lynch. A framework for proving the computational intractability of motion planning problems. PhD thesis, Massachusetts Institute of Technology, 2020.
[Min67] Marvin Minsky. Computation: Finite and Infinite Machines. Prentice-Hall, Inc., 1967.
[Rac78] Charles Rackoff. The covering and boundedness problems for vector addition systems. Theoretical Computer Science, 6(2):223-231, 1978.
[SC79] Larry J. Stockmeyer and Ashok K. Chandra. Provably difficult combinatorial games. SIAM Journal on Computing, 8(2):151-174, May 1979.
| []
|
[
"Snapshot hyperspectral imaging of intracellular lasers",
"Snapshot hyperspectral imaging of intracellular lasers"
]
| [
"Soraya Caixeiro [email protected]@uni-koeln.de \nDepartment of Chemistry\nHumboldt Centre for Nano-and Biophotonics\nUniversity of Cologne\nGreinstr. 4-650939CologneGermany\n",
"Philip Wijesinghe \nCentre of Biophotonics\nSchool of Physics and Astronomy\nSUPA\nUniversity of St Andrews\nNorth Haugh\nKY16 9SSSt Andrews, FifeUK\n",
"Kishan Dholakia \nCentre of Biophotonics\nSchool of Physics and Astronomy\nSUPA\nUniversity of St Andrews\nNorth Haugh\nKY16 9SSSt Andrews, FifeUK\n\nCentre of Light for Life and School of Biological Sciences\nThe University of Adelaide\nAdelaideSouth AustraliaAustralia\n",
"Malte C Gather \nDepartment of Chemistry\nHumboldt Centre for Nano-and Biophotonics\nUniversity of Cologne\nGreinstr. 4-650939CologneGermany\n\nCentre of Biophotonics\nSchool of Physics and Astronomy\nSUPA\nUniversity of St Andrews\nNorth Haugh\nKY16 9SSSt Andrews, FifeUK\n"
]
| [
"Department of Chemistry\nHumboldt Centre for Nano-and Biophotonics\nUniversity of Cologne\nGreinstr. 4-650939CologneGermany",
"Centre of Biophotonics\nSchool of Physics and Astronomy\nSUPA\nUniversity of St Andrews\nNorth Haugh\nKY16 9SSSt Andrews, FifeUK",
"Centre of Biophotonics\nSchool of Physics and Astronomy\nSUPA\nUniversity of St Andrews\nNorth Haugh\nKY16 9SSSt Andrews, FifeUK",
"Centre of Light for Life and School of Biological Sciences\nThe University of Adelaide\nAdelaideSouth AustraliaAustralia",
"Department of Chemistry\nHumboldt Centre for Nano-and Biophotonics\nUniversity of Cologne\nGreinstr. 4-650939CologneGermany",
"Centre of Biophotonics\nSchool of Physics and Astronomy\nSUPA\nUniversity of St Andrews\nNorth Haugh\nKY16 9SSSt Andrews, FifeUK"
]
| []
| Intracellular lasers are emerging as powerful biosensors for multiplexed tracking and precision sensing of cells and their microenvironment. This sensing capacity is enabled by quantifying their narrow-linewidth emission spectra, which is presently challenging to do at high speeds. In this work, we demonstrate rapid snapshot hyperspectral imaging of intracellular lasers. Using integral field mapping with a microlens array and a diffraction grating, we obtain images of the spatial and spectral intensity distribution from a single camera acquisition. We demonstrate widefield hyperspectral imaging over a 3×3 mm 2 field of view and volumetric imaging over 250×250×800 µm 3 volumes with a spatial resolution of 5 µm and a spectral resolution of less than 0.8 nm. We evaluate the performance and outline the challenges and strengths of snapshot methods in the context of characterising the emission from intracellular lasers. This method offers new opportunities for a diverse range of applications, including highthroughput and long-term biosensing with intracellular lasers. Intracellular lasing | laser particles | integral field mapping | whispering gallery mode lasers | biolaser. | null | [
"https://export.arxiv.org/pdf/2306.01083v1.pdf"
]
| 259,064,012 | 2306.01083 | 56b5fd02bfd882ecc1ff66a22d18169eb0bb1c5c |
Snapshot hyperspectral imaging of intracellular lasers
Soraya Caixeiro [email protected]@uni-koeln.de
Department of Chemistry
Humboldt Centre for Nano-and Biophotonics
University of Cologne
Greinstr. 4-650939CologneGermany
Philip Wijesinghe
Centre of Biophotonics
School of Physics and Astronomy
SUPA
University of St Andrews
North Haugh
KY16 9SSSt Andrews, FifeUK
Kishan Dholakia
Centre of Biophotonics
School of Physics and Astronomy
SUPA
University of St Andrews
North Haugh
KY16 9SSSt Andrews, FifeUK
Centre of Light for Life and School of Biological Sciences
The University of Adelaide
AdelaideSouth AustraliaAustralia
Malte C Gather
Department of Chemistry
Humboldt Centre for Nano-and Biophotonics
University of Cologne
Greinstr. 4-650939CologneGermany
Centre of Biophotonics
School of Physics and Astronomy
SUPA
University of St Andrews
North Haugh
KY16 9SSSt Andrews, FifeUK
Snapshot hyperspectral imaging of intracellular lasers
† These authors contributed equally to this work
Intracellular lasers are emerging as powerful biosensors for multiplexed tracking and precision sensing of cells and their microenvironment. This sensing capacity is enabled by quantifying their narrow-linewidth emission spectra, which is presently challenging to do at high speeds. In this work, we demonstrate rapid snapshot hyperspectral imaging of intracellular lasers. Using integral field mapping with a microlens array and a diffraction grating, we obtain images of the spatial and spectral intensity distribution from a single camera acquisition. We demonstrate widefield hyperspectral imaging over a 3×3 mm 2 field of view and volumetric imaging over 250×250×800 µm 3 volumes with a spatial resolution of 5 µm and a spectral resolution of less than 0.8 nm. We evaluate the performance and outline the challenges and strengths of snapshot methods in the context of characterising the emission from intracellular lasers. This method offers new opportunities for a diverse range of applications, including highthroughput and long-term biosensing with intracellular lasers. Intracellular lasing | laser particles | integral field mapping | whispering gallery mode lasers | biolaser.
Introduction
Intracellular lasers are micro-to-nanoscale, tissue-integrated lasing particles that enable multiplexed large-scale tracking and precision sensing in biomedicine [1][2][3][4]. They are an attractive alternative to conventional luminescent particles, like quantum dots and fluorescent beads, due to their high spectral purity, very narrow emission linewidth, and high output intensities [5][6][7][8]. When integrated into tissues and cells, lasing particles can serve as optical barcodes for multiplexed cell tracking in 2D and 3D [3,4,9,10] and can perform sensing of the local microenvironment due to the minute shifts in their spectral mode position upon changes in local refractive index [1,2,[10][11][12][13][14]. Recent studies in cardiac tissue and phantoms have shown that lasing particles can be detected at depth through turbid media, and even localised in 3D [1,15].
To date, most studies on intracellular lasers have involved manually addressing individual lasers point-by-point using a spectrometer, i.e., intracellular lasers are excited by an external pump laser and their emission is collected through a microscope objective and imaged onto the entrance slit of a spectrometer. This approach is sequential by nature, limiting the throughput and speed of detection.
Recording the spectrally rich emission from intracellular lasers across multiple cells distributed across a field of view requires hyperspectral imaging, i.e., a technique that measures both the spectral and spatial intensity of a light field. Hyperspectral imaging acquires spatial and spectral information in multidimensional 'datacubes' or 'hypercubes', e.g., of the form (x, y, λ) [16], where λ denotes the wavelength. Higher dimensions may also include depth, z, or time t. In many cases spectral features of interest are relatively broad, e.g., when hyperspectral imaging is applied to distinguish different fluorescent dyes or light-absorbing species in a biological sample [17][18][19], or when air-borne and satellite-based hyperspectral imagers are used for environmental monitoring and in agriculture [20]. In these situations, cameras with appropriate filters or multichannel photo-multiplier tubes enable rapid data acquisition and sufficient spectral resolution. However, to take full advantage of the information encoded in the narrow emission spectra of intracellular lasers, a higher spectral resolution is generally required.
One way to achieve high spectral resolution is though point scanning the sample volume, for instance, using galvanometer mirrors or translation stages, and analysing the laser emission with a spectrometer. This typically limits imaging speed to the integration time and the readout rate of the spectrometer camera; ensuring sufficiently high signal-to-noise ratios in the recorded laser spectra can result in scan times on the order of hours [4,9]. There have been few demonstrations of such hyperspectral imaging of intracellular lasers to date. For instance, fast confocal scanning and spectral readout of the emission from intracellular semiconductor disk lasers has allowed for tracking cell migration in a portion of a tumour spheroid (1 × 1 × 0.28 mm 3 ), with the acquisition of each stack taking ∼47 mins [4], which is prohibitively slow in many scenarios. Similarly, a point-scanning spectrometer was integrated with fluorescence microscopy and optical coherence tomography for multimodal structural and spectral imaging of nanowire laser-tracked stem cell migration in rabbit eyes [9]. However, because the spectral collection time was a few seconds for each location, 13 lasers were addressed individually by tracking their positions from their fluorescence signal. Methods for faster hyperspectral imaging of intracellular lasers are thus a burgeoning need for applications that require monitoring of biological processes in real-time and to improve throughput in existing applications.
Outside the realm of intracellular lasers, important advances have been made to accelerate hyperspectral imaging [16]. Fundamentally, increases in speed must come from an increase in the total number of available detector elements; for instance, in 'push-broom' spectrometers that disperse a line scan over a 2D area detector [21], or from a more efficient use of the detector elements, for instance, in the coded-aperture snapshot spectral imager (CASSI) that avails itself of sparsity in the spatio-spectral content to reconstruct high quality hypercubes [22]. Particularly powerful speed advances are achievable from a class of hyperspectral imaging termed snapshot hyperspectral imaging (SSHI) [16]. SSHI describes methods that multiplex both spatial and spectral content into one wide-area detector, such that a hypercube can be reconstructed from one acquisition event of a camera, i.e., a 'snapshot'. These methods lead to video-rate imaging, however, multiplexing 3D information onto a 2D detector requires some trade-off in spatial and spectral resolution. Common SSHI methods are based on integral field mapping (IFM). IFM features some integration of the light field into discrete points and dispersing these points spectrally to efficiently fill the detector. There are several implementations of IFM, for instance, using fibre optic arrays [16] and microlens arrays [23], with the latter offering a particularly compact, facile and low-loss solution.
Here, we demonstrate the use of snapshot hyperspectral imaging based on integral field mapping using a microlens array for rapid, widefield, spectrally resolved mapping of intracellular laser emission. Our study focuses on disk-shaped whispering gallery mode (WGM) lasers, made from a III/V semiconductor multi-quantum well material [3,24]. The high refractive index of these lasers allows for a sub-micron cavity size (total volume, ~0.1 μm ), much smaller than the nucleus of a eukaryotic cell (~100 μm ), which is essential for preserving normal cell function. In contrast to point scanning, our SSHI method enables spatial and spectral detection in a single acquisition, which increases the imaging speed from minutes [4,9] to video rate. Furthermore, for low-light samples, the technique can be much faster since the laser emission can be collected concurrently over long integration times instead of sequentially for each individual location. We calibrate our system for the detection of the narrow linewidths of the disk lasers and discuss the opportunities and challenges of snapshot detection schemes in this area. We experimentally demonstrate a spatial resolution of 5 µm and a spectral resolution of under 0.8 nm. We demonstrate the utility of our system for volumetric hyperspectral imaging via objective scanning, achieving a 250 × 250 × 800 µm 3 volume size in x, y and z, respectively. We further demonstrate widefield hyperspectral detection over a 3 × 3 mm 2 field of view in a few minutes by sequentially tiling 17 × 17 snapshot regions. The imaging speed was limited by the speed of the motorized stages and the integration time required to achieve a sufficient signal to noise ratio. Within each field of view, our technique is inertia-free, i.e., it does not require beam scanning, and thus is relatively simple to control without the need for expensive data acquisition cards. It can be seamlessly integrated into existing microscopes, making it a versatile and easy-to-use imaging solution. This detection scheme offers new opportunities for high-throughput, widefield and volumetric imaging, and is compatible with widefield excitation schemes and reduces photodamage. Figure 1 shows the experimental setup developed in this work. A custom-built inverted microscope (Cerna modular pieces, Thorlabs, USA) is equipped with an objective lens (ObjS, Plan Apochromat, 20x 0.75 NA, Nikon, Japan) mounted on a motorised focusing module (ZFM2020, Thorlabs, USA). The sample is mounted on a two-axis translation stage (PLS-XY, Thorlabs, USA), equipped with an on-stage incubator (H301, Okolab, Italy) for cell measurements, set at 37°C and purged with a 5% CO2:air mixture. For brightfield measurements, the sample is illuminated with a white LED and Koehler illumination optics in transmission. A tunable pulsed optical parametric oscillator laser system (OPO, 5 ns, 20 Hz pulse, OPOTEK, USA), is used for the calibration of the hyperspectral imaging, and a 473 nm pulsed blue diode laser (1.5 ns, 1 kHz pulse, Alphalas, Germany) is used to pump the disk lasers. A lens focuses the pump light to the back focal plane of the objective, such that the light arrives at the sample plane collimated. The emitted and reflected light is coupled to the collection arm via a beamsplitter (BS1). The sample image is relayed to the SSHI unit via a 4f lens system (L4, L5). 
To provide context to the hyperspectral imaging, a removable beamsplitter (BS2: 10R/90T) can be used to image the brightfield and lasing intensity of the sample with a widefield camera (Cam1, Retiga EXi CCD, QImaging, Canada). An additional flip mirror allows point measurements of the lasing signal from the centre of the sample using a spectrometer equipped with a CCD detector (SR500i, Andor, UK) for calibration and validation purposes. In-house software based on MATLAB is used to synchronise the light delivery, camera acquisitions, and x, y and z scanning.
Snapshot hyperspectral imaging
Our embodiment of the IFM SSHI system, which is inspired by the geometry in Boniface et al. [25], is highlighted in Fig. 1(a) by the grey dashed box. A microlens array (MLA, 18-00079, SUSS MicroOptics, Switzerland) placed at the image plane integrates the detected light field into an evenly distributed hexagonal array of foci. This array of foci is re-imaged by a secondary objective (ObjH, Plan N 10x 0.25 NA, Olympus, Japan). A diffraction grating (DG, 300 grooves/mm, GT13-03, Thorlabs, USA) placed at the back aperture of the objective spectrally disperses the light, which is then imaged by the camera (Cam2, Orca Flash 4.0, Hamamatsu, Japan) using a tube lens (TL). This results in a hexagonal point array that is spatially dispersed if the incident light is composed of different wavelengths (Figs. 1(b) and (c)). The rotation angle of the diffraction grating with respect to the microlens array is chosen to optimally fill the camera [23]. We utilised a commercially available telemacro lens as the tube lens (TL; Tele-Macro AF 70-300mm, Tamron, Germany) to reduce the footprint of the SSHI unit and provide the capacity to tune the magnification.
The pitch (spacing), d, and numerical aperture, NA, of the microlenses determine the maximal spectral sampling. The size of the foci of the microlens array, s, is determined by the Abbe diffraction limit, s = 0.61 λ/NA (2.6 µm for our system). Based on the angle of dispersion with respect to the hexagonal lattice illustrated in Fig. 1(b), the separation between microlenses along the dispersion axis is √19·d, and thus the number of resolvable spectra is Nλ = √19·d/s. The spectral bandwidth, Δλ, dispersed by the diffraction grating with an angular dispersion, dθ/dλ, must fit between the microlenses. The total dispersion at the camera is f_TL·Δλ·(dθ/dλ), which, when adjusted for the camera magnification, M_c = f_TL/f_ObjH, must satisfy Δλ < √19·d/(f_ObjH·dθ/dλ). This results in a spectral resolution δλ = Δλ/Nλ. The spatial FOV and resolution are determined by the magnification, M, of the sample image onto the microlens array and the total pixels available in the camera, P. The spatial resolution is d/M, and the spatial sampling per FOV is Ns = P/(d_cam·√3/2), where d_cam is the microlens pitch on the camera in pixels, d_cam = d·M_c/p; p is the physical pixel size. Optimising our system for hyperspectral imaging of disk laser emission, we used a microlens array with a pitch d of 30 µm and NA of 0.16, an objective lens with a focal length f_ObjH of 18 mm, a diffraction grating with an angular dispersion dθ/dλ of 0.31 × 10^6 rad/m, and a camera with 2048 × 2048 pixels. At a central wavelength of 670 nm, this resulted in a theoretical spectral bandwidth, Δλ, of 24 nm, comparable to the emission bandwidth of our microdisk lasers [3], and a spectral resolution, δλ, of 0.46 nm, leading to Nλ = 51 resolvable spectra. Ultimately the speed of the acquisition of large areas is limited by the desired spatial resolution. With this trade-off in mind and given the imaging requirements, such as the typical volume of a mammalian cell (~10³ µm³) and the diameter of the microdisk lasers (1-3 µm), we chose a magnification of M = 6 to achieve a single-cell spatial resolution of 5 µm. The resolution was set by the effective microlens pitch at the sample, which was verified experimentally using a ruled test target. The variable telephoto lens was tuned to fit the full aperture of the microlens array, which has a spatial sampling of Ns = 50, leading to a FOV of 250 × 250 µm².
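As a quick numerical check of the design rules above (Python; variable names are ours and the formulas are as reconstructed from the text, so this is an illustrative sketch rather than the authors' code), the quoted component values approximately reproduce the stated bandwidth (~24 nm), spectral resolution (~0.46 nm), ~51 spectral samples, and 5 µm spatial resolution; small differences come from rounding.

import math

# Design parameters quoted in the text.
d = 30e-6             # microlens pitch [m]
NA = 0.16             # microlens numerical aperture
lam = 670e-9          # central wavelength [m]
f_obj = 18e-3         # focal length of the secondary objective [m]
dtheta_dlam = 0.31e6  # angular dispersion of the grating [rad/m]

s = 0.61 * lam / NA                                   # microlens focus size (Abbe limit)
N_lam = math.sqrt(19) * d / s                         # resolvable spectral samples
dlam_max = math.sqrt(19) * d / (f_obj * dtheta_dlam)  # spectral bandwidth
res_lam = dlam_max / N_lam                            # spectral resolution

M = 6                                                 # sample-to-microlens magnification
res_xy = d / M                                        # spatial resolution at the sample

print(f"focus size         s = {s * 1e6:.1f} um")
print(f"spectral bandwidth   = {dlam_max * 1e9:.1f} nm")
print(f"spectral resolution  = {res_lam * 1e9:.2f} nm ({N_lam:.0f} samples)")
print(f"spatial resolution   = {res_xy * 1e6:.1f} um")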
Hyperspectral image calibration and reconstruction
The reconstruction of hypercubes from hyperspectral snapshots requires the precise mapping of spatial and spectral sample intensities from spatial camera coordinates. A hexagonal lattice can be described using two basis vectors, a and b, and an offset, o, such that the position of each spot, r, is within the set {r = i·a + j·b + o | i, j ∈ ℤ and r ∈ Ω}, where Ω denotes the spatial bounds of the camera. The basis vectors describe two independent vectors of separation between adjacent foci, and the offset describes the position vector of a spot at the central wavelength, λ0. The hexagonal lattice positions at a wavelength, λ, can then be described as r_λ = r + (λ − λ0)·η·ê, where η = dx/dλ is the rate of dispersion in pixels per wavelength, and ê is the unit vector along the dispersion axis. As such, the calibration of our SSHI system requires accurate measurement of the basis vectors, offset vector, and the dispersion. To do so, we perform calibration by illuminating a scattering sample with several wavelengths using a tunable laser within the bandwidth of detection. Each wavelength generates a hexagonal lattice in the snapshot image. The basis vectors and offset can be readily measured from a single calibration image using one wavelength. Using at least two wavelengths, the dispersion parameter can also be readily estimated from the relative shift in the hexagonal lattice. We further image a micrometre ruler with both brightfield and SSHI to aid with the co-registration of the camera images. This procedure was performed prior to each set of experiments.
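A minimal sketch of this lattice model is given below (Python/NumPy; the function signature, defaults, and symbol names follow our reconstruction of the garbled equations above and are assumptions, with wavelengths expressed in the same units as the calibrated dispersion).

import numpy as np

def lattice_positions(a, b, o, shape, lam, lam0, eta, e_hat=(1.0, 0.0)):
    """Pixel coordinates (x, y) of all lattice spots at wavelength lam.
    a, b: basis vectors; o: offset of a spot at the central wavelength lam0;
    eta: dispersion in pixels per unit wavelength along unit vector e_hat."""
    a, b, o, e_hat = (np.asarray(v, dtype=float) for v in (a, b, o, e_hat))
    h, w = shape
    # Generate enough lattice indices to cover the sensor, then crop.
    n = int(max(h, w) / min(np.linalg.norm(a), np.linalg.norm(b))) + 2
    idx = np.arange(-n, n + 1)
    i, j = np.meshgrid(idx, idx, indexing="ij")
    pts = i[..., None] * a + j[..., None] * b + o + (lam - lam0) * eta * e_hat
    pts = pts.reshape(-1, 2)
    inside = (pts[:, 0] >= 0) & (pts[:, 0] < w) & (pts[:, 1] >= 0) & (pts[:, 1] < h)
    return pts[inside]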
After the calibration is performed, raster hypercubes, (x, y, λ), are reconstructed by using scattered interpolation of the snapshot intensities queried at the lattice coordinates, r_λ, over a regular grid, for each sample wavelength. Specifically, we implement the native scattered interpolation method in MATLAB, which is based on Delaunay triangulation [26]. We generate hypercubes with dimensions of 100 × 100 × 100 pixels, in x, y and λ, to satisfy the Nyquist sampling criterion. Smoothing the snapshot intensity image in the dispersion axis by the spectral sampling size and in the orthogonal axis by the separation distance between adjacent spectral lines further ensured that information was not lost during image reconstruction. Hyperspectral reconstruction took less than 1 s per snapshot. The calibration and hyperspectral reconstruction code is provided as open source, as detailed in the data availability statement.
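The reconstruction step can be illustrated with the following sketch (Python; SciPy's griddata, which is also based on Delaunay triangulation, is used here as an analogue of the MATLAB scattered interpolation mentioned in the text; the smoothing step and the lattice function are assumed, not reproduced from the authors' code).

import numpy as np
from scipy.interpolate import griddata

def reconstruct_hypercube(snapshot, wavelengths, lattice_fn, out_shape=(100, 100)):
    """Build a (y, x, lambda) hypercube from a single snapshot image.
    lattice_fn(lam) returns (N, 2) spot coordinates (x, y) for wavelength lam,
    e.g. lattice_positions from the previous sketch."""
    h, w = snapshot.shape
    ys = np.linspace(0, h - 1, out_shape[0])
    xs = np.linspace(0, w - 1, out_shape[1])
    gy, gx = np.meshgrid(ys, xs, indexing="ij")
    cube = np.zeros(out_shape + (len(wavelengths),))
    for k, lam in enumerate(wavelengths):
        pts = lattice_fn(lam)                                   # spot centres at this wavelength
        vals = snapshot[pts[:, 1].astype(int), pts[:, 0].astype(int)]
        cube[..., k] = griddata(pts[:, ::-1], vals, (gy, gx),
                                method="linear", fill_value=0.0)
    return cube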
Fabrication of disk lasers
The fabrication of disk-shaped WGM microlasers for this study largely followed our previously reported protocols [3,24]. In brief, we used a heterostructure grown on GaAs wafers and made up of layers of InGaP wells and AlInGaP barriers, forming a double quantum well structure with a total thickness of 180 nm that is located on an AlGaAs sacrificial layer as described in detail in [3,24,27] (EPSRC National Epitaxy Facility, Sheffield, UK). Substrates were cleaned by a 3 min sonication in isopropanol, acetone, deionised water and methanol, followed by 3 min of O2 plasma. The substrates were spin-coated with SU8 photoresist (SU8 2000.5, KayakuAM, USA; 3:1 dilution with cyclopentanone) and soft-baked on a hotplate at 90°C for 2 min. After cooling down, the sample was exposed with a UV mask aligner using a custom mask with 3-µm diameter holes, followed by a post-exposure bake for 2 min at 90°C. The photoresist was developed in 2-methoxy-1-methylethyl acetate (EC solvent, Microposit, Germany) for 60 s and then cured at 180°C for 5 min. A 30 s plasma descumming step was performed, followed by a 12 s wet etch in aqueous solution of HBr (1 M) and Br2 (0.4 M), which defined the circular disk shape. The SU8 photoresist caps were removed by reactive ion etching in O2 plasma for 7 min. A subsequent selective 3 min wet etch in 5% HF collapsed the disk onto the GaAs substrate. The resulting disk diameter was 1-3 µm and the thickness of the disks was 180 nm.
Cell culture and disk internalisation
Macrophage cells were isolated from blood samples obtained from healthy human donors after ethical review (School of Medicine, University of St Andrews) and under informed ethical consent. They were cultured in RPMI supplemented with 10% fetal bovine serum and 1% penicillin-streptomycin in ibidi dishes (μ-dishes, Ibidi, Germany). Wafer pieces containing disk lasers were incubated in isopropanol for several minutes to ensure sterility. They were washed in PBS and finally harvested in a 2 ml tube by sonication directly into cell culture medium. The solution was filtered through a filter with 5 µm pore size to remove any fragments of wafer and added to the ibidi dishes with adherent cells. The cells were allowed to take up disks by naturally occurring phagocytosis overnight before the measurements.
Phasor analysis
For visualisation of widefield hyperspectral data, we utilise an approach inspired by phasor analysis in fluorescence imaging [28]. The phasor approach is a fit-free method of obtaining the centre of mass (phasor angle) and the spectral width (phasor amplitude) of the emission spectrum, and a convenient method for visualising multidimensional information. We perform a single normalised discrete Fourier transform for each spectrum:
$$C = \sum_{\lambda \in \Omega} I(\lambda)\, e^{\,i\,2\pi (\lambda - \lambda_{\min}) / (\lambda_{\max} - \lambda_{\min})} \Big/ \sum_{\lambda \in \Omega} I(\lambda)\,.$$
From this, we can readily extract several useful parameters, including the lasing peak centre of mass, $\angle C$, where the angle in $[0, 2\pi]$ corresponds linearly to the wavelength in $\Omega$; and spectral width, $\lVert C \rVert$, which is in $[0, 1]$ and corresponds to the relative width of the peak (where 0 is a uniform distribution and 1 is a delta function).
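A minimal Python version of this phasor computation could look as follows; the linear mapping of wavelength onto the phase interval $[0, 2\pi]$ over the detection bandwidth is an assumption of this sketch.

```python
import numpy as np

def spectral_phasor(spectrum, wavelengths):
    """Complex phasor C of an emission spectrum: angle(C) encodes the spectral
    centre of mass (mapped linearly onto [0, 2*pi] over the bandwidth) and |C|
    the sharpness of the peak (0: uniform spectrum, ->1: delta-like peak)."""
    intensity = np.asarray(spectrum, dtype=float)
    lam = np.asarray(wavelengths, dtype=float)
    phase = 2.0 * np.pi * (lam - lam.min()) / (lam.max() - lam.min())
    return np.sum(intensity * np.exp(1j * phase)) / np.sum(intensity)

# Example: a narrow peak at 655 nm yields |C| close to 1.
lam = np.linspace(640.0, 680.0, 100)
spec = np.exp(-0.5 * ((lam - 655.0) / 0.4) ** 2)
C = spectral_phasor(spec, lam)
centre = lam.min() + (np.angle(C) % (2 * np.pi)) / (2 * np.pi) * (lam.max() - lam.min())
```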
Results
Calibration
We first calibrated the performance of our hyperspectral imaging system using a tunable optical parametric oscillator (OPO) laser system and a reference spectrometer. This was accomplished by illuminating a scattering sample sequentially with two distinct wavelengths within the calculated spectral bandwidth of our system, recording the scattered light by both the SSHI unit and the reference spectrometer, and finally determining the basis vectors, offset, and spectral dispersion parameters from this data (see Methods). We then validated the calibration by recording a set of additional wavelengths. Figure 2 shows the normalised spectra from nine distinct illumination wavelengths recorded using SSHI, which compare favourably with the data obtained by the reference spectrometer. The peak position and width of each spectral peak were estimated using a Voigt fit [13]. The resulting spectral peak positions matched well with the ground truth spectra, with a mean absolute error of 7 pm. This very close agreement indicates that the spectral dispersion in the SSHI is linear and does not require more than two wavelengths for calibration. The mean full-width at half-maximum (FWHM) of the spectra acquired by SSHI was 0.51 nm, which is slightly larger than the expected spectral resolution of 0.46 nm, likely due to the spherical aberration by the microlenses leading to a larger-than-ideal spot size.

Fig. 2. Spectral calibration of hyperspectral imaging using a reference spectrometer and a tunable laser system. Mean normalised spectral response recorded by the SSHI unit and by the reference spectrometer.
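The peak position and FWHM estimation used in this calibration analysis can be reproduced, for example, with a least-squares Voigt fit in SciPy; the initial guesses and the FWHM approximation below are illustrative choices of ours and not the exact procedure used for Fig. 2.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import voigt_profile

def voigt_peak(x, area, centre, sigma, gamma, offset):
    """Voigt profile (Gaussian width sigma, Lorentzian width gamma) plus offset."""
    return area * voigt_profile(x - centre, sigma, gamma) + offset

def fit_peak(wavelengths, spectrum):
    """Fit a Voigt profile and return (peak centre, approximate FWHM)."""
    p0 = [np.trapz(spectrum, wavelengths),            # peak area
          wavelengths[np.argmax(spectrum)],           # centre
          0.2, 0.05,                                  # sigma, gamma (nm)
          float(np.median(spectrum))]                 # background offset
    popt, _ = curve_fit(voigt_peak, wavelengths, spectrum, p0=p0)
    _, centre, sigma, gamma, _ = popt
    fg = 2.0 * abs(sigma) * np.sqrt(2.0 * np.log(2.0))    # Gaussian FWHM
    fl = 2.0 * abs(gamma)                                  # Lorentzian FWHM
    fwhm = 0.5346 * fl + np.sqrt(0.2166 * fl**2 + fg**2)   # Olivero-Longbothum approx.
    return centre, fwhm
```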
Hyperspectral imaging of disk lasers
Next, we evaluate the capacity of the calibrated SSHI system to image disk lasers. Figure 3 shows the lasing spectra of representative disk lasers in PBS solution reconstructed using the SSHI unit compared with the ground truth spectra. Figure 3 also shows the maximum intensity projection of all wavelengths in the recorded hypercube. We find a good agreement between the individual disk laser spectra and the ground truth spectra, with a 0.52 nm mean absolute error in peak position for the six lasers depicted in Fig. 3. Figs. 3(c) and (d) demonstrate that our SSHI system can record spectra from more than one laser in the FOV from a single snapshot. By contrast, the spectrometer reference measurements required the disk lasers to be addressed individually by translating them to the centre of the FOV. We further note that relative differences in the spectral intensities of different lasers were reproduced in a similar manner by SSHI and the spectrometer reference measurement. Figure 3 also illustrates different cases of signal-to-noise ratio (SNR) in the reconstructed spectrum. While the noise floor is clearly visible in the spectra in Fig. 3(a,b), spectra in Fig. 3(c,d) show excellent SNR. The SNR of our measurement depends on the brightness of the disk laser and the optical collection efficiency. An intrinsic characteristic of the WGM disk lasers is that their emission is predominantly in plane [29,30]. Consequently, the collection efficiency is influenced by the tilt angle of the disk laser relative to the collection numerical aperture of the objective [30]. The unique emission properties of the disk lasers coupled with the SSHI detection can give rise to several artefacts, which are illustrated by the measurements in Fig. 3 and which we discuss in turn. In Fig. 3(b), there is a tail in the intensity image on the right, corresponding to the fluorescence of the disk laser. This can also be verified by the elevated background in the corresponding spectrometer measurement. Fluorescence can be present when disk lasers are close to the lasing threshold. Because of the broadband spectral emission outside of the design bandwidth of the SSHI unit, fluorescence is incorrectly mapped to adjacent pixels. For sparsely distributed disk lasers, this can be trivially filtered from the image intensity or spectral peak width. To further improve the situation, an optical filter commensurate with the calibration bandwidth of the SSHI unit could be added to the detection path.
Defocusing and scattering effects are also evident from the intensity images and the spectra. The disk lasers are smaller than the nominal spatial resolution of our system (i.e., < 5 µm) and, thus, should appear as point sources. Scattering of the laser emission in the media leads to nonuniform spatial broadening of the laser intensity, which is further emphasised by the anisotropic emission of disk lasers [29,30]. This scattered light is defocused and not conjugated to the microlens array. For a uniform defocus, we expect a larger spot size after the microlens array, leading to spectral broadening. A non-uniform intensity and phase profile within the microlens integration area (5 µm) may lead to a non-paraxial incidence on the diffraction grating and, thus, a spectral shift. In fact, such spatial shifting due to the integration of non-uniform wavefronts is the mechanism of Shack-Hartmann wavefront sensors [31]. Here, scattering and non-uniform intensity have likely contributed to the observed lower-than-expected spectral resolution and to the deviations between the measured peak positions and the nominal laser wavelengths (as determined by the spectrometer). Specifically, the mean FWHM for the lasers shown in Fig. 3 is 0.79 nm for SSHI and 0.12 nm for the spectrometer measurements, while the mean error in peak position is 0.52 nm (measured relative to the spectrometer measurement). The spectrometer resolution was limited by the slit size and grating dispersion, chosen to ensure sufficient light collection and, thus, SNR. We observed that the disks with fewer artefacts displayed a nominal FWHM similar to the resolution achieved during the calibration measurements, i.e., 0.52 nm. For example, the FWHM of the disk laser in Fig. 3(a) is 0.59 nm, and disk laser (i) in Fig. 3(c) even has a FWHM of 0.45 nm. An extreme negative example, which is associated with the distorted profile in the corresponding intensity image, is disk laser (ii) in Fig. 3(c), where we observe a dual emission peak and a combined FWHM over both peaks of 1.34 nm. We will discuss strategies to overcome the artefacts described above and the associated challenges in the Discussion section.
Reproducibility in translated disk lasers
Because of the potential for artefacts from scattering and defocussed light, we also evaluate the reproducibility of spectral reconstruction from disk lasers translated across the FOV. First, we evaluate how a non-uniform intensity incident on a microlens affects spectral accuracy. We do this by translating a single disk laser in 1 µm steps over 10 µm to make sure that the laser emission traverses the receptive field of several microlenses. Figures 4(a-c) show the disk laser position with respect to the hexagonal array and the reconstructed spatial and spectral intensities. At the centre of the disk laser, the peak positions are relatively consistent with a 75 pm mean absolute error compared to the spectrometer reference. The inset in Fig. 4(a) shows a small linear shift in the spot intensity as the laser moves relative to the microlens. Figures 4(a) and (b) also illustrate the effect of scattering in the raw and reconstructed images (marked by an asterisk in each image). The microlens pitch at the image plane is 5 µm, which exceeds the disk laser diameter of 1-3 µm. This indicates any light collected away from the microlenses conjugate to the disk laser position must correspond to scattering or defocussed imaging. The spectral broadening and dispersion resulting from the scattering is evident in Fig. 4(b) by the broad position-dependent blurring in the raw image obtained with the SSHI unit, which then leads to spectral shifts away from the centre of the disk laser in the reconstructed image ( Fig. 4(b)). However, at the position of the disk laser, i.e., the position corresponding to the highest intensity, the spectra closely match that of the reference spectrometer. Figures 4(d) and (e) illustrate the capacity to track a single disk laser undergoing large spatial shifts, which, for instance, would be encountered in tracking, dynamic and microfluidic imaging applications. The disk lasers were translated by 50-100 µm using a motorised microscope translation stage (PLS-XY, Thorlabs, USA) and a hand-operated controller (MCM3001, Thorlabs, USA). The intensity images in Fig. 4(d) overlay the maximum peak intensity of the same disk laser from the entire sequence of positions. The spectra shown were obtained from the centre of the disk laser at each position, i.e., the position of highest intensity.
The spectral peak positions correspond well to the spectrometer measurements, which were taken at location (1) in each case. The lasing intensity is not uniform across the FOV, but this is at least in part due to a spatial variation in the pump laser intensity. Additional peaks are observed for some positions (e.g., position 3, Fig. 4(e)). This is likely due to the presence of increased scattering. These variations have likely arisen from the minute shifts in disk laser position and scatterer rearrangement due to the inertia of translation. Despite this, the main peak corresponding to the unscattered laser emission remains prominent in the spectrum. The mean absolute error in peak position was 0.089 nm and 0.21 nm for Figs. 4(d) and (e) respectively.
Volumetric hyperspectral imaging
Next, we demonstrate how our SSHI method can perform volumetric hyperspectral imaging. A volumetric sample was prepared by suspending disk lasers in agarose gel. For this, disk lasers were mixed with a heated aqueous solution of 5 wt% agarose and pipetted into a glass-bottom dish (μ-dishes, Ibidi, Germany). The agarose was rapidly cooled in a freezer (-20°C) to ensure the disk lasers remained suspended. Figure 5 visualises the hyperspectral information in a FOV containing three disk lasers. The volume was recorded in the region of interest by scanning the objective lens with an automated focusing module (ZFM2020, Thorlabs, USA). The volume comprised 160 sequential scans with a 5 µm separation, leading to a 250 × 250 × 800 µm³ total volume size in x, y and z, respectively. The volume of the peak spectral intensity is visualised in Fig. 5(a). A linear transparency map applied to the normalised intensity enables the visualisation of the recorded conical scattering profile of the three lasers (marked 1-3). The conical scattering profile is a product of the numerical aperture of the objective and the anisotropic in-plane disk laser emission [29,30].
The microlenses in the SSHI unit act as individual apertures, conjugated to the imaging plane. The light field from defocused regions will not only be spread across several microlenses but will also possess a non-planar wavefront. This leads not only to blurring in the spatial intensity, such as would be encountered in brightfield microscopy, but also to additional blurring and spatial shifts on the hyperspectral detector. This manifests as spectral blurring and shifting, which, to a large extent, is filtered out by the image field mapping (similarly to what is illustrated in Fig. 4(a)). As evident in Fig. 5, the rejection of out-of-plane light in the reconstructed lasing intensity enables optical sectioning in SSHI. As a consequence, we can localise the laser position in a volume, and determine its absolute position, which is not possible in a conventional widefield imaging system. The distances between the disk lasers determined in this way are given in Fig. 5(a). Figure 5(b) shows the spectra from the centre of each of the three disk lasers. We see a sharp spectral peak and few artefacts from scattering, indicating that when lasers are positioned away from optical interfaces, SSHI can provide high quality spectra.
Widefield hyperspectral imaging in biology
Finally, we showcase the ability to perform widefield hyperspectral imaging in biologically relevant settings by imaging disk lasers internalised by macrophage cells. To maximise the investigated area, we sequentially tile 17 × 17 snapshot acquisitions using an automated xy translation stage. As a result, we can obtain a panoramic view of the cells and of individual disk laser spectra with an effective total FOV measuring 3,240 × 3,240 µm², as shown in Fig. 6(a). Brightfield, lasing emission and SSHI images were taken at each of the 289 snapshot positions, which required less than 9 min in total. Figure 6(a) visualises the spatial position and emission wavelength of the 400 brightest disk lasers as points overlaid on the brightfield image. A zoomed-in region, indicated by the black square, is shown in Fig. 6(b). The false colour map indicates both the lasing central wavelength (λ) and the amplitude (A) of the lasing peak, calculated using the phasor analysis described in the Methods section [28]. This analysis simplifies the visualisation of large numbers of laser spectra in widefield images. Figure 6(c) shows representative spectra from six disk lasers marked in Fig. 6(b). The distribution of the peak lasing wavelength for all of the 400 lasers is shown in Figs. 6(d) and (e), with the histogram in Fig. 6(d) validating that the designed disk laser emission range is commensurate with the selected bandwidth of the SSHI system. The phasor plot in Fig. 6(e) displays the individual lasers using their phasor amplitude (A), which represents the sharpness of the spectral emission peak and, in disk lasers, is proportional to the laser brightness and the SNR of the detection. The angle of the phasor plot in Fig. 6(e) describes the central wavelength of the laser emission. These results highlight the promise of rapid SSHI of disk lasers for multiplexed sensing.
Discussion
Snapshot hyperspectral imaging promises rapid widefield readout of the spectra emitted by intracellular lasers. We demonstrated an embodiment of SSHI using a microlens array that can measure and distinguish disk lasers based on their spatial and spectral intensity and peak wavelength over large FOV and in 3D. The development of such methods can unveil new applications for multiplexed rapid sensing of biological properties with cellular precision.
The acquisition rate of our SSHI system is given by the integration time of the camera, which in the present case was set to 100 ms to ensure a sufficient SNR is reached, leading to a maximum imaging rate of 10 hypercubes per second. We expect that in the future the integration time can be shortened by at least 10-fold by optimising light throughput, repetition rate of the pump laser, camera sensitivity and the brightness of the used intracellular lasers. Even for the relatively low camera frame rate used in the present study, the equivalent acquisition time when using a raster scanning approach across the same number of pixels per FOV (100 × 100 pixels) is 10 µs per sampled spectrum. Fast linescan cameras can in principle support raster-scanning spectral acquisition at such rates; sub 1-ms spectral acquisition times have already been demonstrated using an InGaAs linescan camera to record the spectra emitted by disk lasers [4]. However, this requires collecting a relatively large amount of light from a small volume in the sample at a time, which in turn requires high intensity excitation light and thus can lead to issues with phototoxicity and photodamage to the intracellular lasers. Instead, widefield detection methods, such as the SSHI developed in this study, make use of all light emitted across the FOV, and thus we expect that they ultimately enable faster detection and lower photodamage than raster scanning approaches.
The major challenge in SSHI methods is the necessary trade-off in the spatial and spectral resolution. In an IFM embodiment of SSHI that uses a microlens array, the resolution and FOV/bandwidth of the spatial and spectral detection can be readily tuned by varying magnification and dispersion properties. However, the need to provide both high spatial and spectral resolution with the same camera, coupled with the spectral broadening arising from scattering and defocus artefacts, makes the use of SSHI challenging in applications that rely on detecting minute spectral mode shifts of intracellular lasers. Instead, SSHI methods could excel in wide area multiplexing, such as for single cell tracking in 3D tissue and high-throughput applications such as microfluidics [13].
While SSHI has been the subject of several studies in the past [16], its use to record the spectra emitted by intracellular lasers is new and presents several specific challenges. For instance, in the field of intracellular lasing, an emphasis is placed on laser miniaturisation to ensure minimal impact on cells and tissue [32]. Thus, when evaluating SSHI, it is pertinent to consider disk lasers as sub-diffraction-limited point sources. Conventional SSHI methods have been designed to sample scenes with the expected power spectral content of natural images, i.e., a logarithmic attenuation of intensity with spatial frequency [33]. However, the imaged light field from point-sources, like narrow-linewidth disk lasers, comprises exceptionally high spatio-spectral frequencies. As a consequence, the light field incident on each microlens is no longer approximated by a plane wave, leading to imperfect field integration. Practically, in our study, this has manifested in the observed artefacts from scattering and defocus.
Early implementations of SSHI have flourished in the field of astronomy [16]. Many such methods were designed to record distant objects. The incoming near-planar wavefronts enabled ready field integration using optical elements with a proportionally higher etendue. This requirement for higher etendue in the integration optics compared to the detection optics has been noted in IFM SSHI [16]. Microscopy, however, is characterised by the use of high-NA imaging optics. Microlenses are unlikely to match the NA of typical microscopy objectives, and higher NA microlenses present substantial challenges in alignment and stability. This presents a major challenge in overcoming field inhomogeneity and the resulting scattering and defocus artefacts. Instead, using lower-NA objectives, commensurate with the reduced spatial resolution of SSHI, would overcome this challenge, albeit at the cost of lower collection efficiency. Alternatively, spatial filtering might be used, for instance, by using a matched pinhole array at the foci of the microlenses or by using coherent fibre bundle IFM [16]; however, these methods introduce a high loss in the detected light intensity.
Improvements and modifications in disk laser fabrication, such as changes to the cavity geometry [34] and the introduction of defects, can lead to omnidirectional emission [30]. Using lasers with such modifications could improve the speed and the accuracy of SSHI detection as their isotropic omnidirectional emission would prevent a situation where scattered in-plane emission from lasers overshadows the direct signal from a disk laser located in focus. It is important to note, however, that we found scattering artefacts to be most prominent in samples where disks were located on a glass substrate (in Figs. 3 and 4), and less noticeable in volumetric and in-cell measurements (in Figs. 5 and 6).
SSHI can be further enhanced using digital and computational imaging approaches [35]. The rejection of defocused light and, thus, defocusing artefacts, may be achieved through digital pinholes. For instance, a similar effect to confocal pinholes can be achieved via a spatiotemporal modulation of the image intensity and, subsequently, digital demodulation [36]. This can be realised using a physical coded aperture or a digital micromirror device [37,38]. Coded aperture-based encoding can also be used in the spatio-spectral domain to realise CASSI [22]. CASSI enables compressive sensing of hypercubes with a strong potential for disk laser sensing. Exploiting the natural spatial and spectral sparsity of disk laser emission has the potential to substantially mitigate the trade-offs in resolution. However, the reliance on solving an underdetermined problem and the challenges in physical implementation could lead to new errors and artefacts in quantitative sensing [16].
Conclusions
We demonstrated rapid snapshot hyperspectral imaging of intracellular lasers, implemented using integral field mapping with a microlens array. We characterised the performance of the system in detecting distinct emission spectra from micron-sized disk lasers, demonstrating a spatial resolution of 5 µm and a spectral resolution of under 0.8 nm. We then applied this method to widefield imaging in cells over 3 × 3 mm² areas and to rapid volumetric imaging to depths of 800 µm. The unique geometry and emission spectra of intracellular lasers make their detection a distinct and interesting challenge for SSHI methods. We show that while SSHI systems must be carefully tuned to intracellular laser detection, they offer new opportunities towards high-throughput and massively multiplexed detection of disk lasers and high-throughput precision biosensing applications.
Fig. 1. Optical setup. (a) Optical layout of the snapshot hyperspectral imaging system, including widefield brightfield and fluorescence imaging and a conventional spectrometer for reference measurements. ObjS and ObjH: sample and hyperspectral objectives; L1-6: lenses; BS1-2: beam splitters; FM: flip mirror; MLA: microlens array; DG: diffraction grating; TL: tube lens (telemacro). (b) Illustration of the field mapping via the spectral dispersion of the hexagonal array. (c) Experimental image detected on the hyperspectral camera coloured using false colours based on wavelength.

Fig. 3. Hyperspectral imaging of microdisk lasers. Laser spectra reconstructed from hyperspectral data and spectrometer reference measurements (left) and maximum spectral intensity projections (right). (a) FOV with a single laser operating well above its lasing threshold. (b) FOV with a single laser showing fluorescence background emission (marked with *) in addition to the lasing peak. (c) and (d) FOVs with multiple lasers (labelled i to iv). Scale bars denote 100 μm.

Fig. 4. Influence of laser position on spectral detection. (a-c) Hyperspectral detection of a disk laser translated in 1 μm steps over 10 μm. (a) Raw hyperspectral images with each position coded by false colour. Solid circles correspond to the expected spot positions at the spectrometer reference wavelength. Dashed circles correspond to the laser start and finish position. The inset shows the spectral peak at the central microlens. (b) Hyperspectral image with maximum peak intensity in grey and the difference between peak wavelength determined by SSHI and the spectrometer reference in false colour. Inset shows the two-dimensional colour scale bar. Asterisks in (a, b) illustrate the effect of scattering. (c) Spectra of the disk laser during the 10 µm translation performed in (a). (d, e) Data for translation of disk lasers over 50-100 µm, showing maximum peak intensity overlaid for all positions (marked as 1-4, left) and the corresponding spectra obtained at the centre position in each frame (right).

Fig. 5. Volumetric imaging of an agarose phantom loaded with disk lasers obtained by SSHI. (a) Volumetric rendering of the peak spectral intensity at each point using a false colour scale and linear transparency. Three separate disk lasers, numbered 1-3, can be clearly identified; their respective distances are given in the image. (b) Corresponding spectra at the centre position of each laser.

Fig. 6. Widefield hyperspectral imaging of cells containing intracellular disk lasers. (a) Position and central wavelength of the brightest 400 disk lasers overlaid on a brightfield image of the culture of macrophages on which the measurement is performed. (b) Amplitude (A) and peak wavelength (λ) of lasers across the zoomed-in region marked by a black square in (a). False colour representation as per the colour scale in the lower right inset. (c) Disk laser spectra from locations marked as (1-6) in (b). (d) Histogram of the central wavelength of all lasers in the field of view shown in (a). (e) Phasor plot distribution of all laser spectra. Amplitude (A) is the sharpness of the emission peak, and the angle (λ) is the central wavelength of the emission spectra. Scale bars are 0.5 mm in (a) and 100 µm in (b).
Acknowledgements
We thank Dr Ivan Gusachenko for designing the initial microlens array spectrometer and Dr Simon J. Powis for the isolation of human macrophages.

Data availability
The reconstruction code for SSHI is available at https://github.com/philipwijesinghe/snapshothyperspectal-imaging. The data underpinning this work is available at https://doi.org/10.17630/4fe7ae1c-9e0d-4671-a795-f29cb64d504c [39].

Funding
This work received financial support from a UK EPSRC Programme Grant (EP/P030017/1). PW was supported by the 1851 Research Fellowship from the Royal Commission. KD acknowledges support from the Australian Research Council (FL210100099). MCG acknowledges support from the Alexander von Humboldt Foundation (Humboldt professorship).
[1] M. Schubert, L. Woolfson, I. R. M. Barnard, A. M. Dorward, B. Casement, A. Morton, G. B. Robertson, P. L. Appleton, G. B. Miles, C. S. Tucker, S. J. Pitt, and M. C. Gather, "Monitoring contractility in cardiac tissue with cellular resolution using biointegrated microlasers," Nat Photonics 14(7), 452-458 (2020).
[2] X. Wu, Q. Chen, P. Xu, Y. C. Chen, B. Wu, R. M. Coleman, L. Tong, and X. Fan, "Nanowire lasers as intracellular probes," Nanoscale 10(20), 9729-9735 (2018).
[3] A. H. Fikouras, M. Schubert, M. Karl, J. D. Kumar, S. J. Powis, A. Di Falco, and M. C. Gather, "Non-obstructive intracellular nanolasers," Nat Commun 9(1), 4817 (2018).
[4] N. Martino, S. J. J. Kwok, A. C. Liapis, S. Forward, H. Jang, H. Kim, S. J. Wu, J. Wu, P. H. Dannenberg, S.-J. Jang, Y. Lee, and S.-H. Yun, "Wavelength-encoded laser particles for massively multiplexed cell tagging," Nat Photonics 13(October), 720-727 (2019).
[5] X. Fan and S.-H. Yun, "The potential of optofluidic biolasers," Nat Methods 11(2), 141-147 (2014).
[6] N. Toropov, G. Cabello, M. P. Serrano, R. R. Gutha, M. Rafti, and F. Vollmer, "Review of biosensing with whispering-gallery mode lasers," Light Sci Appl 10(1) (2021).
[7] Y. C. Chen and X. Fan, "Biological Lasers for Biomedical Applications," Adv Opt Mater 1900377, 1-14 (2019).
[8] A. Fernandez-Bravo, K. Yao, E. S. Barnard, N. J. Borys, E. S. Levy, B. Tian, C. A. Tajon, L. Moretti, M. V. Altoe, S. Aloni, K. Beketayev, F. Scotognella, B. E. Cohen, E. M. Chan, and P. J. Schuck, "Continuous-wave upconverting nanoparticle microlasers," Nat Nanotechnol (2018).
[9] X. Li, W. Zhang, Y. Li, X. Wu, M. Wang, X. Tan, Y. M. Paulus, X. Fan, and X. Wang, "In vivo tracking of individual stem cells labeled with nanowire lasers using multimodality imaging," Biomed Opt Express 13(9), 4706 (2022).
[10] M. Schubert, A. Steude, P. Liehm, N. M. Kronenberg, M. Karl, E. C. Campbell, S. J. Powis, and M. C. Gather, "Lasing within Live Cells Containing Intracellular Optical Microresonators for Barcode-Type Cell Tagging and Tracking," Nano Lett 15(8), 5647-5652 (2015).
[11] M. Humar and S. H. Yun, "Intracellular microlasers," Nat Photonics 9(9), 572-576 (2015).
[12] M. Himmelhaus and A. Francois, "In-vitro sensing of biomechanical forces in live cells by a whispering gallery mode biosensor," Biosens Bioelectron 25(2), 418-427 (2009).
[13] S. Caixeiro, C. Kunstmann-Olsen, M. Schubert, J. Hill, I. R. M. Barnard, M. D. Simmons, S. Johnson, and M. C. Gather, "Local Sensing of Absolute Refractive Index during Protein-Binding Using Microlasers with Spectral Encoding," Adv. Optical Mater. 2300530 (2023).
[14] S. Caixeiro, M. Gaio, B. Marelli, F. G. Omenetto, and R. Sapienza, "Silk-Based Biocompatible Random Lasing," Adv Opt Mater 4(7), 998-1003 (2016).
[15] A. Kavčič, M. Garvas, M. Marinčič, K. Unger, A. M. Coclite, B. Majaron, and M. Humar, "Deep tissue localization and sensing using optical microcavity probes," Nat Commun 13(1) (2022).
[16] N. Hagen and M. W. Kudenov, "Review of snapshot spectral imaging technologies," Optical Engineering 52(9), 090901 (2013).
[17] G. Lu and B. Fei, "Medical hyperspectral imaging: a review," J Biomed Opt 19(1), 010901 (2014).
[18] H. L. Offerhaus, S. E. Bohndiek, and A. R. Harvey, "Hyperspectral imaging in biomedical applications," Journal of Optics (United Kingdom) 21(1) (2019).
[19] J. Yoon, J. Joseph, D. J. Waterhouse, A. S. Luthman, G. S. D. Gordon, M. di Pietro, W. Januszewicz, R. C. Fitzgerald, and S. E. Bohndiek, "A clinically translatable hyperspectral endoscopy (HySE) system for imaging the gastrointestinal tract," Nat Commun 10(1) (2019).
[20] S. L. Ustin and E. M. Middleton, "Current and near-term advances in Earth observation for ecological applications," Ecol Process 10(1) (2021).
[21] X. Prieto-Blanco, C. Montero-Orille, B. Couce, and R. de la Fuente, "Optical Configurations for Imaging Spectrometers," in Computational Intelligence for Remote Sensing, M. Graña and R. J. Duro, eds., Studies in Computational Intelligence (Springer, 2008), pp. 1-25.
[22] G. R. Arce, D. J. Brady, L. Carin, H. Arguello, and D. S. Kittle, "Compressive coded aperture spectral imaging: An introduction," IEEE Signal Process Mag 31(1), 105-115 (2014).
[23] A. Bodkin, A. Sheinis, A. Norton, J. Daly, S. Beaven, and J. Weinheimer, "Snapshot Hyperspectral Imaging: The Hyperpixel Array Camera," in Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XV (International Society for Optics and Photonics, 2009), 7334, p. 73340H.
[24] M. Titze, S. Caixeiro, A. Di Falco, M. Schubert, and M. C. Gather, "Red-Shifted Excitation and Two-Photon Pumping of Biointegrated GaInP/AlGaInP Quantum Well Microlasers," ACS Photonics (2021).
[25] A. Boniface, I. Gusachenko, K. Dholakia, and S. Gigan, "Rapid broadband characterization of scattering medium using hyperspectral imaging," Optica 6(3), 274 (2019).
[26] I. Amidror, "Scattered Data Interpolation Methods for Electronic Imaging Systems: A Survey," J Electron Imaging 11(2), 157-176 (2002).
[27] Z. Zhang, L. Yang, V. Liu, T. Hong, K. Vahala, and A. Scherer, "Visible submicron microdisk lasers," Appl Phys Lett 90(11), 1-4 (2007).
[28] P. N. Hedde, R. Cinco, L. Malacrida, A. Kamaid, and E. Gratton, "Phasor-based hyperspectral snapshot microscopy allows fast imaging of live, three-dimensional tissues for biomedical applications," Commun Biol 4(1) (2021).
[29] T. D. Lee, P. H. Cheng, J. S. Pan, R. S. Tsai, Y. Lai, and K. Tai, "Far-field emission narrowing effect of microdisk lasers," Appl Phys Lett 72(18), 2223-2225 (1998).
[30] S. J. Tang, P. H. Dannenberg, A. C. Liapis, N. Martino, Y. Zhuo, Y. F. Xiao, and S. H. Yun, "Laser particles with omnidirectional emission for cell tracking," Light Sci Appl 10(1) (2021).
[31] J. Liang, B. Grimm, S. Goelz, and J. F. Bille, "Objective Measurement of Wave Aberrations of the Human Eye with the Use of a Hartmann-Shack Wave-Front Sensor," 11(7) (1994).
[32] M. T. Hill and M. C. Gather, "Advances in small lasers," (2014).
[33] A. van der Schaaf and J. H. van Hateren, "Modelling the Power Spectra of Natural Images: Statistics and Information," 36(17) (1996).
[34] S. F. Liew, B. Redding, L. Ge, G. S. Solomon, and H. Cao, "Active control of emission directionality of semiconductor microdisk lasers," Appl Phys Lett 104(23) (2014).
[35] P. Wijesinghe and K. Dholakia, "Emergent physics-informed design of deep learning for microscopy," JPhys Photonics 3(2) (2021).
[36] M. A. A. Neil, R. Juškaitis, and T. Wilson, "Method of Obtaining Optical Sectioning by Using Structured Light in a Conventional Microscope," 22(24) (1997).
[37] V. J. Parot, C. Sing-Long, Y. Adam, U. L. Böhm, L. Z. Fan, S. L. Farhi, and A. E. Cohen, "Compressed Hadamard microscopy for high-speed optically sectioned neuronal activity recordings," J Phys D Appl Phys 52(14) (2019).
[38] A. G. York, S. H. Parekh, D. D. Nogare, R. S. Fischer, K. Temprine, M. Mione, A. B. Chitnis, C. A. Combs, and H. Shroff, "Resolution doubling in live, multicellular organisms via multifocal structured illumination microscopy," Nat Methods 9(7), 749-754 (2012).
[39] S. Caixeiro, P. Wijesinghe, K. Dholakia, and M. C. Gather, "Data underpinning: Snapshot hyperspectral imaging of intracellular lasers," Dataset, University of St Andrews Research Portal (2023).
| [
"https://github.com/philipwijesinghe/snapshothyperspectal-imaging."
]
|
[
"Local Message Passing on Frustrated Systems",
"Local Message Passing on Frustrated Systems"
]
| [
"Luca Schmid \nCommunications Engineering Lab (CEL)\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"Joshua Brenk \nCommunications Engineering Lab (CEL)\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n",
"Laurent Schmalen \nCommunications Engineering Lab (CEL)\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany\n"
]
| [
"Communications Engineering Lab (CEL)\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Communications Engineering Lab (CEL)\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany",
"Communications Engineering Lab (CEL)\nKarlsruhe Institute of Technology (KIT)\nKarlsruheGermany"
]
| []
| Message passing on factor graphs is a powerful framework for probabilistic inference, which finds important applications in various scientific domains. The most wide-spread message passing scheme is the sum-product algorithm (SPA) which gives exact results on trees but often fails on graphs with many small cycles. We search for an alternative message passing algorithm that works particularly well on such cyclic graphs. Therefore, we challenge the extrinsic principle of the SPA, which loses its objective on graphs with cycles. We further replace the local SPA message update rule at the factor nodes of the underlying graph with a generic mapping, which is optimized in a data-driven fashion. These modifications lead to a considerable improvement in performance while preserving the simplicity of the SPA. We evaluate our method for two classes of cyclic graphs: the 2 × 2 fully connected Ising grid and factor graphs for symbol detection on linear communication channels with inter-symbol interference. To enable the method for large graphs as they occur in practical applications, we develop a novel loss function that is inspired by the Bethe approximation from statistical physics and allows for training in an unsupervised fashion.Recently, model-based deep learning has shown great potential to empower various suboptimal algorithms, such as the SPA on cyclic graphs. Neural BP, proposed by Nachmani et al. [2016], unfolds the iterations of the SPA on its underlying graph and equips the resulting deep network with tunable weights. The GAP algorithm of Schmid and Schmalen [2022] varies the observation model by preprocessing, thereby shaping a graph with more favorable properties with respect to BP performance. Satorras and Welling [2021] extend graph neural networks (GNNs) to factor graphs and propose a hybrid model where BP runs conjointly to a GNN which is structurally identical to the | null | [
"https://export.arxiv.org/pdf/2306.01494v1.pdf"
]
| 259,064,047 | 2306.01494 | 682ea36404be75337304d09df43a22e87fb5f23c |
Local Message Passing on Frustrated Systems
Luca Schmid
Communications Engineering Lab (CEL)
Karlsruhe Institute of Technology (KIT)
KarlsruheGermany
Joshua Brenk
Communications Engineering Lab (CEL)
Karlsruhe Institute of Technology (KIT)
KarlsruheGermany
Laurent Schmalen
Communications Engineering Lab (CEL)
Karlsruhe Institute of Technology (KIT)
KarlsruheGermany
Local Message Passing on Frustrated Systems
Message passing on factor graphs is a powerful framework for probabilistic inference, which finds important applications in various scientific domains. The most wide-spread message passing scheme is the sum-product algorithm (SPA) which gives exact results on trees but often fails on graphs with many small cycles. We search for an alternative message passing algorithm that works particularly well on such cyclic graphs. Therefore, we challenge the extrinsic principle of the SPA, which loses its objective on graphs with cycles. We further replace the local SPA message update rule at the factor nodes of the underlying graph with a generic mapping, which is optimized in a data-driven fashion. These modifications lead to a considerable improvement in performance while preserving the simplicity of the SPA. We evaluate our method for two classes of cyclic graphs: the 2 × 2 fully connected Ising grid and factor graphs for symbol detection on linear communication channels with inter-symbol interference. To enable the method for large graphs as they occur in practical applications, we develop a novel loss function that is inspired by the Bethe approximation from statistical physics and allows for training in an unsupervised fashion.
INTRODUCTION
Message passing on graphical models is a powerful framework to efficiently solve inference and optimization problems. The most prominent message passing algorithm is the sum-product algorithm (SPA), also known as belief propagation (BP) [Pearl, 1988], which implements exact inference on tree-structured graphs [Kschischang et al., 2001]. Due to its simplicity, the SPA is often applied to cyclic graphs where it becomes an iterative and approximate algorithm. While this works surprisingly well for various applications, such as decoding of low-density parity-check codes [Gallager, 1963], a class of error-correcting codes, the SPA performs poorly on frustrated systems, i.e., on graphs with many cycles and strong coupling between the nodes.
The seminal work of Yedidia et al. [2000] revealed a connection between the SPA and free energy approximations of statistical physics: in particular, the fixed points of BP correspond to stationary points of the Bethe free energy. Based on this insight, alternative message passing methods were proposed which directly minimize the Bethe free energy [Yuille, 2002, Welling and Teh, 2013]. These algorithms are guaranteed to converge to an extremum of the Bethe free energy but are computationally more demanding than plain BP. Wainwright et al. [2003] proposed tree-reweighted BP as a message passing algorithm on the "convexified" Bethe free energy, which is guaranteed to have a global minimum. While this algorithm has stronger convergence guarantees compared to BP, it involves the selection and optimization of so-called edge appearance probabilities, a graph-specific problem that is often non-trivial for practical applications. Yedidia et al. [2000] proposed "generalized BP" as an algorithm that passes messages between regions of nodes instead of single nodes. Larger regions will generally improve the quality of the approximation; however, they also increase the computational complexity.
Recently, model-based deep learning has shown great potential to empower various suboptimal algorithms, such as the SPA on cyclic graphs. Neural BP, proposed by Nachmani et al. [2016], unfolds the iterations of the SPA on its underlying graph and equips the resulting deep network with tunable weights. The GAP algorithm of Schmid and Schmalen [2022] varies the observation model by preprocessing, thereby shaping a graph with more favorable properties with respect to BP performance. Satorras and Welling [2021] extend graph neural networks (GNNs) to factor graphs and propose a hybrid model where BP runs conjointly to a GNN which is structurally identical to the original factor graph but has fully parametrized message updates. All these works have in common that they are based on the SPA as a core concept which is vigorously improved using machine learning in order to compensate for its shortcomings on graphs with cycles. In this work, we follow an alternative approach and directly search for alternative message passing algorithms that perform especially well on graphs with cycles, where the SPA tends to fail. To this end, we replace the well-known SPA message update rule with a compact neural network (NN), which is optimized to find a superior local message update rule. Furthermore, we discuss the role of the extrinsic information principle which was originally introduced for tree-structured graphs. Based on the close connection of BP to the Bethe approximation, we propose a novel end-to-end loss function that allows unsupervised and application-agnostic training of new message passing schemes.
BACKGROUND
We briefly introduce factor graphs and the SPA as a widespread framework for probabilistic inference on graphical models. We refer the reader to [Kschischang et al., 2001] for an excellent in-depth treatment of the topic.
FACTOR GRAPHS
Let $f(\mathcal{X})$ be a multivariate function of $\mathcal{X} = \{x_1, \ldots, x_N\}$ which factors into a product of local functions $f_j$:
$$f(\mathcal{X}) = \frac{1}{Z} \prod_{j=1}^{J} f_j(\mathcal{X}_j), \qquad \mathcal{X}_j \subseteq \mathcal{X}. \tag{1}$$
A factor graph visualizes the factorization in (1) as a bipartite graph. Every variable x n is represented by a unique vertex, a so-called variable node, which we draw as a circle in the graph. Factor nodes represent the local functions f j and are visualized by squares. The undirected edges of the graph connect a factor node f j (X j ) with a variable node x n if and only if f j is a function of x n , i.e., if x n ∈ X j . From a graphical perspective, X j thus corresponds to the set of adjacent variable nodes to the factor node f j . Similarly, we define N (x n ) to be the set of adjacent factor nodes to the variable node x n .
In this work, we restrict the variables $x_n \in \{+1, -1\}$ to be binary and the local factors $f_j$ to be either functions of a single variable $x_n$ or functions of pairs $(x_n, x_m)$, such that the factorization becomes
$$f(\mathcal{X}) = \frac{1}{Z} \prod_{n=1}^{N} \psi_n(x_n) \prod_{(n,m)\in\mathcal{E}} \psi_{n,m}(x_n, x_m), \tag{2}$$
where E is the set of edges in the graph. Figure 1 shows an exemplary factor graph.
Figure 1: Factor graph representation of (2) with factor nodes of degree 2 (blue) and degree 1 (red). This graph also models the 2 × 2 fully connected Ising graph of Sec. 2.3.
SUM-PRODUCT ALGORITHM
The SPA is a message passing algorithm that operates in a factor graph and attempts to determine the marginals of the multivariate function f (X ). Messages are propagated between the nodes of the factor graph along its edges and represent interim results of the marginalization. Let m fj →xn (x n ) denote a message sent from a factor node f j along an edge to a variable node x n and let m xn→fj (x n ) denote a message on the same edge, but sent in the opposite direction. If the factor graph visualizes a probabilistic model, i.e., if the variable nodes represent random variables, a message m fj →xn (x n ) can be interpreted as a probabilistic statement from node f j about the random variable x n to be in one of its possible states [Yedidia et al., 2005]. The SPA defines the updates of the propagating messages at the nodes of the factor graph according to the simple rules [Kschischang et al., 2001]:
$$m_{x\to f_j}(x) = \prod_{f_i\in\mathcal{N}(x)\setminus f_j} m_{f_i\to x}(x) \tag{3}$$
$$m_{f_j\to x}(x) = \sum_{\sim\{x\}} \left( f_j(\mathcal{X}_j) \prod_{x'\in\mathcal{X}_j\setminus x} m_{x'\to f_j}(x') \right). \tag{4}$$
The summary operator $\sum_{\sim\{x\}}$ denotes the marginalization over all variables in $\mathcal{X}_j$ except for $x$. One key property of the SPA is the extrinsic information principle, which states that the update of an outgoing message $m_{A\to B}$ at node A destined to node B does not depend on the incident message $m_{B\to A}$ which travels on the same edge but in the opposite direction. For the special case of degree-2 factor nodes $\psi_{n,m}(x_n, x_m)$, the SPA update rule (4) thereby simplifies to
$$m_{\psi_{n,m}\to x_n}(x_n) = \sum_{x_m} \psi_{n,m}(x_n, x_m) \cdot m_{x_m\to\psi_{n,m}}(x_m).$$
Messages at factor nodes $\psi_n(x_n)$ with degree 1 are not updated at all.
Initially, all messages are set to some unbiased state before they are iteratively updated according to a certain schedule. For tree-structured graphs, the messages converge after they have once traveled forward and backward through the entire graph. The results of the SPA, i.e., the marginal functions $f(x_n)$, are finally obtained by a combination of all messages incident to the respective variable nodes:
$$f(x_n) = \prod_{f_i\in\mathcal{N}(x_n)} m_{f_i\to x_n}(x_n).$$
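For the pairwise binary models considered in this paper, the loopy SPA with a flooding schedule can be written compactly. The following Python sketch is a generic reference implementation of the update rules above (not the authors' code) and assumes exponential-form potentials $\psi_n(x_n) = \exp(\theta_n x_n)$ and $\psi_{n,m}(x_n, x_m) = \exp(J_{n,m} x_n x_m)$ as used in the examples below, with a symmetric coupling matrix J.

```python
import numpy as np

def sum_product(theta, J, n_iters=50):
    """Loopy SPA (flooding schedule) for a pairwise binary model with
    psi_n(x) = exp(theta[n] * x) and psi_nm(x, x') = exp(J[n, m] * x * x'),
    x, x' in {+1, -1}. J is assumed symmetric with zero diagonal.
    Returns the normalised single-variable beliefs, shape (N, 2)."""
    states = np.array([+1.0, -1.0])
    N = theta.shape[0]
    psi_n = np.exp(np.outer(theta, states))                    # (N, 2)
    edges = [(n, m) for n in range(N) for m in range(N)
             if n != m and J[n, m] != 0.0]
    msg = np.ones((N, N, 2))                                   # msg[n, m]: n -> m
    for _ in range(n_iters):
        new_msg = np.ones((N, N, 2))
        for n, m in edges:
            # Variable-to-factor message: local factor times extrinsic inputs.
            var_msg = psi_n[n].copy()
            for k, t in edges:
                if t == n and k != m:
                    var_msg = var_msg * msg[k, n]
            # Factor-to-variable message: marginalise the pairwise factor.
            psi_nm = np.exp(J[n, m] * np.outer(states, states))  # (x_n, x_m)
            out = psi_nm.T @ var_msg
            new_msg[n, m] = out / out.sum()                      # normalise
        msg = new_msg
    beliefs = psi_n.copy()
    for n, m in edges:
        beliefs[m] = beliefs[m] * msg[n, m]
    return beliefs / beliefs.sum(axis=1, keepdims=True)
```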
Since the SPA makes no reference to the topology of the factor graph and the message updates are local, the SPA may also be applied to factor graphs with cycles [Yedidia et al., 2003]. On graphs with cycles, the SPA only yields an approximation of the exact marginals. While this approximation works surprisingly well in many cases, even including particular classes of graphs with many small cycles, there are also cases where the results are quite poor or where the SPA does not converge at all [Murphy et al., 1999].
Relation to the Bethe Approximation In their seminal work, Yedidia et al. [2000] showed a revealing connection between the SPA and free energy approximations in statistical physics. From a variational perspective, probabilistic inference can be seen as an optimization problem
$$q^\star = \underset{q\in\mathbb{M}}{\arg\min}\; D_{\mathrm{KL}}(q\,\|\,p), \tag{5}$$
where we want to find the distribution q from the set M of all globally valid probability distributions, known as the marginal polytope [Wainwright and Jordan, 2008]. Since the Kullback-Leibler (KL) divergence D KL (q||p) is always non-negative and zero if and only if q = p, we reach the minimum exactly for q = p. Obviously, optimizing over all possible probability distributions q ∈ M is generally intractable. Based on some general assumption, free energy methods simplify the problem in (5) to the minimization of a variational free energy term. We refer the reader to [Yedidia et al., 2005] for a detailed elaboration on this topic.
The Bethe approximation restricts the distribution q(X ) to be a product of univariate distributions b n (x n ) and joint distributions b n,m (x n , x m ) between pairs (n, m) ∈ E:
$$q_{\mathrm{Bethe}}(\mathcal{X}) := \prod_{n=1}^{N} b_n(x_n) \prod_{(n,m)\in\mathcal{E}} b_{n,m}(x_n, x_m).$$
This simplification leads to the Bethe free energy
$$F_{\mathrm{Bethe}} = \sum_{(n,m)\in\mathcal{E}} \sum_{x_n, x_m} b_{n,m}(x_n, x_m) \log\frac{b_{n,m}(x_n, x_m)}{\phi_{n,m}(x_n, x_m)} - \sum_{n=1}^{N} \left(|\mathcal{N}(x_n)| - 1\right) \sum_{x_n} b_n(x_n) \log\frac{b_n(x_n)}{\psi_n(x_n)},$$
with ϕ n,m (x n , x m ) := ψ n (x n )ψ n,m (x n , x m )ψ m (x m ). Moreover, the Bethe approximation relaxes the search space in (5) from the marginal polytope M to the local polytope
$$\mathbb{L} = \left\{ b_n(x_n),\, b_{n,m}(x_n, x_m),\ \forall n\in[1;N],\ (n,m)\in\mathcal{E} \;:\; \sum_{x_n} b_{n,m}(x_n, x_m) = b_m(x_m),\ \sum_{x_m} b_{n,m}(x_n, x_m) = b_n(x_n),\ \sum_{x_n} b_n(x_n) = 1 \right\}.$$
This means that the distributions b n (x n ) and b n,m (x n , x m ) only need to locally fulfill consistency in a pairwise sense. In summary, the Bethe approximation converts (5) into the optimization problem
$$q_{\mathrm{Bethe}} = \underset{\{b_n,\, b_{n,m}\}\in\mathbb{L}}{\arg\min}\; F_{\mathrm{Bethe}}(\{b_n, b_{n,m}\}). \tag{6}$$
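As an illustration of the objective in (6), the Bethe free energy of a set of candidate beliefs can be evaluated directly. The Python helper below is a sketch for the pairwise binary model with $\phi_{n,m} = \psi_n \psi_{n,m} \psi_m$; it assumes strictly positive beliefs and the exponential potentials used in the examples below.

```python
import numpy as np

STATES = np.array([+1.0, -1.0])

def bethe_free_energy(b_single, b_pair, theta, J, edges):
    """Bethe free energy of candidate beliefs for a pairwise binary model.

    b_single : array (N, 2), beliefs b_n(x_n) over x_n in {+1, -1}
    b_pair   : dict mapping an edge (n, m) to a 2x2 belief b_nm(x_n, x_m)
    theta, J : parameters of psi_n(x) = exp(theta[n] x), psi_nm = exp(J[n,m] x x')
    edges    : list of undirected edges, each pair listed once
    """
    # |N(x_n)| - 1 equals the number of pairwise neighbours of x_n.
    degree = np.zeros(len(b_single))
    for n, m in edges:
        degree[n] += 1
        degree[m] += 1

    f = 0.0
    for n, m in edges:
        psi_n = np.exp(theta[n] * STATES)
        psi_m = np.exp(theta[m] * STATES)
        psi_nm = np.exp(J[n, m] * np.outer(STATES, STATES))
        phi = psi_n[:, None] * psi_nm * psi_m[None, :]
        b = b_pair[(n, m)]
        f += np.sum(b * np.log(b / phi))
    for n in range(len(b_single)):
        psi = np.exp(theta[n] * STATES)
        f -= degree[n] * np.sum(b_single[n] * np.log(b_single[n] / psi))
    return f
```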
Yedidia et al. [2000] showed that the fixed points of BP applied to a factor graph correspond to the stationary points of the respective Bethe free energy. Seen in this light, BP is a suboptimal algorithm to minimize F Bethe . The approximative nature in this sense is twofold: First, there may exist multiple fixed points of the SPA for the same factor graph, i.e., the solution of the (converged) BP might correspond to an extremum of F Bethe other than the global minimum in L [Knoll et al., 2018]. Second, the beliefs only fulfill the pairwise consistency constraints at the fixed points of BP. This means that the solution only lies within the local polytope L after BP has converged. However, BP does not necessarily converge and failure of convergence is a major error mode [Yuille, 2002].
Various methods to directly solve (6) or variants thereof were proposed (see [Yedidia et al., 2005] and references therein). Yuille [2002] proposed to decompose the Bethe free energy into concave and convex parts which enables the application of a concave-convex procedure (CCCP). That algorithm consists of a double loop where the outer loop iteratively minimizes F Bethe and the inner loop ensures that the pairwise consistency constraints are fulfilled. Due to the CCCP, the algorithm provably converges to an extremum of the Bethe free energy.
EXAMPLES
For the remainder of this section, we introduce two important classes of factor graphs which are the basis for the numerical experiments in Sec. 4.
Example 1 - Ising Graphs. We consider factor graphs with $N = M^2$ variable nodes, arranged in a square 2D lattice, in which pairs of adjacent variable nodes $(x_n, x_m)$ are symmetrically coupled by the weights $J_{n,m}$ via factor nodes $\psi_{n,m}(x_n, x_m) = \exp(J_{n,m} x_n x_m)$. Additionally, each variable node $x_n$ has local evidence in the form of a degree-1 factor node $\psi_n(x_n) = \exp(\theta_n x_n)$. The Ising model originates from statistical physics, where the binary variables $x_n \in \{+1, -1\}$ represent the orientation of elementary magnets in a lattice [Peierls, 1936]. Each magnet is exposed to a local field $\theta_n$ and is influenced by its neighbors via an assigned pairwise coupling $J_{n,m}$. Besides its fundamental significance in statistical physics, the Ising model is a universal mathematical model and finds applications in many other scientific domains such as image processing [Besag, 1986] and modeling of social networks [Banerjee and El Ghaoui, 2008, Wainwright and Jordan, 2008].
Following [Yedidia et al., 2005, Mooij and Kappen, 2007, Knoll et al., 2018], we study the fully connected 2 × 2 Ising model, i.e., M = 2 and N = 4, where every pair of variable nodes is connected. A factor graph representation of this model is given in Fig. 1. With more cycles than variable nodes and a girth of 3, this graph can be parametrized to a highly frustrated system and is thus able to highlight the weaknesses of the SPA [Yedidia et al., 2005]. In particular, we consider the Ising spin glass, where the parameters $\theta_n$ and $J_{n,m}$ are independent and identically distributed (i.i.d.) random variables, sampled from a uniform distribution $\mathcal{U}[-S, +S]$ with $S \in \mathbb{R}^+$. We are interested in the computation of the marginal functions
$$f(x_n) = \sum_{\sim\{x_n\}} f(x_1, x_2, x_3, x_4), \qquad n = 1, 2, 3, 4, \tag{7}$$
which correspond to marginal probability distributions p(x n ) = f (x n ) if the Ising graph represents a probabilistic model. While the direct computation of (7) is still feasible for our example with N = 4, the number of summations grows exponentially with N , which calls for alternative methods with lower complexity. Applying the SPA on the factor graph in Fig. 1 yields the single beliefs b n (x n ) as an approximation of f (x n ) with a complexity that only grows quadratically with N .
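For concreteness, the brute-force marginalization in (7) can be written down directly for such a small model. The following NumPy sketch (our own illustrative code, not taken from the paper; function and variable names are assumptions) enumerates all 2^N states of a fully connected Ising spin glass and normalizes the resulting marginals:

```python
import itertools
import numpy as np

def exact_ising_marginals(theta, J):
    """Brute-force marginals f(x_n) for a small fully connected Ising model.

    theta: length-N array of local fields theta_n.
    J:     symmetric N x N array of couplings J_{n,m} (diagonal ignored).
    Returns an (N, 2) array of normalized marginals [p(x_n = +1), p(x_n = -1)].
    """
    N = len(theta)
    marg = np.zeros((N, 2))
    for x in itertools.product([+1, -1], repeat=N):
        x = np.array(x)
        # unnormalized weight: prod_n exp(theta_n x_n) * prod_{n<m} exp(J_{n,m} x_n x_m)
        energy = theta @ x + sum(J[n, m] * x[n] * x[m]
                                 for n in range(N) for m in range(n + 1, N))
        w = np.exp(energy)
        for n in range(N):
            marg[n, 0 if x[n] == +1 else 1] += w
    return marg / marg.sum(axis=1, keepdims=True)

# 2x2 fully connected spin glass with parameters drawn from U[-S, +S]
rng = np.random.default_rng(0)
S, N = 2.0, 4
theta = rng.uniform(-S, S, size=N)
J = rng.uniform(-S, S, size=(N, N))
J = np.triu(J, 1) + np.triu(J, 1).T   # symmetric coupling, zero diagonal
print(exact_ising_marginals(theta, J))
```

This exponential enumeration is exactly what the SPA avoids, at the cost of being approximate on cyclic graphs.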
Example 2 -Symbol Detection
We study the problem of symbol detection in a digital communication system [Proakis and Salehi, 2007]. A transmitter sends a sequence of N independent and uniformly distributed symbols c n ∈ {+1, −1} over a linear channel with memory, impaired by additive white Gaussian noise (AWGN). The receiver observes the sequence
$$\mathbf{y} = \underbrace{\begin{pmatrix} h_0 & & 0\\ \vdots & \ddots & \\ h_L & & h_0\\ & \ddots & \vdots\\ 0 & & h_L \end{pmatrix}}_{=:\,\mathbf{H}} \underbrace{\begin{pmatrix} c_1\\ c_2\\ \vdots\\ c_N \end{pmatrix}}_{=:\,\mathbf{c}} + \underbrace{\begin{pmatrix} w_1\\ w_2\\ \vdots\\ w_{N+L} \end{pmatrix}}_{=:\,\mathbf{w}}, \qquad (8)$$

where h ∈ R^{L+1} describes the impulse response of the channel of length L + 1 and w_k ∼ CN(0, σ^2) are independent noise samples from a complex circular Gaussian distribution. Applying Bayes' theorem, the posterior distribution p(c|y) can be expressed in terms of the likelihood:

$$p(\mathbf{c}|\mathbf{y}) = \frac{1}{Z}\, p(\mathbf{y}|\mathbf{c}) = \frac{1}{Z} \exp\!\left( -\frac{\lVert \mathbf{y} - \mathbf{H}\mathbf{c} \rVert^2}{\sigma^2} \right).$$
In the context of symbol detection, we want to infer the transmit symbols c n based on the channel observation y, i.e., we are interested in the marginal distributions p(c n |y).
Based on an observation model by Ungerboeck [1974]
$$p(\mathbf{y}|\mathbf{c}) \propto \exp\!\left( \frac{2\,\mathrm{Re}\{\mathbf{c}^H \mathbf{H}^H \mathbf{y}\} - \mathbf{c}^H \mathbf{H}^H \mathbf{H} \mathbf{c}}{\sigma^2} \right),$$
we can factorize the likelihood
$$p(\mathbf{y}|\mathbf{c}) = \frac{1}{Z} \prod_{n=1}^{N} F_n(c_n) \prod_{\substack{m=1 \\ m<n}}^{N} I_{n,m}(c_n, c_m) \qquad (9)$$

into the factors

$$F_n(c_n) := \exp\!\left( \frac{1}{\sigma^2}\, \mathrm{Re}\!\left\{ 2 x_n c_n^{\star} - G_{n,n} |c_n|^2 \right\} \right), \qquad I_{n,m}(c_n, c_m) := \exp\!\left( -\frac{2}{\sigma^2}\, \mathrm{Re}\{ G_{n,m}\, c_m c_n^{\star} \} \right),$$

where x := H^H y and G := H^H H are the matched filtered versions of the observation and the channel matrix, respectively. Modeling a factor graph based on (9) and applying the SPA yields a low-complexity symbol detection algorithm, originally proposed by Colavolpe et al. [2011].
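As a rough illustration of how these quantities could be computed in practice, the following Python sketch builds the banded convolution matrix H from (8), forms the matched-filtered quantities x and G, and evaluates the factors of (9) for a candidate symbol vector. This is our own hedged example; the names and structure are assumptions, not the authors' implementation:

```python
import numpy as np

def ungerboeck_factors(h, y, c, sigma2):
    """Evaluate the factors F_n(c_n) and I_{n,m}(c_n, c_m) of the likelihood
    factorization for one candidate symbol vector c.

    h:      channel impulse response of length L+1
    y:      received sequence of length N+L
    c:      candidate symbols, length N, entries in {+1, -1}
    sigma2: noise variance
    """
    N, L = len(c), len(h) - 1
    # banded Toeplitz convolution matrix H of size (N+L) x N
    H = np.zeros((N + L, N), dtype=complex)
    for n in range(N):
        H[n:n + L + 1, n] = h
    x = H.conj().T @ y        # matched-filtered observation
    G = H.conj().T @ H        # matched-filtered channel (Gram) matrix
    # F_n(c_n) = exp( (Re{2 x_n c_n*} - G_nn |c_n|^2) / sigma^2 )
    F = np.exp((np.real(2 * x * np.conj(c)) - np.real(np.diag(G)) * np.abs(c) ** 2) / sigma2)
    I = np.ones((N, N))
    for n in range(N):
        for m in range(n):
            I[n, m] = np.exp(-2.0 / sigma2 * np.real(G[n, m] * c[m] * np.conj(c[n])))
    return F, I

# small demo with an assumed channel and noise level
h = np.array([0.8, 0.5, 0.3])
c = np.array([1, -1, 1, 1, -1], dtype=complex)
sigma2 = 0.5
w = np.sqrt(sigma2 / 2) * (np.random.randn(len(c) + 2) + 1j * np.random.randn(len(c) + 2))
y = np.convolve(h, c) + w
F, I = ungerboeck_factors(h, y, c, sigma2)
```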
MESSAGE PASSING FOR CYCLIC GRAPHS
Despite its drawbacks on cyclic graphs, the amazing success of the SPA lies in its simplicity and generality: it is only defined by a local message update rule which can be applied to any generic factor graph based on a suitable message update schedule. Driven by this elegant concept, we are interested in finding message passing algorithms that perform well on graphs with many cycles where the SPA fails. More specifically, we ask the following questions:
• If the SPA fails to converge, does an alternative local message update rule exist that converges (possibly to an extremum of the Bethe free energy) and which provides better results than the SPA?
• If the SPA converges to an extremum of the Bethe free energy, is there a local message update rule which yields superior performance, either because the SPA converges to a fixed point which only corresponds to a local instead of global minimum of the Bethe free energy, or because the Bethe approximation itself is a bad approximation in this case?
ON MESSAGE UPDATE RULES
A message update rule defines a mapping from one or multiple incident messages to one outgoing message, which is applied locally at the variable or factor nodes of a factor graph. Besides the initialization of the messages and their update schedule, these mappings fully define a graph-based inference algorithm. The SPA update rule at the variable nodes (3) is simply the product of all extrinsic messages. We adopt this quite intuitive aggregation principle and focus on finding a message update rule for the factor nodes, i.e., an alternative to (4). For factor nodes of degree 2, such as in (2), the update rule simplifies to a mapping from one single incident message to one outgoing message:
$$\mathrm{FN}_{\mathrm{e}}(\psi_{n,m}) :\; m_{x_n \to \psi_{n,m}}(x_n) \;\mapsto\; m_{\psi_{n,m} \to x_m}(x_m). \qquad (10)$$
If the pairwise factors ψ n,m (x n , x m ) are symmetric with regard to x n and x m , and follow the exponential form
$$\psi_{n,m}(x_n, x_m) = \exp(E_{n,m}\, x_n x_m), \qquad x_n, x_m \in \{+1, -1\},$$
we can distill the dependency from the function ψ n,m to the scalar parameter E n,m ∈ R, which quantifies the repulsive (E n,m < 0) or attractive (E n,m > 0) coupling between the nodes x n and x m . This directly coincides with the pairwise coupling weights J n,m = E n,m of the Ising model in Example 1. The factor nodes I n,m of Example 2 can be reduced to the coupling parameters E n,m = −2G n,m /σ 2 .
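For binary variables and a pairwise exponential factor of this form, the SPA instance of the mapping (10) has a well-known closed form in the LLR domain, L_out = 2 artanh(tanh(E_{n,m}) tanh(L_in/2)); this is the baseline that the learned update rule introduced below replaces. The following sketch is our own illustrative code for that closed form:

```python
import numpy as np

def spa_factor_update_llr(L_in, E):
    """Extrinsic SPA update at a pairwise factor psi(x_n, x_m) = exp(E * x_n * x_m),
    expressed in the LLR domain: L_out = 2 * artanh(tanh(E) * tanh(L_in / 2))."""
    return 2.0 * np.arctanh(np.tanh(E) * np.tanh(L_in / 2.0))

# for strong coupling the outgoing LLR saturates near +/- 2E, which is the
# regime where the learned mapping in Fig. 2 attenuates the messages instead
for L_in in [-10.0, -2.0, 0.0, 2.0, 10.0]:
    print(L_in, spa_factor_update_llr(L_in, E=1.5))
```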
Challenging the Extrinsic Principle Most of the existing message passing algorithms follow the extrinsic information principle. For instance in turbo decoding, it is known to be an important property of good message passing decoders [Richardson and Urbanke, 2001]. Ensuring that only extrinsic messages are received, it prevents backcoupling of intrinsic information in tree-structured graphs, which would otherwise lead to a self-enhancement of the messages, also known as "double counting". Thereby, it guarantees that the SPA is exact on trees [Kuck et al., 2020]. We argue that this is in general not valid for cyclic graphs where backcoupling of messages is inevitable due to the very nature of the cycles. Therefore, we propose a second message update rule which operates contradictory to the extrinsic principle: instead of ignoring the intrinsic message, the message update should rather actively leverage this additional information, e.g., to ensure that local consistency between neighboring nodes is fulfilled.
Without the extrinsic principle, we need to reconsider the messages from degree-1 factor nodes which are then also subject to iterative updates. To avoid an increase in complexity due to additional message updates at the degree-1 factor nodes, we apply a clustering approach similar to [Rapp et al., 2022]. We split up the single factors ψ_n(x_n) into |X_n| parts Ψ_n(x_n) := (ψ_n(x_n))^{1/|X_n|} and merge them into the adjacent pairwise factors ψ_{n,m}(x_n, x_m), such that the new clustered factors are Ψ_{n,m}(x_n, x_m) := Ψ_n(x_n) ψ_{n,m}(x_n, x_m) Ψ_m(x_m).
The overall factorization (2) simplifies to
$$f(x_1, \ldots, x_N) = \frac{1}{Z} \prod_{(n,m) \in \mathcal{E}} \Psi_{n,m}(x_n, x_m),$$
which leads to the non-extrinsic mapping
$$\mathrm{FN}(\Psi_{n,m}) :\; \big( m_{x_n \to \Psi_{n,m}}(x_n),\, m_{x_m \to \Psi_{n,m}}(x_m) \big) \;\mapsto\; m_{\Psi_{n,m} \to x_m}(x_m). \qquad (11)$$

If the single factors are in exponential form

$$\Psi_n(x_n) = \exp(E_n x_n), \qquad x_n \in \{+1, -1\},$$
the clustered factors Ψ n,m (x n , x m ) are fully characterized by the three scalars E n , E n,m and E m .
Neural Networks as Function Approximators
Finding suitable mappings (10) or (11) such that the overall message passing algorithm performs well is generally nontrivial. We employ feed-forward NNs, known to be efficient universal function approximators [Hornik et al., 1989], to reduce the search space of all possible mappings to a set of weights and biases P, which fully parametrize the NN. At a factor node f_j, the network accepts N_in inputs and produces the updated outgoing message m_{f_j→x_n}. For factor graphs with binary variables x_n, the messages m_{f_j→x_n}(x_n) can be expressed in scalar log-likelihood ratios (LLRs)

$$L_{f_j \to x_n} := \log \frac{m_{f_j \to x_n}(x_n = +1)}{m_{f_j \to x_n}(x_n = -1)}.$$
A similar definition holds for the LLRs L xn→fj based on the messages m xn→fj (x n ). For the extrinsic update (10), there are N in = 2 inputs: the LLR of the incoming extrinsic message and the coupling parameter E n,m of the local factor node. Without the extrinsic principle, the NN furthermore accepts the LLR of the intrinsic message as well as E n and E m , i.e., in total N in = 5 inputs. Since we only approximate a local mapping from a few scalar inputs to a single output, we can choose a very compact NN structure with a single hidden layer and 7 neurons, as summarized in Table 1.
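A possible PyTorch realization of this compact network is given below. It is our own sketch; the class name and interface are assumptions, while the layer dimensions and activations follow Table 1.

```python
import torch
import torch.nn as nn

class MessageUpdateNet(nn.Module):
    """Compact feed-forward network for the factor-node message update
    (architecture as in Table 1: a linear input layer with ReLU, a linear
    hidden layer with Tanh, and a linear output). n_in = 2 for the extrinsic
    rule (10) and 5 for the non-extrinsic rule (11)."""
    def __init__(self, n_in: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, 7), nn.ReLU(),
            nn.Linear(7, 7), nn.Tanh(),
            nn.Linear(7, 1),  # outgoing LLR
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# one shared instance is reused at every factor node and in every iteration
update_rule = MessageUpdateNet(n_in=2)
llr_out = update_rule(torch.tensor([[0.7, 1.5]]))  # inputs: [L_{x_n -> psi}, E_{n,m}]
```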
Having set up the NN structure, we are able to define a convenient message update rule by appropriately tuning the parameterization P of the NN. We are interested in a local update rule such that the overall message passing performs well. To this end, we optimize P with respect to an objective function that evaluates the end-to-end performance of the inference task. Therefore we apply a fixed number of message passing iterations and back-propagate the gradient of the objective function in order to iteratively optimize P using gradient descent based on a representative set of examples. Note that this data-driven approach inevitably leads to a specialization of the learned message update to the data. However, we expect the result to be fairly generic and to have good generalization capabilities since we only optimize very few parameters in an otherwise model-aware system. Moreover, despite the end-to-end optimization, we only use a single message update rule for the entire factor graph, i.e., we employ the same instance of the NN for the message updates at all factor nodes and in each iteration 1 .
We note that our approach can be interpreted as a special instance of a GNN as, e.g., described by Yoon et al. [2019].
In comparison, our model passes scalar messages instead of high-dimensional vectors and does not use any hidden states or embeddings at the variable nodes. For this reason, we do not require a second NN with a gated recurrent unit, as used in [Yoon et al., 2019] to update the hidden states based on the aggregated messages. Furthermore, we do not require a third NN which implements a trainable readout function to interpret the final node embeddings.
END-TO-END OBJECTIVE FUNCTIONS
In the generic context of marginal inference, we hope to find a good approximation of the true marginals. A convenient objective function is the KL divergence which measures a type of statistical distance between the beliefs b_n(x_n) and the exact marginal distributions p(x_n) = Σ_{∼{x_n}} p(x_1, . . . , x_N):
$$\mathcal{L}_{\mathrm{KL}} := D_{\mathrm{KL}}\big( b_n(x_n) \,\|\, p(x_n) \big). \qquad (12)$$
For large graphs, the computation of p(x n ) might be infeasible, and L KL becomes impractical. Therefore, we propose alternative loss functions in what follows.
The training of a symbol detector as in Example 2 is a typical supervised learning scenario where the labels are given by the transmitted symbols c n . An appropriate performance measure for symbol detection is the bitwise mutual information (BMI) which is an achievable information rate 2 for our scenario [Guillén i Fàbregas et al., 2008]. By a sample mean estimate over D labeled examples (c, y) from the data batch D, the BMI can be approximated by
$$\mathrm{BMI} \approx 1 - \frac{1}{DN} \sum_{n=1}^{N} \sum_{(\mathbf{c},\mathbf{y}) \in \mathcal{D}} \log_2\!\left( \mathrm{e}^{-c_n L_n(\mathbf{y})} + 1 \right),$$
where L n (y) denotes the LLR from the belief b n (c n ) [Alvarado et al., 2018].
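A direct sample-mean implementation of this estimate is shown below (our own minimal sketch; the function name is an assumption):

```python
import numpy as np

def estimate_bmi(llrs, symbols):
    """Sample-mean estimate of the bitwise mutual information from detector
    LLRs L_n(y) and transmitted symbols c_n in {+1, -1}; both arrays have
    shape (D, N) for D labelled examples of length N."""
    return 1.0 - np.mean(np.log2(np.exp(-symbols * llrs) + 1.0))

# confident, correct LLRs give a BMI close to 1 bit per symbol
print(estimate_bmi(np.array([[5.0, -4.0]]), np.array([[1.0, -1.0]])))
```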
Other applications such as the Ising model in Example 1 relate to the class of unsupervised problems if the true marginals are not accessible. For such scenarios, we consider a novel and application-agnostic objective function in the following. Inspired by the Bethe approximation, which is known to yield excellent results for many applications, even in cases where the SPA performs poorly [Yuille, 2002], we propose a regularized minimization of the Bethe free energy:
$$\mathcal{L}_{\mathrm{Bethe}} := F_{\mathrm{Bethe}} + \alpha \mathcal{L}_{\mathcal{L}}, \qquad \alpha \in \mathbb{R}_{+}. \qquad (13)$$
To ensure local consistency, we introduce the Bethe consistency distance
$$\mathcal{L}_{\mathcal{L}} := D_{\mathrm{KL}}\!\left( \sum_{x_m} b_{n,m}(x_n, x_m) \,\Big\|\, b_n(x_n) \right) + D_{\mathrm{KL}}\!\left( \sum_{x_n} b_{n,m}(x_n, x_m) \,\Big\|\, b_m(x_m) \right)$$
as a type of distance measure between the solution of the approximative inference {b n , b n,m } and the local polytope L. The weight α in (13) is a hyperparameter that controls how strictly the local consistency is enforced. With this penalty term L L , we hope to suppress oscillations in the message passing, as they occur in the SPA for graphs with strong coupling.
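For a single edge and binary variables, this consistency penalty can be evaluated as in the following sketch (illustrative code under our own naming; in the full loss it would be accumulated over all edges and added to F_Bethe with weight α):

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete distributions given as arrays."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def bethe_consistency_distance(b_nm, b_n, b_m):
    """Pairwise Bethe consistency distance for one edge:
    KL( sum_{x_m} b_{n,m} || b_n ) + KL( sum_{x_n} b_{n,m} || b_m ).

    b_nm: 2x2 joint belief over (x_n, x_m); b_n, b_m: length-2 marginal beliefs.
    """
    return kl(b_nm.sum(axis=1), b_n) + kl(b_nm.sum(axis=0), b_m)

# a locally consistent edge gives (near) zero distance
b_n = np.array([0.7, 0.3])
b_m = np.array([0.4, 0.6])
b_nm = np.outer(b_n, b_m)
print(bethe_consistency_distance(b_nm, b_n, b_m))  # ~0
```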
EXPERIMENTS
We consider the examples of Sec. 2.3 for numerical evaluation. To enable a deeper analysis, we fix the number of variable nodes to N = 4 such that the computation of the true marginals is feasible. Despite this rather small extent, these models lead to factor graphs with a high density of short cycles and are thus expressive examples to highlight the weaknesses of the SPA. Furthermore, we fix the global settings of the message passing to standard choices: all LLR messages are initialized with zero and we perform 10 iterations of a parallel schedule, i.e., each iteration comprises the parallel update of all messages at the factor nodes followed by message updates at all variable nodes.
A common technique to improve the performance of the SPA on graphs with cycles is the use of "momentum", i.e., replacing a message L^{(t)} of the SPA in iteration t with the weighted average (1 − µ)L^{(t)} + µL^{(t−1)} [Murphy et al., 1999]. By choosing 0 < µ < 1, the idea is to improve the convergence behavior of the message passing scheme compared to the original SPA (µ = 0) while retaining the same fixed points. As in [Murphy et al., 1999], we set µ = 0.1 and use this variant of the SPA as an additional baseline in the following experiments, where we refer to it as SPA_µ.
Besides the SPA, we similarly apply message passing based on the newly proposed update rule (11). We call the resulting inference algorithm cycBP (BP for cyclic graphs). If we use the extrinsic update rule (10), we denote the algorithm with cycBP e . We also consider the CCCP for the Bethe free energy as defined in [Yuille, 2002], since it gives interesting insights into the quality of the Bethe approximation. For the double loop, we apply 25 outer iterations, each comprising 25 inner iterations.
Ising model We study the 2 × 2 fully connected spin glass model of Example 1 for S = 2, i.e., all parameters θ_n and J_{n,m} are independently sampled from a uniform distribution U[−2, +2]. Table 2 evaluates the behavior of all discussed inference schemes, averaged over 10^5 different graphs. σ_{L_KL} denotes the empirical standard deviation of L_KL of the individual graphs from the empirical mean. We can observe that the SPA does not leverage the full potential of the Bethe approximation, since the average loss L_KL = 0.087 of the SPA is twice as large compared to L_KL = 0.044 for the CCCP.
Although the SPA reaches on average a smaller F_Bethe than the CCCP, the beliefs of the SPA show local inconsistencies with L_L = 0.3 due to non-convergent behavior. Using "momentum" in the SPA message updates can help to mitigate this behavior: the SPA_µ shows improved pairwise consistency L_L = 0.12 and also yields on average a better approximation of the true marginals (L_KL = 0.035). The CCCP has a vanishing Bethe consistency distance L_L, i.e., the results of the CCCP lie within the local polytope L. We search for alternative message update rules by optimizing P of the NN-based mappings towards minimal L_KL. The training batches are sampled from a spin glass model with S = 3 to put more emphasis on graphs with strong coupling, where the SPA is known to be susceptible to convergence errors. The results in Tab. 2 show that there indeed exist superior message update rules to the SPA for this class of cyclic graphs. Using the extrinsic update rule (10), the cycBP_e algorithm reaches L_KL = 0.04 and thereby outperforms the original SPA as well as the CCCP.
We visualize the message update rule of the cycBP e algorithm in Fig. 2 by plotting the optimized mapping (10) from the incoming LLR message L xn→ψn,m to the outgoing LLR message L ψn,m→xm . Similar to the SPA, the mapping is point-symmetric to the origin. The major difference is the behavior for incident LLR messages with high magnitudes |L xn→ψn,m | > 8, where the outgoing messages are heavily attenuated. Intuitively, this behavior reduces the potential of oscillation in graphs with strong coupling E n,m . We can further improve the inference performance by disabling the extrinsic principle in the message passing procedure. The resulting algorithm cycBP can be interpreted as a generalization of cycBP e and outperforms the latter with L KL = 0.014, as reported in Tab. 2. It also yields a superior approximation of the true marginals compared to the momentum-based SPA µ , although the Bethe consistency distance L L = 0.48 is relatively high in this case.
Moreover, we consider unsupervised training towards the proposed loss function L_Bethe. For the cycBP_e algorithm, the unsupervised training leads to a smaller loss L_KL = 0.03 compared to the supervised training, i.e., the loss function L_Bethe is better suited for the optimization via stochastic gradient descent than the loss function L_KL in this case. In the unsupervised training towards L_Bethe, we observe substantial differences between the two variants cycBP_e and cycBP: while the optimization of cycBP_e converges reliably, the training of cycBP is unstable and the optimization needs to run multiple times with different initializations for P until a reasonable result is obtained. This behavior is also reflected in the results in Tab. 2, where the cycBP algorithm shows a degraded performance with L_KL = 0.027, compared to the supervised training (L_KL = 0.014). We conjecture that this is accounted for by the local consistency constraint L_L, which can be directly enforced at the message update at the factor nodes if the intrinsic message also takes part in the update. Optimization of the hyperparameter α did not lead to considerable changes in this behavior and we used α = 25 for all presented results.
To analyze the convergence properties on highly frustrated systems, we consider the Ising model with constant parameters θ and J. Note that we do not specifically optimize the models for this scenario, but rather use the previous parametrization P which is optimized for spin glasses with S = 3. Figure 3 plots L KL over θ and J for different inference algorithms. Knoll et al. [2018] showed that the Bethe free energy has a unique minimum in the complete antiferromagnetic domain (J < 0) and in large parts of the ferromagnetic case (J > 0), except for a region around θ = 0, where F Bethe has two minima in L. This coincides with our findings of the approximation error L KL for the CCCP in Fig. 3. The SPA shows failure to converge in large parts of the antiferromagnetic region with J < −1, where it does not converge to the unique fixed point and produces large approximation errors. The extrinsic message passing scheme cycBP e , optimized towards L Bethe , shows an improved behavior. However, in the antiferromagnetic case with strong repellings (J < −1.5), there are still considerable approximation errors. The non-extrinsic message passing algorithm cycBP shows good inference capabilities over the complete considered region if it is optimized towards L KL . The unsupervised training with respect to L Bethe leads to a similar performance as the CCCP, however, the training procedure is again relatively unstable in this case.
Symbol Detection
We further consider the factor graphs of Example 2 for approximate symbol detection on linear channels with memory L = 2. To generate random channels, we independently sample each tap h ℓ of the channel impulse response for every example from a Gaussian distribution with zero mean and unit variance and subsequently normalize each channel to unit energy ∥h∥ 2 = 1. Figure 4 evaluates the detection performance of the considered inference algorithms in terms of the BMI over the signal-to-noise ratio E b /N 0 = 1/σ 2 . Both, the SPA and the CCCP run into an error floor for high E b /N 0 , where the graphs tend to have strong coupling via the factor nodes I n,m (c n , c m ). The momentum-based message updates of the SPA µ enhance the original SPA in the entire E b /N 0 range under consideration and close the performance gap to the CCCP. During the optimization of the new message update rules, the E b /N 0 in dB was sampled from U[0, 16] for each batch element independently. To help the update rule adapt to different channel realizations, i.e., different E b /N 0 and different channel taps h, we feed E b /N 0 and h as additional inputs to the NN. Figure 4 shows that the optimized algorithms cycBP e and cycBP clearly outperform the SPA and the CCCP, especially for high E b /N 0 . Consistently with our findings on the Ising graphs, the cycBP algorithm performs better than the extrinsic variant cycBP e . The latter can also be trained towards L Bethe without degrading the detection performance. This is particularly surprising since it thereby clearly outperforms the CCCP. The optimization of the cycBP algorithm towards L Bethe does not converge and is therefore not shown in Fig. 4. However, since training towards the BMI is feasible for large N , the supervised training of the cycBP algorithm yields an attractive algorithm with low complexity and superior performance, which can be highly relevant for practical applications.
DISCUSSION
To investigate the two central questions which we formulated in Sec. 3 and to show the potential of our method, we investigated compact models with N = 4. These models are expressive examples for analysis since they have a high density of short cycles and because the true marginals are available as ground truth data. However, verifying the capability of the proposed cycBP algorithm for practical applications requires extensive numerical evaluation on larger graphs and varying graph structures. This is ongoing work and our preliminary results are promising.
CONCLUSION
This work considered message passing for approximate inference and showed the existence of message update rules which perform especially well on cyclic graphs where the SPA fails. We challenged the extrinsic information principle for cyclic graphs and proposed an alternative message update rule which also takes intrinsic information into account. The gain was demonstrated by numerical experiments on two exemplary classes of factor graphs. The learned message update rules generalize well and training is extremely fast since the update rule is defined by a very compact NN that is reused at all factor nodes. We furthermore proposed a novel unsupervised and application-agnostic loss function that follows the idea of the Bethe approximation.
Figure 2: The mapping (10) of the cycBP_e algorithm, optimized for the 2 × 2 spin glass with S = 3 (solid), in comparison with the mapping of the SPA (dotted), plotted for different coupling E_{n,m}.

Figure 3: Approximation error L_KL on the 2 × 2 Ising model with constant parameters θ and J: SPA (top left), CCCP (top right), cycBP optimized on spin glasses w.r.t. L_KL (bottom left), and cycBP_e trained with L_Bethe (bottom right).

Figure 4: Detection performance of the proposed cycBP algorithm, averaged over 10^7 random channels.
Table 1: NN Architecture

Layer (linear) | Activation | Dimension
Input          | ReLU       | (N_in, 7)
Hidden         | Tanh       | (7, 7)
Output         | Linear     | (7, 1)
Table 2: Behavior of the Novel Message Passing Algorithm cycBP for the 2 × 2 Spin Glass, Averaged over 10^5 Graphs

Algo. | Loss | L_KL | σ_{L_KL}
Footnote 1: As a consequence, the training procedure of the NN is not entirely local because the local copies of the NN at each factor node must be globally synchronized during optimization. However, the local nature of the message updates is still retained.

Footnote 2: In our case, where the symbols c_n follow a Rademacher distribution, the BMI is equivalent to the mutual information.
Acknowledgements

This work has received funding in part from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 101001899) and in part from the German Federal Ministry of Education and Research (BMBF) within the project Open6GHub (grant agreement 16KISK010).
References

Alex Alvarado, Tobias Fehenberger, Bin Chen, and Frans M. J. Willems. Achievable information rates for fiber optics: Applications and computations. J. Lightw. Technol., 36(2):424-439, 2018.

Onureena Banerjee and Laurent El Ghaoui. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. J. Mach. Learn. Res., 9:485-516, 2008.

Julian Besag. On the statistical analysis of dirty pictures. J. R. Stat. Soc. Ser. B, 48(3):259-302, 1986.

Giulio Colavolpe, Dario Fertonani, and Amina Piemontese. SISO detection over linear channels with linear complexity in the number of interferers. IEEE J. Sel. Topics Signal Process., 5(8), 2011.

Robert G. Gallager. Low-density parity check codes. PhD thesis, MIT Press, 1963.

Albert Guillén i Fàbregas, Alfonso Martinez, and Giuseppe Caire. Bit-interleaved coded modulation. In Found. Trends Commun. Inf. Theory, volume 5. Now Publishers, Delft, NL, 2008.

Kurt Hornik, Maxwell Stinchcombe, and Halbert White. Multilayer feedforward networks are universal approximators. Neural Networks, 2(5):359-366, 1989.

Christian Knoll, Dhagash Mehta, Tianran Chen, and Franz Pernkopf. Fixed points of belief propagation - an analysis via polynomial homotopy continuation. IEEE Trans. Pattern Anal. Mach. Intell., 40(9):2124-2136, 2018.

Frank R. Kschischang, Brendan J. Frey, and Hans-Andrea Loeliger. Factor graphs and the sum-product algorithm. IEEE Trans. Inf. Theory, 47(2), 2001.

Jonathan Kuck, Shuvam Chakraborty, Hao Tang, Rachel Luo, Jiaming Song, Ashish Sabharwal, and Stefano Ermon. Belief propagation neural networks. Adv. Neural Inf. Process. Syst., 33:667-678, 2020.

Joris M. Mooij and Hilbert J. Kappen. Sufficient conditions for convergence of the sum-product algorithm. IEEE Trans. Inf. Theory, 53(12):4422-4437, 2007.

Kevin Murphy, Yair Weiss, and Michael I. Jordan. Loopy belief propagation for approximate inference: An empirical study. In Proc. Innov. Appl. Artif. Intell. Conf., 1999.

Eliya Nachmani, Yair Be'ery, and David Burshtein. Learning to decode linear codes using deep learning. In Proc. Annu. Allerton Conf. Commun., Control, Comput., Monticello, IL, 2016.

Judea Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.

Rudolf Peierls. On Ising's model of ferromagnetism. Math. Proc. Camb. Philos. Soc., 32:477-481, 1936.

John Proakis and Massoud Salehi. Digital Communications. McGraw Hill, 5th edition, 2007.

Lukas Rapp, Luca Schmid, Andrej Rode, and Laurent Schmalen. Structural optimization of factor graphs for symbol detection via continuous clustering and machine learning. arXiv:2211.11406, 2022.

Thomas J. Richardson and Rüdiger L. Urbanke. The capacity of low-density parity-check codes under message-passing decoding. IEEE Trans. Inf. Theory, 47(2):599-618, 2001.

Victor Garcia Satorras and Max Welling. Neural enhanced belief propagation on factor graphs. In Proc. Int. Conf. on Artificial Intelligence and Statistics (AISTATS), pages 685-693, San Diego, CA, USA, 2021.

Luca Schmid and Laurent Schmalen. Low-complexity near-optimum symbol detection based on neural enhancement of factor graphs. IEEE Trans. Commun., 70(11):7562-7575, 2022.

Gottfried Ungerboeck. Adaptive maximum-likelihood receiver for carrier-modulated data-transmission systems. IEEE Trans. Commun., COM-22(5):624-636, 1974.

Martin J. Wainwright and Michael I. Jordan. Graphical models, exponential families, and variational inference. Found. Trends Mach. Learn., 1(1-2):1-305, 2008.

Martin J. Wainwright, Tommi S. Jaakkola, and Alan S. Willsky. Tree-reweighted belief propagation algorithms and approximate ML estimation by pseudo-moment matching. Proc. Int. Workshop Artificial Intelligence and Statistics, R4:308-315, 2003.

Max Welling and Yee Whye Teh. Belief optimization for binary networks: A stable alternative to loopy belief propagation. In Proc. Conf. on Uncertainty in Artificial Intelligence (UAI), pages 554-561, 2013.

Jonathan S. Yedidia, William T. Freeman, and Yair Weiss. Generalized belief propagation. Advances in Neural Information Processing Systems, 13, 2000.

Jonathan S. Yedidia, William T. Freeman, and Yair Weiss. Understanding belief propagation and its generalizations. In Exploring Artificial Intelligence in the New Millennium, volume 8, pages 236-239. Morgan Kaufmann Publishers Inc., 2003.

Jonathan S. Yedidia, William T. Freeman, and Yair Weiss. Constructing free-energy approximations and generalized belief propagation algorithms. IEEE Trans. Inform. Theory, 51(7):2282-2312, 2005.

KiJung Yoon, Renjie Liao, Yuwen Xiong, Lisa Zhang, Ethan Fetaya, Raquel Urtasun, Richard Zemel, and Xaq Pitkow. Inference in probabilistic graphical models by graph neural networks. In Proc. Asilomar Conf. Signals Syst. Comput., 2019.

Alan L. Yuille. CCCP algorithms to minimize the Bethe and Kikuchi free energies: Convergent alternatives to belief propagation. Neural Computation, 14(7):1691-1722, 2002.
| []
|
[
"SCALING UP SEMI-SUPERVISED LEARNING WITH UNCONSTRAINED UNLABELLED DATA",
"SCALING UP SEMI-SUPERVISED LEARNING WITH UNCONSTRAINED UNLABELLED DATA"
]
| [
"Shuvendu Roy [email protected] \nDept\nECE and Ingenuity Labs Research Institute Queen's University\nKingstonCanada\n",
"Ali Etemad [email protected] \nDept\nECE and Ingenuity Labs Research Institute Queen's University\nKingstonCanada\n"
]
| [
"Dept\nECE and Ingenuity Labs Research Institute Queen's University\nKingstonCanada",
"Dept\nECE and Ingenuity Labs Research Institute Queen's University\nKingstonCanada"
]
| []
| We propose UnMixMatch, a semi-supervised learning framework which can learn effective representations from unconstrained unlabelled data in order to scale up performance. Most existing semi-supervised methods rely on the assumption that labelled and unlabelled samples are drawn from the same distribution, which limits the potential for improvement through the use of free-living unlabeled data. Consequently, the generalizability and scalability of semi-supervised learning are often hindered by this assumption. Our method aims to overcome these constraints and effectively utilize unconstrained unlabelled data in semi-supervised learning. UnMixMatch consists of three main components: a supervised learner with hard augmentations that provides strong regularization, a contrastive consistency regularizer to learn underlying representations from the unlabelled data, and a self-supervised loss to enhance the representations that are learnt from the unlabelled data. We perform extensive experiments on 4 commonly used datasets and demonstrate superior performance over existing semi-supervised methods with a performance boost of 4.79%. Extensive ablation and sensitivity studies show the effectiveness and impact of each of the proposed components of our method. | null | [
"https://export.arxiv.org/pdf/2306.01222v1.pdf"
]
| 259,064,159 | 2306.01222 | 393b2bba2dd1e25f4988e36e5c9abc4f560d5fe3 |
SCALING UP SEMI-SUPERVISED LEARNING WITH UNCONSTRAINED UNLABELLED DATA
Shuvendu Roy [email protected]
Dept
ECE and Ingenuity Labs Research Institute Queen's University
KingstonCanada
Ali Etemad [email protected]
Dept
ECE and Ingenuity Labs Research Institute Queen's University
KingstonCanada
SCALING UP SEMI-SUPERVISED LEARNING WITH UNCONSTRAINED UNLABELLED DATA
We propose UnMixMatch, a semi-supervised learning framework which can learn effective representations from unconstrained unlabelled data in order to scale up performance. Most existing semi-supervised methods rely on the assumption that labelled and unlabelled samples are drawn from the same distribution, which limits the potential for improvement through the use of free-living unlabeled data. Consequently, the generalizability and scalability of semi-supervised learning are often hindered by this assumption. Our method aims to overcome these constraints and effectively utilize unconstrained unlabelled data in semi-supervised learning. UnMixMatch consists of three main components: a supervised learner with hard augmentations that provides strong regularization, a contrastive consistency regularizer to learn underlying representations from the unlabelled data, and a self-supervised loss to enhance the representations that are learnt from the unlabelled data. We perform extensive experiments on 4 commonly used datasets and demonstrate superior performance over existing semi-supervised methods with a performance boost of 4.79%. Extensive ablation and sensitivity studies show the effectiveness and impact of each of the proposed components of our method.
Introduction
Semi-supervised learning (SSL) is an effective paradigm for utilizing small amounts of labelled data along with large amounts of unlabeled data, reducing the reliance on fully-labelled datasets. Most existing semi-supervised methods rely on the assumption that the labelled and unlabelled data belong to the same distributions. This assumption is not necessarily true in real-world scenarios. Moreover, this assumption prohibits us from leveraging free-living unlabelled data with different distributions. In fact, it has been shown in previous studies that incorporating out-of-distribution data with the unlabelled set for SSL impairs the performance [1].
To adopt a less constrained approach regarding unlabelled data, open set SSL has been proposed [2,3], which allows the unlabeled training set to contain samples from classes which are not necessarily present in the labelled set. However, this approach still imposes some constraints on the unlabelled data by requiring it to include samples of all of the known classes. These constraints create two important challenges. First, collecting an unlabelled dataset that necessarily includes samples from certain classes can be challenging in real-world settings. Second, this approach significantly limits the scalability of SSL to large, web-scale unlabelled data since such data may contain different data or class distribution to that of the labelled set.
In this paper, we propose a novel SSL approach called UnMixMatch, which can learn effective representations from unconstrained unlabelled data and effectively enable SSL to scale up using web-scale unlabelled data. UnMixMatch comprises three main components: (1) A supervised learner with hard augmentations: The hard augmentation module of UnMixMatch combines RandAug and MixUp to provide strong regularization for the supervised learner, which prevents overfitting on small labelled sets. (2) A contrastive consistency regularizer: The regularizer in UnMixMatch is a noise contrastive estimation loss (i.e. InfoNCE) [4,5], which learns the underlying representations of data by enforcing the model to produce consistent predictions under strong perturbations. (3) A self-supervised pre-text learning module: This component further improves the representations learned by UnMixMatch by providing additional self-supervision. Specifically, we include a pre-text task known as rotation prediction, where the model learns the data representations by predicting the degree of rotation applied to an unlabelled sample.
We conduct extensive research on four common datasets: CIFAR-10, CIFAR-100, SVHN, and STL-10. First, we reimplement and benchmark 13 recent semi-supervised methods with unconstrained unlabelled data, using ImageNet-100 as the unlabelled set. We find that existing methods experience performance degradation in unconstrained settings. In comparison, UnMixMatch outperformed existing methods by an average of 4.79%. Additionally, UnMixMatch exhibits robust scaling capabilities regarding the size of the unlabelled datasets, as we observe an additional 5.61% improvement when we increase the unlabelled data size by a factor of 10 (see Fig. 1). Furthermore, we achieved state-of-the-art results in the open set SSL. Finally, we ablate each component of UnMixMatch and demonstrate the crucial role that each component plays in the performance.
In summary, we make the following contributions: (1) We propose a novel semi-supervised method that can learn effective representations from unconstrained unlabelled data for the first time. (2) We conduct an extensive study to benchmark the performance of existing semi-supervised methods when the unlabelled data are not constrained to match the distribution of the labelled data. (3) We demonstrate that our method outperforms previous methods by a large margin in unconstrained learning and sets a new state-of-the-art for open set SSL. We also show that the performance of our method scales up by increasing the amount of unconstrained unlabelled data.
Related Work
In this section, we first discuss recent developments in SSL in constrained settings, followed by a discussion of open set SSL.
Constrained Semi-supervised Learning
Prior works on semi-supervised can be broadly divided into two main categories: pseudo-labelling and consistency regularization. Pseudo-labelling techniques such as [6,7] mainly rely on the strategy of predicting pseudo-labels for the unlabeled data using the encoder being trained and learns with a combination of the labelled data and unlabeled data plus their pseudo-labels, and iterating on those predictions to gain gradual improvements. Consistency regularization techniques such as [8,9,10] learn by forcing the embeddings of different augmented unlabeled samples to be similar. This takes place while the model also simultaneously learns via a supervised loss which is optimized on the labelled samples. In the pseudo-labelling category, [6] first introduced the overall approach, and subsequent works improved the technique by adding various interesting elements. For example, in Noisy Student [7], a pre-trained teacher was introduced to generate the pseudo-labels, and a student learned from the pseudo-labels along with the labelled data. In the consistency regularization category, Pi-model [9] was one of the earliest works, which used a consistency loss on the predictions of two augmentations of a sample. Later, Mean Teacher [8] improved the performance by enforcing consistency between the predictions of an online encoder and an exponential moving average (EMA) encoder. VAT [10] is another modification of Mean Teacher which replaced the augmentations with adversarial perturbations. Later, Unsupervised Domain Adaptation (UDA) [11] showed large improvements by replacing simple augmentations with hard augmentations.
It should be noted that one of the key weaknesses of pseudo-label-based methods is the confidence-bias problem, which arises when the model generates confident wrong pseudo-labels. Yet, their ability to virtually increase the labelled set size by generating pseudo-labels for the unlabeled data has motivated researchers to combine them with consistency regularization methods within the same framework. For instance, MixMatch [12] predicts pseudo-labels for unlabelled samples while enforcing consistency across augmented images. ReMixMatch [13] improved upon MixMatch with several implementation tricks, such as augmentation anchoring and distribution alignment. FixMatch is another popular hybrid method known for its simplicity and performance. It predicts the pseudo-label of a sample from a weakly augmented image and treats it as a label for a heavily augmented sample when the confidence of the pseudo-label is above a certain threshold. Subsequently, several works have attempted to improve different aspects of FixMatch. FlexMatch [14], for instance, employs an adaptive curriculum threshold for each class based on that class's performance. CoMatch [15] improves upon FixMatch by introducing a graph-based contrastive loss that learns both class representations and low-dimensional embeddings. ConMatch [16] also employs a similar concept, using pseudo-labels as supervision in a contrastive loss. Similarly, SimMatch [17] improves FixMatch by introducing an instance similarity loss in addition to the semantic similarity loss imposed by pseudo-labels. ScMatch [18] utilizes the concept of clustering with SSL by dynamically forming super-classes.
Open Set Semi-supervised Learning
Open set SSL is a type of SSL where the unlabelled set includes samples from unknown classes. In prior work, [1] demonstrated that the presence of unknown classes in the unlabelled dataset has a severe negative impact on the performance of semi-supervised methods. Similar findings were also reported by [19] that analyzed the performance of more recent semi-supervised methods in open set settings. Nonetheless, some recent methods have attempted to effectively address the detrimental effect of unknown classes while learning from open set unlabelled data. For example, [20] learned to distinguish known classes from unknown ones, effectively avoiding samples from unknown classes in the learning process. A similar approach was taken in [21], which proposed a novel scoring function called energy discrepancy to detect and remove instances of unknown classes. OpenMatch [2] used the concept of out-of-distribution to mitigate the negative impact of unknown classes. CCSSL [22] introduced a class-aware contrastive learning approach to improve performance in open set settings. In this study, we tackle a more challenging scenario where the unlabelled set contains instances of unknown classes and does not necessarily include all the known classes. Consequently, our goal is not to detect and remove the images of unknown classes, but rather to learn domain-invariant features from them and use free-living data to scale up SSL.
Method
In this section, we describe the preliminaries followed by a detailed description of the proposed method. Next, we discuss some plausible extensions of UnMixMatch.
Preliminaries and Overview
Let $X_U = \{x_i\}_{i=1}^{N}$ be an unlabelled set, and $X_L = \{(x_i, y_i)\}_{i=1}^{n}$ be a labelled set, where $n \ll N$. In general, semi-supervised methods learn from $X_L$ and $X_U$ in supervised and unsupervised settings, respectively. Formally, SSL is formulated as:

$$\min_{\theta} \sum_{(x,y) \in X_L} \underbrace{\mathcal{L}_S(x, y, \theta)}_{\text{supervised}} + \alpha \sum_{x \in X_U} \underbrace{\mathcal{L}_U(x, \theta)}_{\text{unsupervised}}, \qquad (1)$$
where θ represents the learnable model, L_S is the supervised loss, and L_U is the unsupervised loss. Typically, it is assumed that X_L and X_U come from the same data and class distributions. Let Y_l and Y_u be the sets of classes for labelled and unlabelled data. In the constrained setting, Y_l = Y_u, and in the open set setting, Y_l is a proper subset of Y_u, i.e., Y_l ⊂ Y_u. Regarding data distribution, both settings assume that the data comes from the same source and hence has similar distributions. These assumptions are hard to satisfy in a real-world task.
In this study, our objective is to learn from unconstrained data that does not have any particular constraints and can come from different data or class distributions, or both. The unlabelled set may consist of images of unknown classes, where Y_u \ Y_l ≠ ∅. Additionally, the unlabelled set may not contain all the known classes, and in the extreme case, Y_u ∩ Y_l = ∅. To address this challenge, we propose UnMixMatch, a method that can learn effective visual representations from unconstrained unlabelled data. UnMixMatch comprises three modules, which we discuss in detail in the following subsections.
Supervised Module
As mentioned earlier, SSL requires X L to be learned using a supervised component. The first contribution of our method is, therefore, to create a supervised learner suitable for our purpose of scalable SSL. Here, we hypothesize that given large amounts of unlabeled data in an unconstrained setting and relatively very small amounts of labelled data, the supervised module may overfit the small labelled set. Thus, unlike FixMatch [23], MixMatch [12], and ReMixMatch [13], which use weak augmentations for their supervised modules, we apply hard augmentations on the labelled samples in our supervised module. This acts as a regularizer for supervised learning and is better able to deal with overfitting compared to weaker augmentations. We utilize RandAug [24] as the hard augmentation followed by the MixUp operation [12,13]. We denote RandAug plus MixUp as the RandMixUp operation. Finally, a supervised loss is applied to a batch of samples.
RandAug. This is a hard augmentation technique for generating diverse samples by employing a sequence of transformations [24]. More specifically, it applies R_n ∈ {1, ..., 13} transformations randomly chosen from a list of 13 augmentations, including rotation, translation, and colour distortion. The magnitude of each transformation is sampled randomly from a pre-defined range. We denote the augmentation operation as x̄ = RandAug(x).
MixUp. Let (x 1 , y 1 ) and (x 2 , y 2 ) be two pairs of samples and their class labels. The MixUp operation interpolates between the data points to generate mixed samples and labels as:
$$\bar{x} = \lambda \cdot x_1 + (1 - \lambda) \cdot x_2 \qquad (2)$$
$$\bar{y} = \lambda \cdot y_1 + (1 - \lambda) \cdot y_2, \qquad (3)$$
where λ is the mixing coefficient. Following MixMatch [12], we sample λ from a beta distribution (λ ∼ Beta(α, α)) with hyper-parameter α.
Supervised Loss. For a batch of unlabeled samples X_u = ((x_i); i ∈ (1, ..., b)), with batch size b, we first generate the pseudo-label for each sample x_i as p_i = P_θ(x_i), where P_θ is the encoder with a classification head.
Next, for a batch of labelled samples X_l = ((x_i, y_i); i ∈ (1, ..., b)) and the unlabelled samples with pseudo-labels X_p = ((x_i, p_i); i ∈ (1, ..., b)), we augment all the samples using X̄ = RandMixUp(X_l, X_p). Accordingly, we define the supervised loss of our method as:
$$\mathcal{L}_{\mathrm{sup}} = \frac{1}{b} \sum_{(\bar{x}, \bar{y}) \in \bar{X}} H\big(\bar{y}, P_\theta(y \mid \bar{x})\big), \qquad (4)$$
where H is the cross-entropy loss.
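As a rough illustration of how this supervised branch could be implemented, the following PyTorch sketch pseudo-labels the unlabelled batch, mixes samples and targets as in Eqs. (2)-(3), and evaluates the cross-entropy against the resulting soft labels. It is a hedged sketch under our own naming; RandAug is assumed to have been applied in the data pipeline, and the exact pairing strategy used by UnMixMatch may differ.

```python
import torch
import torch.nn.functional as F

def rand_mixup_supervised_loss(model, x_lab, y_lab, x_unlab, num_classes, alpha=0.75):
    """Supervised loss with pseudo-labels and MixUp; 'model' maps images to class logits."""
    with torch.no_grad():
        p_unlab = torch.softmax(model(x_unlab), dim=-1)          # pseudo-labels for unlabelled batch
    y_onehot = F.one_hot(y_lab, num_classes).float()
    x_all = torch.cat([x_lab, x_unlab], dim=0)
    y_all = torch.cat([y_onehot, p_unlab], dim=0)

    lam = torch.distributions.Beta(alpha, alpha).sample().item()  # mixing coefficient
    perm = torch.randperm(x_all.size(0))
    x_mix = lam * x_all + (1 - lam) * x_all[perm]                 # Eq. (2)
    y_mix = lam * y_all + (1 - lam) * y_all[perm]                 # Eq. (3)

    log_probs = torch.log_softmax(model(x_mix), dim=-1)
    return -(y_mix * log_probs).sum(dim=-1).mean()                # soft-label cross-entropy, Eq. (4)
```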
Consistency Regularization Module
To deal with the unconstrained nature of existing unlabeled data and learn effective representations we apply a consistency regularizer as our second contribution. A consistency regularizer learns from the unlabelled data by enforcing consistency on its predictions under different augmentations. Prior works that have used consistency regularization for SSL [9,12,13] enforce consistency on the class predictions under different perturbations. However, regularization over class predictions is not useful for learning in unconstrained settings where unlabelled data do not necessarily come from the same classes as the labelled data. To address this, we enforce consistency in the low-dimensional embedding space using a contrastive loss. Using contrastive loss on the embedding space enables the model to learn class-agnostic representations from unconstrained unlabelled data. In UnMixMatch, we adopt the Noise Contrastive Estimation loss, a.k.a InfoNCE [4].
Contrastive learning learns from positive samples (perturbations of the same sample) and negative samples (all other samples) by bringing the embeddings of positive pairs together and pushing them apart for the negatives. For each sample x_i ∈ X_U, two augmentations are applied to generate two augmented images x̄_i = RandAug(x_i). These are first passed through the encoder and a projection head (shallow linear layers with non-linearity and batch normalization) to obtain embeddings z_i = P_{θ_p}(z|x̄_i). The contrastive loss is accordingly defined as:

$$\mathcal{L}_{\mathrm{con}} = -\frac{1}{2b} \sum_{i=1}^{2b} \log \frac{\exp(z_i \cdot z_{\kappa(i)}/\tau)}{\sum_{k=1}^{2b} \mathbb{1}_{[k \neq i]} \exp(z_i \cdot z_k/\tau)}, \qquad (5)$$

where κ(i) is the index of the second augmented view of sample i, 1_{[k≠i]} is an indicator function which returns 1 when k is not equal to i and 0 otherwise, and τ is a temperature parameter.
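A standard implementation of this InfoNCE-style consistency term is sketched below (our own minimal example; the projection head and the temperature value are assumptions):

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, tau=0.2):
    """InfoNCE consistency loss over two batches of projected embeddings
    z1, z2 (shape: b x d), one row per augmented view of the same sample."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)            # 2b x d
    sim = z @ z.t() / tau                                         # scaled cosine similarities
    sim.fill_diagonal_(float('-inf'))                             # exclude k = i from the denominator
    b = z1.size(0)
    # the positive of sample i is its second augmented view kappa(i)
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(0, b)])
    return F.cross_entropy(sim, targets)
```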
Self-supervised Module
Finally, we intend to enhance the quality of the representations extracted from the unconstrained unlabelled data using the consistency regularizer. It has been recently shown that self-supervised techniques can be employed to learn underlying domain-invariant representations for unlabelled data [25,26]. Moreover, this idea has already been demonstrated to be useful in conjunction with SSL [27,28,13]. As a result, we integrate a straightforward yet effective self-supervised pre-text task called rotation prediction, which learns by predicting the degree of rotation applied to unlabelled images. In practice, a rotation module randomly samples one of the following rotations and applies it to an unlabelled image: 0°, 90°, 180°, 270°. As a result, the rotation prediction task can be viewed as a four-way classification problem, represented as:
$$\mathcal{L}_{\mathrm{rot}} = \frac{1}{b} \sum_{u \in U} H\big(r, P_{\theta_r}(r \mid \mathrm{Rotate}(u))\big) \qquad (6)$$
Here, P θr is the encoder with a rotation head that predicts the rotation, and H is the cross-entropy loss.
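A minimal PyTorch sketch of this pre-text task is given below (our own illustrative code; the rotation-head interface is an assumption):

```python
import torch
import torch.nn.functional as F

def rotation_loss(model_rot_head, x_unlab):
    """Rotation pre-text task: rotate each unlabelled image by a random
    multiple of 90 degrees and predict the rotation index (4-way
    classification). 'model_rot_head' maps images to 4 rotation logits."""
    r = torch.randint(0, 4, (x_unlab.size(0),))                   # 0 -> 0, 1 -> 90, 2 -> 180, 3 -> 270 degrees
    x_rot = torch.stack([torch.rot90(img, int(k), dims=(1, 2))    # rotate in the H-W plane of (C, H, W)
                         for img, k in zip(x_unlab, r)])
    return F.cross_entropy(model_rot_head(x_rot), r)
```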
Total Loss
Finally, we incorporate the loss functions for the three modules above to create the total loss:
$$\mathcal{L}_{\mathrm{UnMixMatch}} = \mathcal{L}_{\mathrm{sup}} + \beta \mathcal{L}_{\mathrm{con}} + \gamma \mathcal{L}_{\mathrm{rot}}. \qquad (7)$$
Here, β and γ are hyper-parameters that balance the significance of the contrastive and rotation losses. An overview of our proposed method is presented in Fig. 2.
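Putting the pieces together, one hypothetical training step could combine the three terms as follows. This sketch reuses the loss helpers sketched above and assumes a backbone encoder with separate classification, projection, and rotation heads; all names are our own and not the authors' implementation.

```python
def train_step(encoder, cls_head, proj_head, rot_head, optimizer,
               batch_lab, batch_unlab, num_classes, beta=1.0, gamma=1.0):
    x_lab, y_lab = batch_lab
    x_u1, x_u2 = batch_unlab                     # two RandAug views of the unlabelled batch

    classify = lambda x: cls_head(encoder(x))    # image -> class logits
    loss_sup = rand_mixup_supervised_loss(classify, x_lab, y_lab, x_u1, num_classes)
    loss_con = info_nce_loss(proj_head(encoder(x_u1)), proj_head(encoder(x_u2)))
    loss_rot = rotation_loss(lambda x: rot_head(encoder(x)), x_u1)

    loss = loss_sup + beta * loss_con + gamma * loss_rot   # total loss, Eq. (7)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```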
Experiments and Results
This section presents the experimental setup and results of our proposed UnMixMatch. First, we describe the datasets and important implementation details in section 4.1. In section 4.2, we present the main results, followed by detailed ablation studies in section 4.3 that examine the main components of UnMixMatch and its performance for plausible alternatives for each component. Finally, we present a sensitivity study of important hyper-parameters in section 4.4.
Datasets and Implementation Details
For our main experiments, we follow the standard semi-supervised evaluation protocol from prior works [23], and present the results for four datasets: CIFAR-10 [29], CIFAR-100 [29], SVHN [30], and STL-10 [31]. We present the results for different numbers of labelled samples, averaged over three runs. We use ImageNet-1K [32] and ImageNet-100 (a subset of ImageNet-1K) as the unconstrained unlabeled datasets (we discuss more on the intuition for selecting ImageNet-100 and ImageNet-1K as unconstrained unlabelled datasets in the Supplementary Material S2). Our implementation and hyper-parameters closely follow FlexMatch [14] and use WideResNet as the encoder. We train the method for 2^20 iterations with a batch size of 64, a learning rate of 0.03, and an SGD optimizer with a momentum of 0.9 and weight decay of 0.0005. Further details on the hyper-parameters and training settings are provided in Supplementary Material S3. The code is implemented with Pytorch and built using TorchSSL [14] (pseudo-code available in S1).
Results
Here, we present the main results of our method, including the performance on the four datasets (with ImageNet-100 for unlabeled data) in unconstrained settings, analysis of UnMixMatch's performance with an increasing amount of unlabelled data, its effectiveness in open set settings, and its performance in a barely supervised setting.

Unconstrained Settings

Table 1 presents the main results of our work on the four datasets. Here, we first re-implement 13 semi-supervised methods and report the results with unconstrained unlabelled data. We report the average accuracies and standard deviations across three individual runs for each setting. We also report the average accuracy across all settings for overall comparison. It should be noted that the performance of prior methods is considerably lower than what has been reported in the original papers, where the unlabeled and labelled samples came from the same datasets (unlabeled data were not unconstrained). Next, we observe that UnMixMatch demonstrates superior performance compared to other methods, with an average improvement of 4.79%. We obtain considerable improvement across all datasets and splits, except using CIFAR-100 with 2500 labels, where CCSSL achieves a better result.
When considering the number of labelled samples, we notice that the differences between UnMixMatch and other methods are more pronounced when the labelled set size is small. For example, with only 40 labelled samples from CIFAR-10, UnMixMatch achieves a 17.04% performance gain over CCSSL (which has the second highest overall average performance after ours), and 6.25% higher than the next best result for this specific setting, which was obtained by CoMatch. A similar pattern is observed for SVHN, where UnMixMatch outperforms CCSSL and FlexMatch by 22.88% and 12.4%, respectively.

Scaling Up the Unlabelled Set

Our main motivation for using unconstrained unlabelled data is to take advantage of the abundance of free-living unlabeled data. In this experiment, we evaluate the performance of UnMixMatch as the size of the unlabelled set is increased. The results of this study are summarized in Fig. 1 and presented in detail in Table 2. Specifically, we increase the size of the unlabelled set from 130K images of ImageNet-100 (a subset of ImageNet-1K) to 1.28M images of ImageNet-1K, with two more subsets of 450K and 850K images from ImageNet-1K. We perform this experiment on CIFAR-10 with 40 labelled samples with the two best methods (CoMatch and ReMixMatch) on this setting and observe an increasing trend in the accuracy of UnMixMatch as the number of images in the unlabelled set increases. With ImageNet-1K used as the unlabelled set, which is approximately 10 times larger than ImageNet-100, the accuracy of UnMixMatch improves from 47.93% to 53.54%, a significant improvement of 5.61% obtained by simply increasing the size of the unlabelled set. CoMatch and ReMixMatch, on the other hand, show very small improvements with the increase in the unlabelled data, and the performance difference with our method further increases.
Results on Open Set Setting
Next, we investigate the performance of UnMixMatch on open set SSL. Open set SSL is a relatively less challenging setting than the unconstrained setting, where the unlabelled set may contain images of unknown classes but must contain images of all known classes [2,3]. For learning in open set settings, we employ a variant of UnMixMatch which takes advantage of the fact that the unlabelled set contains samples of all known classes from the labelled set and learns from the predicted pseudo-labels on the unlabelled set. In this variant, we replace the contrastive loss in our method with the class-aware contrastive loss of CCSSL [22]. This method first predicts the pseudo-labels for the unlabelled samples and uses them with a contrastive loss to learn clusters of known classes in the embedding space. For this experiment, we follow the experimental setup of OpenMatch [2], which reports results for CIFAR-10 with a 6/4 split. This split means that the labelled set contains images of 6 classes from CIFAR-10, while the unlabelled set includes images of the 6 known and 4 unknown classes. Like OpenMatch, we take the 6 animal classes as the known classes and the 4 object classes as the unknown classes. We perform this experiment using three different splits with 50, 100, and 400 labelled samples per class in the known set. The results of this experiment are presented in Table 3, where it can be observed that our method outperforms the existing methods and sets a new state of the art for open set SSL. Once again, UnMixMatch demonstrates its effectiveness best when the amount of labelled data is limited. With 50 labelled samples per class, our approach provides a 6.1% improvement over the second-best method, OpenMatch. For 100 and 400 labelled samples per class, UnMixMatch shows 3.9% and 3.1% improvements, respectively.

Barely Supervised Setting

In this section, we aim to test the performance of SSL in the extreme scenario where only one labelled sample per class is available. This barely supervised setting is considered to be very challenging, even with conventional SSL techniques that use constrained unlabelled data. Given the extremely low number of labelled samples, the results in such settings generally exhibit high variance, and therefore we increase the number of folds to 5 to account for this variability. This is because the quality of the labelled data greatly influences the performance in such low data settings [23,33]. The results of this experiment are presented in Table 4, which shows the performance on CIFAR-10 for the three best methods identified previously in Table 1.
In this experiment, UnMixMatch achieves an accuracy of 27.54% and outperforms other methods by 5.58%. As before, CCSSL struggles to learn in low data settings, achieving a near chance-level accuracy of only 15.63%. However, FlexMatch shows relatively better performance with an accuracy of 21.96%. In general, we find that it is quite difficult to learn effective representations with just one labelled sample per class while using unconstrained unlabelled data. However, as previously shown in Table 1, UnMixMatch quickly gains significant improvements with a small increase in the number of labelled data and achieves an accuracy of 47.93% with four labelled samples per class (40 labels in total).
Ablation Study
Next, we present various ablation experiments by removing different components of our method. Then we provide a systematic study of different design choices and their impact on performance.
Main Components
RandMixUp
As demonstrated in the main ablation study, the RandMixUp augmentation is a crucial component of our approach. In Table 5b, we present the accuracy on CIFAR-10 with 40 samples for different alternatives to RandMixUp. Recall that RandMixUp combines RandAug [24] and MixUp [12]. We first test the accuracy for RandAug and MixUp individually and observe a decrease in performance in both settings, of 1.92% and 2.7%, respectively. Here, RandAug alone exhibits a smaller drop than MixUp alone, suggesting the higher importance of RandAug within RandMixUp. This analysis also shows that, unlike in MixMatch and ReMixMatch, MixUp applied to the supervised component is not the ingredient with the most significant impact. Instead, the role of hard augmentation as a regularizer is the key to the impact of RandMixUp in our method.
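To make the RandMixUp composition concrete, here is a minimal sketch that chains torchvision's RandAugment with batch-level MixUp. The RandAugment parameters, the Beta(α, α) mixing, and α = 0.1 (suggested by the sensitivity study) are illustrative assumptions rather than the paper's exact recipe.

```python
import torch
import torch.nn.functional as F
from torchvision import transforms

# Strong per-image augmentation (RandAugment); parameter values are assumptions.
strong_augment = transforms.Compose([
    transforms.RandomResizedCrop(32),
    transforms.RandomHorizontalFlip(),
    transforms.RandAugment(num_ops=2, magnitude=9),
    transforms.ToTensor(),
])

def mixup(x: torch.Tensor, y: torch.Tensor, alpha: float = 0.1):
    """Batch-level MixUp: convex combinations of samples and one-hot labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[perm], lam * y + (1 - lam) * y[perm]

if __name__ == "__main__":
    x = torch.randn(8, 3, 32, 32)                          # already-augmented images
    y = F.one_hot(torch.randint(0, 10, (8,)), 10).float()  # one-hot labels
    x_mix, y_mix = mixup(x, y)
    print(x_mix.shape, y_mix.shape)
```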
We also examine the performance of other well-known hard augmentation techniques, namely CutMix and AugMix. Like MixUp, CutMix combines two samples, but in this case a part of the second sample is cut out and inserted into the first sample to create a new one. Despite the conceptual similarity, CutMix yields 2.11% worse accuracy than MixUp, which is 4.81% lower than the accuracy achieved by the proposed RandMixUp. On the other hand, AugMix shares a similar concept to RandAug. While RandAug applies multiple augmentations in sequence, AugMix applies multiple augmentations separately and generates the final sample by interpolating between them. AugMix achieves an accuracy of 44.75%, which is 1.26% lower than that of RandAug and 3.18% lower than that of RandMixUp. The overall findings of this study demonstrate the critical importance of RandMixUp in our method, with other conceptually similar augmentation techniques failing to achieve the same level of performance.
Consistency Regularizer
As shown in the main ablation study, the contrastive regularizer is the second most crucial component of our method.
Recall that the proposed contrastive loss for UnMixMatch is a noise contrastive estimation loss that has gained popularity in the self-supervised learning literature [5,4]. However, other variants of contrastive loss have also been introduced in the literature [16,36,15,22] and have shown improvements in different problem settings, including SSL. In this study, we investigate four variants of contrastive loss in the proposed framework. More specifically, we explore the contrastive variants of ConMatch [16] and Contrastive Regularization [36], as well as graph contrastive learning [15] and class-aware contrastive learning [22]. ConMatch [16] is based on a variant of contrastive loss that involves two hard augmentations and one weak augmentation. In [36], high-confidence pseudo-labels were used for supervision while learning with a contrastive loss. Two other methods proposed slightly different ways of utilizing pseudo-labels in contrastive learning settings: CoMatch [15] and CCSSL [22], where CoMatch used graph contrastive learning and CCSSL utilized pseudo-labels with a contrastive loss for out-of-distribution learning.
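A minimal sketch of such a noise contrastive estimation (NT-Xent style) regularizer on the embeddings of two strongly augmented views is given below; the temperature value and the use of cosine similarity are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.2) -> torch.Tensor:
    """NT-Xent loss over the embeddings of two augmented views, each of shape (b, d)."""
    b = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2b, d), unit-norm rows
    logits = z @ z.t() / tau                              # pairwise cosine similarities
    logits = logits.masked_fill(torch.eye(2 * b, dtype=torch.bool), float("-inf"))
    # The positive for sample i is its other augmented view, offset by b.
    targets = torch.cat([torch.arange(b) + b, torch.arange(b)])
    return F.cross_entropy(logits, targets)

if __name__ == "__main__":
    z1, z2 = torch.randn(16, 128), torch.randn(16, 128)
    print(contrastive_loss(z1, z2).item())
```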
We show the results for these variants of contrastive loss in Table 5c. Using the contrastive variants of ConMatch and Contrastive Regularization results in large drops in accuracy, of 2.11% and 2.16%, respectively. Using the CoMatch variant with UnMixMatch results in 47.12% accuracy, a drop of 0.81%. Finally, the class-aware contrastive loss of CCSSL comes closest to the proposed UnMixMatch, with 47.88% accuracy.
Next, we investigate different regularization strategies. First, we study the choice of applying the contrastive loss on the low-dimensional embedding space rather than the final prediction. In Table 5d, we present the results for this experiment, where we observe a 5.48% decrease in accuracy, indicating the importance of learning representations by regularizing the embedding space instead of the class predictions. Next, we examine another aspect of our regularization approach by replacing two strong augmentations with one weak and one strong augmentation (similar to FixMatch). This variant results in a 3.78% drop in accuracy.
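For reference, the distinction between regularizing the embedding space and regularizing the class predictions can be pictured as a small projection head sitting on top of the encoder; the layer sizes below are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ProjectionHead(nn.Module):
    """Maps encoder features to the low-dimensional embedding used by the contrastive loss."""
    def __init__(self, feat_dim: int = 128, proj_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, feat_dim),
                                 nn.ReLU(inplace=True),
                                 nn.Linear(feat_dim, proj_dim))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.net(h)

if __name__ == "__main__":
    h = torch.randn(4, 128)            # encoder features
    print(ProjectionHead()(h).shape)   # embeddings fed to the contrastive loss
    # The ablated alternative applies the contrastive loss to classifier logits instead.
```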
Finally, we show the performance of our method when completely replacing the contrastive loss with other important viable losses, namely BYOL [37], SimSiam [38], and VICReg [39]. BYOL [37] learns by predicting the representation of a target encoder (an exponential moving average of the online encoder), using the online encoder for different augmentations of the same sample. SimSiam [38] also learns by matching the representations of two augmented samples, but unlike previous methods it needs neither negative samples nor a target encoder. VICReg [39] learns from unlabelled data by combining three terms: variance, invariance, and covariance regularizations. The results of this experiment are presented in Table 5e, where we see that the accuracies of BYOL and SimSiam are considerably lower than that of the proposed contrastive loss, at 46.89% and 46.50%, respectively. However, we find a competitive performance for VICReg. In Supplementary Material S4, we present the results of VICReg for all datasets and observe a slightly lower average accuracy than with the contrastive loss.
Relation to Existing Methods
Our method has a few similarities with some previous semi-supervised methods. For example, MixMatch [12] popularized the use of MixUp [40] in the context of SSL. ReMixMatch [13] built on MixMatch by using MixUp and a rotation prediction loss, making it the prior method most similar to ours. However, our method has certain distinctions from ReMixMatch. Firstly, ReMixMatch only uses MixUp with the supervised module, whereas ours uses RandMixUp. Moreover, our approach uses a contrastive loss as a regularizer, which is applied to the intermediate embeddings, whereas ReMixMatch uses a match loss on the predictions of mixed samples. Finally, ReMixMatch uses a KL-divergence loss between the predictions of weakly and strongly augmented samples, which our method does not employ. Overall, recall from Table 1 that ReMixMatch, on average, shows 5.62% lower performance than our method. In Table 5f, we further investigate these differences by incorporating the design choices of ReMixMatch into our method. First, by removing RandAug from RandMixUp (using MixUp as in ReMixMatch), we obtain an accuracy of 45.23% (2.7%↓). Next, using the match loss of ReMixMatch as a regularizer on the class predictions in our method results in an accuracy of 44.25% (3.68%↓). Finally, adding the KL-divergence loss to our method results in a further 0.91% drop in performance.
Sensitivity Study
Our method involves three important hyper-parameters, namely the α value in MixUp, the contrastive loss weight β, and the rotation loss weight γ. Here, we present a sensitivity analysis of different values for these hyper-parameters. As illustrated in Fig. 3a, the best accuracy is achieved with an alpha value of 0.1, while very large or small values of α lead to a drop in accuracy. Fig. 3b indicates that the optimal performance is obtained with a β of 1.0. Furthermore, Fig. 3c reveals that a higher value of γ leads to better performance, highlighting the critical role of the self-supervised loss in UnMixMatch.
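The following sketch combines the three loss terms with the weights β and γ discussed here. Algorithm 1 simply returns their sum, so the exact weighting scheme is an assumption; the default values follow Table S3.

```python
import torch
import torch.nn.functional as F

def rotation_loss(rot_logits: torch.Tensor, rot_labels: torch.Tensor) -> torch.Tensor:
    """4-way rotation prediction (0/90/180/270 degrees), as in the rotation pretext task [25]."""
    return F.cross_entropy(rot_logits, rot_labels)

def total_loss(l_sup: torch.Tensor, l_con: torch.Tensor, l_rot: torch.Tensor,
               beta: float = 1.0, gamma: float = 5.0) -> torch.Tensor:
    """Weighted UnMixMatch objective (weighting scheme assumed)."""
    return l_sup + beta * l_con + gamma * l_rot

if __name__ == "__main__":
    rot_logits = torch.randn(8, 4)
    rot_labels = torch.randint(0, 4, (8,))
    l_rot = rotation_loss(rot_logits, rot_labels)
    print(total_loss(torch.tensor(1.2), torch.tensor(0.8), l_rot).item())
```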
Conclusion
Existing semi-supervised methods struggle to learn when the assumption that the unlabelled data come from the same distribution as the labelled data is violated. This work proposes a new semi-supervised method called UnMixMatch for learning from unconstrained unlabelled data. Our method shows large improvements over existing methods, and even larger improvements in low-labelled-data settings. Our approach also outperforms existing methods in open set settings. Most importantly, UnMixMatch scales up in performance as the size of the unlabelled data increases. We hope this research will draw attention to this more challenging and realistic SSL setting with unconstrained unlabelled data.
Supplementary Material
We provide additional details and results. A complete algorithm for our method, UnMixMatch, is presented in Section S1. In Section S2, we describe the four datasets used. Additionally, we discuss the rationale behind utilizing ImageNet as the source for unconstrained unlabelled data. Section S3 presents all the hyper-parameters and implementation details required for reproducing UnMixMatch. Finally, we provide some additional results in Section S4.
S1 Algorithm
We provide the complete algorithm for UnMixMatch in Algorithm 1.
S2 Datasets
In this section, we first provide a short description of all the datasets used in our experiments. We then discuss the role of ImageNet as the source for unconstrained unlabelled data.
For our main experiments, we present the results for four datasets: CIFAR-10 [29], CIFAR-100 [29], SVHN [30], and STL-10 [31]. We use ImageNet-1K [32] as the unconstrained unlabelled data.
CIFAR-10 is a dataset of 60,000 32×32 colour images in 10 classes, with 6,000 images per class. The dataset is divided into 50,000 training images and 10,000 test images. The classes in CIFAR-10 are airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck.
CIFAR-100 is similar to CIFAR-10 but with 100 classes. The dataset contains 60,000 32×32 colour images, with 600 images per class. The dataset is split into 50,000 training images and 10,000 test images.
SVHN is a dataset for digit recognition in house numbers. The dataset contains over 600,000 digit images of size 32×32, extracted from Google Street View images. The dataset contains three subsets: the training set containing 73,257 images, the test set containing 26,032 images, and an extra dataset containing 531,131 unlabelled images.
STL-10 is a dataset with 10 classes. It contains 5,000 labelled training images and 8,000 labelled test images, with each class containing 500 training images and 800 test images. Additionally, there are 100,000 unlabeled images that can be used for unsupervised learning. The images in the STL-10 dataset are 96×96 colour images. The classes in STL-10 are airplane, bird, car, cat, deer, dog, horse, monkey, ship, and truck.
ImageNet is a widely used image classification dataset that consists of over 1 million high-resolution RGB images, each labelled with one of 1,000 object categories. The images in the dataset vary greatly in size, content, and complexity. The categories are diverse, ranging from animals and plants to everyday objects and scenes.
The ImageNet dataset contains a different class distribution in comparison to the four aforementioned datasets (CIFAR-10, CIFAR-100, SVHN, and STL-10). Firstly, the number of classes is different between the datasets: CIFAR-10 has only 10 classes, CIFAR-100 has 100 classes, SVHN has 10 classes, STL-10 has 10 classes, and ImageNet has 1,000 classes. While the classes in CIFAR-10 and CIFAR-100 are more general and broader, the classes in ImageNet are more specific and fine-grained. For example, in CIFAR-10, the category "bird" includes all types of birds, while in ImageNet there are separate categories for different bird species, such as 'ostrich' and 'macaw'. On the other extreme, SVHN has no overlap with ImageNet classes since this is a dataset of digits containing numbers as classes. Secondly, in CIFAR-10, CIFAR-100, and STL-10, the class distributions are perfectly balanced, and SVHN is slightly imbalanced, while the class distribution in ImageNet is highly imbalanced with some categories having only a few images and others having tens of thousands of images. Moreover, the datasets have different data distributions since they are collected from different sources and have different image sizes. ImageNet contains high-resolution images of varying sizes, while CIFAR-10, CIFAR-100, SVHN, and STL-10 have small and fixed image sizes.
S3 Implementation Details
In this section, we present all the training hyper-parameters and implementation details in Table S3. For the encoder, following existing literature such as [23,14], we use Wide ResNet-28-2 [41] for CIFAR-10 and SVHN, WRN-28-8 for CIFAR-100, and WRN-37-2 for STL-10.
S4 Additional Ablation Results
Here, we expand the ablation study from the main paper to all datasets for the settings that showed close performance to the proposed method. Note that the experiments in the main paper were carried out on CIFAR-10 with 40 samples. In Table S1, we show the results for replacing the proposed contrastive regularizer with different variants of the contrastive loss. This table is expanded from the ablation study in the main paper (Table 5(c)) to include all datasets. Here, we show the results for two contrastive variants: the graph contrast of CoMatch [15] and the class-aware contrast of CCSSL [22]. The details are explained in the main paper. Results from this table show that, on average, the proposed contrastive regularizer achieves 2.99% and 4.45% improvements over the graph contrast and class-aware contrast alternatives. The class-aware contrast obtains better results than the proposed method in one setting only (CIFAR-100 with 400 labelled samples).
Figure 1: Comparison between the CoPompt and existing prompting approach.
Figure 2: Overview of our proposed method.
Figure 3: Sensitivity study on important hyper-parameters.
Algorithm 1 (lines 16-18):
16: L_con = −(1/b) Σ_{i=1}^{b} log [ exp(⟨z_i, z_κ(i)⟩/τ) / Σ_{k≠i} exp(⟨z_i, z_k⟩/τ) ]   // Consistency regularization loss; κ(i) is the index of the second augmented sample
17: L_rot = (1/b) Σ_{i=1}^{b} H(r, z_{i,r})   // Rotation loss
18: return L_sup + L_con + L_rot
Table 1: Comparison of our method against other SSL methods with unconstrained unlabelled data on 4 different datasets.

Methods | CIFAR-10: 40 / 250 / 4000 labels | CIFAR-100: 400 / 2500 / 10000 labels | SVHN: 40 / 250 / 1000 labels | STL-10: 1000 labels | Avg.
Supervised | 24.24±1.1 | 43.33±2.2 | 83.76±0.4 | 10.39±0.3 | 39.57±0.4 | 63.4±0.1 | 24.67±2.1 | 75.42±2.4 | 87.74±0.6 | 60.96±1.3 | 51.35
Pi-Model [9] | 24.16±1.5 | 48.14±1.1 | 84.19±0.3 | 12.81±0.9 | 40.01±0.3 | 63.55±0.1 | 28.02±1.1 | 76.56±2.2 | 87.83±0.6 | 69.73±0.6 | 53.5
Mean Teacher [8] | 26.22±1.0 | 49.35±1.7 | 83.7±0.4 | 13.72±0.9 | 41.57±0.3 | 63.83±0.1 | 28.57±1.3 | 76.78±2.3 | 87.68±0.6 | 70.12±0.1 | 54.15
VAT [10] | 24.7±1.3 | 46.18±1.3 | 84.73±0.2 | 11.5±0.8 | 41.73±0.2 | 63.76±0.1 | 41.95±2.5 | 76.18±1.9 | 88.07±0.5 | 63.12±0.6 | 54.19
Pseudo-label [6] | 24.88±1.6 | 50.29±1.3 | 84.11±0.1 | 12.12±0.1 | 39.72±0.8 | 63.57±0.0 | 36.04±2.7 | 78.04±1.2 | 88.91±0.3 | 65.6±0.9 | 54.33
UDA [11] | 28.12±2.5 | 65.59±1.7 | 88.31±0.1 | 21.11±0.8 | 51.82±0.6 | 69.42±0.6 | 48.8±4.1 | 77.73±1.9 | 88.83±0.3 | 82.54±0.2 | 62.23
MixMatch [12] | 32.58±0.8 | 58.24±0.3 | 84.59±0.3 | 20.26±0.6 | 45.94±0.4 | 65.89±0.2 | 57.46±1.7 | 77.65±1.2 | 89.95±0.2 | 71.85±0.6 | 60.44
ReMixMatch [13] | 35.56±0.8 | 64.71±0.6 | 87.64±0.3 | 18.9±0.8 | 49.11±0.9 | 69.38±0.1 | 56.94±4.2 | 79.57±0.9 | 90.57±0.3 | 79.13±0.8 | 63.15
FixMatch [23] | 27.91±3.9 | 64.98±1.0 | 88.18±0.1 | 21.2±0.6 | 51.4±1.3 | 67.8±0.3 | 43.71±5.5 | 74.52±1.0 | 88.06±0.6 | 81.85±0.4 | 60.96
FlexMatch [14] | 32.8±0.8 | 63.22±1.0 | 87.82±0.1 | 20.87±0.7 | 51.28±0.8 | 69.52±0.5 | 60.5±2.5 | 79.72±0.6 | 88.85±0.4 | 82.67±0.4 | 63.72
CoMatch [15] | 41.68±0.7 | 62.31±0.7 | 84.52±0.3 | 22.6±0.6 | 44.0±1.0 | 58.55±0.3 | 45.87±2.8 | 73.19±0.3 | 86.45±0.2 | 82.0±0.0 | 60.12
CCSSL [22] | 30.89±5.9 | 67.2±1.5 | 88.77±0.1 | 24.53±1.5 | 56.3±0.2 | 71.13±0.3 | 50.02±6.6 | 80.39±0.6 | 88.6±0.3 | 82.0±0.0 | 63.98
SimMatch [17] | 23.77±1.8 | 57.72±1.3 | 84.12±0.7 | 18.65±1.2 | 47.33±1.0 | 66.54±0.8 | 51.23±1.6 | 74.48±1.1 | 88.57±1.0 | 77.23±1.2 | 58.96
ScMatch [18] | 27.81±1.1 | 56.78±0.6 | 83.09±0.2 | 18.14±1.1 | 46.21±0.7 | 64.24±0.2 | 56.59±0.9 | 75.08±0.8 | 89.23±0.3 | 79.44±0.7 | 59.66
UnMixMatch | 47.93±1.1 | 68.72±0.6 | 89.58±0.2 | 26.13±1.1 | 54.18±0.7 | 71.73±0.2 | 72.9±0.9 | 80.78±0.8 | 91.03±0.3 | 84.73±0.7 | 68.77
Table 2: The impact of unlabelled set size. Here, Subsets 1 & 2 are two random subsets of ImageNet-1K (IN-1K).

Data | IN-100 | Subset 1 | Subset 2 | IN-1K
No. of samples | 130K | 450K | 850K | 1.28M
ReMixMatch | 35.56 | 35.72 | 36.15 | 36.24
CoMatch | 41.68 | 42.52 | 42.31 | 43.38
UnMixMatch | 47.93 | 50.01 | 51.99 | 53.54
Table 3: Performance comparison on open set SSL for CIFAR-10 with 6/4 known-unknown class split.

Methods | 50 labels/class | 100 labels/class | 400 labels/class
Supervised | 64.3±1.1 | 69.5±0.7 | 80.0±0.3
FixMatch [23] | 56.8±1.2 | 70.2±0.6 | 83.7±0.5
MTC [3] | 79.7±0.9 | 86.3±0.9 | 91.0±0.5
OpenMatch [2] | 89.6±0.9 | 92.9±0.5 | 94.1±0.5
UnMixMatch | 95.7±0.8 | 96.8±0.5 | 97.2±0.4
Table 5: Ablation studies on our method. All studies are on CIFAR-10 with 40 labelled samples.

(a) Ablation of main components.
Ablation | Accuracy
UnMixMatch | 47.93
w/o RandMixUp | 38.72
w/o Contrast Loss | 41.25
w/o Rotation Loss | 41.83

(b) Alternate hard augmentations.
Augmentation | Accuracy
RandMixUp | 47.93
RandAug [24] | 46.01
MixUp [12] | 45.23
CutMix [34] | 43.12
AugMix [35] | 44.75

(c) Variants of contrastive regularizers.
Contrastive loss | Accuracy
Ours | 47.93
ConMatch [16] | 45.82
Contrastive Reg. [36] | 45.77
Graph Contrast [15] | 47.12
Class-aware Contrast [22] | 47.88

(e) Alternative self-supervised losses.
Loss | Accuracy
Ours (contrastive) | 47.93
BYOL [37] | 46.89
SimSiam [38] | 46.50
VICReg [39] | 47.91

Table 4: Performance comparison on barely supervised learning. Only 1 sample per class is used for training.
Method | Accuracy (%)
FlexMatch [14] | 21.96±1.4
CCSSL [22] | 15.63±1.7
UnMixMatch | 27.54±2.5

Table 5a presents the main ablation results obtained by removing each of the three main components of the proposed method: RandMixUp augmentation, consistency regularization, and rotation prediction. Note that we cannot remove two components simultaneously, since semi-supervised learners require a minimum of one supervised and one unsupervised loss. These experiments are done on CIFAR-10 with 40 samples. The table demonstrates that all three components have a significant impact on the final performance of the model, with the removal of any one component resulting in a large drop in accuracy. In the first ablation experiment, we remove the RandMixUp augmentation module, effectively learning from the labelled samples with weak augmentations only (random resized crop and horizontal flip). This experiment results in the highest drop in accuracy across the ablation settings, with a 9.21% decrease. The second largest drop in accuracy is observed when removing consistency regularization, resulting in a 6.68% decrease. Similarly, removing the rotation prediction component results in a 6.1% drop in performance.
Table S1: Comparison of our method against different variants of contrastive regularizers on 4 different datasets.

Methods | CIFAR-10: 40 / 250 / 4000 labels | CIFAR-100: 400 / 2500 / 10000 labels | SVHN: 40 / 250 / 1000 labels | STL-10: 1000 labels | Avg.
Ours | 47.93±1.1 | 68.72±0.6 | 89.58±0.2 | 26.13±1.1 | 54.18±0.7 | 71.73±0.2 | 72.9±0.9 | 80.78±0.8 | 91.03±0.3 | 84.73±0.7 | 68.77
Graph Contrast [15] | 47.12±0.5 | 67.67±0.7 | 87.3±0.3 | 25.38±0.4 | 49.14±0.6 | 68.08±0.3 | 62.54±3.6 | 79.78±0.9 | 88.49±0.9 | 82.25±0.5 | 65.78
Class-aware Contrast [22] | 47.88±2.4 | 68.45±0.4 | 88.39±0.2 | 26.59±0.3 | 50.25±0.7 | 69.29±0.2 | 43.99±12.2 | 78.05±0.2 | 87.76±0.1 | 82.59±0.4 | 64.32
Table S2: Comparison of our method for different alternate regularization strategies on 4 different datasets.

Methods | CIFAR-10: 40 / 250 / 4000 labels | CIFAR-100: 400 / 2500 / 10000 labels | SVHN: 40 / 250 / 1000 labels | STL-10: 1000 labels | Avg.
Ours | 47.93±1.1 | 68.72±0.6 | 89.58±0.2 | 26.13±1.1 | 54.18±0.7 | 71.73±0.2 | 72.9±0.9 | 80.78±0.8 | 91.03±0.3 | 84.73±0.7 | 68.77
VICReg [39] | 47.91±1.3 | 69.75±0.2 | 89.08±0.1 | 24.74±0.5 | 50.25±0.8 | 68.51±0.6 | 68.94±2.9 | 82.67±0.3 | 91.21±0.3 | 85.44±0.5 | 67.85
Table S3: Summary of all hyper-parameters for training UnMixMatch.

Parameter | Value
Number of iterations | 2^20
Batch size | 64
Learning rate | 0.03
Optimizer | SGD
SGD momentum | 0.9
Weight decay | 0.0005
α | 0.1
β | 1.0
γ | 5.0
Acknowledgements
We would like to thank Bank of Montreal and Mitacs for funding this research. We are also thankful to the SciNet HPC Consortium for helping with the computing resources.
References
Avital Oliver, Augustus Odena, Colin A Raffel, Ekin Dogus Cubuk, and Ian Goodfellow. Realistic evaluation of deep semi-supervised learning algorithms. Advances in Neural Information Processing Systems, 31, 2018.
Fan Yang, Kai Wu, Shuyi Zhang, Guannan Jiang, Yong Liu, Feng Zheng, Wei Zhang, Chengjie Wang, and Long Zeng. Class-aware contrastive semi-supervised learning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14421-14430, 2022.
Qing Yu, Daiki Ikami, Go Irie, and Kiyoharu Aizawa. Multi-task curriculum framework for open-set semi-supervised learning. In European Conference on Computer Vision, pages 438-454, 2020.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, pages 1597-1607, 2020.
Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
Dong-Hyun Lee et al. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Workshop on Challenges in Representation Learning, ICML, volume 3, page 896, 2013.
Qizhe Xie, Minh-Thang Luong, Eduard Hovy, and Quoc V Le. Self-training with noisy student improves imagenet classification. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10687-10698, 2020.
Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. Advances in Neural Information Processing Systems, 30, 2017.
Mehdi Sajjadi, Mehran Javanmardi, and Tolga Tasdizen. Regularization with stochastic transformations and perturbations for deep semi-supervised learning. Advances in Neural Information Processing Systems, 29, 2016.
Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(8):1979-1993, 2018.
Qizhe Xie, Zihang Dai, Eduard Hovy, Thang Luong, and Quoc Le. Unsupervised data augmentation for consistency training. Advances in Neural Information Processing Systems, 33:6256-6268, 2020.
David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin A Raffel. Mixmatch: A holistic approach to semi-supervised learning. Advances in Neural Information Processing Systems, 32, 2019.
David Berthelot, Nicholas Carlini, Ekin D Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, and Colin Raffel. Remixmatch: Semi-supervised learning with distribution alignment and augmentation anchoring. arXiv preprint arXiv:1911.09785, 2019.
Bowen Zhang, Yidong Wang, Wenxin Hou, Hao Wu, Jindong Wang, Manabu Okumura, and Takahiro Shinozaki. Flexmatch: Boosting semi-supervised learning with curriculum pseudo labeling. Advances in Neural Information Processing Systems, 34:18408-18419, 2021.
Junnan Li, Caiming Xiong, and Steven CH Hoi. Comatch: Semi-supervised learning with contrastive graph regularization. In IEEE/CVF International Conference on Computer Vision, pages 9475-9484, 2021.
Jiwon Kim, Youngjo Min, Daehwan Kim, Gyuseong Lee, Junyoung Seo, Kwangrok Ryoo, and Seungryong Kim. Conmatch: Semi-supervised learning with confidence-guided consistency regularization. In European Conference on Computer Vision, pages 674-690, 2022.
Mingkai Zheng, Shan You, Lang Huang, Fei Wang, Chen Qian, and Chang Xu. Simmatch: Semi-supervised learning with similarity matching. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14471-14481, 2022.
Guan Gui, Zhen Zhao, Lei Qi, Luping Zhou, Lei Wang, and Yinghuan Shi. Improving barely supervised learning by discriminating unlabeled samples with super-class. In Advances in Neural Information Processing Systems, 2022.
Jong-Chyi Su, Zezhou Cheng, and Subhransu Maji. A realistic evaluation of semi-supervised learning for fine-grained classification. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12966-12975, 2021.
Ryota Yoshihashi, Wen Shao, Rei Kawakami, Shaodi You, Makoto Iida, and Takeshi Naemura. Classification-reconstruction learning for open-set recognition. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4016-4025, 2019.
Lan-Zhe Guo, Zhen-Yu Zhang, Yuan Jiang, Yu-Feng Li, and Zhi-Hua Zhou. Safe deep semi-supervised learning for unseen-class unlabeled data. In International Conference on Machine Learning, pages 3897-3906, 2020.
Fan Yang, Kai Wu, Shuyi Zhang, Guannan Jiang, Yong Liu, Feng Zheng, Wei Zhang, Chengjie Wang, and Long Zeng. Class-aware contrastive semi-supervised learning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14421-14430, 2022.
Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin A Raffel, Ekin Dogus Cubuk, Alexey Kurakin, and Chun-Liang Li. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. Advances in Neural Information Processing Systems, 33:596-608, 2020.
Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. Randaugment: Practical automated data augmentation with a reduced search space. In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 702-703, 2020.
Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. arXiv preprint arXiv:1803.07728, 2018.
Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. In European Conference on Computer Vision, pages 649-666, 2016.
Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. arXiv preprint arXiv:1803.07728, 2018.
Xiaohua Zhai, Avital Oliver, Alexander Kolesnikov, and Lucas Beyer. S4l: Self-supervised semi-supervised learning. In IEEE/CVF International Conference on Computer Vision, pages 1476-1485, 2019.
Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. 2011.
Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 215-223. JMLR Workshop and Conference Proceedings, 2011.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255, 2009.
Shuvendu Roy and Ali Etemad. Impact of labelled set selection and supervision policies on semi-supervised learning. arXiv preprint arXiv:2211.14912, 2022.
Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. Cutmix: Regularization strategy to train strong classifiers with localizable features. In IEEE/CVF International Conference on Computer Vision, pages 6023-6032, 2019.
Dan Hendrycks, Norman Mu, Ekin D Cubuk, Barret Zoph, Justin Gilmer, and Balaji Lakshminarayanan. Augmix: A simple data processing method to improve robustness and uncertainty. arXiv preprint arXiv:1912.02781, 2019.
Doyup Lee, Sungwoong Kim, Ildoo Kim, Yeongjae Cheon, Minsu Cho, and Wook-Shin Han. Contrastive regularization for semi-supervised learning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3911-3920, 2022.
Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent - a new approach to self-supervised learning. Advances in Neural Information Processing Systems, 33:21271-21284, 2020.
Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15750-15758, 2021.
Adrien Bardes, Jean Ponce, and Yann LeCun. Vicreg: Variance-invariance-covariance regularization for self-supervised learning. arXiv preprint arXiv:2105.04906, 2021.
Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017.
Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
Algorithm 1 UnMixMatch (lines 1-15):
1: Input: Batch of unlabelled samples X_u = ((u_i); i ∈ (1, ..., b)), with batch size b, and batch of labelled samples with corresponding class labels X_l = ((x_i, y_i); i ∈ (1, ..., b)).
2: for i = 1, ..., b do
3:   x̃_i = RandAug(x_i)         // Apply strong data augmentation to labelled sample x_i
4:   û_{i,1} = RandAug(u_i)      // Apply first strong data augmentation to unlabelled sample u_i
5:   û_{i,2} = RandAug(u_i)      // Apply second strong data augmentation to unlabelled sample u_i
6:   û_{i,r} = Rotation(u_i)     // Apply rotation augmentation to unlabelled sample u_i
7:   p_i = P_θ(û_i)              // Predict the pseudo-label for the unlabelled sample
8:   z_{i,1} = P_{θ_p}(û_{i,1})  // Embedding for the first augmentation of the unlabelled image
9:   z_{i,2} = P_{θ_p}(û_{i,2})  // Embedding for the second augmentation of the unlabelled image
10:  z_{i,r} = P_{θ_r}(û_{i,r})  // Predicted angle from the rotated image
11: end for
12: X̃_l = ((x̃_i, y_i); i ∈ (1, ..., b))      // All augmented labelled examples and their labels
13: X̃_p = ((û_{i,1}, p_i); i ∈ (1, ..., b))   // All augmented unlabelled samples with predicted pseudo-labels
14: X̃ = MixUp(X̃_l, X̃_p)                      // MixUp operation
15: L_sup = (1/b) Σ_{i=1}^{b} H(ȳ_i, P_θ(y | x̃_i))   // Cross-entropy loss on the mixed labelled data
In Table S2, we show the results for VICReg [39], since it showed comparable results to the proposed contrastive regularizer on CIFAR-10 with 40 labelled samples. The results from this table show that the VICReg loss achieves better results for a few settings (CIFAR-10 with 250 labelled samples, SVHN with 250 and 1000 labelled samples, and STL-10 with 1000 labelled samples). However, the overall average accuracy with the VICReg loss is 0.92% lower than that of the proposed contrastive regularizer, and hence we choose the contrastive regularizer as the default for the proposed UnMixMatch.
CORRESPONDENCES AND INDEX
Bogdan Bojarski and Andrzej Weber
arXiv:math/0507060v2 [math.KT], 20 Dec 2005. DOI: 10.1007/978-3-7643-7687-1_1
https://export.arxiv.org/pdf/math/0507060v2.pdf

Abstract. We define a certain class of correspondences of polarized representations of C*-algebras. Our correspondences are modeled on the spaces of boundary values of elliptic operators on bordisms joining two manifolds. In this setup we define the index. The main subject of the paper is the additivity of the index.
* Supported by KBN grant 1 P03A 005 26
Introduction
Let X be a closed manifold. Suppose it is decomposed into a sum of two manifolds X + , X − glued along the common boundary ∂X + = ∂X − = M .
Let
D : C ∞ (X; ξ) → C ∞ (X; η)
be an elliptic operator of the first order. We assume that it possesses the unique extension property: if Df = 0 and f |M = 0 then f = 0. In what follows we will consider only elliptic operators of the first order such that D and D * have the unique extension property. One defines the spaces H ǫ (D) ⊂ L 2 (M; ξ) for ǫ ∈ {+, −}, which are the closures of the spaces of boundary values of solutions of Df = 0 on the manifolds X ǫ with boundary ∂X ǫ = M. The space H ǫ (D) is defined to be the closure of :
{ f ∈ C^∞(M; ξ) : ∃ f̃ ∈ C^∞(X_ǫ; ξ), f = f̃|_M, D(f̃) = 0 } in L^2(M; ξ). The pair of spaces H_±(D) is a Fredholm pair [Bo1]. There are associated Calderón projectors P_+(D) and P_−(D), see [Sl].
To organize the set of possible Cauchy data we will introduce a certain algebraic object. We fix a C*-algebra B, which in our case is the algebra of functions on M. Suppose it acts on a Hilbert space H. Now we consider Fredholm pairs in H. In our case H = L^2(M; ξ) and one of the possible Fredholm pairs is H_±(D). Note that this pair is not arbitrary. It has a property which we call good. A Fredholm pair is good if (roughly speaking) it remains Fredholm after conjugation with functions, see §4. These pairs act naturally on K^1(M). Nevertheless, the concept of a good Fredholm pair is not convenient to manipulate, so we restrict our attention to pairs of geometric origin, see §5. We call them admissible. They are pairs of subspaces which are images of projectors almost commuting with the action of the algebra B. This concept allows us to extract the relevant analytic-functional information out of the Cauchy data. Furthermore, a Morse decomposition of a manifold is translated into this language.
Our paper is devoted to the study of the cut-and-paste technique on manifolds and its effect on indices. The spirit of these constructions comes from the earlier papers [Bo1]-[Bo3] or [BW1]. In the spirit of topological and conformal field theory, we investigate the behaviour of the index of a differential operator on a manifold composed from bordisms

X = X_0 ∪_{M_1} X_1 ∪_{M_2} ... ∪_{M_{m−1}} X_{m−1} ∪_{M_m} X_m.
We think of M i 's as objects and we treat bordisms of manifolds as morphisms. Starting from this geometric background we introduce a category PR, whose objects are polarized representations. The algebra B may vary. We keep in mind that such objects arise when:
• B is an algebra of functions on a manifold M,
• there is given a vector bundle ξ over M, then H = L 2 (M; ξ) is a representation of B,
• there is given a pseudodifferential projector in H.
The morphisms in PR are certain correspondences, i.e. linear subspaces in the product of the source and the target. A particular case of principal importance for our theory is given by the correspondences coming from bordisms of manifolds equipped with an elliptic operator. Precisely: suppose we are given a manifold W with boundary ∂W = M_1 ⊔ M_2. Moreover, suppose that there is given an elliptic operator of the first order acting on the sections of a vector bundle ξ over W. Then the space of the boundary values of the Cauchy data of solutions is a linear subspace in L^2(M_1; ξ|_{M_1}) ⊕ L^2(M_2; ξ|_{M_2}). In other words, it is a correspondence from L^2(M_1; ξ|_{M_1}) to L^2(M_2; ξ|_{M_2}).
Basic example: The following example is instructive and serves as the model situation (see [BWe]): Let W = {z ∈ C : r_1 ≥ |z| ≥ r_2} be a ring domain and let D be the Cauchy-Riemann operator. The space L^2(M_i) for i = 1, 2 is identified with the space of sequences {a_n}_{n∈Z} such that Σ_{n∈Z} |a_n|² r_i^{2n} < ∞. The sequence {a_n} defines the function on M_i given by the formula f(z) = Σ_{n∈Z} a_n z^n. The subspace of the boundary values of holomorphic functions on W is identified with

{ ({a_n}, {b_n}) : Σ_{n∈Z} |a_n|² r_1^{2n} < ∞, Σ_{n∈Z} |b_n|² r_2^{2n} < ∞ and a_n = b_n }.
It can be treated as the graph of an unbounded operator Φ : L^2(M_1) → L^2(M_2). When we restrict Φ to the space L^2(M_1)^♯, consisting of the functions with coefficients a_n = 0 for n < 0, we obtain a compact operator. On the other hand, the inverse operator Φ^{−1} : L^2(M_2) → L^2(M_1) is compact when restricted to L^2(M_2)^♭, the space consisting of the functions with coefficients a_n = 0 for n ≥ 0.
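To make these compactness statements concrete: in the normalized Laurent basis, Φ sends the unit vector corresponding to z^n in L^2(M_1) to (r_2/r_1)^n times the corresponding unit vector in L^2(M_2), so on the non-negative modes its singular values decay geometrically, and on the negative modes the same happens for Φ^{-1}. The short numpy sketch below only illustrates this with arbitrarily chosen radii; the values r_1 = 2, r_2 = 1 are not taken from the text.

```python
import numpy as np

r1, r2 = 2.0, 1.0   # illustrative radii of the ring domain r1 >= |z| >= r2
N = 20              # truncation level for the modes

# Phi acts diagonally in the normalized Laurent basis: e_n |-> (r2/r1)^n e_n.
nonneg_modes = np.arange(N)   # n = 0, 1, ..., N-1 spans L2(M1)#
neg_modes = np.arange(1, N)   # |n| = 1, ..., N-1 for n < 0 spans L2(M2)b

sv_phi_on_sharp = (r2 / r1) ** nonneg_modes   # singular values of Phi restricted to L2(M1)#
sv_phi_inv_on_flat = (r2 / r1) ** neg_modes   # singular values of Phi^{-1} restricted to L2(M2)b

print(sv_phi_on_sharp[:5])     # geometric decay -> compact restriction
print(sv_phi_inv_on_flat[:5])  # geometric decay -> compact restriction of the inverse
```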
The Riemann-Hilbert transmission problem for the Cauchy data across a hypersurface is a model for another class of morphisms. These are called twists. Our approach allows us to treat bordisms and twists in a uniform way. We calculate the global index of an elliptic operator in terms of local indices depending only on the pieces of the decomposed manifold (see Theorems 9.6 and 11.1). An interesting phenomenon occurs: the index is not additive with respect to the composition of bordisms. Instead, each composition creates a contribution to the global index (Theorem 10.2):

L_1, L_2 ⇝ L_2 ∘ L_1 + δ(L_1, L_2).
In the geometric situation this contribution might be nonzero for example when a closed manifold is created as an effect of composition of bordisms. One can show that if the bordisms in PR come from connected geometric bordisms supporting elliptic operators with the unique extension property then the index is additive. The contributions coming from twists are equivalent to the effects of pairings in the odd K-theory, Theorem 9.7.
It is a good moment now to point out the fundamental role of the splitting of the Hilbert space into a direct sum. The need for introducing a splitting was already clear in [Bo1]:

• It was used in the study of Fredholm pairs, with application to the Riemann-Hilbert transmission problem, in [Bo1].

• Splitting also came to light in the paper of Kasparov [Ka], who introduced a homological K-theory built from Hilbert modules. The program of noncommutative geometry of A. Connes develops this idea, [Co1, Co2].
• Splitting plays an important role in the theory of loop groups in [PSe].
• There is also a number of papers in which surgery of the Dirac operator is studied. Splitting serves as a boundary condition, see e.g. [DZ], [SW]. These papers originate from [APS].
In the present paper we omit the technicalities and problems arising for a general elliptic operator. We concentrate on the purely functional calculus of correspondences. This is mainly linear algebra.
Fredholm pairs
Let us first summarize some facts about Fredholm pairs. We will follow [Bo1]- [Bo3]. Suppose that H + and H − are two closed subspaces of a Hilbert space, such that H + + H − is also closed and
• H + ∩ H − is of finite dimension, • H + + H − is of finite codimension.
We assume that both spaces have infinite dimension. Then we say that the pair (H + , H − ) = H ± is Fredholm. We define its index
Ind(H ± ) = dim(H + ∩ H − ) − codim(H + + H − ) .
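A toy, finite-dimensional computation of this index may help fix the definition (in finite dimensions every pair of subspaces is trivially Fredholm, so the example only illustrates the bookkeeping, not the analytic content). The subspaces chosen below are arbitrary.

```python
import numpy as np

def pair_index(A: np.ndarray, B: np.ndarray, tol: float = 1e-10) -> int:
    """Ind(H+, H-) = dim(H+ ∩ H-) - codim(H+ + H-), for H+ = col(A), H- = col(B) in R^n."""
    n = A.shape[0]
    dim_plus = np.linalg.matrix_rank(A, tol)
    dim_minus = np.linalg.matrix_rank(B, tol)
    dim_sum = np.linalg.matrix_rank(np.hstack([A, B]), tol)
    dim_intersection = dim_plus + dim_minus - dim_sum   # dimension formula
    codim_sum = n - dim_sum
    return dim_intersection - codim_sum

if __name__ == "__main__":
    # H+ = span{e1, e2}, H- = span{e2, e3} inside R^4:
    H_plus = np.eye(4)[:, :2]
    H_minus = np.eye(4)[:, 1:3]
    # dim(H+ ∩ H-) = 1 and codim(H+ + H-) = 1, so the index is 0.
    print(pair_index(H_plus, H_minus))
```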
The following statement follows from easy linear algebra.

Proposition 2.1 Let H± be a Fredholm pair and let ι : H− → H/H+ be the map induced by the inclusion H− ⊂ H. Then Ind(H±) = ind(ι).
Here Ind denotes the index of a pair, whereas ind stands for the index of an operator. Suppose that H is decomposed into a direct sum
H = H ♭ ⊕ H ♯ .
We may assume that this decomposition is given by a symmetry S: a "sign" or "signature" operator. Let P ♭ and P ♯ be the corresponding projectors. We can write S = P ♯ − P ♭ . We easily have:
Proposition 2.2 If H± is a Fredholm pair with H+ = H♯, then Ind(H±) = ind(P♭|H− : H− → H♭).
Let I ⊂ L(H) be an ideal which lies between the ideal of finite rank operators and the ideal of compact operators: F ⊂ I ⊂ K.
Define GL(P ♭ , I) ⊂ GL(H) to be the set of the invertible automorphisms of H commuting with P ♭ up to the ideal I. We will say that φ almost commutes with P ♭ or we will write φP ♭ ∼ P ♭ φ. Obviously GL(P ♭ , I) = GL(P ♯ , I) = GL(S, I). We have the following description of Fredholm pairs stated in [Bo1]. (The proof is again an easy linear algebra.)
Theorem 2.3 Let H± be a Fredholm pair with H+ = H♯. Then there exists a complement H♭ (that is, H♭ ⊕ H♯ = H) and there exists φ ∈ GL(P♭, I) such that H− = φ(H♭). If H± is given by a pair of projectors P± satisfying P+ + P− − 1 ∈ I, then we can take H♭ = ker P+. Moreover, the operator φP♭ + P♯ is Fredholm and
ind(φP♭ + P♯) = Ind(H±).

The map ind : GL(P♭, I) → Z, ind(φ) = ind(φP♭ + P♯), is a group homomorphism.
It follows that
ind(φP♭ + P♯) = ind(P♭φ : H♭ → H♭) = ind(P♯φ^{−1} : H♯ → H♯).
Index formula for a decomposed manifold
The main example of a Fredholm pair is the following. Let D be an elliptic operator on X = X+ ∪_M X−. Then the pair of boundary value spaces H±(D) (as defined in the introduction) is a Fredholm pair. The index formula below is easy to explain: a global solution restricted to M lies in H+(D) ∩ H−(D). On the other hand, if a section f of ξ over M can be extended to both X+ and X−, such that the extensions are solutions of Df = 0, then we can glue them to obtain a global solution. The unique extension property is necessary, because we need to know that a solution is determined by its restriction to M. Following the reasoning in [Bo1], with Assumption 3.1 (the unique extension property) for D and D* we have:
Corollary 3.2
Ind(H ± (D)) = ind(D) .
For a rigorous proof see [BW2], §24 for Dirac type operators.
Remark 3.3 It may happen that D does not have the unique extension property. This is so for example when X is not connected. Then the Cauchy data H ± (D) do not say anything about the index of the operator D on the components of X disjoined with M. There are also known elliptic operators without the unique extension property on connected manifolds, [Pl], [Al]. It is difficult to characterize the class of all operators D with the unique extension property. Nevertheless the most relevant are Cauchy-Riemann and Dirac type operators. These operators have the unique extension property.
Good Fredholm pairs
Suppose there is given an algebra B and its representation ρ in a Hilbert space H. For a Fredholm pair H ± in H and an invertible matrix A ∈ GL n (B) we define a new pair of subspaces A✶H ± in H ⊕n . We set
(A✶H±)− = ρA(H−^⊕n),   (A✶H±)+ = H+^⊕n.
(As usually we treat ρA as an automorphism of H ⊕n .) Definition 4.1 Let B be a C * -algebra which acts on a Hilbert space H. A good Fredholm pair is a pair of subspaces (H + , H − ) in H, such that for any invertible matrix A ∈ GL(n; B) the pair A✶H ± is a Fredholm pair.
We will see that the pair of boundary values H ± (D) ⊂ H = L 2 (M; ξ) for the operator D considered in the introduction is good.
Example 4.2 [Main example: Riemann-Hilbert problem] Consider the following problem: there is given a matrix-valued function A : M → GL n (C). We look for the sequence (s 1 ± , . . . , s n ± ) of solutions of Ds = 0 on X ± satisfying the transmission condition on M A(s 1 − , . . . , s n − ) = (s 1 + , . . . , s n + ) . A Fredholm operator is related to this problem and we study its index, see §11. On the other hand the matrix A treated as the gluing data defines an n-dimensional vector bundle Θ A X over X. Then Ind(A✶H ± (D)) = ind(D ⊗ Θ A X ) . This formula was obtained in [BW1], §1 under the assumption that D has a product form along M. Remark 4.4 Consider the differential in the Mayer-Vietoris exact sequence of
X = X + ∪ M X − δ : K 0 (X) → K −1 (M) .
The operator D defines a class [D] ∈ K 0 (X). The element δ[D] can be recovered from the good Fredholm pair H ± (D) ⊂ L 2 (M; ξ). Note that the pair H ± (D) encodes more information. One can recover the index of the original operator. We describe the map δ via duality, therefore we neglect the torsion of K-theory. The construction is the following: for an element a ∈ K 1 (M) we define the value of the pairing
δ[D], a = [D], ∂a .
The element a is represented by a matrix A ∈ GL n (C ∞ (M)). Then
⟨[D], ∂a⟩ = ind(D ⊗ Θ^A_X) − n·ind(D), where Θ^A_X is the bundle defined in Example 4.2. Now ⟨[D], ∂a⟩ = Ind(A✶H±(D)) − n·Ind(H±(D)).
Admissible Fredholm pairs
The following can be related to the paper of Birman and Solomyak [BS] who introduced the name admissible for the subspaces which are the images of pseudodifferential projectors. Suppose that ξ is a vector bundle over a manifold M. We consider Fredholm pairs H ± in H = L 2 (M; ξ) such that the subspaces H ± are images of pseudodifferential projectors P ± with symbols satisfying σ(P + ) + σ(P − ) = 1 .
We would like to free ourselves from the geometric context and state the admissibility condition in an abstract way. We assume that H is an abstract Hilbert space with a representation of an algebra B, which is the algebra of functions on M in the geometric case. The condition that P± be pseudodifferential is replaced by the condition that P± commute with the algebra action up to compact operators. We are now ready to give a definition:
Definition 5.1 We say that a pair of subspaces H ± is an admissible Fredholm pair if there exist a pair of projectors P ǫ for ǫ ∈ {+, −}, such that H ǫ = im P ǫ and P ǫ commutes with the action of B up to compact operators. Moreover, we assume that P + + P − − 1 is a compact operator.
Proposition 5.2 Each admissible Fredholm pair is a good Fredholm pair.
Proof. Set K = P + + P − − 1. If v ∈ H + ∩ H − , then K(v) = v. Since K is a compact operator, dim(H + ∩ H − ) < ∞.
To prove that H + + H − is closed and of finite codimension, note that im(P + + P − ) ⊂ H + + H − . Since P + + P − is Fredholm its image is closed and of finite codimension. This way we have shown that H ± is a Fredholm pair. Now, if we conjugate P ⊕n + by ρA we obtain again an almost complementary pair of projectors. Thus A✶H ± is a Fredholm pair as well. ✷
We denote by AF P (B) the set of good Fredholm pairs divided by the equivalence relation generated by homotopies and stabilization with respect to the direct sum. We also consider as trivial the pairs associated to projectors strictly satisfying P + + P − = 1 and commuting with the action of B. In other words these are just direct sums of two representations of B. It is not hard to show that
Proposition 5.3 AF P (B) ≃ K 1 (B) ⊕ Z .
Proof. We have the following natural transformation:
β : AF P (M) → K 1 (M) ,   (H, P ± ) → (H, S + ) .
Here S + = 2P + − 1 is just the symmetry defined by P + . We remind that the objects generating K 1 (M) are odd Fredholm modules, see [Co2], pp 287-289. This procedure is simply forgetting about P − . We can recover P − (up to homotopy) by fixing the index of the pair, i.e. β ⊕ Ind is the isomorphism we are looking for. Precisely, the pseudodifferential projector is determined up to homotopy by its symbol and the index, see [BW2]. ✷
Splittings and polarization
We adopt the concepts of splitting and polarization to our situation.
Definition 6.1 Let H be a representation of a C * -algebra B in a Hilbert space. A splitting of H is a decomposition H = H ♭ ⊕ H ♯ ,
such that the projectors on the subspaces P ♭ , P ♯ commute with the action of B up to compact operators.
The basic example of a splitting is the one coming from a pseudodifferential projector. Another equivalent way of defining a splitting(as in [Bo2]) is to distinguish a symmetry S, almost commuting with the action of B. Then H ♭ is the eigenspace of −1 and H ♯ is the eigenspace of 1. Then we may think of H as a superspace, but we have to remember that the action of B does not preserve the grading.
Definition 6.2 In the set of splittings we introduce an equivalence relation: two splittings are equivalent if the corresponding projectors coincide up to compact operators. An equivalence class of the above relation is called a polarization of H.
Informally we can say, that polarization is a generalization of the symbol of a pseudodifferential projector.
Example 6.3 Let ξ → M be a complex vector bundle over a manifold. Let ξ be the pull back of ξ to T * M \ {0}. Suppose p : ξ → ξ is a bundle map which is a projector (hence p is homogeneous of degree 0) . Then p defines a polarization of L 2 (M; ξ). Just take a pseudodifferential projector P = P ♯ with σ(P ) = p and set
H ♭ = ker P , H ♯ = im P .
Example 6.4 Suppose (H + , H − ) is an admissible Fredholm pair given by projectors (P + , P − ).
Then the polarizations associated with P + and 1 − P − coincide. This way an admissible Fredholm pair defines a polarization. Furthermore each polarization defines an element of K 1 (B).
Intuitively polarizations can be treated as a kind of orientations dividing H into the upper half and lower half. Such a tool was used in [DZ] to split the index of a family of Dirac operators. (In [DZ] splittings were called generalized spectral sections.) Polarizations were discussed in the lectures of G. Segal (see [Sg], Lecture 2).
Correspondences, bordisms, twists
Definition 7.1 We consider the category PR having the following objects and morphisms
• Ob(PR) = Hilbert spaces (possibly of finite dimension) with a representation of some C * -algebra B and with a distinguished polarization,
• Mor PR (H 1 , H 2 ) = closed linear subspaces L ⊂ H 1 ⊕ H 2 , such that the pair (L, H ♭ 1 ⊕ H ♯ 2 ) is Fredholm. We write also H 1 L − − → H 2 .
In particular Mor PR (H, 0) ⊂ Grass(H) ⊃ Mor PR (0, H) .
By Proposition 2.2 a subspace L ⊂ H 1 ⊕ H 2 is a morphism if and only if
Π = P ♯ 1 ⊕ P ♭ 2 : L → H ♯ 1 ⊕ H ♭ 2
is a Fredholm operator. The composition in PR is the standard composition of correspondences:
L 1 ⊂ H 1 ⊕ H 2 , L 2 ⊂ H 2 ⊕ H 3 , L 2 • L 1 = {(x, z) ∈ H 1 ⊕ H 3 : ∃y ∈ H 2 , (x, y) ∈ L 1 , (y, z) ∈ L 2 } .
In other words the morphisms are certain correspondences or relations, as they were called in [Bo1]. Our approach also fits the ideas of topological field theory as presented in [Sg].
Proposition 7.2 The composition of morphism is a morphism.
Proof. Let L 1 ∈ Mor PR (H 1 , H 2 ) and L 2 ∈ Mor PR (H 2 , H 3 ). A simple linear algebra argument shows that
• the kernel of Π 13 :
L 2 • L 1 → H ♯ 1 ⊕ H ♭ 3
is a quotient of ker(Π 12 ) ⊕ ker(Π 23 ),
• the cokernel of Π 13 is a subspace of coker(Π 23 ) ⊕ coker(Π 12 ). ✷
The role of polarizations in the definition of morphisms is clear and the algebra actions are involved implicitly. In fact, the object which plays the crucial role is the algebra of operators commuting with P ♯ up to compact operators, i.e. the odd universal algebra. The role of this algebra was emphasized in [Bo2]. However, in the further presentation we prefer to expose the geometric origin of our construction and keep the name B.
We have two special classes of morphisms in PR: twists (Definition 7.3) and bordisms (Definition 7.5).
Definition 7.3 A subspace L ⊂ H ⊕ H is a twist if it is the graph of a linear isomorphism φ ∈ GL(P ♯ , K) ⊂ GL(H) commuting with the polarization projectors up to compact operators.
Proposition 7.4 For a twist L = graph(φ) ⊂ H ⊕ H the pair (L, H ♭ ⊕ H ♯ ) is Fredholm, i.e. L ∈ Mor PR (H, H).
Proof. To show that (L, H ♭ ⊕ H ♯ ) is a Fredholm pair let us show that the projection Π = P ♯ ⊕ P ♭ : L → H ♯ ⊕ H ♭ ⊂ H ⊕ H is a Fredholm operator. Indeed, L is parameterized by (1, φ) : H → L ⊂ H ⊕ H .
The composition of these maps is equal to
F = P ♯ ⊕ P ♭ φ .
Since φ almost commutes with P ♭ the map F has a parametrix F = P ♯ ⊕ P ♭ φ −1 . ✷ Definition 7.5 A subspace L ⊂ H 1 ⊕ H 2 is a bordism if L is the image of a projector P L , such that P L ∼ P ♯ 1 ⊕ P ♭ 2 . By 5.2 for any P L ∼ P ♯ 1 ⊕ P ♭ 2 the pair (L, H ♭ 1 ⊕ H ♯ 2 ) is Fredholm. The motivation for the Definition 7.5 is the following:
Example 7.6 Let X be a bordism between closed manifolds M 1 and M 2 , i.e.
∂X = M 1 ⊔ M 2 .
Suppose that D : C ∞ (X; ξ) → C ∞ (X; η) is an elliptic operator of the first order. Then the symbols of Calderón projectors define polarizations of H 1 = L 2 (M 1 ; ξ) and H 2 = L 2 (M 2 ; ξ), see Example 6.3. We reverse the polarization on M 2 , i.e. we switch the roles of H ♭ and H ♯ . Let L ⊂ L 2 (M 1 ; ξ) ⊕ L 2 (M 2 ; ξ) be the closure of the space of boundary values of solutions of Ds = 0. Then L ∈ Mor PR (H 1 , H 2 ) is a bordism in PR. This procedure indicates the following:
• the space L ⊂ L 2 (M 1 ⊔ M 2 ; ξ) = L 2 (M 1 ; ξ) ⊕ L 2 (M 2 ; ξ) and the associated Calderón projector are global objects. One cannot recover them from the separated data in L 2 (M 1 ; ξ) and L 2 (M 2 ; ξ).
• but up to compact operators one can localize the projector P L and obtain two projectors acting on L 2 (M 1 ; ξ) and L 2 (M 2 ; ξ).
We note that the following proposition holds:
Proposition 7.7
1. The composition of bordisms is a bordism.
2. The composition of a bordism and a twist is a bordism.
3. The composition of twists is a twist.
Remark 7.8 Let H 1 L 1 − − → H 2 L 2 − − → H 3 be a pair of bordisms in PR coming from geometric bordisms M 1 ∼ X 1 M 2 , M 2 ∼ X 2 M 3
and an elliptic operator on X 1 ∪ M 2 X 2 , as in Example 7.6. Then L 2 • L 1 coincides with the space of the Cauchy data along ∂(X 1 ∪ M 2 X 2 ) = M 1 ⊔ M 3 of the solutions of Ds = 0 on X 1 ∪ M 2 X 2 .
Chains of morphisms
Now we introduce the notion of a chain. This is a special case of a Fredholm fan considered in [Bo2] and in §12 below. A chain of morphisms is a sequence of correspondences
0 −L 0→ H 1 −L 1→ H 2 −L 2→ . . . −L m−1→ H m −L m→ 0 .
Example 8.3 It is proper to explain why we are interested in chains of morphisms. Suppose there is given a closed manifold which is composed of usual bordisms
X = X 0 ∪ M 1 X 1 ∪ M 2 . . . ∪ M m−1 X m−1 ∪ Mm X m .
We treat the manifolds M i as objects and bordisms
M i−1 ∼ X i M i as morphisms. In particular ∅ ∼ X 1 M 1 and M m ∼ Xm ∅ .
Let D : C ∞ (X; ξ) → C ∞ (X; η) be an elliptic operator of the first order. This geometric situation gives rise to a chain of bordisms in the category PR:
• H i = L 2 (M i ; ξ) with the action of B i = C(M i ) and the polarization defined by the symbol of Calderón projector, as in 7.6,
• L i ⊂ L 2 (M i ; ξ) ⊕ L 2 (M i+1 ; ξ)
is the space of boundary values of the solutions of Ds = 0 on X i .
Indices in PR
Definition 9.1 Fix the splittings S of the objects of PR. The pair (L, H ♭ 1 ⊕ H ♯ 2 ) in H 1 ⊕H 2 is Fredholm by Definition 7.1. Define the index of a morphism L ∈ Mor PR (H 1 , H 2 ) by the formula:
Ind S 1 ,S 2 (L) def = Ind(L, H ♭ 1 ⊕ H ♯ 2 ) = ind(P ♯ 1 ⊕ P ♭ 2 : L → H ♯ 1 ⊕ H ♭ 2 ) .
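A quick check of Definition 9.1, added for illustration and not in the original: if L coincides with H ♯ 1 ⊕ H ♭ 2 and the projectors are exact complements, then
\[ L \cap (H^\flat_1\oplus H^\sharp_2)=0 , \qquad L + (H^\flat_1\oplus H^\sharp_2)=H_1\oplus H_2 , \]
so Ind S 1 ,S 2 (L) = 0; the index measures the deviation of a morphism from this model position.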
Proposition 9.2 We have the equality of the following indices for a twist φ:
1. Ind S,S (graph φ),
2. the index of the block operator [ 1 P ♭ ; φ P ♯ ] : H ⊕ H → H ⊕ H (2 × 2 matrix with rows (1, P ♭ ) and (φ, P ♯ )),
3. ind(φ) = ind(φP ♭ + P ♯ ) = Ind(φ(H ♭ ), H ♯ ) (compare Theorem 2.3).
Proof. The graph of φ is parameterized by (1, φ) and H ♭ ⊕ H ♯ is parameterized by (P ♭ , P ♯ ). Thus by Theorem 2.1 the first equality follows. Now we multiply the matrix in (2.) from the left by the symmetry [ P ♯ P ♭ ; P ♭ P ♯ ] and we obtain
[ P ♭ φ + P ♯  0 ; P ♯ φ + P ♭  1 ] ∼ [ φP ♭ + P ♯  0 ; φP ♯ + P ♭  1 ] .
The second equality follows. ✷ Remark 9.3 The index of a twist depends only on the polarization, not on the particular splitting. This is clear from 9.2.2. It is worthwhile to point out that if the twist φ = A : H ⊕n → H ⊕n is given by a matrix A ∈ GL n (B), then
ind( A) = ⟨ [ A], [S H ♭ ] ⟩ ,
where S H ♭ is the symmetry with respect to H ♭ and the bracket is the pairing in K-theory of K 1 (B) with K 1 (B).
On the other hand Ind S 1 ,S 2 (L) does depend on the splitting for general morphisms.
Remark 9.4 The index in Example 7.6 is equal to the index of the operator D with the boundary conditions given by the splittings, as in [APS].
Remark 9.5 There are certain morphisms in PR which are interesting from the point of view of composition. We will say that L is a special correspondence if:
• L is the graph of an injective function φ defined on a subspace of H 1 ,
• the images of the projections of L onto H 1 and H 2 are dense.
(The second condition is equivalent to the first one for the adjoint correspondence defined as the orthogonal complement L ⊥ .) If L is special, then
Ind S 1 ,S 2 (L) = Ind(L(H ♭ 1 ), H ♯ 2 ) , where L(H ♭ 1 ) = {y ∈ H 2 : ∃x ∈ H ♭ 1 (x, y) ∈ L } .
Indeed in this case we have
L ∩ (H ♭ 1 ⊕ H ♯ 2 ) ≃ L(H ♭ 1 ) ∩ H ♯ 2 and L ⊥ ∩ (H ♭ 1 ⊥ ⊕ H ♯ 2 ⊥ ) ≃ L ⊥ (H ♭ 1 ⊥ ) ∩ H ♯ 2 ⊥ .
Of course each twist is a special morphism. Another example of a special morphism is the one which comes from the Cauchy-Riemann operator. In general, we obtain a special morphism if the operator (and its adjoint) satisfies the following:
• if s = 0 on a hypersurface M and Ds = 0, then s = 0 on the whole component containing M.
In the set of morphisms we can introduce an equivalence relation: we say that L ∼ L ′ if L and L ′ are images of embeddings i, i ′ : H ֒→ H 1 ⊕ H 2 of a Hilbert space H, such that i − i ′ is a compact operator. If L ∼ L ′ , then Ind S 1 ,S 2 (L) = Ind S 1 ,S 2 (L ′ ). If L is a bordism, then L is equivalent to a direct sum of subspaces in coordinates: L ∼ L 1 ⊕ L 2 , L i ⊂ H i , such that L 1 is a finite dimensional perturbation of H ♯ 1 and L 2 is a finite dimensional perturbation of H ♭ 2 . Then Ind S 1 ,S 2 (L) = Ind(H ♭ 1 , L 1 ) + Ind(L 2 , H ♯ 2 ). Suppose, as in Example 8.3, we have an elliptic operator on a closed manifold X which is composed of geometric bordisms. Fix n ∈ N and a sequence of matrices
A i ∈ GL n (B i ) .
Define a bundle Θ {A i } X obtained from trivial ones on the X i 's and twisted along the M i 's. Define bordisms L i (D) ∈ Mor PR (H i , H i+1 ) as in Example 7.6.
Theorem 9.6 Suppose that 3.1 holds for D and D * on each X i for i = 0, . . . , n. Then
ind(D ⊗ Θ {A i } X ) = n Σ_{i=0}^{m} Ind S i ,S i+1 (L i (D)) + Σ_{i=1}^{m} ind( A i ) .
Here, as it was denoted before, A : H ⊕n → H ⊕n is the operator associated to the matrix A ∈ GL n (B). This Theorem is a special case of Theorem 11.1 proved below.
Taking into account Remark 9.3 the difference between the indices of the original and twisted operator can be expressed through the pairing in K-theory.
Theorem 9.7 ind(D ⊗ Θ {A i } X ) − n ind(D) = Σ_{i=1}^{m} ind( A i ) = Σ_{i=1}^{m} ⟨ [A i ], [S H ♭ i ] ⟩ .
The bracket is the pairing between [A i ] ∈ K 1 (M i ) and [S H ♭ i ] ∈ K 1 (M i ).
10 Indices of compositions
In 9.3 we have made some remarks about the dependence of indices on the particular splitting. Now let us see how indices behave under compositions of correspondences. From the considerations in §9 it is easy to deduce:
Proposition 10.1 For the composition
H 1 −φ→ H 1 −L→ H 2 ,
where φ is a twist and L is a morphism we have
Ind S 1 ,S 2 (L • φ) = Ind S 1 ,S 2 (L) + ind(φ) .
The same holds for the opposite type composition
H 1 L − − → H 2 φ − − → H 2 , Ind S 1 ,S 2 (φ • L) = ind(φ) + Ind S 1 ,S 2 (L) .
On the other hand Ind S 0 ,S 2 (L 2 •L 1 ) differs from Ind S 0 ,S 1 (L 1 )+Ind S 1 ,S 2 (L 2 ) in general. This is clear due to the basic example that comes from a decomposition X = X − ∪ M X + . The space L 1 = H − (D) is a correspondence 0 → L 2 (M; ξ) and L 2 = H + (D) a correspondence L 2 (M; ξ) → 0. By 9.6 we have
Ind Id,S 1 (L 1 ) + Ind S 1 ,Id (L 2 ) = ind(D) ,
while L 2 • L 1 : 0 → 0 and Ind Id,Id (L 2 • L 1 ) = 0. Instead we have the following interesting property of indices:
Theorem 10.2 The difference δ(L 1 , L 2 ) = Ind S 0 ,S 1 (L 1 ) + Ind S 1 ,S 2 (L 2 ) − Ind S 0 ,S 2 (L 2 • L 1 )
does not depend on the particular splittings.
Proof. Since
Ind S i−1 ,S i (L i ) = ind(H ♭ i−1 ⊕ L i ⊕ H ♯ i → H i−1 ⊕ H i ) we have to compare indices of the operators α : H ♭ 0 ⊕ L 1 ⊕ H ♯ 1 ⊕ H ♭ 1 ⊕ L 2 ⊕ H ♯ 2 → H 0 ⊕ H 1 ⊕ H 1 ⊕ H 2 and β : H ♭ 0 ⊕ L 2 • L 1 ⊕ H ♯ 2 → H 0 ⊕ H 2 .
The kernel of α is isomorphic to the kernel of the operator which is induced by inclusions
H ♭ 0 ⊕ L 1 ⊕ L 2 ⊕ H ♯ 2 → H 0 ⊕ H 1 ⊕ H 2 .
The former operator factors through
H ♭ 0 ⊕ (L 1 + L 2 ) ⊕ H ♯ 2 → H 0 ⊕ H 1 ⊕ H 2 .
Here the direct sum is replaced by the algebraic sum inside H 0 ⊕ H 1 ⊕ H 2 . The difference of the dimensions of the kernels is equal to the dimension of the intersection
(L 1 ⊕ 0) ∩ (0 ⊕ L 2 ) ⊂ H 0 ⊕ H 1 ⊕ H 2
Now we observe that the kernel of the last operator is isomorphic to the kernel of
β : H ♭ 0 ⊕ L 2 • L 1 ⊕ H ♯ 2 → H 0 ⊕ H 2 .
Therefore the difference of the dimensions of the kernels of α and β is equal to dim((L 1 ⊕ 0) ∩ (0 ⊕ L 2 )), hence it does not depend on the splittings. We have the dual formula for cokernels and L ⊥ i , also not depending on the splittings. ✷ We obtain a procedure of computing the sum of indices
m i=0 Ind S i ,S i+1 (L i )
which would not involve splittings. We choose a pair of consecutive morphisms L i , L i+1 and replace them by their compositions. The composition produces a number δ(L i , L i+1 ) and the sequence of morphisms is shorter:
(L 0 , L 1 , . . . , L m ) ❀ (L 0 , L 1 , . . . , L i • L i+1 , . . . , L m ) + δ(L i , L i+1 ) .
We pick another composition and add its contribution to the previous one. We continue until we get 0 → 0. The sum of the contributions does not depend on the splittings. One can perform compositions in various ways. The sum of contributions stays the same.
Weird decompositions of manifolds
Let {M e } e∈E be a configuration of disjoint hypersurfaces in a manifold X. We assume that orientations of the normal bundles are fixed. For simplicity assume that X and the M e 's are connected. Cutting X along the hypersurfaces we obtain a decomposition of X into connected components. Our situation is well described by an oriented graph:
• the vertices (corresponding to open domains in X) are labelled by the set V,
• the edges (corresponding to hypersurfaces) are labelled by E.
The edge e starts at the vertex v = s(e) corresponding to X v which is on the negative side of M e . It ends at v ′ = t(e), such that X v ′ lies on the positive side of M e . The functions s, t : E → V are the source and target functions. For example the configuration of Fig. 1 is described by the graph of Fig. 2. A sequence of bordisms leads to the linear graph
• X 0 −M 1→ • X 1 −M 2→ . . . −M n−1→ • X n−1 −M n→ • X n .
Note that this is a dual description with respect to the one presented in Example 8.3. Suppose there is given an elliptic operator D : C ∞ (X; ξ) → C ∞ (X; η) and a set of transmission data {φ e } e∈E , that is for each hypersurface M e we are given a matrix-valued function M e → GL n (C). The Riemann-Hilbert problem gives rise to the operator
D [φ] : ⊕ v∈V C ∞ (X v ; ξ) n → ⊕ v∈V C ∞ (X v ; η) n ⊕ ⊕ e∈E C ∞ (M e ; ξ) n ,
D [φ] (f v ) := ( Df v , Σ e: t(e)=v f v|Me − Σ e: s(e)=v φ e (f v|Me ) ) , for f v ∈ C ∞ (X v ; ξ) n .
For each vertex v (i.e. for each open domain X v ) the pair of subspaces
L(v), H out (v) ⊂ H bd (v)
is Fredholm. Let Ind v be its index with respect to the polarizations S e . Moreover, let Ind e = Ind Se,Se (φ e ) = ind(φ e ) denote the index of φ e , see Theorem 2.3.
Theorem 11.1 Assume that D and D * have unique extension property (3.1) on each X v .
Then
ind(D [φ] ) = Σ v∈V Ind v + Σ e∈E Ind e .
In particular:
Corollary 11.2 If there are no twists, i.e. each φ e = 1 ∈ GL 1 (C ∞ (M e )), then
ind(D) = v∈V Ind v .
Proof of 11.1. The general result follows from the case when we have one vertex and one edge starting and ending in it. We just sum up all X v 's and all M e 's. Say that X is obtained from X̃ with ∂X̃ = M s ⊔ M t by identification of M s with M t , see Fig. 3. Then our operator D [φ] is of the form:
D [φ] : C ∞ (X̃; ξ) n → C ∞ (X̃; η) n ⊕ C ∞ (M; ξ) n ,   D [φ] (u) = ( Du , u |Mt − φ(u |Ms ) ) .
We replace ξ ⊕n by ξ and treat φ as an automorphism of ξ. The index of the operator is equal to the index of a Fredholm pair:
Theorem 11.3 Let L ⊂ L 2 (M s ⊔ M t ; ξ) = L 2 (M; ξ) × L 2 (M; ξ) be the space of boundary values of the operator D on X̃. Then ind(D [φ] ) = Ind(L, graph(φ)) .
The proof of Theorem 11.1 relies on this formula. We will give a heuristic proof of 11.3. The precise argument demands introduction and consecutive use of the whole scale of Sobolev spaces with all usual technicalities involved. The reader may also take this formula as the definition of the index of the problem considered above. We calculate the kernel and cokernel of D [φ] :
• the kernel consists of solutions of Du = 0 on X̃ satisfying φ(u |Ms ) = u |Mt . By our assumption u is determined by its boundary value. Thus
ker D [φ] ≃ L ∩ graph φ .
The cokernel consists of
{ (v, w) ∈ C ∞ (X̃; η * ) ⊕ C ∞ (M; ξ * ) : ∀u ∈ C ∞ (X̃; ξ)   ⟨Du, v⟩ + ⟨u |Mt − φ(u |Ms ), w⟩ = 0 } .
Let G : ξ |M → η |M be the isomorphism of the bundles defined by the symbol of D as in [PS]. It follows that
• D * v = 0 (since we can take any u with support in intX)
• by the Green formula ⟨Du, v⟩ = ⟨Gu |Ms , v |Ms ⟩ + ⟨Gu |Mt , v |Mt ⟩
• since u |Ms and u |Mt may be arbitrary it follows that
G * (v |Ms ) = −φ * w, G * (v |Mt ) = w, • therefore v |Ms = −G * −1 φ * G * (v |Mt ).
Now we use the identification
G * × G * : L 2 (M s ; η * ) × L 2 (M t ; η * ) → L 2 (M s ; ξ * ) × L 2 (M t ; ξ * )
under which L ⊥ is equal to the space of boundary values H(D * ) and
(graph φ) ⊥ = (graph(−G * −1 φ * G * )) op .
(Here the opposite correspondence R op is defined by (x, y) ∈ R op ≡ (y, x) ∈ R.) In other words φ and G * −1 φ * G * are adjoint. Since the boundary values of v determine v we can identify
coker D [φ] ≃ H(D * ) ∩ (−graph(G * −1 φ * G * )) op ≃ L ⊥ ∩ (graph φ) ⊥ .
✷ Proof of 11.1, continuation. After fixing a splitting of L 2 (M; ξ) = H e , we have in our
notation H in v = H ♭ ⊕ H ♯ , H out v = H ♯ ⊕ H ♭ . By 2.3 there exists a linear isomorphism Ψ : H ⊕ H → H ⊕ H almost commuting with P ♭ ⊕ P ♯ , such that L = Ψ(H ♭ ⊕ H ♯ ). We parameterize the graph of φ by H ♯ ⊕ H ♭ using the composition
Φ = [ 1  0 ; φ  1 ] • [ P ♯  P ♭ ; P ♭  P ♯ ] .
Thus Ind(graph φ, L) = ind( Φ • [ P ♯  0 ; 0  P ♭ ] + Ψ • [ P ♭  0 ; 0  P ♯ ] ) .
Since Ψ almost commutes with P ♭ ⊕ P ♯ , the considered operator is almost equal to the composition
( Φ • [ P ♯  0 ; 0  P ♭ ] + [ P ♭  0 ; 0  P ♯ ] ) • ( [ P ♯  0 ; 0  P ♭ ] + Ψ • [ P ♭  0 ; 0  P ♯ ] ) .
Now we use additivity of indices. The index of the second term is equal to Ind v . It remains to compute the first index, that is ind [ 1  P ♭ ; φP ♯  φP ♭ + P ♯ ] . If we conjugate the above matrix by the symmetry [ P ♯  P ♭ ; P ♭  P ♯ ] we obtain [ P ♯ + P ♭ φ  0 ; P ♭ + P ♯ φ  1 ] . Its index is equal to ind(P ♯ + P ♭ φ) = Ind e . ✷
The additivity of the index is not a surprise due to the well known integral formula for the analytic index. What is interesting in Theorem 11.2 is that the contribution coming from separate pieces of X is also an integer number. This partition into local indices depends only on the choice of splittings along hypersurfaces.
Index of a fan
We will give another formula for the index of D [φ] which is expressed in terms of the twisted fan {L(i)}. The general reference for fans is [Bo2]. Let us first say what we mean by a fan: it is a collection of spaces L 1 , L 2 , . . . , L n ⊂ H which is obtained from a direct sum decomposition
H 1 ⊕ H 2 ⊕ . . . ⊕ H n = H
by a sequence of twists Ψ 1 , Ψ 2 , . . . , Ψ n ∈ GL(H), i.e. L i = Ψ i (H i ). We assume that each Ψ i almost commutes with each projection P j of the direct sum. We say that the fan {L(i)} is a perturbation of the direct sum decomposition H = ⊕H i .
Theorem 12.1 (Index of a Fredholm fan) Let L 1 , L 2 , . . . , L n ⊂ H be a fan. Then the following numbers are equal:
1. the index of the map ι : L 1 ⊕ L 2 ⊕ . . . ⊕ L n → H, which is the sum of inclusions, 2. the index of the operator Ψ 1 P 1 + Ψ 2 P 2 + . . . + Ψ n P n : H → H,
3. the sum
Σ_{i=1}^{n} ind(P i Ψ i : H i → H i ) = Σ_{i=1}^{n} ind(P i : L i → H i ) ,
4. the difference
Σ_{i=1}^{n−1} dim((L 1 + . . . + L i ) ∩ L i+1 ) − codim(L 1 + . . . + L n ) .
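Before the proof, an illustration added here and not in the original: for n = 2 the difference in 4. reduces to
\[ \dim(L_1\cap L_2) - \operatorname{codim}(L_1+L_2) , \]
which is exactly the index of the Fredholm pair (L 1 , L 2 ), in agreement with the description of ind(ι) in Proposition 2.1.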
Proof. The equality (1.=2.) follows from the fact that Ψ i : H i → L i is a parameterization of L i . The equality (2.=3.) follows since Ψ 1 P 1 + Ψ 2 P 2 + . . . + Ψ n P n ∼ Π_{i=1}^{n} (P 1 + . . . + Ψ i P i + . . . + P n ) .
To prove the equality (1.=4.) one checks that dim(ker ι) = Σ_{i=1}^{n−1} dim((L 1 + . . . + L i ) ∩ L i+1 ) .
This is done by induction with respect to n. ✷ Let us assume that the graph associated to our configuration does not contain edges starting and ending in the same vertex (e.g. the situation on fig.1 is not allowed). Then H bd (v) is a summand in H = e∈E H(e) (there are no terms H(e) appearing twice). Moreover, {L(v)} v∈V is a fan in H which is a perturbation of the direct sum decomposition
H = v∈V H in (v) .
Consider a fan which is twisted with respect to {L(v)} v∈V . Set (φ✶L)(v) = φ v (L(v)), where φ v is an automorphism of H:
φ v (f ) := φ e (f ) if f ∈ H(e) with s(e) = v , and φ v (f ) := f if f ∈ H(e) with s(e) ≠ v .
Theorem 12.2 Assume that D and D * have the unique extension property (3.1) on each X v . Then the index of D [φ] is equal to the index of the Fredholm fan φ✶L.
Proof. Combining Theorem 11.1 with 12.1.3 it remains to prove that for each vertex v
ind(P in v : (φ✶L)(v) → H in (v)) = Ind v + Σ e : s(e)=v Ind e .
If there are no twists, then the equality follows from Proposition 2.2. In general the proof follows from additivity of ind, see Theorem 2.3. ✷
Proposition 2.1 A pair H ± is Fredholm if, and only if the map ι : H + ⊕ H − → H induced by the inclusions is a Fredholm operator. Moreover the indices are equal:
Assumption 3.1 (Unique Extension Property) Let ǫ = + or − and let f ∈ C ∞ (X ǫ ; ξ). If Df = 0 and f |M = 0 then f = 0. If D has the unique extension property, then ker(D) ≃ H + (D) ∩ H − (D) .
For the elliptic operator D the pair H ± (D) ⊂ L 2 (M; ξ) is a good Fredholm pair.
Example 8.1 Let (H + , H − ) be an admissible Fredholm pair in H. Then we have a chain of bordisms with respect to the polarization defined by P ♯ = P + (or 1 − P − ), see Example 6.4.
Example 8.2 Each morphism L ∈ Mor PR (H 1 , H 2 ) can be completed to such a chain: just take L 1 = (0 ⊕ H ♭ 1 ) ⊂ (0 ⊕ H 1 ) and L 2 = (H ♯ 2 ⊕ 0) ⊂ (H 2 ⊕ 0).
Example 10.3 If D and D * on X i and X i+1 have the unique extension property 3.1, then δ(L i , L i+1 ) = 0 as long as the gluing process along M i+1 does not create a closed component of X. If it does, then δ(L i , L i+1 ) equals the index of D restricted to this component.
For e ∈ E let us set H(e) = L 2 (M e ; ξ). The symbol of D together with the choice of orientations of the normal bundles define polarizations of H(e). Let us fix particular splittings of the spaces H(e) encoded in the symmetries S e . Set H bd (v) = ⊕ e: s(e)=v H(e) ⊕ ⊕ e: t(e)=v H(e) , H in (v) = ⊕ e: s(e)=v H ♭ (e) ⊕ ⊕ e: t(e)=v H ♯ (e) and H out (v) = ⊕ e: s(e)=v H ♯ (e) ⊕ ⊕ e: t(e)=v H ♭ (e) . Let L(v) ⊂ H bd (v) be the space of boundary values of solutions of Df v = 0 on X v . It is a perturbation of H in (v).
then it is Fredholm if and only if the restriction P ♭ |H − : H − → H ♭ is a Fredholm operator. Moreover the indices are equal:
Alinhac, S.: Non unicité du problème de Cauchy pour des opérateurs de type principal II. Ann. of Math. 117 (1983), 77-108.
Atiyah, M. F.; Patodi, V. K.; Singer, I. M.: Spectral asymmetry and Riemannian geometry. I. Math. Proc. Camb. Phil. Soc. 77 (1975), 43-69.
Birman, M. Sh.; Solomyak, M. Z.: On subspaces admitting pseudodifferential projections. (Russian) Vestn. Leningr. Univ. 1982, No. 1, Mat. Mekh. Astron. No. 1, 18-25 (1982); English transl. in Vestn. Leningr. Univ., Math. 15, 17-27 (1983).
Bojarski, Bogdan: The abstract linear conjugation problem and Fredholm pairs of subspaces. In: Volume in Memoriam I. N. Vekua: Differential and Integral Equations. Boundary Value Problems. Publications of I. N. Vekua Institute of Applied Mathematics, Tbilisi 1979, 45-60.
Bojarski, Bogdan: The geometry of the Riemann-Hilbert problem. In: Booss-Bavnbek, Bernhelm (ed.) et al., Geometric aspects of partial differential equations. Proceedings of a minisymposium on spectral invariants, heat equation approach, Roskilde, Denmark, September 18-19, 1998. Contemp. Math. 242, American Mathematical Society, Providence, RI (1999), 25-33.
Bojarski, Bogdan: The geometry of the Riemann-Hilbert problem II. In: Boundary value problems, integral equations and related problems (Beijing/Chengde, 1999), 41-48, World Sci. Publishing, River Edge, NJ, 2000.
Bojarski, Bogdan; Weber, Andrzej: Generalized Riemann-Hilbert Transmission and Boundary Value Problems, Fredholm Pairs and Bordisms. Bull. Polish Acad. Sci. Math. 50, No. 4 (2002), 479-496.
Booss-Bavnbek, Bernhelm; Wojciechowski, Krzysztof P.: Desuspension of splitting elliptic symbols I & II. Ann. Glob. Anal. Geom. 3, No. 3 (1985), 337-383; Ann. Glob. Anal. Geom. 4, No. 3 (1986), 349-400.
Booss-Bavnbek, Bernhelm; Wojciechowski, Krzysztof P.: Elliptic boundary problems for Dirac operators. Mathematics: Theory & Applications. Birkhäuser, Boston, MA (1993).
Connes, Alain: Non-commutative differential geometry. Publ. Math., Inst. Hautes Étud. Sci. 62 (1985), 257-360.
Connes, Alain: Noncommutative geometry. Academic Press, Inc., San Diego, CA, 1994.
Dai, Xianzhe; Zhang, Weiping: Splitting of the family index. Comm. Math. Phys. 182 (1996), No. 2, 303-317.
Kasparov, G. G.: Topological invariants of elliptic operators. I. K-homology. Math. USSR-Izv. 9, No. 4 (1975), 751-792; translated from Izv. Akad. Nauk SSSR Ser. Mat. 39, No. 4 (1975), 796-838.
Palais, R. S.; Seeley, R. T.: Cobordism invariance of the analytical index. In: Palais, R. S., Seminar on the Atiyah-Singer index theorem. Annals of Mathematics Studies, No. 57, Princeton University Press, Princeton.
Pliś, A.: Non-uniqueness in Cauchy's problem for differential equations of elliptic type. J. Math. Mech. 9 (1960), 557-562.
Pressley, A.; Segal, G.: Loop groups. Oxford Mathematical Monographs, Oxford Science Publications. The Clarendon Press, Oxford University Press, New York, 1986.
Scott, S. G.; Wojciechowski, K. P.: The ζ-determinant and Quillen determinant for a Dirac operator on a manifold with boundary. Geom. Funct. Anal. 10 (2000), 1202-1236.
Segal, G. B.: Topological Field Theory ('Stanford Notes'). Available at http://www.cgtp.duke.edu/ITP99/segal/
Seeley, R. T.: Singular integrals and boundary value problems. Amer. J. Math. 88 (1966), 781-809.
B. B.: Institute of Mathematics PAN, ul. Śniadeckich 8, 00-950 Warszawa, Poland, [email protected]
A. W.: Institute of Mathematics, Warsaw University, ul. Banacha 2, 02-097 Warszawa, Poland, [email protected]
| []
|
[
"Longitudinal wall shear stress evaluation using centerline projection approach in the numerical simulations of the patient-based carotid artery",
"Longitudinal wall shear stress evaluation using centerline projection approach in the numerical simulations of the patient-based carotid artery"
]
| [
"Kevin Richter \nInstitute of Mathematics\nFaculty of Natural and Enviromental Sciences\nUniversity of Koblenz-Landau\nGermany\n",
"Tristan Probst \nInstitute of Mathematics\nFaculty of Natural and Enviromental Sciences\nUniversity of Koblenz-Landau\nGermany\n",
"Anna Hundertmark \nInstitute of Mathematics\nFaculty of Natural and Enviromental Sciences\nUniversity of Koblenz-Landau\nGermany\n",
"Pepe Eulzer \nFaculty of Mathematics and Computer Science\nUniversity of Jena\nGermany\n",
"Kai Lawonn \nFaculty of Mathematics and Computer Science\nUniversity of Jena\nGermany\n"
]
| [
"Institute of Mathematics\nFaculty of Natural and Enviromental Sciences\nUniversity of Koblenz-Landau\nGermany",
"Institute of Mathematics\nFaculty of Natural and Enviromental Sciences\nUniversity of Koblenz-Landau\nGermany",
"Institute of Mathematics\nFaculty of Natural and Enviromental Sciences\nUniversity of Koblenz-Landau\nGermany",
"Faculty of Mathematics and Computer Science\nUniversity of Jena\nGermany",
"Faculty of Mathematics and Computer Science\nUniversity of Jena\nGermany"
]
| []
| In this numerical study areas of the carotid bifurcation and of a distal stenosis in the internal carotid artery are closely observed to evaluate the patient's current risks of ischemic stroke. An indicator for the vessel wall defects is the stress the blood is exerting on the surrounding vessel tissue, expressed standardly by the amplitude of the wall shear stress vector (WSS) and its oscillatory shear index. In contrast, our orientation-based shear evaluation detects negative shear stresses corresponding with reversal flow appearing in low shear areas. In our investigations of longitudinal component of the wall shear vector, tangential vectors aligned longitudinally with the vessel are necessary. However, as a result of stenosed regions and imaging segmentation techniques from patients' CTA scans, the geometry model's mesh is non-smooth on its surface areas and the automatically generated tangential vector field is discontinuous and multi-directional, making an interpretation of the orientation-based risk indicators unreliable. We improve the evaluation of longitudinal shear stress by applying the projection of the vessel's center-line to the surface to construct smooth tangetial field aligned longitudinaly with the vessel. We validate our approach for the longitudinal WSS component and the corresponding oscillatory index by comparing them to results obtained using automatically generated tangents in both rigid and elastic vessel modeling as well as to amplitude based indicators. The major benefit of our WSS evaluation based on its longitudinal component for the cardiovascular risk assessment is the detection of negative WSS indicating persitent reversal flow. This is impossible in the case of the amplitude-based WSS. | 10.1080/10255842.2023.2185478 | [
"https://export.arxiv.org/pdf/2204.04018v3.pdf"
]
| 252,438,824 | 2204.04018 | 65b8deec23d3812acb4537bf8d1ce319321239a7 |
Longitudinal wall shear stress evaluation using centerline projection approach in the numerical simulations of the patient-based carotid artery
September 23, 2022
Kevin Richter
Institute of Mathematics
Faculty of Natural and Enviromental Sciences
University of Koblenz-Landau
Germany
Tristan Probst
Institute of Mathematics
Faculty of Natural and Enviromental Sciences
University of Koblenz-Landau
Germany
Anna Hundertmark
Institute of Mathematics
Faculty of Natural and Enviromental Sciences
University of Koblenz-Landau
Germany
Pepe Eulzer
Faculty of Mathematics and Computer Science
University of Jena
Germany
Kai Lawonn
Faculty of Mathematics and Computer Science
University of Jena
Germany
Longitudinal wall shear stress evaluation using centerline projection approach in the numerical simulations of the patient-based carotid artery
September 23, 2022
Keywords: hemodynamics, fluid-structure interaction, finite element method, cardiovascular risk indicators, longitudinal wall shear stress, oscillatory shear index
In this numerical study areas of the carotid bifurcation and of a distal stenosis in the internal carotid artery are closely observed to evaluate the patient's current risks of ischemic stroke. An indicator for the vessel wall defects is the stress the blood is exerting on the surrounding vessel tissue, expressed standardly by the amplitude of the wall shear stress vector (WSS) and its oscillatory shear index. In contrast, our orientation-based shear evaluation detects negative shear stresses corresponding with reversal flow appearing in low shear areas. In our investigations of the longitudinal component of the wall shear vector, tangential vectors aligned longitudinally with the vessel are necessary. However, as a result of stenosed regions and imaging segmentation techniques from patients' CTA scans, the geometry model's mesh is non-smooth on its surface areas and the automatically generated tangential vector field is discontinuous and multi-directional, making an interpretation of the orientation-based risk indicators unreliable. We improve the evaluation of longitudinal shear stress by applying the projection of the vessel's center-line to the surface to construct a smooth tangential field aligned longitudinally with the vessel. We validate our approach for the longitudinal WSS component and the corresponding oscillatory index by comparing them to results obtained using automatically generated tangents in both rigid and elastic vessel modeling as well as to amplitude based indicators. The major benefit of our WSS evaluation based on its longitudinal component for the cardiovascular risk assessment is the detection of negative WSS indicating persistent reversal flow. This is impossible in the case of the amplitude-based WSS.
Introduction
The importance of a healthy and functioning cardiovascular system is reflected in the WHO death statistics of 2019. Ischemic stroke was the disease responsible for the highest proportion of deaths across all countries and wealth levels [37]. The cause of ischemic stroke is an arterial vascular disease, which in its most common form, atherosclerosis, is an inflammatory response of the vessel wall to lipid metabolism disturbances and endothelial stress. This leads to the formation of multi-focal plaques and thus to the narrowing and hardening of the arteries and consequently to an insufficient supply of oxygen to the brain [8]. A special role in atherosclerosis development plays the carotid artery, which is responsible for an estimated 18 -25 % of thromboembolic strokes [17]. In the carotid bifurcation the common carotid artery splits into the external and the internal carotid artery. While the former is responsible for supplying blood to the head and upper neck organs, the latter supplies blood to the brain. Both, the death toll of ischemic strokes and the drastic increase in the general prevalence of atherosclerosis, which is related to demographic change and the accompanying burden on health and care system, make it necessary to adequately address the danger posed by atherosclerosis. To provide necessary tools predicting the locations of sites susceptible to atherosclerotic damage, as well as to make recommendations for their optimal treatment, is one of the main goals of modern medicine.
The predictions of atherosclerosis and further cardiovascular risk through numerical simulations have become popular over the last decades. Especially in the field of fluid-structure-interaction there is an inexhaustible range of publications. They span from the incorporation of different mathematical models, such as non-Newtonian fluids [16,19], different structure models, e.g. shell models and membrane models [5,6], to exploring new numerical methods, which effectively tackle the multi-physicality with splitting techniques [4,15,33], just to name a few.
For the purpose of risk quantification, parameters derived from numerical flow data are of great interest, e.g. wall shear stress (WSS) [1,2,22,34,40], or oscillatory shear index (OSI) [1, 27,38], both referred to be correlated with cardiovascular risk. It has been reported [21], that apart from regions with high amplitude WSS, areas with low and temporary oscillating WSS promote atherosclerotic processes. The multi-directional behavior of WSS has also been linked to potential risk zones, see results on transversal WSS and other metrics in [14].
Some of the recent results report on proper visualisation tools for the localisation of cardiovascular risk zones. These are based on the interplay of imaging techniques for exploring the vessel morphology and simulated data as velocity streamlines or WSS, see, e.g., [9,10] and further citations therein. For those tools and their underlying numerical simulations the patient's unique vessel morphology plays a crucial role for reliable risk predictions of the above mentioned parameters.
The aim of our study is reliable numerical modeling and a quantification of the impact of the fluid flow dynamics on endothelial stresses in the carotid artery of a clinical patient. The obtained numerical data for a set of patients serve as the training set for machine learning algorithms within the research project MLgSA, see Acknowledgement, to explore potential stroke risks. The domain of interest is the carotid bifurcation area with its separation of the common carotid artery into the internal carotid artery (ICA) and the external carotid artery (ECA). The 3D lumen of the carotid vessel tree including its stenosed region, see Fig. 1, as well the shape of the surrounding wall tissue, Fig. 2, have been reconstructed from the computer tomography angiography (CTA) data set of a clinical patient with the method described in [9]. This carotid vessel tree as well as its arterial wall shape have been imported as fluid and solid regions in the finite-element-based software Comsol Multiphysics. We perform numerical simulations based on the incompressible fluid flow model for both rigid walls as well as compliant walls. For the latter the fluid-structure interaction (FSI) model describing the interplay of the fluid and thick compliant walls is considered. The corresponding wall tissue mechanics is modeled by the deformation of a linear elastic material.
The obtained numerical results are used to evaluate the hemodynamic risk parameters: the wall shear stress (WSS) and the oscillatory shear index (OSI), measuring the temporal change of the wall shear stress direction over one cardiac cycle. Herein, we consider the longitudinal component of WSS and compare it to the results for the amplitude of the WSS vector. Moreover, we evaluate the corresponding oscillatory indices using the temporal mean of the longitudinal WSS component, as well as the amplitude of the mean WSS vector. We give their hemodynamic interpretation and discuss the benefits of the orientation-based WSS evaluation in view of reversal flow detection.
For the calculation of the longitudinal WSS value the choice of proper tangential vectors is crucial. The analogous sensitivity of the transversal WSS to the orientation of the tangential field has been addressed in [11,14,23,24]. Regarding the realistic geometries with spatially non-smooth surface topology, the mesh-based erratic tangential vector field leads to problematic, spatially discontinuous behavior of longitudinal (or transversal) WSS on the surface of the extracted geometry. We address this problem and improve the evaluation of the longitudinal component of WSS in complex realistic geometries. Our approach is based on choosing tangential vectors obtained by the projection of the centerline of the vessel tree to the vessel surface, previously used by, e.g. Morbiducci et al. [25] or Arzani & Shadden [2]. We present numerical data and two wall parameters WSS and OSI derived from the projection method and from the automatically generated, mesh-based tangential vector fields. Further, we compare the values obtained for rigid as well as compliant carotid vessel walls in order to examine the importance of the compliance of wall tissue in considered mathematical model of the carotid artery.
Mathematical modeling of the carotid flow
To investigate the individual risk of arterial defects and imminent health issues the individual examination of the carotid geometry can give a more reliable assessment of the patient's current situation. A CTA scan of the patient is processed to a detailed three-dimensional arterial geometry, as described in [9] and the references therein. This technique was applied on the arterial lumen and expanded on the vessel wall domain as well. The resulting rigid domain of arterial lumen which includes part of the common carotid, its branching into internal and external carotid artery and two sub-branches of the latter, compare Fig. 1, has been considered for numerical simulations. On the other hand, simulations which featured fluid-structure-interaction are supplemented with a vessel domain which also reveals stenotic regions and the distinct wall thickness distribution of the patient throughout the area of interest, see Fig. 2. By cutting the computational domain proximal to the bifurcation area for the flow to fully develop and distal such that the boundary conditions don't affect the dynamics of the flow, the geometry is defined.
In particular, we denote with Ω t = Ω f t ∪ Ω s t , t ∈ [0, T ] the deforming fluid and structural domain in time t, respectively and their shared boundary with Γ f si t = Ω f t ∩ Ω s t . To model the blood flow in a deforming vessel the incompressible Navier-Stokes equations in a moving domain are considered in the arbitrary Lagrangian-Eulerian (ALE) formulation. They read as
ρ f Du Dt + ρ f ((u − w) · ∇) u = ∇ · T f , ∇ · u = 0 in Ω f t ,(1)
with the Cauchy stress tensor T f = −pI + 2µD(u) and the strain rate tensor D(u) = 1/2 (∇u + ∇u T ). The fluid velocity u and pressure p are solved in the fluid domain for constant viscosity µ and density ρ f . Here w(X, t) = ∂x(X, t)/∂t denotes the velocity of the moving fluid domain (the ALE domain velocity), and the fluid boundary is composed of Γ f = Γ f in ∪ Γ f out ∪ Γ f si t .
On the fixed inflow boundary Γ f in a pulsating blood flow-rate was implemented, based on measured data taken from [26] as shown in Fig. 8. The outflow was assumed to have zero normal stress (do-nothing condition), that is
T f n out = −p out n out ≡ 0 on Γ f out .(2)
The mechanics of the arterial wall is modeled by a linear elastic material which reads in the reference configuration as,
ρ s ∂ 2 d/∂t 2 = ∇ · (F S) T + f s , in Ω s 0 , with S = JF −1 (C : ε) F −T . (3)
Here d denotes the deformation, F = ∂x/∂X , x ∈ Ω s t , X ∈ Ω s 0 , the deformation gradient, J = det F, and the second Piola-Kirchhoff tensor is denoted by S. Outer forces acting on the volume are incorporated in f s . The elasticity tensor C = C(E, ν) depends on the Young's modulus E and the Poisson's ratio ν. The elastic strain tensor is given by the Green-Lagrange strain ε = 1/2 (F T F − I) = 1/2 ( ∇d + (∇d) T + ∇d (∇d) T ). Note that for small deformation gradients it holds S ≈ C : ε.
JT f n f = −(F S) T n s on Γ f si 0 .(4)
Here,ũ,T f stand for fluid quantities transformed to the reference fluid-solid layer. Note that the model considering rigid walls consists only of the fluid sub-problem (1) defined in the carotid lumen Ω f = Ω f 0 , moreover w = 0 and Du dt = ∂u ∂t . In analogy to the velocity continuity condition in (4) the no-slip condition u = 0 is prescribed on the vessel wall surface denoted by Γ f w in case of rigid walls.
Hemodynamic indicators 3.1 Wall shear stress
The wall shear stress (WSS) measures the endothelial stress exerted by blood on the vessel tissue. To explain the relationship between WSS and zones susceptible to atherosclerosis two main explanatory approaches can be found in the literature. The high shear stress theory identifies sites with prolonged high WSS as risk zones, the low shear stress theory considers also sites with oscillating and low WSS as potentially at risk, the correlation was reported in [21]. For a systematic review of both see [32]. Principally, the wall shear stress is defined on the interface boundary Γ f si t , or on the rigid vessel wall Γ f w , as the projection of the normal stress vector t f = −T f n f onto the tangential plane,
τ w = t f − ( t f · n f ) n f = ( t f · t 1 ) t 1 + ( t f · t 2 ) t 2 ,(5)
where t 1 , t 2 are unit vectors spanning the tangential plane. Alternatively a non-directional quantity describing the amplitude of the wall shear stress vector is frequently evaluated [28] as
τ a w = τ w = ( t f · t 1 ) 2 + ( t f · t 2 ) 2 .(6)
Different direction-based indicators of WSS have been used to measure the stress exerted by the fluid as well, [1, 11,14,23,24]. For cylinder-like or other simple geometrical objects vector quantities such as the rotary (transversal) or longitudinal component of the WSS can be considered. In this study, we evaluate the longitudinal component of the wall shear stress vector (longitudinal WSS), aiming to track the backward flow in the carotid artery bifurcation vessel tree, which is defined as
τ w = t f · t = −T f n f · t . (7)
Here, t is a vector of the tangential plane pointing in the longitudinal direction, i.e., the main flow, called "longitudinal tangent" in what follows. In (bi-)directional evaluations of WSS, i.e. considering either its longitudinal or transversal component, the proper choice of tangential vector fields on complex surfaces is crucial for its evaluation, as it is the case in (7) for longitudinal WSS. Due to the non-smooth surface of the studied 3D computational geometry, the specification of the proper longitudinal tangent vectors may be problematic. The direction of both tangent vectors t 1 , t 2 spanning the tangential plane, obtained automatically in Comsol for the surface topology, on neighboring mesh element surfaces may jump and could very well point to the opposite direction of the main flow. This behavior can be observed for t 2 on the left in Fig. 3.
In our first approach the tangent vector t 2 shown in Fig. 3 has been chosen as the longitudinal tangent vector t , since it fits the longitudinal direction better than t 1 . However, its discontinuous spatial behavior and its opposing direction to the main flow at some areas would affect the value of longitudinal WSS (7) substantially. In order to correct the orientation of t 2 we change it to its diametrically opposite vector. For that we switched its sign according to the angle θ between t 2 and an overall flow vector v. For θ ∈ (π/2, 3π/2) it holds t 2 · v ∝ cos(θ) < 0. Thus, we define the tangential vector to be used in (7) as
t = t 2 sign( t 2 · v).(8)
The overall flow direction vector v has to be specified locally for different sections of the computational domain tree. Note, that t defined by (8) is revolved from v by less then π/2 and thus aligned almost with the main flow, but it still has the same jumping behavior as the vector t 2 , compare Fig. 3 (right). In what follows we present an improved approach for constructing a proper longitudinal tangential field t , which is based on the alignment of the carotid tree centerline and can therefore be utilized globally.
Projection method for tangential field
As depicted above in Fig. 3, on complex surfaces the automatically rendered tangent vectors t 2 do not follow the overall flow direction in some topologically complicated areas. In order to overcome this difficulty we apply the approach based on the knowledge of the centerline of the vessel tree and its projection onto the vessel surface, similarly to the method of Morbiducci et al [25], presented in Fig. 5.
At first the centerline is obtained as the set of center-points of the maximally inscribed spheres. Here, we use the 3D Voronoi diagram of the geometry to find and connect the center-points, the method described and implemented in the vascular modeling toolkit [18]. The method yields robust and detailed results with a resolution of about 3000 points. After getting the 3D curves of the centerline, its tangential vectors c are obtained by subtraction of two points of the curve, see Afterwards, the centerline tangent vectors c are projected to the vessel surface, i.e., into each surface point P k . This is done in two steps, first c is extrapolated to the surface points by the geometry tool extrapolate with linear settings in Comsol. Then, the extrapolated centerline tangents c are projected into the tangential plane of the carotid artery surface by subtracting its normal component,
t = c − ( c · n f ) n f c − ( c · n f ) n f ,(9)
here n f are the normal vectors of the carotid surface. The procedure is illustrated in Fig. 5, where the centerline tangent is denoted by C and the corresponding longitudinal component of the WSS by WSS a . The resulting longitudinal tangential field t , see Fig. 6, shows a more uniform alignment at first sight compared to the flipped tangential field (8) presented in Fig. 3. In this manner, longitudinal tangent vectors t derived from the centerline, with proper unidirectional behavior on the surface, are implemented in Comsol. Both, projected (9) as well as flipped tangents (8)
Oscillatory shear index
The oscillatory shear index (OSI) introduced by Ku et al. [21] is a common indicator for disturbed flow. It characterises the temporal oscillations of WSS through its directional change at any point on the surface in the considered time period. The degree of oscillation is expressed by the ratio of averaged WSS compared to its averaged amplitude over the whole time interval, i.e in means of temporal mean values. Note, that OSI does not express the frequency of the sign change of the WSS. We introduce two definitions of OSI, which can be found in literature, see e.g., [1, 3,27,28,35,38,40], based either on the amplitude of the temporal mean of the WSS vector, (6), or on the size and sign of the temporal mean of its longitudinal component (7),
OSI = 1 2 1 − T 0 τ w dt T 0 τ w dt ,(10)OSI = 1 2 1 − T 0 τ w dt T 0 |τ w |dt .(11)
Note, that these formulas differ in the ratio and the norm of the ratio of temporal mean WSS. Consequently, formula and 1 describe predominantly negative WSS over the whole time interval. The definition (11) thus allows to locate not only sites of oscillating WSS, but also sites with long-lasting or predominantly negative WSS. Thus, in contrast to (10), (11) provides an index that can represent both indicators of low shear stress theory. In what follows, we refer to the directional definition (11) when mentioning the OSI, but we also evaluate the OSI defined with the use of the WSS amplitude (10).
Numerical method and convergence study
The numerical simulations have been performed with Comsol Multiphysics
Convergence study
The numerical mesh convergence study has been performed for the fluid flow problem with solid vessel walls to restrict the computational costs. The spatial mesh error was computed using a set of eight meshes, approximately doubling the mesh element number from mesh i to mesh i + 1. To compare numerical solutions on different meshes with non-coinciding mesh nodes, the linear shape functions (P1 finite elements) are applied. The solution difference is realised with use of join solution feature in Comsol by projection of the lower-mesh solution onto the higher mesh. The spatial discretization error has been evaluated for fluid velocities and for the longitudinal WSS in means of weighted L 2 -norm of the difference of the reference and actual i-th mesh solution obtained on meshes Nr. 1-6,
err(u i ) := 1 |Ω f | u i − u ref L 2 (Ω f ) , err(τ i ) := 1 |Γ f w | (τ w ) i − (τ w ) ref L 2 (Γ f w ) , i = 1, . . . 6,(12)
in the time-point of the maximal flow of the second cardiac cycle, t = 1.1 s. Here, u i , (τ w ) i are numerical data obtained on the i-th mesh. The reference solution has been obtained on mesh Nr. 8 consisting of a total of 8 717 584 tetrahedra and prism elements. The mesh errors (12) have been evaluated in the chosen section of our computational mesh presented in Fig. 9 (left), which is identical to the geometrical region of all numerical results presented. The mesh element sizes, measured by the diameter the maximum inscribed sphere, have been averaged for each mesh and spatial errors have been related to the mean mesh sizes h i . The descending sequence of h i starting with h 1 = 1.054 mm is presented in Table 1, whereby the decrease factor a i := h i /h i+1 lies between 1.271 and 1.22. In Fig. 7 the errors with respect to h i are presented in logarithmic scale. For the decreasing error curves one can observe a slope of approximately one, which slightly differs for higher and lower mesh sizes. In analogy to the error estimation for finite element method with a uniform mesh and constant element sizes h, u h −u exact L 2 ≈ Ch p , C < ∞, where the convergence order p is identified with the slope of the logarithmic error curves for different h, we denote the slope of our logarithmic error curve by the experimental order of convergence (EOC). EOC can be obtained by comparing errors of two consecutive meshes, EOC(u i ) = log 10 (err(u i )) − log 10 (err(u i+1 )) log 10 h i − log 10 h i+1 = log ai err(u i ) err(u i+1 ) ,
The results of our mesh convergence study are summarised in Table 1 and show good convergence of the numerical velocities as well as of the longitudinal wall shear stresses to the reference numerical solution. The EOC, initially lower than one, increases continuously with decreasing mesh size, with a small deviation in the case of (τ_w)_3. The averaged EOC for the velocities is about 1.13 and for the longitudinal WSS about 1.21; altogether a super-linear averaged convergence to the reference solution is obtained in our simulations with rigid vessel walls. Applying the finite element method with linear P1 elements, a second order convergence rate is expected for the spatial error. The decreased experimental order of convergence is caused by the discontinuity of the boundary conditions for the velocity on the edges shared by the inflow boundary Γ_f^in and the vessel wall boundary Γ_f^w, where the no-slip boundary condition meets the non-zero inflow velocity, considered to be constant over Γ_f^in.
Table 1: Spatial discretization errors related to the mean mesh size.

mesh (i) | # of elements | h_i (mm) | err(u_i) (m/s) | EOC(u_i) | err(τ_i) (N/m²) | EOC(τ_w)_i
1        | 6.106E+04     | 1.054    | 1.689E-01      | 0.613    | 3.046E+00       | 0.726
2        | 1.250E+05     | 0.829    | 1.458E-01      | 0.736    | 2.559E+00       | 0.896
3        | 2.484E+05     | 0.667    | 1.241E-01      | 0.956    | 2.104E+00       | 0.788
4        | 4.584E+05     | 0.544    | 1.021E-01      | 1.497    | 1.792E+00       | 1.480
5        | 1.049E+06     | 0.429    | 7.161E-02      | 1.819    | 1.262E+00       | 2.179
6        | 2.079E+06     | 0.351    | 4.965E-02      | -        | 8.135E-01       | -
Results and discussion

The evaluations of the numerical data and the considered wall parameters are compared for four model configurations: for the two different vessel wall models (rigid and elastic vessel walls), two different tangential fields t, the flipped (8) and the projected tangential field (9), have been used to calculate the longitudinal WSS (7) and the corresponding OSI (11); an overview of the model configurations can be found in Table 2. Additionally, the amplitude-based wall parameters (6), (10) are evaluated as well and compared to their directional counterparts. In Fig. 10 the blood velocity streamlines in the stenosed region of the ICA are presented at the time of maximal flow rate, t = 1.1 s. The observed vortices are located around the stenotic bulges in the area adjacent to the stenosis. Note that the appearance of vortices and backward flow is related to high OSI values, observed in Fig. 16, and may imply progression of lesions along the carotid artery tree, which is a common hemodynamic hypothesis [36]. Note that the amplitude-based OSI presented in Fig. 17 only indicates the edges of these areas. On the other hand, high-valued and unidirectional velocity streamlines are observed along the inner wall of the ICA and are related to high longitudinal WSS values τ_w, as well as to the amplitude τ_w^a of the WSS vector, observed in Figs. 12, 15. The latter fits the velocity profile observations in [39]. In Fig. 11 the movement of the carotid in regions close to its bifurcation is presented, whereby the clipped ends of the vessel are cut out. The arrows at the common fluid-solid interface (inner wall) and at the outer vessel wall show the dominance of the lateral vessel wall movement compared to the inflating effects. On the one hand, this is caused by the boundary condition at the outer wall, allowing free movement without any constraining effects of outer stresses by surrounding tissue; on the other hand, by the length of the whole computational domain considered. Indeed, the effects of clipping the bottom and top parts of the vessel geometry, whose length is identical to the fixed geometry presented in Fig. 1, are weakened in the middle of the computational domain. This leads to the fluid-driven lateral motion.
Wall shear stress
In what follows we present the wall shear stress computed from the numerical data in terms of its longitudinal component τ_w, (7), as well as the non-directional quantity τ_w^a, (6), measured by the amplitude of the WSS vector. To compare the WSS distributions for compliant and rigid walls, the results are presented on the inner arterial wall in the reference geometry frame.
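For orientation, a minimal sketch of how these two quantities can be evaluated at a single wall point is given below. It assumes the usual definition of the WSS vector as the tangential part of the viscous traction computed from the velocity gradient; the paper's own formulas (6)-(7) are not reproduced here, and the viscosity value is the one listed in Table 2.

```python
import numpy as np

def wss(grad_u, n, t, mu=0.00345):
    """Wall shear stress quantities at one wall point (a sketch, not the paper's code).

    grad_u : (3, 3) velocity gradient at the wall, grad_u[i, j] = du_i/dx_j
    n      : (3,) outward unit normal of the wall
    t      : (3,) unit tangent of the chosen tangential field (flipped or projected)
    mu     : dynamic viscosity in Pa*s (value used in this study)

    Returns (tau_vec, tau_amp, tau_long): the WSS vector (tangential part of the
    viscous traction), its amplitude, and its longitudinal component tau_vec . t.
    """
    D = 0.5 * (grad_u + grad_u.T)                   # rate-of-strain tensor
    traction = 2.0 * mu * D @ n                     # viscous traction vector
    tau_vec = traction - np.dot(traction, n) * n    # remove the normal part
    return tau_vec, float(np.linalg.norm(tau_vec)), float(np.dot(tau_vec, t))

# Simple shear u = (gamma * z, 0, 0) at a wall with normal e_z and tangent e_x
gamma = 1000.0                                      # shear rate in 1/s
grad_u = np.zeros((3, 3)); grad_u[0, 2] = gamma
print(wss(grad_u, n=np.array([0., 0., 1.]), t=np.array([1., 0., 0.])))  # amplitude = longitudinal = mu*gamma
```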
Longitudinal WSS
The longitudinal WSS (7) is evaluated on the carotid surface as well as along chosen surface curves and compared for the model configurations (a)-(d), see Table 2. In Fig. 12 the surface distributions of τ_w for the four configurations at the time of highest flow rate are presented. All plots (a)-(d) show high positive WSS up to 35 N/m² in the bifurcation and in the sinusoidal constrictions of the stenotic bulge of the ICA. The center of the bifurcation and the regions around the stenotic bulges of the ICA are regions of low and negative WSS. In configuration (a), local point-wise extreme values of τ_w appear using the flipped tangent vectors on the rigid surface. These are smoothed out in (c) on the deformed surface with elastic walls. Beside these very local phenomena in (a), wider areas of negative extreme values occur close to the separation point of the bifurcation in configurations (b) and (c), compared to (d). The occurrence of these extreme values is associated with the alignment of the chosen tangential fields, which differ for the considered model configurations at the bifurcation point. We demonstrate this coherence for the automatically rendered flipped (c) and projected tangents (d) on compliant walls in Fig. 13. Indeed, a clear side separation of the flipped tangent vectors, losing their longitudinal alignment even before the bifurcation, can be observed in plot (c), explaining the discontinuity and the appearance of negative WSS values down to -20 N/m² in this area. On the other hand, the longitudinal continuance of the projected tangents until the separation point is apparent in plot (d). Obviously, configurations with flipped tangent vectors lead to a spurious longitudinal WSS evaluation on the carotid surface close to the bifurcation point, whereas the tangent field (d), which is projected from the centerline, seems to appropriately map the main flow and its separation in this problematic area.
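The projected tangential field can be sketched as follows. This is only the basic projection idea (remove the surface-normal component of the nearest centerline tangent and renormalise), assuming the surface normals and the associated centerline tangents are already available; it is not the exact implementation of (9).

```python
import numpy as np

def projected_tangent(c, n):
    """Project a centerline tangent c onto the local tangent plane of the surface.

    c : (3,) unit tangent of the nearest centerline point
    n : (3,) outward unit normal of the surface at the evaluation point

    Returns the normalised in-plane tangent used for the longitudinal WSS.
    """
    t = c - np.dot(c, n) * n            # remove the component normal to the surface
    norm = np.linalg.norm(t)
    if norm < 1e-12:                    # centerline tangent (almost) parallel to n
        raise ValueError("projection degenerates; tangent undefined at this point")
    return t / norm

c = np.array([0.0, 0.2, 1.0]); c /= np.linalg.norm(c)
n = np.array([1.0, 0.0, 0.0])
print(projected_tangent(c, n))          # lies in the plane orthogonal to n
```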
To get more detailed comparisons, τ_w along the chosen surface curves is shown in Fig. 14 for configurations (b), (c), (d). Its values in configurations (b) and (d) are almost identical and differ at most by 1 N/m² along the circumferential curve (1). On the longitudinal line (2) we observe a very good agreement of (b) and (d). The situation is slightly different in case (c) with flipped tangents, where local deflections of the WSS values can be observed. These are caused by the previously discussed alignment of differently flipped tangent vectors presented in Fig. 13. The negative deflections of τ_w along the longitudinal curve (2) are not visible in Fig. 13 due to a different viewpoint; nevertheless they can be identified with local blue areas on the outer wall of the ICA in plot (c) of Fig. 12. Generally, we can conclude that the choice of the tangential field has a considerable effect on the evaluation of the longitudinal WSS in the bifurcation region of the carotid artery surface, with the projected tangent vectors being the most suitable for this analysis. After passing the problematic area of the bifurcation, the WSS results for projected and flipped tangent fields are comparable at maximal systolic flow, whereas certain inaccuracies occur using the automatic flipped tangent vectors. This is because of their misalignment in the longitudinal direction on complex and uneven surfaces. In addition, effects of the wall movement in the FSI models are present around the bifurcation point; compare, e.g., plot (b) for rigid and (d) for compliant walls in Fig. 12.
Vector-valued WSS
The WSS on the carotid inner surface measured by the amplitude of the WSS vector, τ_w^a (6), is presented in Fig. 15 together with corresponding arrows of the WSS vector as well as of its longitudinal component τ_w at the time point of maximal flow in the reference geometry frame. One can observe a similar surface distribution of the amplitude of the WSS and of the size of its longitudinal component when comparing Figures 15 and 13-(d). Let us note that the opposite direction of the WSS vector with respect to the main flow, which occurs in some blue areas with low WSS amplitude, cannot be recognised by τ_w^a. Thus the backward flow cannot be detected in the surface distribution in Fig. 15. The negative sign and the sign change of the WSS resulting from the opposite alignment with respect to the main flow play a role in the evaluation of the oscillatory flow behaviour, which is discussed in what follows.
Oscillatory shear index
In this section we present the temporal change of both the longitudinal WSS and the vector-valued WSS during the whole cardiac cycle in terms of the oscillatory shear index and compare them for the compliant as well as the rigid wall model. The results are presented on the inner arterial surface in the reference, i.e., the initial geometry frame.
At first, the oscillatory index (11) for the longitudinal WSS (7) is presented for the fixed wall model using projected tangents (b) and for the FSI model using flipped (c) and projected tangents (d) in Fig. 16. Comparing the results, almost no difference in the OSI evaluation for configurations (b) and (d) with projected tangents can be observed. The only region of difference worth mentioning spreads out in the bifurcation area, where the centerline tangents have been projected on different surfaces, obviously due to the wall deformation in case (d). In contrast to (b) and (d), configuration (c) shows many small-scale and point-like extreme values, which is a consequence of the discontinuity and deflections of WSS arising from the erratic alignment of the flipped tangents in some surface regions discussed above.
Concerning the hemodynamical interpretation of the results presented in Fig. 16, conspicuous red regions of maxima indicating long-lasting negative WSS inside and oscillating WSS at their edges (green transition zones) can be observed in the bifurcation zone as well as prior and posterior to the sinusoidal stenotic occlusion of the ICA in all model configurations. Further punctual abnormalities are present, e.g., after the stenotic occlusion on the left side of the ICA wall. In consistency with the streamlines presented in Fig. 10, red OSI regions are related to vortices adjacent to the stenosis bulges of the ICA or prior to the bifurcation. Those maximum regions are an indicator of static reversal flow (vortices) and the green transition zones of high longitudinal WSS oscillations indicate the pathological progression of mechanical damage of the artery wall, according to the hemodynamic hypothesis.
Finally, the second oscillatory shear index (10) for the vector-valued WSS computed using the compliant wall model is presented in Fig. 17. Here, the red maximum values correspond to WSS regions with zero temporal mean and represent zones of complete sign balance of the WSS vector, indicating temporal oscillations. As mentioned in Section 3.3, static vortices and continuous recirculations, corresponding to a permanently negative sign of the WSS, are zero-valued in this OSI definition and cannot be tracked here. Nevertheless, the high-OSI edge-like regions in Fig. 17 are in good consistency with the green transition zones observed in Fig. 16-(b),(d), both indicating high wall shear stress oscillations and the pathological progression, e.g. on the right and back side of the common carotid artery prior to the bifurcation point, or on the upper back side of the considered ICA region. Small discrepancies between the maxima edges of the OSI in Fig. 17 and the green transition zones of the OSI in Fig. 16 can be explained by the differences in the considered wall shear stress indicators, the WSS vector versus its longitudinal component τ_w.
Conclusion
In this contribution a computational study of fluid dynamic and hemodynamic risk parameters has been performed for a carotid artery. The patient-based lumen and its surrounding walls have been imported into the numerical software Comsol Multiphysics and were supplemented, on the one hand, with a fluid dynamics model confined by rigid walls and, on the other hand, with an FSI model for the compliant artery using a linear elastic deformation model. The effects of the fluid stresses on the arterial wall have been quantified using established hemodynamic risk factors: the wall shear stress and the oscillatory shear index. Following the low shear theory, we focused on the exploration of low and negative WSS and evaluated the longitudinal component of the WSS vector (longitudinal WSS), allowing to track the reverse flow in the patient-based morphology. The presented results demonstrate the strong dependency of the orientation-based longitudinal WSS and its OSI index on the proper construction of the tangential vectors and address this problem on topologically complex surfaces obtained from patient CTA scans. For the studied carotid artery tree, we applied the projection of the centerline tangents to the inner arterial surface in order to obtain a properly aligned and smooth tangential field, and compared the longitudinal WSS computed for projected as well as for generic mesh-based tangent vectors. Since the projected tangential field retains the longitudinal alignment on the craggy surface and maps the flow separation in the bifurcation area much better than the automatically generated tangent vectors, reliable numerical results for the longitudinal WSS, allowing hemodynamic predictions of reverse flow, have been obtained by applying the projection method. For comparison, the commonly used vector-valued WSS and its amplitude have been evaluated as well and their oscillatory behaviour has been quantified by the corresponding OSI index. Since the amplitude of the WSS vector and the amplitude of its temporal mean do not track its opposing orientation, reversal flow or vortices cannot be detected, which is confirmed by our numerical results. Even though the OSI index based on the vector-valued WSS instead of its longitudinal component indicates flow-oscillatory regions well, it is not able to detect long-lasting reversal flow regions with low and negative WSS. Hence, the major benefit of our modeling approach using the direction-based longitudinal WSS and its oscillatory index lies in the investigation of sites of low shear stresses and persistent reversal flow.
Obtained numerical data and derived risk indicators can be complemented with measured medical patient data and they will be explored using machine learning algorithm within the joint research project MLgSA, see Acknowledgement, to assess the potential stroke risk for a patient.
For further work, the range of risk prediction tools can be extended by considering further multi-directional WSS parameters, where the choice of the tangential field is of particular importance. According to [14], time-averaged WSS (TAWSS; [2, 20]) and relative residence time (RRT; [12,13,35]) are strong predictive clinical markers for disease development, even in early stages of atherosclerosis. Moreover, transversal WSS ( [23,30,31]) and cross flow index (CFI, [23]) have shown a predictive value when complex fluid flow appears in later phases of atherosclerosis. Taking those parameters into account, a more differentiated prediction of atherosclerotic development can be achieved.
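The additional parameters named above are not defined in this paper. As a hedged illustration, the sketch below uses the definitions that are common in the cited literature, TAWSS as the time average of the WSS-vector amplitude and RRT = [(1 - 2*OSI)*TAWSS]^(-1) with the amplitude-based OSI (10); these formulas are an assumption here, not taken from the text.

```python
import numpy as np

def tawss_osi_rrt(t, tau_vec):
    """TAWSS, amplitude-based OSI (10) and RRT from a sampled WSS-vector series.

    The RRT expression assumes OSI != 0.5; both TAWSS and RRT definitions follow
    the convention commonly used in the cited literature, not this paper.
    """
    T = t[-1] - t[0]
    amp = np.linalg.norm(tau_vec, axis=1)
    int_amp = np.trapz(amp, t)
    tawss = int_amp / T
    osi = 0.5 * (1.0 - np.linalg.norm(np.trapz(tau_vec, t, axis=0)) / int_amp)
    rrt = 1.0 / ((1.0 - 2.0 * osi) * tawss)
    return tawss, osi, rrt

# Synthetic one-cycle example on [0.9 s, 1.8 s]
t = np.linspace(0.9, 1.8, 200)
tau_vec = np.outer(np.sin(2 * np.pi * (t - 0.9) / 0.9) + 0.5, [1.0, 0.0, 0.0])
print(tawss_osi_rrt(t, tau_vec))
```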
Figure and table captions (figures not reproduced):

Figure 1: Computational geometry Ω_f of the inner human carotid lumen for the fluid flow modeling with rigid walls.
Figure 2: Computational geometry Ω_f^t ∪ Ω_s^t for the FSI modeling of the human carotid including wall tissue, area of interest.
Figure 3: Tangential vector field t_2 obtained automatically in Comsol Multiphysics (left) and flipped tangential field (8) (right) on the carotid surface, flow direction: from left to right.
Figure 4: The carotid domain and the normed tangential field c of the centerline.
Figure 5: Projection method by Morbiducci, adapted from [25].
Figure 6: Tangential vector field t obtained from the centerline by the projection method.
Figure 7: Decrease of the L2 error norm (12) for u and τ_w with respect to the mean mesh size; gray dashed lines with slopes 1 and 2 represent first and second convergence order.
Figure 8: Flow waveform at the inflow boundary of the common carotid artery with 7.9 ml/s mean flow rate, waveform adapted from [1, 26].
Figure 9: A part of the computational mesh for the solid and the compliant vessel wall model (panel titles: solid wall model mesh; mesh for compliant walls model (FSI)); the chosen sections of interest for data evaluation are presented.
Table 2: Configurations of the model evaluations for the directional wall parameters τ_w and OSI. In every simulation the fluid density ρ_f = 1000 kg m⁻³ and the constant viscosity µ = 0.00345 Pa s were chosen. The wall parameters density ρ_s = 1070 kg m⁻³, Young's modulus E = 0.5 MPa and Poisson's ratio ν = 0.17 (ν ∈ [0.17, 0.5))^1 were used in simulations (c) and (d) with fluid-structure interaction.
Figure 10: Velocity streamlines in the stenosed region of the ICA (stenotic plaque marked with arrows) for both vessel wall models, colored by velocity magnitude, t = 1.1 s, viewpoints from front and back.
Figure 11: The deformation vectors at the inner and outer surface of the thick vessel tissue show the translational vessel movement in the zoomed area prior and subsequent to the carotid bifurcation. The surface is coloured by the deformation amplitude in cm.
Figure 12: Longitudinal WSS distribution at t = 1.1 s for all configurations, frontal viewpoint.
Figure 13: Surface distribution of the longitudinal WSS evaluated using flipped (c) and projected (d) tangent fields (shown as normed arrows), evaluation with compliant walls at t = 1.1 s, viewpoint from the back (different viewpoint than in Fig. 12).
Figure 14: Upper: circumferential (1) and longitudinal (2) intersection curves for the evaluation of the WSS. The arc length of (1) starts at the respective green point and follows a clockwise direction. Below: comparison of the longitudinal WSS for configurations (b)-(d) along the intersection curves (1), (2).
Figure 15: Surface: τ_w^a, the amplitude of the WSS vector on compliant walls at t = 1.1 s. Arrows: WSS vector (black) and its longitudinal component τ_w (red), both proportional to their size. Detail of the bifurcation point: WSS vector (black arrows) and its longitudinal component τ_w (red arrows).
Figure 16: Oscillatory shear index OSI (11) for the longitudinal WSS τ_w evaluated for the second cardiac cycle, t ∈ [0.9 s, 1.8 s], in model configurations (b), (c), (d), see Table 2. Red areas show the occurrence of long-lasting negative τ_w.
Figure 17: OSI index (10) for the vector-valued WSS evaluated for the second cardiac cycle (t ∈ [0.9 s, 1.8 s]), compliant wall model.
^1 The choice of Poisson's ratio has negligible impact on the vessel wall displacement in our modeling.
Acknowledgements
Role of oscillatory shear index in predicting the occurrence and development of plaque. M Blagojević, A Nicolić, M Zivković, M Zivković, G Stanković, A Pavlović, Journal of the Serbian Society of Computational Mechanics. 72Blagojević, M., Nicolić, A., Zivković, M., Zivković, M., Stanković, G.,Pavlović, A.: Role of oscillatory shear index in predicting the occurrence and development of plaque, Journal of the Serbian Society of Computational Mechanics, 7(2), 29-37, 2013
A modular, operator-splitting scheme for fluid-structure interaction problems with thick structures, International journal for numerical methods in fluids 74. M Bukač, S Čanić, R Glowinski, B Muha, A Quaini, 10.1002/fld.38638Bukač, M.,Čanić, S., Glowinski, R., Muha B., Quaini, A.:: A modular, operator-splitting scheme for fluid-structure interaction problems with thick structures, International journal for numerical methods in fluids 74.8, 577-604, 2014, DOI: 10.1002/fld.3863
Modeling viscoelastic of arterial walls and their interaction with pulsatile blood flow. S Čanić, J Tambača, G Guidoboni, A Mikelić, C J Hartley, D Rosenstrauch, SIAM J. Appl. Math. 671Čanić, S., Tambača, J., Guidoboni, G., Mikelić, A., Hartley, C.J., Rosenstrauch, D.: Modeling viscoelastic of arterial walls and their interaction with pulsatile blood flow, SIAM J. Appl. Math. 67(1), 164-193, 2006
P G Ciarlet, Theory of Shells. North-Holland, AmsterdamIIICiarlet, P.G.: Mathematical Elasticity, Volume III: Theory of Shells, North-Holland, Amsterdam, 2000
. Comsol Multiphysics. Reference Manual. 2022Comsol Multiphysics. Reference Manual, https://doc.comsol.com/5.6/doc/com.comsol.help.comsol/COMSOL-ReferenceManual.pdf, 2022
Ursachen und Risikofaktoren der Arteriosklerose. E S Debus, G Torsello, G Schmitz-Rixen, I Flessenkämper, M Storck, H Wehk, R T Grundmann, 10.1007/s00772-013-1233-6Gefässchirurgie. 18Debus, E.S., Torsello, G., Schmitz-Rixen, G., Flessenkämper, I., Storck, M., Wehk. H., Grundmann, R.T.: Ursachen und Risikofaktoren der Arteriosklerose, Gefässchirurgie, 18, 2413-2419, 2013, DOI: 10.1007/s00772- 013-1233-6
Visualizing carotid blood flow simulations for stroke prevention. P Eulzer, M Meuschke, C M Klinger, K Lawonn, 10.1111/cgf.14319Computer Graphics Forum. 40Eulzer, P., Meuschke, M., Klinger, C. M., Lawonn, K.: Visualizing carotid blood flow simulations for stroke prevention, Computer Graphics Forum, 40 435-446, 2021, DOI: 10.1111/cgf.14319
Automatic Cutting and Flattening of Carotid Artery Geometries. P Eulzer, K Richter, M Meuschke, A Hundertmark, K Lawonn, Proceedings of Eurographics Workshop on Visual Computing for Biology and Medicine. S. Oeltze-Jafra, N. N. Smit, and B. SommerEurographics Workshop on Visual Computing for Biology and MedicineEulzer, P., Richter, K., Meuschke M., Hundertmark A., Lawonn K.: Automatic Cutting and Flattening of Carotid Artery Geometries, in Proceedings of Eurographics Workshop on Visual Computing for Biology and Medicine 2021, (Editors: S. Oeltze-Jafra, N. N. Smit, and B. Sommer)
Insights into the co-localization of magnitude-based versus direction-based indicators of disturbed shear at the carotid bifurcation. D Gallo, D A Steinman, U Morbiducci, 10.1016/j.jbiomech.2016.02.010Journal of Biomechanics. 492Gallo, D., Steinman, D.A., Morbiducci, U.: Insights into the co-localization of magnitude-based versus direction-based indicators of disturbed shear at the carotid bifurcation, Journal of Biomechanics, 49(2), 2413- 2419, 2016, DOI: 10.1016/j.jbiomech.2016.02.010
Determining possible thrombus sites in an extracorporeal device, using computational fluid dynamics-derived relative residence time. N Gorring, L Kark, A Simmons, T Barber, 10.1080/10255842.2013.826655Computer Methods in Biomechanics and Biomedical Engineering. 186Gorring, N., Kark, L., Simmons, A., Barber, T.: Determining possible thrombus sites in an extracorporeal device, using computational fluid dynamics-derived relative residence time. Computer Methods in Biomechanics and Biomedical Engineering, 18(6), 628-634, 2014, DOI: 10.1080/10255842.2013.826655
Study of Coronary Atherosclerosis Using Blood Residence. J Hashemi, B Patel, Y S Chatzizisisi, G S Kassab, 10.3389/fphys.2021.625420Time Front. Physiol. 12Hashemi, J., Patel B., Chatzizisisi, Y.S., Kassab, G.S.: Study of Coronary Atherosclerosis Using Blood Resi- dence Time Front. Physiol., 12, 625420, 2021, DOI: 10.3389/fphys.2021.625420
Multidirectional wall shear stress promotes advanced coronary plaque development -comparing five shear stress metrics. A Hoogendoorn, E Mj Hartman, A Kok, G De Nisco, 10.1093/cvr/cvz212Cardiovascular Research. 1166Hoogendoorn, A., Hartman, E. MJ., Kok, A., De Nisco, G.: Multidirectional wall shear stress promotes advanced coronary plaque development -comparing five shear stress metrics, Cardiovascular Research, 116(6), 1136-1146, 2019, DOI: 10.1093/cvr/cvz212
Numerical study of shear-dependent non-Newtonian fluids in compliant vessels. A Hundertmark, M Lukáčová, Computers and Mathematics with Applications. 60Hundertmark, A., Lukáčová, M.: Numerical study of shear-dependent non-Newtonian fluids in compliant vessels, Computers and Mathematics with Applications, 60, 572-59, 2010.
Fluid-structure interaction for shear-dependent non-Newtonian fluids. A Hundertmark, M Lukáčová, G Rusnáková, Topics in Mathematical Modeling and Analysis. Kaplický P.7Hundertmark, A., Lukáčová, M., Rusnáková, G.: Fluid-structure interaction for shear-dependent non- Newtonian fluids, In: Kaplický P. (Ed.) Topics in Mathematical Modeling and Analysis Vol.7, 109-158, 2012
A Iannuzzi, P Rubba, M Gentile, V Mallardo, I Calcaterra, A Bresciani, G Covetti, G Cuomo, P Merone, A Di Lorenzo, R Alfieri, E Aliberti, F Giallauria, M Di Minno, G Iannuzzo, 10.3390/biomedicines9050521Carotid Atherosclerosis. 92021Iannuzzi, A., Rubba, P., Gentile, M., Mallardo, V., Calcaterra, I., Bresciani, A., Covetti, G., Cuomo, G., Merone, P., Di Lorenzo, A., Alfieri, R., Aliberti, E., Giallauria, F., Di Minno, M., Ian- nuzzo, G.: Carotid Atherosclerosis, Ultrasound and Lipoproteins. Biomedicines, 9(5), 521, 2021, DOI: 10.3390/biomedicines9050521
The Vascular Modeling Toolkit: A Python Library for the Analysis of Tubular Structures in. R Izzo, D Steinman, S Manini, L Antiga, Medical Images The Open Journal. 253Izzo, R., Steinman, D., Manini, S., Antiga L.: The Vascular Modeling Toolkit: A Python Library for the Analysis of Tubular Structures in Medical Images The Open Journal 25(3), 2018
A 3D non-Newtonian fluid-structure interaction model for blood flow in arteries. J Janela, A Moura, A Sequeira, J. Comput. Appl. Math. 234Janela, J., Moura, A., Sequeira, A.: A 3D non-Newtonian fluid-structure interaction model for blood flow in arteries, J. Comput. Appl. Math. 234, 2783-2791, 2010
On the influence of the wall shear stress vector form on hemodynamic indicators, Computing and Visualization in Science. L John, P Pustejovská, O Steinbach, 10.1007/s00791-017-0277-718John, L., Pustejovská, P., Steinbach, O.: On the influence of the wall shear stress vector form on hemodynamic indicators, Computing and Visualization in Science, 18, 113-122, 2017 DOI: 10.1007/s00791-017-0277-7
Pulsatile flow and atherosclerosis in the human carotid bifurcation, positive correlation between plaque location and low oscillating shear stress. D N Ku, D P Giddens, C K Zarins, S Glagov, Arteriosclerosis. 5Ku, D. N., Giddens, D. P., Zarins, C. K., . Glagov, S.: Pulsatile flow and atherosclerosis in the human carotid bifurcation, positive correlation between plaque location and low oscillating shear stress. Arteriosclerosis, 5, 293-302, 1985
Aneurysm growth occurs at region of low wall shear stress: Patientspecific correlation of hemodynamics and growth in a longitudinal study. M Lawton, R Higashida, W S Smith, W L Young, D Saloner, L Boussel, V Rayz, C Mcculloch, A Martin, G Acevedo-Bolton, https:/www.ahajournals.org/doi/full/10.1161/STROKEAHA.108.521617Stroke. 39Lawton, M., Higashida, R., Smith, W. S., Young, W. L., Saloner, D., Boussel, L., Rayz, V., McCulloch, C., Martin, A., Acevedo-Bolton, G.,: Aneurysm growth occurs at region of low wall shear stress: Patient- specific correlation of hemodynamics and growth in a longitudinal study, Stroke, 39, 2997-3002, 2008, DOI: 10.1161/STROKEAHA.108.521617
Understanding the fluid mechanics behind transverse wall shear stress. Y Mohamied, S J Sherwin, P D Weinberg, 10.1016/j.jbiomech.2016.11.035Journal of Biomechanics. 504Mohamied, Y., Sherwin, S.J., Weinberg, P.D.: Understanding the fluid mechanics behind transverse wall shear stress, Journal of Biomechanics, 50(4), 102-109, 2016, DOI: 10.1016/j.jbiomech.2016.11.035
Change of Direction in the Biomechanics of Atherosclerosis. Y Mohamied, E M Rowland, E L Bailey, S J Sherwin, M A Schwartz, P D Weinberg, 10.1007/s10439-014-1095-4Annals of Biomedical Engineering. 43Mohamied, Y., Rowland, E.M., Bailey, E.L. Sherwin, S.J., Schwartz, M. A., Weinberg, P.D.: Change of Direction in the Biomechanics of Atherosclerosis, Annals of Biomedical Engineering, 43, 16-25, 2014. DOI: 10.1007/s10439-014-1095-4
A rational approach to defining principal axes of multidirectional wall shear stress in realistic vascular geometries, with application to the study of the influence of helical flow on wall shear stress directionality in aorta. U Morbiducci, D Gallo, S Cristofanelli, R Ponzini, M A Deriu, G Rizza, D A Steinmann, 10.1016/j.jbiomech.2015.02.027Journal of Biomechanics. 486Morbiducci, U., Gallo, D., Cristofanelli, S., Ponzini, R., Deriu, M. A., Rizza, G., Steinmann, D. A.: A rational approach to defining principal axes of multidirectional wall shear stress in realistic vascular geometries, with application to the study of the influence of helical flow on wall shear stress directionality in aorta, Journal of Biomechanics, 48(6), 899-906, 2015, DOI: 10.1016/j.jbiomech.2015.02.027
Computed simulation of local blood flow and vessel mechanics in a compliant carotid artery bifurcation model. K Perktold, G Rappitsch, Journal of Biomechanics. 287Perktold, K., Rappitsch, G.: Computed simulation of local blood flow and vessel mechanics in a compliant carotid artery bifurcation model, Journal of Biomechanics, 28(7), 845-856, 1995
Optimal control and shape optimization of aorto-coronaric bypass anastomoses. A Quarteroni, R Gianluigi, Math. Models Methods Appl. Sci. 1312Quarteroni, A., Gianluigi, R.: Optimal control and shape optimization of aorto-coronaric bypass anastomoses, Math. Models Methods Appl. Sci., 13 (12), 1801-1823, 2003
A: Mathematical modelling and numerical simulation of the cardiovascular system. A Quarteroni, Formaggia, Handbook of Numerical Analysis. Ciarlet, P.G. & Lions, J.L.12ElsevierQuarteroni, A., Formaggia, A: Mathematical modelling and numerical simulation of the cardiovascular sys- tem. In Ciarlet, P.G. & Lions, J.L. (Hrsg.), Handbook of Numerical Analysis, 12 (3-127), 2004, Amsterdam: Elsevier.
Mathematical Modelling of the Human Cardiovascular System: Data, Numerical Approximation. A Quarteroni, ' Dede, L Manzoni, A Vergara, C , Cambridge Monographs on Applied and Computational Mathematics. Clinical ApplicationsQuarteroni, A., Dede', L., Manzoni, A., Vergara, C.: Mathematical Modelling of the Human Cardiovascu- lar System: Data, Numerical Approximation, Clinical Applications, Cambridge Monographs on Applied and Computational Mathematics, 2019
Inducing persistent flow disturbances accelerates atherogenesis and promotes thin cap fibroatheroma development in D374Y-PCSK9 hypercholesterolemic minipigs. R M Pedrigi, C B Poulsen, V V Mehta, Ramsing Holm, N Pareek, N Post, A L Kilic, I D Banya, W A S Dall'ara, G Mattesini, A Bjørklund, M M Andersen, N P Grøndal, A K Petretto, E Foin, N Davies, J E Mario, C Di Fog Bentzon, J , Erik Bøtker, H Falk, E Krams, R De Silva, R , 10.1161/CIRCULATIONAHA.115.016270Circulation. 132Pedrigi R.M., Poulsen C.B., Mehta VV, Ramsing Holm N., Pareek N., Post A.L., Kilic I.D., Banya W.A.S., Dall'Ara G., Mattesini A., Bjørklund M.M., Andersen N.P., Grøndal A.K., Petretto E., Foin N., Davies J.E., Mario C., Di Fog Bentzon J., Erik Bøtker H., Falk E., Krams R., de Silva R. Inducing persistent flow distur- bances accelerates atherogenesis and promotes thin cap fibroatheroma development in D374Y-PCSK9 hyperc- holesterolemic minipigs.Circulation, 132,1003-1012, 2015, DOI: 10.1161/CIRCULATIONAHA.115.016270
Computation in the rabbit aorta of a new metric-the transverse wall shear stress-to quantify the multidirectional character of disturbed blood flow. V Peiffer, S J Sherwin, P D Weinberg, 10.1016/j.jbiomech.2013.08.003Journal of Biomechanics. 46Peiffer V., Sherwin S.J., Weinberg P.D.: Computation in the rabbit aorta of a new metric-the transverse wall shear stress-to quantify the multidirectional character of disturbed blood flow. Journal of Biomechanics, 46, 2651-2658, 2013. DOI: 10.1016/j.jbiomech.2013.08.003
Does low and oscillatory wall shear stress correlate spatially with early atherosclerosis? A systematic review. P Pfeiffer, S J Sherwin, P D Weinberg, 10.1093/cvr/cvt044Cardiovascular Research. 99Pfeiffer, P., Sherwin, S.J., Weinberg, P.D.: Does low and oscillatory wall shear stress correlate spa- tially with early atherosclerosis? A systematic review, Cardiovascular Research, 99, 242-250, 2013, DOI: 10.1093/cvr/cvt044
Kinematic splitting algorithm for fluid-structure interaction in hemodynamics. G Rusnákova, M Lukáčová, A Hundertmark, Computer Methods in Applied Mechanics and Engineering. 265Rusnákova G., Lukáčová, M., Hundertmark, A.: Kinematic splitting algorithm for fluid-structure interaction in hemodynamics, Computer Methods in Applied Mechanics and Engineering 265, 83-106, 2013
Magnitude and Role of Wall Shear Stress on Cerebral Aneurysm Computational Fluid Dynamic Study of 20 Middle Cerebral Artery Aneurysms Stroke. M Shojima, M Oshima, K Takagi, R Torii, M Hayakawa, K Katada, A Morita, T Kirino, 10.1161/01.STR.0000144648.89172.0f35Shojima, M., Oshima, M., Takagi, K., Torii, R., Hayakawa, M., Katada, K., Morita, A., Kirino, T.: Magnitude and Role of Wall Shear Stress on Cerebral Aneurysm Computational Fluid Dynamic Study of 20 Middle Cerebral Artery Aneurysms Stroke, 35 (11), 2500-2505, 2004, DOI: 10.1161/01.STR.0000144648.89172.0f
Relative residence time and oscillatory shear index of non-Newtonian flow models in aorta. S V Soulis, G Giannoglou, D K Fytanidis, 10.1109/IWBE.2011.6079011International Workshop on Biomedical Engineering. 10Biomedical EngineeringSoulis, S. V., Giannoglou, G., Fytanidis, D.K.: Relative residence time and oscillatory shear index of non- Newtonian flow models in aorta, International Workshop on Biomedical Engineering, Biomedical Engineering, 10, 2013, DOI: 10.1109/IWBE.2011.6079011
K Spanos, G Petrocheilou, C Karathanos, N Labropoulos, D Mikhailidis, A Giannoukas, https:/doi-org.wwwdb.dbod.de/10.1177/0003319716678741Carotid Bifurcation Geometry and Atherosclerosis Angiology. 68Spanos, K., Petrocheilou, G., Karathanos, C., Labropoulos, N., Mikhailidis, D., Giannoukas, A.: Carotid Bifurcation Geometry and Atherosclerosis Angiology, 68(9), 757-764, 2017, DOI: 10.1177/0003319716678741
The top 10 cause of death. WHO, The top 10 cause of death, https://www.who.int/news-room/fact-sheets/detail/the-top-10-causes-of-death, December 2020
J Xiang, K Sabareesh, S K Natarajan, M Tremmel, D Ma, J Mocco, L N Hopkins, A H Siddiqui, E I Levy, H Meng, https:/www.ahajournals.org/doi/full/10.1161/STROKEAHA.110.592923Hemodynamic-Morphologic Discriminants for Intracranial Aneurysm Rupture. 42Xiang, J., Sabareesh K. Natarajan, S. K., Tremmel, M., Ma, D., Mocco, J., Hopkins L. N., Siddiqui, A. H., Levy, E. I., Meng H.,: Hemodynamic-Morphologic Discriminants for Intracranial Aneurysm Rupture, Stroke, 42(1), 144-152, 2011, DOI: 10.1161/STROKEAHA.110.592923
Carotid Bifurcation Atherosclerosis: Quantitative Correlation of Plaque Localization with Flow Velocity Profiles and Wall Shear Stress Circulation Research. C K Zarins, D P Giddens, B K Bharadvaj, V S Sottiurai, R F Mabon, S Glagov, 10.1161/01.RES.53.4.50253Zarins, C.K., Giddens, D.P., Bharadvaj, B. K., Sottiurai, V.S., Mabon, R.F., Glagov, S.: Carotid Bifurcation Atherosclerosis: Quantitative Correlation of Plaque Localization with Flow Velocity Profiles and Wall Shear Stress Circulation Research, 53(4), 502-514, 1983, DOI: 10.1161/01.RES.53.4.502
Finite element modeling of three-dimensional pulsatile flow in the abdominal aorta: relevance to atherosclerosis. C A Taylor, T J R Hughes, C K Zarins, 10.1114/1.140Annals of Biomedical Engineering. 26Taylor, C. A., Hughes, T. J. R., Zarins, C. K.: Finite element modeling of three-dimensional pulsatile flow in the abdominal aorta: relevance to atherosclerosis. Annals of Biomedical Engineering, 26, 975-987, 1998, DOI: 10.1114/1.140
A WEAK GALERKIN-MIXED FINITE ELEMENT METHOD FOR THE STOKES-DARCY PROBLEM

Hui Peng, Qilong Zhai, Ran Zhang, and Shangyou Zhang

Keywords: weak Galerkin finite element methods, mixed finite element methods, weak gradient, coupled Stokes-Darcy problems. AMS subject classifications: Primary 65N30, 65N15, 65N12; Secondary 35B45, 35J50.
In this paper, we propose a new numerical scheme for the coupled Stokes-Darcy model with Beavers-Joseph-Saffman interface condition. We use the weak Galerkin method to discretize the Stokes equation and the mixed finite element method to the Darcy equation. A discrete inf-sup condition is proved and optimal error estimates are also derived. Numerical experiments validate the theoretical analysis.
1. Introduction. The coupling of fluid flow and porous media flow has received increasing attention during the last decade. This coupled flow arises in many fields, such as the transport of contaminants through streams in the environment, the filtration of blood through vessel walls in physiology, and industrial technologies involving fluid filters. Interested readers may refer to [14,17,27,32] and the references therein.
The mathematical model of such a coupled problem consists of Stokes equations in the fluid region and the Darcy's law in the porous medium. Appropriate interface conditions, namely mass conservation, balance of force and the Beavers-Joseph-Saffman condition [8,20,36] are imposed on the interface between the free flow region and porous medium flow region.
Early studies on numerical simulations and error analysis for the coupled Stokes-Darcy problem can be found in [15,37]. In a comprehensive study presented in [14], Discacciati et al. analyze a standard velocity-pressure formulation in the Stokes region and a second order primal elliptic problem in the Darcy region; continuous finite element methods are used in both regions. In [26], Layton et al. consider a mixed formulation in the Darcy region, which involves the velocity and the pressure simultaneously. They prove the existence and uniqueness of a weak solution to the mixed Stokes-Darcy system; a continuous finite element method is employed in the Stokes region and the mixed finite element method is used in the Darcy region. Later, discontinuous Galerkin (DG) methods were applied to this problem [33,34]. A scheme that combines the DG method for the Stokes equations with the mixed finite element method for the Darcy equation is proposed in [33]. An analysis of the DG method for both the Stokes and the Darcy equations is introduced in [34]. In addition, preconditioning techniques are also used for the coupled flow [12]. More recent studies concerning the Stokes-Darcy problem can be found in [2,10,11,13,22,23,31,38,39,40,44].
The weak Galerkin (WG) finite element method is proposed in [41] by Wang and Ye for the second order elliptic equation. They introduce totally discontinuous weak functions and corresponding weak differential operators. Numerical implementation of WG methods for different models with more general finite element partitions is discussed in [29]. The WG scheme is designed on arbitrary shape of polygons in 2D or polyhedra in 3D with certain shape regularity by introducing a stabilizer in [42]. Unified study for WG methods and other discontinuous Galerkin methods is presented in [18,19]. In the past few years, the WG method is widely applied to many partial differential problems because of its flexibility and efficiency. The corresponding work can be found in [30,45,46,48,49].
Recently, WG methods are developed for solving the Stokes-Darcy model. In [23], the coupled system is described by Stokes equations in primal velocity-pressure formulation and the Darcy's law in primal pressure formulation. The piecewise constant elements are used to approximate the velocity, hydraulic and pressure. Furthermore, the same formulation is discussed in [22], different choices of WG finite element spaces are investigated, the classical meshes in [23] are extended to general polygonal meshes. In [13], the authors consider the mixed formulation in the Darcy region, both the Stokes region and Darcy region involve the velocity and the pressure. Strong coupling of the Stokes-Darcy system is achieved in the discrete space by using the WG approach.
As mentioned above, WG methods show a high flexibility for dealing with the Stokes-Darcy problem. However, the decoupling of the elements leads to an increase in the total number of degrees of freedom, which limits the practical utility of WG methods, especially for high order approximations. The aim of this article is to introduce a new numerical scheme with a smaller number of degrees of freedom for the same mixed Stokes-Darcy formulation as [13]. To this end, we use different finite element discretizations for the two regions. The WG method is still employed to approximate the velocity and the pressure in the Stokes region. A summary of the features of WG methods for solving the Stokes equation is provided in [43]. As for the Darcy region, the same unknowns are approximated by the mixed finite element method (MFEM), which is different from the WG approximation in [13]. Readers may refer to, e.g., [24] for a comparison of the degrees of freedom of WG methods and MFEM methods. Several standard mixed finite element spaces can be chosen, such as RT spaces [35], BDM spaces [7], BDFM spaces [6] and so on. The efficiency of the MFEM has been demonstrated in [3,9,28]. A Lagrange multiplier is introduced to impose the continuity of the velocity. The benefit of our approach is the possibility of combining the efficiency of MFEM methods for the Darcy problem with the flexibility of WG methods for the Stokes problem. However, the combination of these two different finite element methods makes the proof of the inf-sup condition more involved than in [13]. Inspired by the work in [33], we construct two local projection operators in the different regions to prove it.
The rest of the paper is organized as follows. In the next section, we present the model problem, some notations and function spaces. In Section 3, we introduce weak Galerkin methods and construct WG-MFEM numerical scheme for the Stokes-Darcy problem. The well-posedness of the scheme is analyzed in Section 4. We derive the error estimates for the corresponding numerical approximations in Section 5. Finally, some numerical examples are presented to show the good performance of the developed algorithm in Section 6.
Model Problem and Weak Formulation.
Let Ω be a bounded domain in R², subdivided into a free fluid region Ω_s and a porous region Ω_d. Denote by Γ = ∂Ω_s ∩ ∂Ω_d the interface, and by Γ_s = ∂Ω_s \ Γ, Γ_d = ∂Ω_d \ Γ the outer boundaries. Moreover, let n and τ be the unit normal and tangential vectors to Γ, respectively. In Ω_s, the fluid flow is governed by the Stokes equations.
−∇ · T(u s , p s ) = f s in Ω s , (2.1) ∇ · u s = 0 in Ω s , (2.2) u s = 0 on Γ s , (2.3)
where T is the stress tensor, $T(u_s, p_s) = 2\nu D(u_s) - p_s I$ and $D(u_s) = \frac{1}{2}(\nabla u_s + \nabla^T u_s)$; ν is the kinematic viscosity of the fluid and I is the identity matrix. f_s is a given external body force.
In Ω d , the porous media flow is governed by Darcy's law.
∇ · u d = f d in Ω d , (2.4) u d = −K∇p d in Ω d , (2.5) u d · n d = 0 on Γ d , (2.6)
where K is the symmetric positive-defined permeability tensor, f d is the source term and satisfies the following condition
Ω d f d = 0.
The interface conditions on Γ consist of three parts.
u s · n = u d · n on Γ, (2.7) −T(u s , p s )n · n = p d on Γ, (2.8) −T(u s , p s )n · τ = µK 1/2 u s · τ on Γ. (2.9)
Condition (2.7) is the result of mass conservation across the interface, condition (2.8) represents the fact that normal force on the interface is balance, and condition (2.9) is the Beavers-Joseph-Saffman interface condition, in which µ ≥ 0 is a parameter depending on the properties of the porous medium.
Next, we recall some notations for Sobolev space [1]. Let K be a polygon in R 2 , H m (K) stands for the Sobolev space. We denote by · m,K and | · | m,K the norm and semi-norm in H m (K), m ≥ 0. When m = 0, H 0 (K) coincides with L 2 (K) and we shall drop the subscript K in the norm and semi-norm notations.
We define the space H(div; K) as follows.
$$H(\mathrm{div};K) = \{\,v : v \in [L^2(K)]^d,\ \nabla\cdot v \in L^2(K)\,\}, \qquad \|v\|_{H(\mathrm{div},K)} = \bigl(\|v\|_K^2 + \|\nabla\cdot v\|_K^2\bigr)^{\frac12}.$$
We also define
L 2 0 (K) = {q ∈ L 2 (K) : K q dx = 0}.
Then the function space for the velocity and the pressure are defined as
V := {v ∈ H(div, Ω), v| Ωs ∈ H 1 (Ω s ), v = 0 on Γ s , v · n d = 0 on Γ d },
and M : = L 2 0 (Ω). Now we are ready to state the weak formulation of the Stokes-Darcy problem (2.1) − (2.9). Find (u, p) ∈ V × M such that
$$a(u, v) + b(v, p) = (f_s, v)_{\Omega_s} \quad \forall\, v \in V, \qquad (2.10)$$
$$b(u, q) = (f_d, q)_{\Omega_d} \quad \forall\, q \in M, \qquad (2.11)$$
where
$$a(u, v) = 2\nu\, (D(u), D(v))_{\Omega_s} + (K^{-1}u, v)_{\Omega_d} + \mu\, \langle K^{-\frac12} u_s\cdot\tau,\ v_s\cdot\tau\rangle_{\Gamma}, \qquad b(v, q) = -(\nabla\cdot v, q)_{\Omega}.$$
The existence and the uniqueness of the weak solutions have been proved in [26].
Discretization.
In this section, we first introduce some basic definitions and preliminaries which will be used throughout the rest of this article. Then we construct numerical scheme for (2.10) − (2.11).
Notations for Partitions.
In what follows, Ω i refers to either Ω s or Ω d , and it is the same for the other symbols with subscript i. Let T i,h be the partition of Ω i . Denote by T h the union of T s,h and T d,h , where T s,h is a WG-regular partition [42] and T d,h consists of triangles or rectangles. T s represents the element of T s,h and T d represents the element of T d,h . Denote the edges in T h by E h , and define e i the edges on ∂T i . Let E s h be the set of all edges in T h ∩ (Ω s ∪ Γ s ), and E d h be the set of edges in T h ∩ (Ω d ∪ Γ d ). The set of all edges in T h ∩ Γ is denoted by Γ h . Especially, the partition T s,h and T d,h are not necessary to be consistent on the interface Γ. Denote the size of T i by h Ti , the mesh size of T i,h by h i . In addition, denote by ρ ∈ P ki (T i ) that ρ| Ti is polynomial with degree no more than k i .
To define the WG method, we first give a brief introduction of weak function on
T s , v s,h = v s,0 , in T s , v s,b ,
on ∂T s . In Stokes region, we define the following WG space for the velocity variable.
V s h = {v s,h = {v s,0 , v s,b } ∈ [L 2 (Ω s )] 2 × [L 2 (E s h )] 2 : v s,0 | Ts ∈ [P αs (T s )] 2 for T s ∈ T s,h , v s,b | es ∈ [P β (e s )] 2 for e s ∈ E s h ∪ Γ h , v s,b | es = 0 for e s ∈ E s h ∩ Γ s },
and the finite element space for the pressure variable as
M s h = {q s,h ∈ L 2 0 (Ω s ) : q s,h | Ts ∈ P γs (T ), T s ∈ T s,h }
where non-negative integers α s , β and γ s satisfy
β − 1 ≤ γ s ≤ β ≤ α s ≤ β + 1, α s ≤ γ s + 1, 1 ≤ β.
Remark 3.1. For α s = 1, β = 0, γ s = 0, the situation is more complicated. Interested readers may refer to [45,47] for details.
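As a quick sanity check of the degree conditions above, one can enumerate the admissible (α_s, β, γ_s) triples; for instance the combination (1, 0, 0) of Remark 3.1 is excluded by the requirement 1 ≤ β. The snippet below is only such a check, with variable names chosen for the example.

```python
# Enumerate (alpha_s, beta, gamma_s) satisfying
#   beta - 1 <= gamma_s <= beta <= alpha_s <= beta + 1,  alpha_s <= gamma_s + 1,  1 <= beta,
# up to beta = 3.
triples = [(a, b, g)
           for b in range(1, 4)
           for g in range(b - 1, b + 1)
           for a in range(b, b + 2)
           if a <= g + 1]
print(triples)   # (1,1,0), (1,1,1), (2,1,1), (2,2,1), (2,2,2), (3,2,2), (3,3,2), (3,3,3), (4,3,3)
```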
Then, we give the mixed finite element spaces corresponding to the Darcy region. For the velocity variable
V d h = {v d ∈ H(div, Ω d ) : v d | T ∈ P α d (T d ) for T d ∈ T d,h , v d · n = 0 for E d h ∩ Γ d },
and for the pressure variable
M d h = {q d,h ∈ L 2 0 (Ω d ) : q d,h | T d ∈ P γ d (T d ) for T d ∈ T d,h }, where γ d ≤ α d , α d − 1 ≤ γ d .
We assume that ∇ · V d h ⊂ M d h . In order to impose the continuity of the velocity on the interface, we introduce the discrete space for Lagrange multiplier.
Λ h = V d h ·
n. Now, we can define the global discrete velocity space V h and the discrete pressure space M h as follows.
V h = {v h = (v s,h , v d,h ) ∈ V s h × V d h : e∈Γ h e η(v s,h − v d,h ) · n = 0, ∀ η ∈ Λ h }, M h = M s h × M d h .
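The interface condition built into V_h is a weak (mortar-type) continuity of the normal velocity: only the moments of the jump (v_{s,h} - v_{d,h})·n against the multiplier space Λ_h vanish on each interface edge. A small illustrative check is sketched below; it represents Λ_h restricted to an edge by monomials up to a chosen degree k and the two normal traces by user-supplied callables, which is an assumption made purely for this example.

```python
import numpy as np

def weak_continuity_residuals(vs_n, vd_n, k, a=0.0, b=1.0, nq=8):
    """Moments of the normal-velocity jump on one interface edge parametrised by [a, b].

    vs_n, vd_n : callables giving the Stokes-side face trace v_{s,b}.n and the Darcy
                 trace v_d.n along the edge parameter
    k          : polynomial degree spanned by the multiplier space on this edge
    Returns the integrals int_e s^j (vs_n - vd_n) ds, j = 0..k, computed with Gauss
    quadrature; they all vanish for pairs satisfying the constraint defining V_h.
    """
    x, w = np.polynomial.legendre.leggauss(nq)        # Gauss points/weights on [-1, 1]
    s = 0.5 * (b - a) * x + 0.5 * (a + b)
    w = 0.5 * (b - a) * w
    jump = vs_n(s) - vd_n(s)
    return np.array([np.sum(w * s**j * jump) for j in range(k + 1)])

# Traces that agree only in the weak (moment) sense for k = 1: the added P2 bubble
# s^2 - s + 1/6 has vanishing 0th and 1st moments on [0, 1].
vs = lambda s: 1.0 + s + 3.0 * (s**2 - s + 1.0 / 6.0)
vd = lambda s: 1.0 + s
print(np.round(weak_continuity_residuals(vs, vd, k=1), 12))   # ~ [0, 0]
```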
3.2. Discrete Weak Operators. Next, we introduce some weak differential
operators for v s,h ∈ V s h . Definition 3.1. For any v s,h ∈ V s h , T s ∈ T s,h , the discrete weak gradient ∇ w v s,h | Ts ∈ [P β (T s )] d×d satisfies (∇ w v s,h , τ ) Ts = −(v s,0 , ∇ · τ ) Ts + v s,b , τ · n ∂Ts , ∀τ ∈ [P β (T s )] d×d . (3.2)
Analogously, we can define the discrete weak divergence.
Definition 3.2. For any $v_{s,h} \in V_h^s$, $T_s \in \mathcal{T}_{s,h}$, the discrete weak divergence $\nabla_w\cdot v_{s,h}|_{T_s} \in P_\beta(T_s)$ satisfies
$$(\nabla_w\cdot v_{s,h},\, q_{s,h})_{T_s} = -(v_{s,0},\, \nabla q_{s,h})_{T_s} + \langle v_{s,b}\cdot n,\, q_{s,h}\rangle_{\partial T_s}, \qquad \forall\, q_{s,h}\in P_\beta(T_s). \qquad (3.3)$$
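To make these definitions concrete, here is a minimal sketch of the lowest-order case on a single triangle: for a scalar weak function with constant interior and edge values and constant test polynomials, (3.2) reduces to grad_w v = |T|^{-1} Σ_e |e| v_{b,e} n_e (and (3.3) to the analogous expression with v_{b,e}·n_e for a vector-valued weak function), since the derivatives of constant test functions vanish. This degree-0 setting is below the degrees β ≥ 1 used in this paper and serves only as an illustration; the names in the code are made up for the example.

```python
import numpy as np

def weak_gradient_P0(v0, vb, vertices):
    """Discrete weak gradient of a scalar weak function {v0, vb}, lowest-order case.

    v0       : interior value (does not enter for constant test functions,
               since div tau = 0 in (3.2))
    vb       : (3,) edge values, vb[i] on the edge opposite vertex i
    vertices : (3, 2) triangle vertex coordinates

    Returns grad_w v = (1/|T|) * sum_e |e| vb_e n_e with outward unit normals n_e.
    """
    P = np.asarray(vertices, dtype=float)
    area = 0.5 * abs((P[1, 0] - P[0, 0]) * (P[2, 1] - P[0, 1])
                     - (P[2, 0] - P[0, 0]) * (P[1, 1] - P[0, 1]))
    g = np.zeros(2)
    for i in range(3):
        a, b = P[(i + 1) % 3], P[(i + 2) % 3]   # endpoints of the edge opposite vertex i
        e = b - a
        n = np.array([e[1], -e[0]])             # edge normal of length |e|
        if np.dot(n, a - P[i]) < 0:             # orient it outward
            n = -n
        g += vb[i] * n                          # vb_e * |e| * unit outward normal
    return g / area

# Edge values taken from the linear function v(x, y) = 2x + 3y at the edge midpoints:
# the lowest-order weak gradient then reproduces the exact gradient (2, 3).
V = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
mid = lambda i: 0.5 * (V[(i + 1) % 3] + V[(i + 2) % 3])
vb = np.array([2.0 * mid(i)[0] + 3.0 * mid(i)[1] for i in range(3)])
print(weak_gradient_P0(0.0, vb, V))             # ~ [2. 3.]
```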
Finally, denote by D w (v s,h ) the weak strain tensor given by
D w (v s,h ) = 1 2 (∇ w v s,h + ∇ w v T s,h ). 3.3. Numerical Scheme. Define Q h = {Q 0 , Q b } the projection operator from L 2 (Ω s ) onto V s h , where Q 0 is the L 2 projection onto [P αs (T s )] 2 , ∀ T s ∈ T s,h , Q b is the L 2 projection onto [P β (e s )] 2 , ∀ e s ∈ E s h .
We are now in a position to give a numerical scheme for the coupled Stokes-Darcy problem. To this end, we define some bilinear forms in the discrete spaces. For any
$u_h = (u_{s,h}, u_{d,h})$, $v_h = (v_{s,h}, v_{d,h}) \in V_h$, $p_h = (p_{s,h}, p_{d,h})$ and $q_h = (q_{s,h}, q_{d,h}) \in M_h$, define
$$a_{s,h}(u_{s,h}, v_{s,h}) = \sum_{T_s\in\mathcal{T}_{s,h}} (2\nu D_w(u_{s,h}), D_w(v_{s,h}))_{T_s} + s(u_{s,h}, v_{s,h}),$$
$$s(u_{s,h}, v_{s,h}) = \sum_{T_s\in\mathcal{T}_{s,h}} h_{T_s}^{-1} \langle Q_b u_{s,0} - u_{s,b},\, Q_b v_{s,0} - v_{s,b}\rangle_{\partial T_s},$$
$$a_{i,h}(u_h, v_h) = \mu\, \langle K^{-\frac12} u_{s,b}\cdot\tau,\, v_{s,b}\cdot\tau\rangle_{\Gamma_h},$$
$$b_{s,h}(v_{s,h}, q_{s,h}) = -(\nabla_w\cdot v_{s,h}, q_{s,h})_{\Omega_s}, \qquad b_{d,h}(v_{d,h}, q_{d,h}) = -(\nabla\cdot v_{d,h}, q_{d,h})_{\Omega_d},$$
$$a_h(u_h, v_h) = a_{s,h}(u_{s,h}, v_{s,h}) + a_{i,h}(u_h, v_h) + a_d(u_{d,h}, v_{d,h}), \qquad b_h(v_h, q_h) = b_{s,h}(v_{s,h}, q_{s,h}) + b_{d,h}(v_{d,h}, q_{d,h}).$$
With these preparations, we give the numerical scheme as follows.
WG-MFEM Scheme 1. Seek $u_h \in V_h$, $p_h \in M_h$ such that
$$a_h(u_h, v_h) + b_h(v_h, p_h) = (f_s, v_h)_{\Omega_s}, \qquad (3.4)$$
$$b_h(u_h, q_h) = (f_d, q_h)_{\Omega_d}, \qquad (3.5)$$
for all $v_h = (v_{s,h}, v_{d,h}) \in V_h$ and $q_h \in M_h$.
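On the algebraic level, (3.4)-(3.5) lead, after choosing bases of V_h and M_h, to a saddle-point system. The following sketch only shows this block structure with random stand-in matrices of compatible sizes; an actual computation would assemble A, B, F, G from the bilinear forms and handle the zero-mean pressure constraint, so this is a schematic, not an implementation of the scheme.

```python
import numpy as np

# Schematic algebraic form of (3.4)-(3.5): with coefficient vectors U (velocity
# unknowns of V_h) and P (pressure unknowns of M_h) the scheme reads
#     [ A  B^T ] [U]   [F]
#     [ B   0  ] [P] = [G],
# where A comes from a_h(.,.), B from b_h(.,.), F from (f_s, v)_{Omega_s}
# and G from (f_d, q)_{Omega_d}.
rng = np.random.default_rng(0)
n_u, n_p = 12, 5
A = rng.standard_normal((n_u, n_u)); A = A @ A.T + n_u * np.eye(n_u)   # SPD stand-in
B = rng.standard_normal((n_p, n_u))                                    # full-rank stand-in
F = rng.standard_normal(n_u); G = rng.standard_normal(n_p)

K = np.block([[A, B.T], [B, np.zeros((n_p, n_p))]])
sol = np.linalg.solve(K, np.concatenate([F, G]))
U, P = sol[:n_u], sol[n_u:]
# The discrete inf-sup condition of Lemma 4.5 is what guarantees that this
# saddle-point matrix is invertible and the pressure is controlled.
print(np.linalg.norm(K @ sol - np.concatenate([F, G])))                # ~ 0
```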
Existence and Uniqueness.
In this section, we prove two important properties of the numerical scheme: the boundedness of a h (·, ·) and the inf-sup condition of b h (·, ·). The existence and uniqueness of the approximate solutions then follow from the two properties.
We first define a discrete norm on $V_h^s$ by
$$\|v_{s,h}\|_{V_h^s}^2 = 2\nu\, \|D_w(v_{s,h})\|_{\Omega_s}^2 + \sum_{T_s\in\mathcal{T}_{s,h}} h_{T_s}^{-1}\|Q_b v_{s,0} - v_{s,b}\|_{\partial T_s}^2 + \|\mu^{\frac12} K^{-\frac14}\, v_{s,b}\cdot\tau\|_{\Gamma_h}^2.$$
It is obvious that $\|\cdot\|_{V_h^s}$ is a semi-norm. In order to demonstrate that $\|\cdot\|_{V_h^s}$ is a well-defined norm on $V_h^s$, we introduce the following estimate.

Lemma 4.1. For any $v_{s,h} \in V_h^s$, we have
$$\sum_{T_s\in\mathcal{T}_{s,h}} \|\nabla v_{s,0}\|_{T_s} \leq C\, \|v_{s,h}\|_{V_h^s}.$$
Proof. From [5], we know the following discrete Korn's inequality holds.
Ts∈T s,h ∇v s,0 2 Ts ≤ C Ts∈T s,h D(v s,0 ) 2 Ts + sup m∈RM, m Γs =1 Γs mds=0 Γs v s,0 · mds 2 + es∈E s,h \Γs π e [v s,0 ] 2 es ,
where RM is the space of rigid motions, π es is the L 2 projection operator onto [P 1 (e s )] d , [·] denotes the jump on edges. Each term on the left hand of the inequality can be handled as follows.
Using the integration by parts and the definition of ∇ w on each element T s ∈ T s,h , we have that
(D(v s,0 ), D(v s,0 )) Ts = (−v s,0 , ∇ · D(v s,0 )) Ts + v s,0 , D(v s,0 ) · n ∂Ts = (−v s,0 , ∇ · D(v s,0 )) Ts + v s,b , D(v s,0 ) · n ∂Ts + v s,0 − v s,b , D(v s,0 ) · n ∂Ts = (∇ w v s,h , D(v s,0 )) Ts + Q b v s,0 − v s,b , D(v s,0 ) · n ∂Ts = (D w v s,h , D(v s,0 )) Ts + Q b v s,0 − v s,b , D(v s,0 ) · n ∂Ts .
Summing over all element T s ∈ T s,h and applying the trace inequality (A.9), the inverse inequality (A.10), we obtain
D(v s,0 ) 2 Ts ≤ C( D w v s,h Ts D(v s,0 ) Ts + Q b v s,0 − v s,b ∂Ts D(v s,0 ) ∂Ts ) ≤ C( D w v s,h Ts + h − 1 2 Ts Q b v s,0 − v s,b ∂Ts ) D(v s,0 ) Ts . Therefore, Ts∈T s,h D(v s,0 ) Ts ≤ C v s,h V s h .
For the second and the third terms, since β ≥ 1 and v s,b = 0 on Γ s , we have
sup m∈RM, m Γs =1 Γs mds=0 Γs v s,0 · mds = sup m∈RM, m Γs =1 Γs mds=0 Γs (Q b v s,0 − v s,b ) · mds ≤ C v s,h V s h , and es∈E 0 s,h π e [v s,0 ] es ≤ es∈E 0 s,h Q b [v s,0 ] es ≤ Ts∈T s,h Q b v s,0 − v s,b ∂Ts ≤ C v s,h V s h .
The proof is completed.
Lemma 4.2. · V s h provides a norm in V s h . Proof. It suffices to check the positivity property of the semi-norm · V s h . To this end, assume that v s,h V s h = 0 for some v s,h ∈ V s h . Then we obtain D w (v s,h ) = 0 on all T s ∈ T s,h , Q b v s,0 = v s,b on ∂T s , v s,b · τ = 0 on Γ. From the Lemma 4.1, we have ∇v s,0 = 0 on all T s , which implies that v s,0 = constant on every T s . Moreover, Q b v s,0 = v s,b yields v s,h is a constant in Ω s . Combining with the fact that v s,b = 0 on Γ s , we know that v s,h = 0. Now, we can define a discrete norm on V h . v h 2 V h = v s,h 2 V s h + v d,h 2 Ω d + ∇ · v d,h 2 Ω d . (4.1)
It follows from the definition of norm (4.1) and the Cauchy Schwarz inequality that coercivity and boundedness hold true for the bilinear form a h (·, ·).
Lemma 4.3. For any u h , v h ∈ V h , we have a h (v h , v h ) = v h 2 V h , ∀v h ∈ V h , ∇ · v d,h = 0, (4.2) |a h (u h , v h )| ≤ C u h V h · v h V h , ∀u h , v h ∈ V h . (4.3)
Besides the projection Q h = {Q 0 , Q b } defined in the previous section, we need another local L 2 projections, for each element T s ∈ T s,h , denote by Q h the L 2 projection onto [P β (T s )] 2×2 and by Q h the L 2 projection onto P β (T s ).
∇ w (Q h v) = Q h (∇v) ∀ v ∈ [H 1 (Ω s )] d , (4.4) ∇ w · (Q h v) = Q h (∇ · v) ∀ v ∈ H(div, Ω s ). (4.5)
The proof of this Lemma can be found in [43].
As for Darcy region, denote the velocity space
V | Ω d by V d . Then we define the MFEM interpolant Π d h : V d ∩ [H θ (Ω d )] 2 → V d h with θ > 0 satisfying [9], for any v d ∈ V d ∩ (H θ [Ω d )] 2 , (∇ · Π d h v d − v d , q d,h ) = 0, ∀q d,h ∈ M d h , (4.6) e ((Π d h v d − v d ) · n e )w d · n e ds = 0, ∀e ∈ Γ d h ∪ Γ h , ∀ w d,h ∈ V d h . (4.7)
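As an illustration of such an interpolant, the sketch below implements the lowest-order Raviart-Thomas interpolant on a single triangle (RT spaces are one of the admissible choices listed in the introduction). Its defining edge-flux conditions are the lowest-order analogue of (4.7), and the element-mean divergence property corresponding to (4.6) with constant q follows from the divergence theorem; the code and its names are an illustration, not the implementation used for the numerical results.

```python
import numpy as np

def rt0_interpolant(v, vertices, nq=4):
    """Lowest-order Raviart-Thomas interpolant on one triangle (a sketch).

    v        : callable R^2 -> R^2, the vector field to interpolate
    vertices : (3, 2) triangle vertices; edge i is the edge opposite vertex i
    Returns (coeffs, Pi_v, div): edge coefficients, a callable evaluating the
    interpolant, and its (constant) divergence.
    """
    P = np.asarray(vertices, dtype=float)
    area = 0.5 * abs((P[1, 0] - P[0, 0]) * (P[2, 1] - P[0, 1])
                     - (P[2, 0] - P[0, 0]) * (P[1, 1] - P[0, 1]))
    xg, wg = np.polynomial.legendre.leggauss(nq)          # 1D Gauss rule on [-1, 1]

    coeffs = np.zeros(3)
    for i in range(3):
        a, b = P[(i + 1) % 3], P[(i + 2) % 3]             # endpoints of edge i
        e = b - a
        n = np.array([e[1], -e[0]])
        if np.dot(n, a - P[i]) < 0:                       # outward orientation
            n = -n
        n = n / np.linalg.norm(n)
        pts = 0.5 * (1 - xg)[:, None] * a + 0.5 * (1 + xg)[:, None] * b
        flux = 0.5 * np.linalg.norm(e) * np.sum(wg * np.array([v(p) @ n for p in pts]))
        coeffs[i] = flux / np.linalg.norm(e)              # basis phi_i has phi_i.n = 1 on e_i

    def Pi_v(x):
        x = np.asarray(x, dtype=float)
        out = np.zeros(2)
        for i in range(3):
            le = np.linalg.norm(P[(i + 2) % 3] - P[(i + 1) % 3])
            out += coeffs[i] * le / (2.0 * area) * (x - P[i])   # RT0 basis for edge i
        return out

    div = sum(coeffs[i] * np.linalg.norm(P[(i + 2) % 3] - P[(i + 1) % 3]) / area
              for i in range(3))
    return coeffs, Pi_v, div

V = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
coeffs, Pi_v, div = rt0_interpolant(lambda x: np.array([x[0], -x[1]]), V)
print(div)                          # ~ 0 for the divergence-free field (x, -y)
```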
In addition, we denote by R s h the L 2 projection onto M s h , and by R d h the L 2 projection onto M d h . Next, we introduce the discrete inf-sup condition for the bilinear form b h (·, ·).
Lemma 4.5 (inf-sup). There exists a positive constant C independent of h such that
$$\sup_{v_h\in V_h} \frac{b_h(v_h, q_h)}{\|v_h\|_{V_h}} \geq C\, \|q_h\|_{M_h} \quad \text{for all } q_h \in M_h.$$
Proof. According to [4], we know that for any
q h ∈ M h , there exists a v ∈ [H 1 0 (Ω)] 2 such that ∇ · v = −q h
in Ω,
and v 1,Ω ≤ C q h 0,Ω . Note that b s,h (Q h v, q h ) = −(∇ w · Q h v, q h ) Ωs = −(Q h (∇ · v), q h ) Ωs = −(∇ · v, q h ) Ωs = q h 2 Ωs , and b d,h (v, q h ) = −(∇ · v, q h ) Ω d = q h 2 Ω d .
Next, we construct an projection operator π h :
(V ∩ [H 1 (Ω)] 2 ) → V h such that b s,h (π h v − Q h v, q h ) = 0, b d,h (π h v − v, q h ) = 0, ∀q h ∈ M h . Let π h v = (π s h v, π d h v) ∈ V s h × V d h . First, we take π s h v = Q h v. It is obvious that b s,h (π h v − Q h v, q h ) = 0.
In addition, the following estimate holds.
Q h v s V s h ≤ C v s 1,Ωs
Readers may refer to [13] for the proof of this estimate. Next, we need to define the operator $\pi^d_h v$. Consider the following auxiliary problem:
$$\nabla\cdot\nabla\phi = 0 \ \text{ in } \Omega_d, \quad \nabla\phi\cdot n = 0 \ \text{ on } \Gamma_d, \quad \nabla\phi\cdot n = (\pi^s_h v - v)\cdot n \ \text{ on } \Gamma.$$
It follows from the definition of the projection operator $Q_h$ that
$$\int_\Gamma (\pi^s_h v - v)\cdot n\, ds = \int_\Gamma (Q_b v - v)\cdot n\, ds = 0.$$
So the auxiliary problem is well-posed. Let $z=\nabla\phi$; we notice that the function $\pi^s_h v\cdot n\in H^\theta(\Gamma)$ for any $0\le\theta\le\frac12$. By elliptic regularity [25],
$$\|z\|_{\theta,\Omega_d} \le C\|\pi^s_h v - v\|_{\theta-\frac12,\Gamma}, \quad 0\le\theta\le\tfrac12. \qquad (4.8)$$
Let $w = v + z$. Then we have
$$\nabla\cdot w = \nabla\cdot(v+z) = \nabla\cdot v \ \text{ in } \Omega_d, \qquad (4.9)$$
$$w\cdot n = v\cdot n + z\cdot n = \pi^s_h v\cdot n \ \text{ on } \Gamma. \qquad (4.10)$$
Define $\pi^d_h v := \Pi^d_h w$. From the definition of $\Pi^d_h$, we know that
$$b_{d,h}(\pi^d_h v, q_{d,h}) = b_{d,h}(\Pi^d_h w, q_{d,h}) = b_{d,h}(w, q_{d,h}) = -(\nabla\cdot w, q_{d,h}) = -(\nabla\cdot v, q_{d,h}) = b_{d,h}(v, q_{d,h}), \quad \forall q_{d,h}\in M^d_h.$$
So the interpolant operator $\pi^d_h$ satisfies $b_{d,h}(\pi^d_h v - v, q_{d,h}) = 0$. Next, we prove that $\pi_h v\in V_h$.
For any $e\in\Gamma_h$ and $\eta\in\Lambda_h$, using (4.7), (4.9) and (4.10), we have
$$\int_e \pi^d_h v\cdot n\,\eta\, ds = \int_e \Pi^d_h w\cdot n\,\eta\, ds = \int_e w\cdot n\,\eta\, ds = \int_e \pi^s_h v\cdot n\,\eta\, ds.$$
It remains to give the bound of the operator $\pi^d_h$. From Lemma A.2 and (4.8),
$$\|\pi^d_h v\|_{V^d_h} = \|\Pi^d_h w\|_{V^d_h} \le \|\Pi^d_h v\|_{V^d_h} + \|\Pi^d_h z\|_{V^d_h} \le C\big(\|v\|_{1,\Omega_d} + \|z\|_{\theta,\Omega_d}\big) \le C\big(\|v\|_{1,\Omega_d} + \|(\pi^s_h v - v)\cdot n\|_{\Gamma}\big).$$
Using the trace inequality (A.9) and the projection inequality (A.2), we have
$$\|(\pi^s_h v - v)\cdot n\|_e \le \|Q_0 v - v\|_e \le Ch^{-\frac12}\|Q_0 v - v\|_{T_s} + Ch^{\frac12}\|\nabla(Q_0 v - v)\|_{T_s} \le Ch^{\frac12}\|v\|_{1,T_s}.$$
Thus, we obtain $\|\pi^d_h v\|_{V^d_h}\le C\|v\|_{1,\Omega}$. Furthermore, $\|\pi_h v\|_{V_h}\le C\|v\|_{1,\Omega}$.
Combining the above estimates, we get
$$\frac{b_h(\pi_h v, q_h)}{\|\pi_h v\|_{V_h}} = \frac{b_{s,h}(Q_h v, q_h) + b_{d,h}(\Pi^d_h v, q_h)}{\|\pi_h v\|_{V_h}} \ge C\,\frac{b_{s,h}(Q_h v, q_h) + b_{d,h}(\Pi^d_h v, q_h)}{\|v\|_{1,\Omega}} \ge C\,\frac{\|q_h\|^2_{\Omega}}{\|v\|_{1,\Omega}} \ge C\|q_h\|_{\Omega},$$
which completes the proof.
Lemma 4.6. For $v\in [H^1(\Omega)]^2$ such that $v|_{\Omega_d}\in [H^{\gamma_d+2}(\Omega_d)]^2$, there exists $\tilde v\in V_h$ such that
$$b_{d,h}(v-\tilde v, q_{d,h}) = 0, \quad \forall q_{d,h}\in M_h, \qquad (4.11)$$
$$\|v-\tilde v\|_{V^d_h} \le C\big(h_d^{\alpha_d+1}|v|_{\alpha_d+1,\Omega_d} + h_d^{\gamma_d+1}|\nabla\cdot v|_{\gamma_d+1,\Omega_d} + h_s^{\alpha_s+\frac12}\|v\|_{\alpha_s+1,\Omega_s}\big). \qquad (4.12)$$

Proof. Recall the interpolant $\pi^d_h v$ constructed in Lemma 4.5; then (4.11) can be deduced directly. We only need to prove (4.12). From the definition of $\pi^d_h$, we know
$$\|v - \pi^d_h v\|_{V^d_h} = \|v - \Pi^d_h w\|_{V^d_h} \le \|v - \Pi^d_h v\|_{V^d_h} + \|\Pi^d_h(w-v)\|_{V^d_h}. \qquad (4.13)$$
Using Lemma A.2, the first term on the right-hand side of (4.13) can be estimated as follows:
$$\|v - \Pi^d_h v\|_{V^d_h} \le C\big(h_d^{\alpha_d+1}|v|_{\alpha_d+1,\Omega_d} + h_d^{\gamma_d+1}|\nabla\cdot v|_{\gamma_d+1,\Omega_d}\big).$$
For the second term, using estimate (4.8) and (A.1),
$$\|\Pi^d_h(w-v)\|_{V^d_h} = \|\Pi^d_h z\|_{V^d_h} \le \|z\|_{\theta,\Omega_d} \le \|(\pi^s_h v - v)\cdot n\|_{0,\Gamma} \le Ch_s^{\alpha_s+1/2}\|v\|_{\alpha_s+1,\Omega_s}.$$
Combining the estimates above, we complete the proof.

Lemma 4.7. The numerical scheme (3.4)-(3.5) has a unique solution.

Proof. Since the problem is finite-dimensional, it suffices to show that the solution is unique. Set $f_s=0$, $f_d=0$. Then, taking $v_h=u_h$ and $q_h=p_h$, we have
$$a_h(u_h,u_h) = 0, \quad \text{and} \quad b_h(u_h,q_h) = 0 \quad \forall q_h\in M_h.$$
Combining with the results above, we know that a h (u h , u h ) = 0, which implies that u h = 0. Furthermore, we derive that
b(v h , p h ) = 0 ∀ v h ∈ V h .
From the inf-sup condition we know p h = 0.
5. Error Estimates.
In this section, we derive the optimal error estimates for the velocity in the energy norm and the pressure in the L 2 norm.
Lemma 5.1. For any $w_s\in [H^1(\Omega_s)]^2$, $\rho_s\in H^1(\Omega_s)$, and $v_{s,h}\in V^s_h$, it follows that
$$(D_w(Q_h w_s), D_w(v_{s,h}))_{\Omega_s} = (D(w_s), D(v_{s,0}))_{\Omega_s} - \sum_{T_s\in \mathcal{T}_{s,h}} \langle v_{s,0}-v_{s,b}, Q_h D(w_s)\cdot n\rangle_{\partial T_s}, \qquad (5.1)$$
$$(\nabla_w\cdot v_{s,h}, R_h\rho_s)_{\Omega_s} = (\nabla\cdot v_{s,0}, \rho_s)_{\Omega_s} - \sum_{T_s\in \mathcal{T}_{s,h}} \langle v_{s,0}-v_{s,b}, (R_h\rho_s)\, n\rangle_{\partial T_s}. \qquad (5.2)$$
Proof. According to the commutative property (4.4), we know that
D w (Q h u s ) = Q h D(u s ) is symmetric. Thus, (D w (Q h w s ), D w v s,h ) Ts = (Q h D(w s ), D w v s,h ) Ts = (Q h D(w s ), ∇ w v s,h ) Ts .
It follows from the definition of weak gradient (3.2) and the integration by parts, we have
Ts∈T s,h (D w (Q h w s ), ∇ w v s,h ) Ts = Ts∈T s,h (−(∇ · (Q h D(w s )), v s,0 ) Ts + v s,b , Q h D(w s )n ∂Ts ) = Ts∈T s,h ((Q h D(w s ), ∇v s,0 ) Ts − v s,0 − v s,b , Q h D(w s )n ∂Ts ) = Ts∈T s,h ((Q h D(w s ), D(v s,0 )) Ts − v s,0 − v s,b , Q h D(w s )n ∂Ts ).
The proof of (5.2) is similar, so we omit details here.
With the above lemma, we can establish the error equations. By the regularity of the true solution u s and p s , and the fact that v s,b = 0 on Γ s h ,
a s,h (Q h u s − u s,h , v s,h ) + a i,h (Q h u s − u s,h , v s,h ) + b s,h (v s,h , R s h p s − p s,h ) (5.3) = l 1 (u s , v s,h ) − l 2 (p s , v s,h ) − l 3 (u s , v s,h ) − p d , v s,b · n Γ h + s(Q h u s , v s,h ), a d (u d − u d,h , v d,h ) + b d (v d,h , p d − p d,h ) = p d , v d,h · n Γ h , (5.4) b(Q h u s − u s,h , q s,h ) = 0, (5.5) b(u d − u d,h , q d,h ) = 0 (5.6) for any v ∈ V h and q h ∈ M h , where l 1 (u s , v s,h ) = Ts∈T s,h 2ν(v s,0 − v s,b ), D(u s ) · n − (Q h D(u s )) · n ∂Ts l 2 (p s , v s,h ) = Ts∈T s,h v s,0 − v s,b , (p s − R s h p s )n ∂Ts l 3 (u s , v s,h ) = e∈Γ h µK − 1 2 (u s − Q b u s ) · τ , v s,b · τ e .(f s , v s,0 ) Ωs = Ts∈T s,h (2νD(u s ), D(v s,0 )) Ts − Ts∈T s,h (∇ · v s,0 , p s ) Ts − Ts∈T s,h 2ν(v s,0 − v s,b ), D(u s ) · n ∂Ts + Ts∈T s,h v s,0 − v s,b , p s n ∂Ts − e∈Γ h v s,b , T(u s , p s )n e .
From the interface conditions, we know that
− e∈Γ h v s,b , T(u s , p s )n e = p d , v s,b · n Γ h + µK − 1 2 u s τ , v s,b · τ Γ h .
Applying Lemma (5.1) yields
(f s , v s,0 ) Ωs = Ts∈T s,h (2νD w (Q h u s ), D w (v s,h )) Ts − Ts∈T s,h (∇ w · v s,h , R s h p s ) Ts − Ts∈T s,h 2ν(v s,0 − v s,b ), D(u s ) · n − (Q h D(u s )) · n ∂Ts + Ts∈T s,h v s,0 − v s,b , (p s − R s h p s )n ∂Ts + e∈Γ h p d , v s,b · n e + e∈Γ h µK −1/2 u s · τ , v s,b · τ e = a s,h (Q h u s , v s,h ) + a i,h (Q h u s , v s,h ) + b s,h (v s,h , R s h p s ) − s(Q h u s , v s,h ) − Ts∈T s,h 2ν(v s,0 − v s,b ), D(u s ) · n − (Q h D(u s )) · n ∂Ts + Ts∈T s,h v s,0 − v s,b , (p s − R s h p s )n ∂Ts + e∈Γ h p d , v s,b · n e + e∈Γ h µK −1/2 (u s − Q h u s ) · τ , v s,b · τ e .
Therefore, we have
a s,h (Q h u s , v s,h ) + a i,h (Q h u s , v s,h ) + b s,h (v s,h , R s h p s ) = (f s , v s,0 ) Ωs + s(Q h u s , v s,h ) + T ∈T s,h 2ν(v s,0 − v s,b ), D(u s ) · n − (Q h D(u s )) · n ∂Ts − T ∈T s,h v s,0 − v s,b , (p s − R s h p s )n ∂Ts − e∈Γ h p d , v s,b · n e − e∈Γ h µK −1/2 (u s − Q h u s ) · τ , v s,b · τ e .
Using the definition of Q h and Q h , we have
b s,h (Q h u, q h ) = −(∇ w · (Q h u), q h ) = −(Q h (∇ · u), q h ) = (∇ · u, q h ) = 0.
As for the Darcy's law (2.5), multiplying a test function v d,h ∈ V d h and using integration by parts on the Darcy region yields Assume that u| Ωi ∈ [H αi+1 (Ω)] 2 , p| Ωi ∈ H γi+1 (Ω s ), i = s, d. Let (u h , p h ) be the discrete solutions of (3.4) − (3.5). Then the following estimate holds.
0 = (K −1 u d , v d,h ) + (∇p d , v d,h ) = (K −1 u d , v d,h ) − (p d , ∇ · v d,h ) − p d , v d,h · n Γ = a d,h (u d , v d,h ) + b d,h (v d,h , p d ) − p d , v d,h · n Γ h , which means that a d (u d , v d,h ) + b d,h (v d,h , p d ) = p d , v d,h · n Γ h .
It is obvious that
b d,h (u d , q h ) = (f d , q h ).Q h u s − u s,h V s h + u d − u d,+ C(h α d u d α d +1 + h γ d +1 u d γ d +2,Ω d + h γ d +1/2 d h 1/2 s p d γ d +1,Ω d ).
Proof. Adding equation (5.4) to (5.3), we have
a s,h (Q h u s − u s,h , v s,h ) + b s,h (v s,h , R s h p s − p s,h ) (5.8) + a i,h (Q h u s − u s,h , v s,h ) + a d (ũ d − u d,h , v d,h ) + b d (v d,h , R d h p d − p d,h ) = l 1 (u s , v s,h ) − l 2 (p s , v s,h ) − l 3 (u s , v s,h ) + s(Q h u s , v s,h ) + a d (ũ d − u d , v d,h ) + b d (v d,h , R d h p d − p d ) − p d , (v s,b − v d,h ) · n Γ h .
From the Lemma 4.6 and equation (5.6), we get
b d (u d,h −ũ d , q d,h ) = 0, ∀q h ∈ M d h . Since ∇ · V d h ⊂ M d h , ∇ · (u d,h −ũ d ) = 0, in Ω d . Define e s,h = Q h u s −u s,h , e d,h =ũ d −u d,h , s,h = R s h p s −p s,h and d,h = R d h p d −p d,h . Taking v s,h = e s,h , v d,h = e d,+ a d (ũ d − u d , e d,h ) − p d , (v s,b − v d,h ) · n Γ h .
We define e h = (e s,h , e d,h ). Making use of coercivity (4.2) and noting that ∇·e d,h = 0 in Ω d , we obtain
e h 2 V h = a s,h (e s,h , e s,h ) + a i,h (e s,h , e s,h ) + a d (e d,h , e d,h ) = l 1 (u s , e s,h ) − l 2 (p s , e s,h ) − l 3 (u s , e s,h ) + s(Q h u s , e s,h ) + a d (ũ d − u d , e d,h ) − p d , (e s,b − e d,h ) · n Γ h .
Next, we are going to estimate each term on the right-hand side of the above equation one by one. It follows from (A.14) − (A.17) that
l 1 (u s , e s,h ) − l 2 (p s , e s,h ) − l 3 (u s , e s,h ) + s(Q h u s , e s,h ) ≤ C(h β+1 s u s β+2,Ωs + h γs+1 s p s γs+1,Ωs + h β+1 s u s β+1,Γ + h αs s u s αs+1,Ωs ) e s,h V s h .
Using the Cauchy Schwarz inequality and (4.12), we have
a d (ũ d − u d , e d,h ) ≤ C ũ d − u d V d h · e d,h V d h ≤ C(h α d d u d α d +1,Ω d + h γ d +1 d u d γ d +2 + h αs+ 1 2 s u s αs+1,Ωs ) e d,h V d h .
Finally, to estimate p d , (e s,b − v d,h ) · n Γ h , we define a L 2 projection R e h onto Λ h as follows.
p d , λ h Γ h = R e h p d , λ h Γ h , ∀λ h ∈ Λ h . Since e h = (e s,h , e d,h ) ∈ V h , from the definition of V h , we know that e∈Γ h e η(e s,b − e d,h ) · n = 0, ∀η ∈ Λ h .
Combining with the fact that
R e h p d ∈ Λ h , e∈Γ h p d , (e s,b − e d,h ) · n e = e∈Γ h p d − R e h p d , (e s,b − e d,h ) · n e
Noting that e d,h · n ∈ Λ h , so we have
e∈Γ h p d , (e s,b − e d,h ) · n e = e∈Γ h p d − R e h p d , (e s,b ) · n e
For any constant vector c e , using the property of R e h , the trace inequality (A.9) and Lemma (4.1), we obtain
e∈Γ h p d − R e h p d , (e s,b ) · n e = e∈Γ h p d − R e h p d , (e s,b − c e ) · n e ≤ e∈Γ h p d − R e h p d e e s,b − c e e ≤ e∈Γ h p d − R d h p d e e s,b − c e e ≤ Ch γ d +1/2 d p d γ d +1,Ω d Ts∈T s,h ( e s,b − Q b e s,0 ∂Ts + Q b e s,0 − c e ∂Ts ) ≤ Ch γ d +1/2 d p d γ d +1,Ω d h 1/2 s e s,h V s h + Ts∈T s,h C(h −1/2 s e s,0 − c e Ts + h 1/2 s ∇e s,0 Ts ) ≤ Ch γ d +1/2 d p d γ d +1,Ω d h 1/2 s e s,h V s h .
Combining the above estimates, we obtain
e s,h V s h + u d − u d,h V d h = e s,h V s h + e d,h V d h + u d −ũ d V d h ≤ C(h+ C(h α d u d α d +1,Ω d + h γ d +1 d u d γ d +2 + h γ d +1/2 d h 1/2 s p d γ d +1,Ω d ).
which completes the proof of the theorem.
+ C(h α d u d α d +1,Ω d + h γ d +1 d u d γ d +2 + h γ d +1/2 d h 1/2 s p d γ d +1,Ω d ).
Proof. The error equation (5.8) can be written as
b s,h (v s,h , R s h p s − p s,h ) + b d (v d,h , R d h p d − p d,h ) = −a s,h (Q h u s − u s,h , v s,h ) − a i,h (Q h u s − u s,h , v s,h ) + a d (u d,h − u d , v d,h ) + l 1 (u s , v s,h ) − l 2 (p s , v s,h ) − l 3 (u s , v s,h ) + s(Q h u s , v s,h ) + b d (v d,h , R d h p d − p d ) − p d , (v s,b − v d,h ) · n s Γ h . From the definition of R d h , we know that b d (v d,h , R d h p d − p d ) = 0. Thus, b s,h (v s,h , R s h p s − p s,h ) + b d,h (v d,h , R d h p d − p d,h ) ≤ C Q h u s − u s,h V s h v s,h V s h + C u d,h − u d V d h v d,h V d h + C(h+ C(h α d u d α d +1,Ω d + h γ d +1 d u d γ d +2 + h γ d +1/2 d h 1/2 s p d γ d +1,Ω d ).
Finally, using the estimate (A.8), we have
R s h p s − p s Ωs + p d − p h,d Ω d ≤ C(h β+1+ C(h α d u d α d +1,Ω d + h γ d +1 d u d γ d +2 + h γ d +1/2 d h 1/2 s p d γ d +1,Ω d ),
which completes the proof.
6. Numerical Test. In this section, we use two examples to verify our theoretical results on the WG-MFEM scheme for the Stokes-Darcy problem.
In the first example, we solve the following coupled problem on {Ω s = (0, π) × (0, π)} ∪ {Ω d = (0, π) × (−π, 0)} and the interface Γ = (0, π) × {0}: We plot the velocity field (u s & u d ) in Figure 6.1.
$$-\nabla\cdot(\nabla u_s + \nabla^T u_s) + \nabla p_s = f_s \quad \text{in } \Omega_s, \qquad (6.1)$$
$$-\nabla\cdot(\nabla p_d) = 0 \quad \text{in } \Omega_d, \qquad (6.2)$$
$$\begin{pmatrix} u_s\cdot n \\ (-\nabla u_s - \nabla^T u_s + p_s I)n\cdot n \\ (-\nabla u_s - \nabla^T u_s + p_s I)n\cdot\tau \end{pmatrix} = \begin{pmatrix} u_d\cdot n \\ p_d \\ u_s\cdot\tau \end{pmatrix} \quad \text{on } \Gamma. \qquad (6.3)$$
In the computation, the first level grid consists of four triangles, cutting each of the two rectangles (see Figure 6.1) into two triangles by the north-west to south-east diagonal line. Then, each subsequent grid is a bi-sectional refinement. We apply the weak Galerkin P_k finite element method for u_s and p_s and the mixed BDM P_k finite element method for computing u_d and p_d in solving (6.4). The errors and numerical orders of convergence for the unknown functions in various norms are reported in Tables 6.1-6.4. We can see that all numerical solutions are convergent of optimal order, as proved in our two theorems. Because of the coupling, the elliptic regularity for the Stokes-Darcy problem is not known. Partially for this reason, one order higher L^2 convergence for the velocity cannot be proved, or may be proved under some unknown conditions. It does appear, for this example but not for the next example, in Tables 6.1-6.4. We still call such a phenomenon one-order superconvergence for the velocity in L^2 by the WG-BDM P_k elements (k = 1, 2, 3, 4). The velocity field (u_s, u_d) of the second example is plotted in Figure 6.2. The computational grids are the same as those in the last example, described above. We list the orders of convergence in Tables 6.5-6.8, by the P_1, P_2, P_3 and P_4 WG-BDM coupled finite element methods. The results confirm the two theorems proved here. Like the computation for the first example, one-order superconvergence is obtained in the P_1 and P_3 WG-BDM element velocity solutions in the L^2-norm. Unlike the first example, this example does not have an L^2-superconvergence for the P_2 and the P_4 WG-BDM coupled elements. To see the superconvergence in the other cases, we plot the solution and the error for the P_3 coupled element in Figures 6.3-6.5.
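For reference, the numerical orders of convergence reported in the tables can be obtained from raw error values on successive bisection refinements via the standard rate formula k = log(e_coarse/e_fine)/log(2). Here is a minimal Python sketch of that computation; it is our own illustration (not part of the original computation), and the error values in it are placeholders rather than data from the tables.

```python
# Minimal sketch (not from the paper): observed convergence order between two
# successive bisection refinements, k = log(e_coarse / e_fine) / log(ratio).
# The error values below are illustrative placeholders, not data from the tables.
import math

def observed_orders(errors, refinement_ratio=2.0):
    """Observed order between consecutive refinement levels of an error sequence."""
    return [math.log(errors[i] / errors[i + 1]) / math.log(refinement_ratio)
            for i in range(len(errors) - 1)]

if __name__ == "__main__":
    errs = [1.0e-1, 5.1e-2, 2.6e-2, 1.3e-2]              # placeholder H1-type errors
    print([round(k, 2) for k in observed_orders(errs)])  # should approach 1 for P1
```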
To see if we have different L^2-convergence, we compute the second example by the coupled P_k WG vector and P_k CG scalar elements with k varying from 1 to 5. The corresponding results are recorded in Table 6.9. The observed L^2 convergence orders of the velocity are k + 1 for all polynomial degrees, as predicted by the theory of the P_k WG elements for the Stokes equations. In particular, we have another order higher L^2-convergence for the P_2 element in the Darcy region. It behaves the same as when solving a pure Darcy problem. To the best of our knowledge, there exists no general analysis for optimal error estimates of the velocity in the L^2 norm. Fortunately, some researchers have noticed this problem and made efforts for some specific schemes (such as a monolithic strongly conservative numerical scheme) [16,21]. But these works still cannot explain the L^2 convergence observed in our study. We will explore this phenomenon in future work.
7. Conclusion. In this paper, the weak Galerkin finite element method coupled with the mixed finite element method is introduced for the Stokes-Darcy problem. We designed the numerical scheme and derived the optimal error estimates in the broken H^1 norm for the velocity and in L^2 for the pressure. We found from the numerical experiments that the convergence order of the velocity in the L^2 norm is not always optimal. This phenomenon is not yet understood and will be studied in future work.
Appendix A. Some Technical Tools. In this Appendix, we introduce some technical results which have been used in the previous sections to derive the error estimates.
Lemma A.1. Let $\mathcal{T}_{s,h}$ be a finite element partition of the domain $\Omega_s$ satisfying the shape regularity assumptions specified in [42], and assume $w$ and $\rho$ are sufficiently smooth. Then, for $0\le m\le 1$ we have
$$\sum_{T\in \mathcal{T}_h} h_{T_s}^{2m}\|w - Q_0 w\|^2_{T_s} \le C h^{2(r+1)}\sum_{T_s}\|w\|^2_{r+1,T_s}, \quad 1\le r\le\alpha_s, \qquad (A.1)$$
$$\sum_{T\in \mathcal{T}_h} h_{T_s}^{2m}\|w - Q_b w\|^2_{e_s} \le C h^{2(r+1)}\sum_{T_s}\|w\|^2_{r+1,e_s}, \quad 1\le r\le\beta, \qquad (A.2)$$
$$\sum_{T\in \mathcal{T}_h} h_{T_s}^{2m}\|\nabla w - Q_h(\nabla w)\|^2_{T_s} \le C h^{2r}\sum_{T_s}\|w\|^2_{r+1,T_s}, \quad 1\le r\le\beta, \qquad (A.3)$$
$$\sum_{T\in \mathcal{T}_h} h_{T_s}^{2m}\|\rho - Q_h\rho\|^2_{T_s} \le C h^{2r}\sum_{T_s}\|\rho\|^2_{r,T_s}, \quad 1\le r\le\beta. \qquad (A.4)$$
Here C denotes a generic constant independent of the mesh size h and the functions in the estimates.
Lemma A.2. $\Pi^d_h$ satisfies the approximation properties
$$\|v_d - \Pi^d_h v_d\|_{0,T} \le Ch^m_{T_d}|v_d|_{m,T_d}, \quad 1\le m\le\alpha_d+1, \qquad (A.5)$$
$$\|\nabla\cdot(v_d - \Pi^d_h v_d)\|_{0,T} \le Ch^m_{T_d}|\nabla\cdot v_d|_{m,T_d}, \quad 0\le m\le\gamma_d+1. \qquad (A.6)$$

Lemma A.3. Let $p|_{\Omega_s}\in H^{\gamma_s}(\Omega_s)$ and $p|_{\Omega_d}\in H^{\gamma_d}(\Omega_d)$; then we have
$$\|p - R_h p\|_{m,T} \le Ch^{\gamma_s-m}|p|_{\gamma_s,T}, \quad T\in\Omega_s,\ m=0,1, \qquad (A.7)$$
$$\|p - R_h p\|_{m,T} \le Ch^{\gamma_d-m}|p|_{\gamma_d,T}, \quad T\in\Omega_d,\ m=0,1. \qquad (A.8)$$
Let $T_s$ be an element satisfying the assumptions verified in [42] with $e_s$ as a side. For any function $g\in H^1(T_s)$, the following trace inequality has been proved in [42]:
$$\|g\|^2_{e_s} \le C\big(h^{-1}_{T_s}\|g\|^2_{T_s} + h_{T_s}\|\nabla g\|^2_{T_s}\big). \qquad (A.9)$$
In particular, if $g$ is a polynomial on $T_s$ we have the inverse inequality [42]
$$\|\nabla g\|^2_{T_s} \le Ch^{-2}_{T_s}\|g\|^2_{T_s}, \qquad (A.10)$$
where $C$ is a constant that depends only on the degree of the polynomial and the dimension. Combining this with the trace inequality we further obtain
$$\|\nabla g\|^2_e \le Ch^{-1}_{T_s}\|g\|^2_{T_s}. \qquad (A.11)$$
The vector version of the trace theorem and the inverse theorem are trivial.
Lemma A.4. For any $v_{s,h}\in V^s_h$, we have
$$\sum_{T_s\in \mathcal{T}_{s,h}} \|v_{s,0}-v_{s,b}\|_{\partial T_s} \le Ch_s^{\frac12}\|v_{s,h}\|_{V^s_h}. \qquad (A.12)$$
Proof. When α s = β, (A.12) is obvious. So we only need to discuss the case that α s = β + 1. We only consider the vector valued function v S,h . From s v s,h V s h , which completes the proof.
Lemma A.5. Let $w|_{\Omega_s}\in [H^{\alpha_s}(\Omega_s)]^2$, $\rho|_{\Omega_s}\in H^{\gamma_s}(\Omega_s)$, and $v\in V_{s,h}$. Assume that the finite element partition $\mathcal{T}_{s,h}$ is shape regular. Then the estimates (A.14)-(A.17) below hold. A similar technique can be applied to the following estimate,
Fig. 2.1. Domain schematic for Stokes-Darcy coupled flow.

function is formed by the internal function $v_{s,0}$ and the boundary function $v_{s,b}$, where $v_{s,b}$ may not necessarily be related to the trace of $v_{s,0}$ on $\partial T_s$. Note that $v_{s,b}$ takes a single value on $e_s$. For convenience, we write $v_{s,h}$ as $\{v_{s,0}, v_{s,b}\}$ for short.

Lemma 5.2. Let $(u, p)$ be the solutions of (2.1)-(2.9), and let $(u_h, p_h)$ be the solutions of (3.4)-(3.5). Then the error equations (5.3)-(5.6) hold.
Proof.
Multiplying the Stokes equation (2.1) by $v_{s,0}$ in $v_{s,h}=\{v_{s,0},v_{s,b}\}\in V^s_h$ and integrating by parts over every element $T_s$ gives
$$(f_s, v_{s,0})_{\Omega_s} = \sum_{T_s\in \mathcal{T}_{s,h}} (2\nu D(u_s), \nabla v_{s,0})_{T_s} - \sum_{T_s\in \mathcal{T}_{s,h}} (\nabla\cdot v_{s,0}, p_s)_{T_s} - \sum_{T_s\in \mathcal{T}_{s,h}} \langle 2\nu v_{s,0}, D(u_s)\cdot n\rangle_{\partial T_s} + \sum_{T_s\in \mathcal{T}_{s,h}} \langle v_{s,0}, p_s n\rangle_{\partial T_s}.$$
Combining with (3.4) − (3.5), we obtain equations (5.3) − (5.6). Theorem 5.3. Let (u, p) be the solutions of the coupled problem (2.1) − (2.9).
h , q s,h = s,h and q d,h = d,h in (5.8), and combining with (5.5), we have a s,h (e s,h , e s,h ) + a i,h (e s,h , e s,h ) + a d (e d,h , e d,h ) = l 1 (u s , e s,h ) − l 2 (p s , e s,h ) − l 3 (u s , e s,h ) + s(Q h u s , e s,h )
1 Fig. 6 . 1 .
161y cos y cos x (sin 2 y − 2) sin x on Γ s , v d · n = (e −y − e y ) cos x (e −y − e y ) sin x · n on Γ d .The source functions in (6.1)-(6.2) are defined by f s = sin y cos x(5 cos y + 1) sin x(− cos 2 y + 3 2 sin 2 y − y cos y cos x (sin 2 y − 2) sin xin Ω s ,p s = sin x sin y in Ω s , u d = −(e y − e −y ) cos x −(e y + e −y ) sin x in Ω d , p d = (e y − e −y ) sin x in Ω d .The velocity field of first example, (6.4).
In the second numerical example, we solve the coupled problem (6.4) on domain{Ω s = (0, 1) × (1, 2)} ∪ {Ω d = (0, 1) × (0, 1)}.The exact solutions are u s = − cos(πx) sin(πy) sin(πx) cos(πy) in Ω s , p s = sin(πx) in Ω s , u d = −yπ cos(πx) − sin(πx) in Ω d , p d = y sin(πx) in Ω d .
Fig. 6.2. The velocity field of the second example, (6.6).

Fig. 6.3. The solution for $(u_s, u_d)_1$ and its error by $P_3$ elements on level 4, for (6.6).

Fig. 6.4. The solution for $(u_s, u_d)_2$ and its error by $P_3$ elements on level 4, for (6.6).

$$\sum_{T_s\in \mathcal{T}_{s,h}} \|\nabla v_{s,0}\|_{T_s} \le C\|v_{s,h}\|_{V^s_h}. \qquad (A.13)$$

Fig. 6.5. The solution for $(p_s, p_d)$ and its error by $P_3$ elements on level 4, for (6.6).
From the trace inequality (A.9) and the Poincaré inequality, we can obtain that
$$l_3(w_s, v_{s,h}) \le Ch^{\beta+1}\|w_s\|_{\beta+1,\Gamma}\|v_{s,h}\|_{V^s_h}, \qquad (A.16)$$
$$s(Q_h w_s, v_{s,h}) \le Ch_s^{\alpha_s}\|w_s\|_{\alpha_s+1}\|v_{s,h}\|_{V^s_h}. \qquad (A.17)$$
h
Using Cauchy Schwarz inequality, (A.12) and (A.3), we havel 1 (w s , v s,h ) = 2ν T ∈T s,h v s,0 − v s,b , D(w s ) · n − (Q h D(w s )) Ts D(w s ) − QD(w s )
u s − Q b w s ) · τ, v s,b τ e ≤ C w s − Q b w s Γ v s,Finally, from the property of Q h , trace inequality (A.9), we know thats(Q h w s , v s,h ) = T ∈T s,h h −1 Ts Q 0 w s − w s , Q b v s,0 − v s,
Table 6.1. The errors and the order $O(h^k)$ of convergence by the $P_1$ WG elements and $BDM_1$ elements, for (6.4).

Table 6.2. The errors and the order $O(h^k)$ of convergence by the $P_2$ WG elements and $BDM_2$ elements, for (6.4).

Table 6.3. The errors and the order $O(h^k)$ of convergence by the $P_3$ WG elements and $BDM_3$ elements, for (6.4).

Table 6.4. The errors and the order $O(h^k)$ of convergence by the $P_4$ WG elements and $BDM_4$ elements, for (6.4).

Table 6.5. The errors and the order $O(h^k)$ of convergence by the $P_1$ WG and $BDM_1$ coupled element, for (6.6).

Table 6.6. The errors and the order $O(h^k)$ of convergence by the $P_2$ WG and $BDM_2$ coupled element, for (6.6).

Table 6.7. The errors and the order $O(h^k)$ of convergence by the $P_3$ WG and $BDM_3$ coupled element, for (6.6).

Table 6.8. The errors and the order $O(h^k)$ of convergence by the $P_4$ WG and $BDM_4$ coupled element, for (6.6).

Table 6.9. The errors and the orders $O(h^k)$ of convergence, for (6.6). By the coupled $P_1$ WG vector and $P_1$ CG scalar element.
Sobolev Spaces. R A Adams, Academic PressNew YorkR. A. Adams, Sobolev Spaces, Academic Press, New York, 1975.
A discretization and multigrid solver for a Darcy-Stokes system of three dimensional vuggy porous media. T Arbogast, M Gomez, Comput. Geosci. 13T. Arbogast and M. Gomez, A discretization and multigrid solver for a Darcy-Stokes system of three dimensional vuggy porous media, Comput. Geosci., 13(2009), pp.331-348.
Mixed finite elements for elliptic problems with tensor coefficients as cell-centered finite differences. T Arbogast, M Wheeier, I Yotov, SIAM. J. Numer. Anal. 34T. Arbogast, M. Wheeier, I. Yotov, Mixed finite elements for elliptic problems with tensor coefficients as cell-centered finite differences , SIAM. J. Numer. Anal., 34 (1997), pp. 828- 852.
On the existence, uniqueness and approximation of saddle-point problems arising from Lagrangian multipliers. F Brezzi, Rev. Fr. caise Autom. Informat. Rech. Op'erationnelle S'er. Rouge. 8F. Brezzi, On the existence, uniqueness and approximation of saddle-point problems arising from Lagrangian multipliers, Rev. Fr. caise Autom. Informat. Rech. Op'erationnelle S'er. Rouge, 8 (1974), pp.129-151.
Korn's inequalities for piecewise H1 vector fields. S C Brenner, Math. Comput. 73S. C. Brenner, Korn's inequalities for piecewise H1 vector fields, Math. Comput., 73 (2003), pp.1067-1087.
Efficient rectangular mixed finite elements in two and three space variables. F Brezzi, J DouglasJr, M Fortin, L D Marini, RAIRO Modèl. Math. Anal. Numèr. 21F. Brezzi, J. Douglas, Jr., M. Fortin, and L. D. Marini, Efficient rectangular mixed finite elements in two and three space variables, RAIRO Modèl. Math. Anal. Numèr., 21 (1987), pp. 581-604.
Two families of mixed elements for second order elliptic problems. F Brezzi, J Douglas, Jr , L D Marini, Numer. Math. 88F. Brezzi, J. Douglas, Jr., and L. D. Marini, Two families of mixed elements for second order elliptic problems, Numer. Math., 88 (1985), pp. 217-235.
Boundary conditions at a naturally impermeable wall. S Beavers, D Joseph, J. Fluid. Mech. 30S. Beavers, D. Joseph, Boundary conditions at a naturally impermeable wall , J. Fluid. Mech., 30 (1967), pp. 197-207.
Mixed and hybrid finite element methods. F Brezzi, M Fortin, Springer Serises in Computational Mathematics. New YorkSpringer-VerlagF. Brezzi, M. Fortin, Mixed and hybrid finite element methods , Springer Serises in Computa- tional Mathematics, Springer-Verlag, New York, 1991.
Stokes-Darcy boundary integral solutions using preconditioners. Y Boubendir, S Tlupova, J. Comput. Phys. 228Y. Boubendir and S. Tlupova, Stokes-Darcy boundary integral solutions using precondition- ers, J. Comput. Phys., 228 (2009), pp.8627-8641.
He an X. Wang, Robin-Robin domain decomposition methods for the steady Stokes-Darcy model with Beaver-Joseph interface condition. Y Cao, M Gunzburger, X , Numer. Math. 117Y. Cao, M. Gunzburger, X. He an X. Wang, Robin-Robin domain decomposition methods for the steady Stokes-Darcy model with Beaver-Joseph interface condition , Numer. Math., 117(2011), pp.601-629.
Preconditioning techniques for a mixed Stokes/Darcy model in porous media applications. M Cai, M Mu, J Xu, J. Comput. Appl. Math. 233M. Cai, M. Mu, J. Xu, Preconditioning techniques for a mixed Stokes/Darcy model in porous media applications, J. Comput. Appl. Math., 233(2009), pp.346-355.
Weak Galerkin method for the coupled Darcy-Stokes flow. W Chen, F Wang, Y Wang, IMA. J. Numer. Anal. 36W. Chen, F. Wang, Y. Wang, Weak Galerkin method for the coupled Darcy-Stokes flow , IMA. J. Numer. Anal., 36 (2016), pp. 897-921.
Mathematical and numerical models for coupling surface and groundwater flows. M Discacciati, E Miglio, A Quarteroni, Appl. Numer. Math. 43M. Discacciati, E. Miglio, A. Quarteroni, Mathematical and numerical models for coupling surface and groundwater flows, Appl. Numer. Math., 43(2002), pp. 57-74.
Simulation of coupled viscous and porous flow problems. D K Gartling, C E Hickox, R C Givler, Comp. Fluid Dynamics. 7D. K. Gartling, C. E. Hickox, R. C. Givler, Simulation of coupled viscous and porous flow problems, Comp. Fluid Dynamics, 7(1996), pp. 23-48.
Error analysis for a monolithic discretization of coupled Darcy and Stokes problems. V Girault, G Kanschat, B Rivière, J. Numer. Math. 22V. Girault, G. Kanschat, B. Rivière, Error analysis for a monolithic discretization of coupled Darcy and Stokes problems, J. Numer. Math., 22(2014), pp. 109-142.
Numerical analysis of coupled Stokes/Darcy flows in industrial filtrations. N Hanspal, A Waghode, V Nassehi, R Wakeman, Transport Porous Med. 64N. Hanspal, A. Waghode, V. Nassehi, R. Wakeman, Numerical analysis of coupled Stokes/Darcy flows in industrial filtrations, Transport Porous Med., 64 (2006), pp. 73-101.
Q Hong, F Wang, S Wu, J Chao, A unified study of continuous and discontinuous Galerkin methods. 62Q. Hong, F. Wang, S. Wu, J. Chao, A unified study of continuous and discontinuous Galerkin methods, Sci. China Math., 62 (2019), pp. 1-32.
Q Hong, J Xu, arXiv:1805.09670Uniform stability and error analysis for some discontinuous Galerkin methods. Q. Hong, J. Xu, Uniform stability and error analysis for some discontinuous Galerkin meth- ods, arXiv:1805.09670.
On the interface boundary condition of Beavers, Joseph, and Saffman. W Jäger, A Ikelić, SIAM J. Appl. Math. 60W. Jäger, A.M ikelić , On the interface boundary condition of Beavers, Joseph, and Saffman , SIAM J. Appl. Math., 60 (2000), pp. 1111-1127.
A strongly conservative finite element method for the coupling of Stokes and Darcy flow. G Kanschat, B Rivière, J. Comput. Phys. 229G. Kanschat, B. Rivière, A strongly conservative finite element method for the coupling of Stokes and Darcy flow, J. Comput. Phys., 229 (2010), pp. 5933-5943.
A weak Galerkin finite element method for a coupled Stokes-Darcy problem on general meshes. R Li, Y Gao, J Li, Z Chen, J. Comput. Appl. Math. 334R. Li, Y. Gao, J. Li, Z. Chen, A weak Galerkin finite element method for a coupled Stokes- Darcy problem on general meshes, J. Comput. Appl. Math., 334(2018), pp. 111-127.
A weak Galerkin finite element method for a coupled Stokes-Darcy problem. R Li, J Li, X Liu, Z Chen, Numer. Methods Partial Differential Equations. 33R. Li, J. Li, X. Liu, Z. Chen, A weak Galerkin finite element method for a coupled Stokes-Darcy problem , Numer. Methods Partial Differential Equations, 33(2017), pp. 1352-1373.
G Lin, J Liu, F Sadre-Marandi, A comparative study on the weak Galerkin, discontinuous Galerkin, and mixed finite element methods. 273G. Lin, J. Liu, F. Sadre-Marandi, A comparative study on the weak Galerkin, discontinuous Galerkin, and mixed finite element methods, J. Comput. Appl. Math., 273(2015), pp. 346- 362.
Lions and E. Magenes, Non-Homogeneous Boundary Value Problems and Applications. J L , Springer-VerlagNew YorkJ. L. Lions and E. Magenes, Non-Homogeneous Boundary Value Problems and Applications, Springer-Verlag, New York, 1972.
Coupling fluid flow with porous media flow. W Layton, F Schieweck, I Yotov, SIAM J. Numer. Anal. 40W. Layton, F. Schieweck, and I. Yotov, Coupling fluid flow with porous media flow , SIAM J. Numer. Anal., 40 (2003), pp. 2195-2218.
Analysis of long time stability and errors of two partitioned methods for uncoupling evolutionary groundwater-surface water flows. W Layton, H Tran, C Trenchea, SIAM. I. Numer. Anal. 51W. Layton, H. Tran, C. Trenchea, Analysis of long time stability and errors of two par- titioned methods for uncoupling evolutionary groundwater-surface water flows, SIAM. I. Numer. Anal., 51(2013), pp. 248-272.
A stabilized mixed finite element method for Darcy flow. A Masud, T Hughes, Comput. Methods Appl. Mech. Engrg. 191A. Masud, T. Hughes, A stabilized mixed finite element method for Darcy flow, Comput. Methods Appl. Mech. Engrg., 191 (2002), pp. 4341-4370.
A computational study of the weak Galerkin method for second-order elliptic equations. L Mu, J Wang, Y Wang, X , Ye , Numer. Algorithms. 63L. Mu, J. Wang, Y. Wang, X, Ye, A computational study of the weak Galerkin method for second-order elliptic equations , Numer. Algorithms, 63(2012), pp. 753-777.
A weak Galerkin finite element method for the Maxwell equations. L Mu, J Wang, S Zhang, X Ye, J. Sci. Comput. 65L. Mu, J. Wang, S. Zhang, X. Ye, A weak Galerkin finite element method for the Maxwell equations, J. Sci. Comput., 65 (2015), pp. 363-386.
A two-grid method of a mixed Stokes-Darcy model for coupling fluid flow with porous media flow. M Mu, J Xu, SIAM. J. Numer. Anal. 45M. Mu and J. Xu, A two-grid method of a mixed Stokes-Darcy model for coupling fluid flow with porous media flow , SIAM. J. Numer. Anal., 45(2009), pp. 115-140.
Modelling of combined Navier-Stokes and Darcy flows in crossflow membrane filtration. V Nassehi, Chen.Eng.Sci. 53V. Nassehi, Modelling of combined Navier-Stokes and Darcy flows in crossflow membrane filtration, Chen.Eng.Sci. 53(1998), pp.1253-1265.
B Rivière, I Yotov, Locally conservative coupling of Stokes and Darcy flow. 42B. Rivière, I. Yotov, Locally conservative coupling of Stokes and Darcy flow , 42(2005), pp. 1959-1977.
Analysis of a discontinuous finite element method for the coupled Stokes and Darcy problem. B Rivière, J. Sci. Comput. 22B. Rivière, Analysis of a discontinuous finite element method for the coupled Stokes and Darcy problem , J. Sci. Comput., 22 /23(2005), pp. 479-500.
A mixed finite element method for 2nd order elliptic problems. R A Raviart, J M Thomas, Mathematical Aspects of the Finite Element Method. New YorkSpringer-Verlag606R. A. Raviart and J. M. Thomas, A mixed finite element method for 2nd order elliptic problems, in Mathematical Aspects of the Finite Element Method , Lecture Notes in Math. 606, Springer-Verlag, New York, 1977, pp.292-315.
On the boundary condition at the surface of a porous media. P Saffman, Stud. Appl. Math. 50P. Saffman, On the boundary condition at the surface of a porous media , Stud. Appl. Math., 50 (1971), pp. 292-315.
Finite element formulations for large-scale, coupled flows in adjacent porous and open fluid domains. A G Salinger, R Aris, J J Derby, Int. Jour. for Numerical Methods in Fluids. 18A. G. Salinger, R. Aris, J. J. Derby, Finite element formulations for large-scale, coupled flows in adjacent porous and open fluid domains, Int. Jour. for Numerical Methods in Fluids, 18 (1994), pp. 1185-1209.
Partitioned time stepping method for fully evolutionary Stokes-Darcy flow with Beavers-Joseph interface conditions. L Shan, H Zheng, SIAM. J. Numer. Anal. 51L. Shan, H. Zheng, Partitioned time stepping method for fully evolutionary Stokes-Darcy flow with Beavers-Joseph interface conditions , SIAM. J. Numer. Anal., 51(2013), pp.813-839.
Boundary integral solutions of coupled Stokes and Darcy flow. S Tlupova, R Cortez, J. Comput. Phys. 228S. Tlupova, R. Cortez, Boundary integral solutions of coupled Stokes and Darcy flow , J. Comput. Phys., 228 (2009), pp. 158-179.
Domain decomposition for coupled Stokes and Darcy flow. D Vassilev, C Wang, I Yotov, Comput. Mehods Appl. Mech. Eng. 268D. Vassilev, C. Wang, and I. Yotov, Domain decomposition for coupled Stokes and Darcy flow , Comput. Mehods Appl. Mech. Eng., 268(2014), pp.264-283.
A weak Galerkin finite element method for second-order elliptic problems. J Wang, X Ye, J. Comput. Appl. Math. J. Wang, X. Ye, A weak Galerkin finite element method for second-order elliptic problems , J. Comput. Appl. Math., 241 (2013), pp. 103-115.
A weak Galerkin mixed finite element method for second order elliptic problems. J Wang, X Ye, Math. Comput. 83J. Wang and X. Ye, A weak Galerkin mixed finite element method for second order elliptic problems, Math. Comput., 83 (2014), pp.2101-2126.
A weak Galerkin finite element method for the Stokes equations. J Wang, X Ye, Adv. Comput. Math. 42J. Wang and X. Ye, A weak Galerkin finite element method for the Stokes equations, Adv. Comput. Math., 42(2016), pp.155-174.
A divergence free weak virtual element method for the Stokes-Darcy problem on general meshes. G Wang, F Wang, L Chen, Y He, Comput. Methods Appl. Mech. Engrg. 344G. Wang, F. Wang, L. Chen, Y. He, A divergence free weak virtual element method for the Stokes-Darcy problem on general meshes, Comput. Methods Appl. Mech. Engrg, 344(2019), pp.998-1020.
A locking-free weak Galerkin finite element method for elasticity problems in the primal formulation. C Wang, J Wang, R Wang, R Zhang, J. Comput. Appl. Math. 307C. Wang, J. Wang, R. Wang, R. Zhang, A locking-free weak Galerkin finite element method for elasticity problems in the primal formulation, J. Comput. Appl. Math., 307(2016), pp. 346-366.
The weak Galerkin method for solving the incompressible Brinkman flow. X Wang, Q Zhai, R Zhang, J. Comput. Appl. Math. 302X. Wang, Q. Zhai, R. Zhang, The weak Galerkin method for solving the incompressible Brinkman flow , J. Comput. Appl. Math., 302 (2016), pp. 171-185.
A stable weak Galerkin finite element method for Stokes problem. T Zhang, L Tao, J. Comput. Appl. Math. 333T. Zhang, L. Tao, A stable weak Galerkin finite element method for Stokes problem, J. Comput. Appl. Math., 333(2018), pp. 235-246.
A new weak Galerkin finite element scheme for the biharmonic equations. R Zhang, Q Zhai, J. Sci. Comput. 64R. Zhang, Q. Zhai, A new weak Galerkin finite element scheme for the biharmonic equations, J. Sci. Comput., 64 (2015), pp. 559-585.
A new weak Galerkin finite element scheme for the Brinkman model. Q Zhai, R Zhang, L Mu, Commun. Coumput. Phys. 19Q. Zhai, R. Zhang, L. Mu, A new weak Galerkin finite element scheme for the Brinkman model, Commun. Coumput. Phys., 19(2016), pp. 1409-1434.
| []
|
[
"Information Flow in Social Groups",
"Information Flow in Social Groups"
]
| [
"Fang Wu \nInformation Dynamics Lab, HP Laboratories\n1501 Page Mill Road94304-1126CA\n",
"Bernardo A Huberman \nInformation Dynamics Lab, HP Laboratories\n1501 Page Mill Road94304-1126CA\n",
"Lada A Adamic \nInformation Dynamics Lab, HP Laboratories\n1501 Page Mill Road94304-1126CA\n",
"Joshua R Tyler \nInformation Dynamics Lab, HP Laboratories\n1501 Page Mill Road94304-1126CA\n"
]
| [
"Information Dynamics Lab, HP Laboratories\n1501 Page Mill Road94304-1126CA",
"Information Dynamics Lab, HP Laboratories\n1501 Page Mill Road94304-1126CA",
"Information Dynamics Lab, HP Laboratories\n1501 Page Mill Road94304-1126CA",
"Information Dynamics Lab, HP Laboratories\n1501 Page Mill Road94304-1126CA"
]
| []
| We present a study of information flow that takes into account the observation that an item relevant to one person is more likely to be of interest to individuals in the same social circle than those outside of it. This is due to the fact that the similarity of node attributes in social networks decreases as a function of the graph distance. An epidemic model on a scale-free network with this property has a finite threshold, implying that the spread of information is limited. We tested our predictions by measuring the spread of messages in an organization and also by numerical experiments that take into consideration the organizational distance among individuals. | 10.1016/j.physa.2004.01.030 | [
"https://arxiv.org/pdf/cond-mat/0305305v2.pdf"
]
| 14,556,814 | cond-mat/0305305 | a5438cccda214e6e2af9495bb76efd2a1d91a179 |
Information Flow in Social Groups
22 May 2003
Fang Wu
Information Dynamics Lab, HP Laboratories
1501 Page Mill Road94304-1126CA
Bernardo A Huberman
Information Dynamics Lab, HP Laboratories
1501 Page Mill Road94304-1126CA
Lada A Adamic
Information Dynamics Lab, HP Laboratories
1501 Page Mill Road94304-1126CA
Joshua R Tyler
Information Dynamics Lab, HP Laboratories
1501 Page Mill Road94304-1126CA
Information Flow in Social Groups
22 May 2003
We present a study of information flow that takes into account the observation that an item relevant to one person is more likely to be of interest to individuals in the same social circle than those outside of it. This is due to the fact that the similarity of node attributes in social networks decreases as a function of the graph distance. An epidemic model on a scale-free network with this property has a finite threshold, implying that the spread of information is limited. We tested our predictions by measuring the spread of messages in an organization and also by numerical experiments that take into consideration the organizational distance among individuals.
The problem of information flows in social organizations is relevant to issues of productivity, innovation and the sorting out of useful ideas out of the general chatter of a community. How information spreads determines the speed with which individuals can act and plan their future activities. In particular, email has become the predominant means of communication in the information society. It pervades business, social and scientific exchanges and as such it is a highly relevant area for research on communities and social networks. Not surprisingly, email has been established as an indicator of collaboration and knowledge exchange [1,2,3,4,5]. Email is also a good medium for research because it provides plentiful data on personal communication in an electronic form.
Since individuals tend to organize both formally and informally into groups based on their common activities and interests, the way information spreads is affected by the topology of the interaction network, not unlike the spread of a disease among individuals. Thus one would expect that epidemic models on graphs are relevant to the study of information flow in organizations. In particular, recent work on epidemic propagation on scale free networks found that the threshold for an epidemic is zero, implying that a finite fraction of the graph becomes infected for arbitrarily low transmission probabilities [6,7,8]. The presence of additional network structure was found to further influence the spread of disease on scale-free graphs [9,10,11].
There are, however, differences between information flows and the spread of viruses. While viruses tend to be indiscriminate, infecting any susceptible individual, information is selective and passed by its host only to individuals the host thinks would be interested in it.
The information any individual is interested in depends strongly on their characteristics. Furthermore, individuals with similar characteristics tend to associate with one another, a phenomenon known as homophily [12,13,14]. Conversely, individuals many steps removed in a social network on average tend not to have as much in common, as shown in a study [15] of a network of Stanford student homepages and illustrated in Figure 1.
We therefore introduce an epidemic model with decay in the transmission probability of a particular piece of in- formation as a function of the distance between the originating source and the current potential target. In the following analysis, we show that this epidemic model on a scale-free network has a finite threshold, implying that the spread of information is limited. We further tested our predictions by observing the prevalence of messages in an organization and also by numerical experiments that take into consideration the organizational distance among individuals.
Consider the problem of information transmission in a power-law network whose degree distribution is given by [16]
$$p_k = Ck^{-\alpha}, \qquad (1)$$
where $\alpha>1$, and $C$ is determined by the normalization condition. The generating function of the distribution is
$$G_0(x) = \sum_{k=0}^{\infty} p_k x^k = \frac{\mathrm{Li}_\alpha(x)}{\mathrm{Li}_\alpha(1)}. \qquad (2)$$
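As a small illustration (our own sketch, not code from the paper), $G_0$ and the mean degree $z = G_0'(1)$ can be evaluated directly through the polylogarithm, for instance with mpmath; the function names below are ours, and the distribution is taken over $k \ge 1$.

```python
# Minimal sketch (illustration only): evaluating G_0(x) = Li_alpha(x) / Li_alpha(1)
# for the pure power-law distribution p_k = C k^(-alpha), k >= 1, using mpmath.
from mpmath import polylog, zeta

def G0(x, alpha):
    """Generating function of the power-law degree distribution."""
    return polylog(alpha, x) / zeta(alpha)       # Li_alpha(1) = zeta(alpha) for alpha > 1

def mean_degree(alpha):
    """z = G_0'(1) = Li_{alpha-1}(1) / Li_alpha(1); finite only for alpha > 2."""
    return zeta(alpha - 1) / zeta(alpha)

if __name__ == "__main__":
    print(G0(0.5, 2.5), mean_degree(2.5))
```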
Following the analysis in [17] for the SIR (susceptible, infected, removed) model, we now estimate the probability $p^{(1)}_m$ that the first person in the community who has received a piece of information will transmit it to $m$ of their neighbors. Using the binomial distribution, we find
$$p^{(1)}_m = \sum_{k=m}^{\infty} p_k \binom{k}{m} T^m (1-T)^{k-m}, \qquad (3)$$
where the transmissibility $T$ is the probability that a person will transmit an item to a neighbor and the superscript "(1)" refers to first neighbors, those who received the information directly from the initial source. The generating function for $p^{(1)}_m$ is given by
$$G^{(1)}(x) = \sum_{m=0}^{\infty}\sum_{k=m}^{\infty} p_k \binom{k}{m} T^m (1-T)^{k-m} x^m \qquad (4)$$
$$= G_0(1+(x-1)T) = G_0(x;T). \qquad (5)$$
Suppose the transmissibility decays as a power of the distance from the initial source. We choose this weakest form of decay as the results that are obtained from it will also be valid for stronger functional forms. Then the probability that an $m$th neighbor will transmit the information to a person with whom he has contact is given by
$$T(m) = (m+1)^{-\beta}T, \qquad (6)$$
where β > 0 is the decay constant. T (m) = T at the originating node (m = 0) and decays to zero as m → ∞.
The distribution of the number of second neighbors can be written as
$$G^{(2)}(x) = \sum_k p^{(1)}_k \big[G^{(1)}_1(x)\big]^k = G^{(1)}\big(G^{(1)}_1(x)\big), \qquad (7)$$
where
$$G^{(1)}_1(x) = G_1(x; 2^{-\beta}T) = G_1\big(1+(x-1)2^{-\beta}T\big). \qquad (8)$$
Similarly, if we define $G^{(m)}(x)$ to be the generating function for the number of $m$th neighbors affected, then we have
$$G^{(m+1)}(x) = G^{(m)}\big(G^{(m)}_1(x)\big) \quad \text{for } m\ge 1, \qquad (9)$$
where
$$G^{(m)}_1(x) = G_1(x; (m+1)^{-\beta}T) = G_1\big(1+(x-1)(m+1)^{-\beta}T\big), \qquad (10)$$
and
$$G_1(x) = \frac{G_0'(x)}{G_0'(1)} = \frac{1}{z}G_0'(x). \qquad (11)$$
Or, more explicitly,
$$G^{(m+1)}(x) = G^{(1)}\Big(G^{(1)}_1\big(G^{(2)}_1(\cdots G^{(m)}_1(x))\big)\Big). \qquad (12)$$
The average number $z_{m+1}$ of $(m+1)$th neighbors is
$$z_{m+1} = G^{(m+1)\prime}(1) = G^{(m)\prime}_1(1)\, G^{(m)\prime}(1) = G^{(m)\prime}_1(1)\, z_m. \qquad (13)$$
So the condition that the size of the outbreak remains finite is given by
$$\frac{z_{m+1}}{z_m} = G^{(m)\prime}_1(1) < 1, \qquad (14)$$
or
$$(m+1)^{-\beta}\, T\, G_1'(1) < 1. \qquad (15)$$
For any given $T$, the left hand side of the inequality above goes to zero when $m\to\infty$, so the condition is eventually satisfied for large $m$. Therefore the average total size
$$s = \sum_{m=1}^{\infty} z_m \qquad (16)$$
is always finite if the transmissibility decays with distance. Note that if T is constant the average total size is infinite for values of α < 3 as shown previously.
In the real world, however, the size of a network is always finite, and in order to define a transmissibility threshold one needs an outbreak size that is compatible with the size of the whole network. Furthermore, many real-world networks have a cutoff $\kappa$ far below their size. Thus we can write for the link distribution $p_k = Ck^{-\alpha}\exp(-k/\kappa)$.
As an example, consider a network made up of 10 6 vertices. We define an epidemic to be an outbreak affecting more than 1% or 10 4 vertices. Thus for fixed α, κ and β, we can define T c as the transmissibility above which s would be made to exceed 10 4 .
The numerical result of T c as a function of α is shown in Fig. 2, where we choose κ = 100 and β = 1. It is seen that when there is no decay, T c is very near zero for α close to 2, which means that for most values of T epidemics occur. However, when the transmissibility decays, T c rises substantially. For example, T c jumps to 0.54 at α = 2, implying that the information may not spread over the network.
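A minimal numerical sketch of this kind of computation is given below. It is an assumption-laden illustration rather than the authors' code: the summation cutoffs, the bisection tolerance, and the exact way the average outbreak size is accumulated are our choices, so it should only qualitatively reproduce Fig. 2.

```python
# Minimal sketch (assumptions ours): average total outbreak size
#   s = sum_m z_m,  z_1 = T * <k>,  z_{m+1} = (m+1)^(-beta) * T * G_1'(1) * z_m,
# for p_k ~ k^(-alpha) exp(-k/kappa), plus a bisection search for the
# transmissibility T_c at which s reaches 1% of a 10^6-node network.
import numpy as np

def moments(alpha, kappa, kmax=100000):
    k = np.arange(1, kmax + 1, dtype=float)
    p = k ** (-alpha) * np.exp(-k / kappa)
    p /= p.sum()
    mean_k = (k * p).sum()
    mean_k2 = (k * k * p).sum()
    g1_prime = (mean_k2 - mean_k) / mean_k      # G_1'(1), mean excess degree
    return mean_k, g1_prime

def outbreak_size(T, alpha, kappa, beta=1.0, mmax=2000):
    mean_k, g1p = moments(alpha, kappa)
    z, s = T * mean_k, 0.0                      # z_1 and running total
    for m in range(1, mmax + 1):
        s += z
        z *= (m + 1) ** (-beta) * T * g1p       # recursion (13) with decay (6)
        if z < 1e-12:
            break
    return s

def critical_T(alpha, kappa, beta=1.0, target=1e4):
    lo, hi = 1e-6, 1.0
    for _ in range(60):                         # bisection on T
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if outbreak_size(mid, alpha, kappa, beta) < target else (lo, mid)
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    print(critical_T(alpha=2.0, kappa=100, beta=1.0))
```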
In order to validate empirically that the spread of information within a network of people is limited, and hence distinct from the spread of a virus, we gathered a sample from the mail clients of 40 individuals (30 within HP Labs, and 10 from other areas of HP, other research labs, and universities). Each volunteer executed a program that identified URLs and attachments in the messages in their mailboxes, as well as they time the messages were received. This data was cryptographically hashed to protect the privacy of the users. By analyzing the message content and headers, we restricted our data to include only messages which had been forwarded at least one time, thereby eliminating most postings to mailing lists and more closely approximating true inter-personal information spreading behavior. The median number of messages in a mailbox in our sample is 2200, indicating that many users keep a substantial portion of their email correspondence. Although some messages may have been lost when users deleted them, we assume that a majority of messages containing useful information had been retained. Figure 3 shows a histogram of how many users had received each of the 3401 attachments and 6370 URLs. The distribution shows that only a small fraction (5% of attachments and 10% of URLs) reach more than 1 recipient. Very few (41 URLs and 6 attachments) reached more than 5 individuals, a number which, in a sample of 40, starts to resemble an outbreak. In follow-up discussions with our study subjects, we were able to identify the content and significance of most of these messages. 14 of the URLs were advertisements attached to the bottom of an email by free email services such as Yahoo and MSN. These are in a sense viral, because the sender is sending them involuntarily. It is this viral strategy that was responsible for the rapid buildup of the Hotmail free email service user base. 10 URLs pointed to internal HP project or personal pages, 3 URLs were for external commercial or personal sites, and the remaining 14 could not be identified.
In our sample, one group is overrepresented, allowing us to observe both the spread of information within a close group, and the lack of information spread across groups. A number of attachments reaching four or more people were resumes circulated within one group. A few attachments were announcements passed down by higher level management. This kind of top down transmission within an organization is another path through which information can be efficiently disseminated.
Next we simulated the effect of decay in the transmission probability on the email graph at HP Labs in Palo Alto, CA. The graph was constructed from recorded logs of all incoming and outgoing messages over a period of 3 months. The graph has a nearly power-law out degree distribution, shown in Figure 4, including both internal and external nodes. Because all of the outgoing and incoming contacts were recorded for internal nodes, their in and out degrees were higher than for the external nodes for which we could only record the email they sent to and received from HP Labs. We however considered a graph with the internal and external nodes mixed (as in [18]) to demonstrate the effect of a decay on the spread of email specifically in a power-law graph.
We simulated the spread of an epidemic by selecting a random initial sender to infect and following the email log containing 120,000 entries involving over 7,000 recip- ients in the course of a week. Every time an infective individual was recorded as sending an email to someone else, they had a constant probability p of infecting the recipient. Hence individuals who email more often have a higher probability of infecting. We also assume that an individual remains infective (willing to transmit a particular piece of information) for a period of 24 hours.
Next we introduced a decay in the transmission probability p as p * d −1.75 ij , where d ij is the distance in the organizational hierarchy between two individuals. This exponent roughly corresponds to the decay in similarity between homepages shown in Figure 1. The decay represents the fact that individuals closer together in the organizational hierarchy share more common interests. Individuals have a distance of one to their immediate superiors and subordinates and to those they share a superior with. The distance between someone within HP labs and someone outside of HP labs was set to the maximum hierarchical distance of 8.
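The following Python sketch illustrates the simulation just described; the log format, field names and default values here are assumptions of ours rather than the actual HP Labs data or code.

```python
# Minimal sketch of an SIR-like pass over a time-ordered email log, with the
# transmission probability damped by organizational distance,
# p_eff = p * d(sender, recipient)^(-1.75), and a 24-hour infectious window.
# The log format and helper names are assumptions, not the authors' pipeline.
import random

DAY = 24 * 3600  # seconds

def simulate_outbreak(log, hier_dist, p, seed_node, seed_time=0.0, rng=None):
    """log: iterable of (timestamp_sec, sender, recipient), sorted by time.
    hier_dist: dict mapping (sender, recipient) -> hierarchical distance (>= 1).
    Returns the set of nodes ever infected."""
    rng = rng or random.Random(0)
    infected_until = {seed_node: seed_time + DAY}   # node -> end of infectious window
    ever_infected = {seed_node}
    for t, sender, recipient in log:
        if infected_until.get(sender, -1.0) >= t and recipient not in ever_infected:
            d = hier_dist.get((sender, recipient), 8)  # 8 = maximum hierarchical distance
            if rng.random() < p * d ** (-1.75):
                ever_infected.add(recipient)
                infected_until[recipient] = t + DAY
    return ever_infected

if __name__ == "__main__":
    # toy log of (time, sender, recipient); a real run would use the full email log
    toy_log = [(0, "a", "b"), (3600, "b", "c"), (2 * DAY, "c", "d")]
    toy_dist = {("a", "b"): 1, ("b", "c"): 1, ("c", "d"): 4}
    print(len(simulate_outbreak(toy_log, toy_dist, p=0.9, seed_node="a")))
```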
In Figure 5 we show the variation in the average outbreak size and the average epidemic size (an epidemic chosen to be any outbreak affecting more than 30 individuals). Without decay, the epidemic threshold falls below p = 0.01. With decay, the threshold is set back to p = 0.20 and the epidemic size is limited to about 50 individuals, even for p = 1.
As these results show, the decay of similarity among members of a social group has strong implications for the propagation of information among them. In particular, the number of individuals that a given email message reaches is very small, in contrast to what one would expect on the basis of a virus epidemic model on a scale free graph. The implication of this finding is that merely discovering hubs in a community network is not enough to ensure that information originating at a particular node will reach a large fraction of the community. We expect that these findings are also valid with other means of social communication, such as verbal exchanges, telephony and instant messenger systems.
FIG. 1: Average similarity of Stanford student homepages as a function of the number of hyperlinks separating them.

FIG. 2: Tc as a function of α. The three different curves, from bottom to top, are: 1) no decay in transmission probability, no exponential cutoff in the degree distribution (κ = ∞, β = 0); 2) κ = 100, β = 0; 3) κ = 100, β = 1.

FIG. 3: Number of people receiving URLs and attachments.

FIG. 4: Outdegree distribution for all senders (224,514 in total) sending email to or from the HP Labs email server over the course of 3 months. The outdegree of a node is the number of correspondents the node sent email to.

FIG. 5: Average outbreak and epidemic size as a function of the transmission probability p.
. B Wellman, Science. 2932031B. Wellman, Science 293, 2031 (2002).
S Whittaker, C Sidner, Proceedings of CHI'96 Conference on Computer Human Interaction (Logos Verlag. CHI'96 Conference on Computer Human Interaction (Logos VerlagNew York, ADDRESSS. Whittaker and C. Sidner, in Proceedings of CHI'96 Conference on Computer Human Interaction (Logos Ver- lag, New York, ADDRESS, 21996), pp. 276-283.
. R Guimerà, unpublishedR. Guimerà et al., http://arxiv.org/PS_cache/cond-mat/pdf/0211/ (unpublished).
J R Tyler, D M Wilkinson, B A Huberman, Proceedings of the International Conference on Communities and Technologies. the International Conference on Communities and TechnologiesNetherlands, ADDRESSKluwer Academic PublishersJ. R. Tyler, D. M. Wilkinson, and B. A. Huberman, in Proceedings of the International Conference on Com- munities and Technologies (Kluwer Academic Publishers, Netherlands, ADDRESS, 2003).
. J.-P Eckmann, E Moses, D Sergi, unpublishedJ.-P. Eckmann, E. Moses, and D. Sergi, http://xyz.lanl.gov/abs/cond-mat/0304433" (un- published).
. Z Dezso, A.-L Barabasi, Phys. Rev. E. 6555103Z. Dezso and A.-L. Barabasi, Phys. Rev. E 65, 055103 (2002).
. R Pastor-Satorras, A Vespignani, Phys. Rev. Lett. 863200R. Pastor-Satorras and A. Vespignani, Phys. Rev. Lett. 86, 3200 (2001).
. M E J Newman, S Forrest, J Balthrop, Phys. Rev. E. 6635101M. E. J. Newman, S. Forrest, and J. Balthrop, Phys. Rev. E 66, 035101 (2002).
. V M Eguiluz, K Klemm, Phys. Rev. Lett. 89108701V. M. Eguiluz and K. Klemm, Phys. Rev. Lett. 89, 108701 (2002).
. A Vazquez, Physical Review E. 6746111A. Vazquez et al., Physical Review E 67, 046111 (2003).
. M E J Newman, Phys. Rev. Lett. 89208701M. E. J. Newman, Phys. Rev. Lett 89, 208701 (2002).
P Lazarsfeld, R K Merton, Chap. Friendship as a social Process: A Substantive and Methodological Analysis. M. Berger, T. Abel, and C. PageNew YorkVan NostrandFreedom and Control in Modern SocietyP. Lazarsfeld and R.K.Merton, in Freedom and Control in Modern Society, edited by M. Berger, T. Abel, and C. Page (Van Nostrand, New York, 1954), Chap. Friendship as a social Process: A Substantive and Methodological Analysis.
. J Touhey, Sociometry. 37363J. Touhey, Sociometry 37, 363 (1974).
. S Feld, American Journal of Sociology. 861015S. Feld, American Journal of Sociology 86, 1015 (1981).
. L A Adamic, E Adar, Social Networks. to appearL. A. Adamic and E. Adar, Social Networks (2003), to appear.
. M E J Newman, S H Strogatz, D J Watts, Phys. Rev. E. 6426118M. E. J. Newman, S. H. Strogatz, and D. J. Watts, Phys. Rev. E 64, 026118 (2001).
. M Newman, Phys. Rev. E. 6616128M. Newman, Phys. Rev. E 66, 016128 (2002).
. H Ebel, L.-I Mielsch, S Bornholdt, Phys. Rev. E. 6635103H. Ebel, L.-I. Mielsch, and S. Bornholdt, Phys. Rev. E 66, 035103 (2002).
| []
|
[
"ISOPARAMETRIC SUBMANIFOLDS IN TWO-DIMENSIONAL COMPLEX SPACE FORMS",
"ISOPARAMETRIC SUBMANIFOLDS IN TWO-DIMENSIONAL COMPLEX SPACE FORMS"
]
| [
"José Carlos Díaz-Ramos ",
"ANDMiguel Domínguez-Vázquez ",
"Cristina Vidal-Castiñeira "
]
| []
| []
| We show that an isoparametric submanifold of a complex hyperbolic plane, according to the definition of Heintze, Liu and Olmos', is an open part of a principal orbit of a polar action.We also show that there exists a non-isoparametric submanifold of the complex hyperbolic plane that is isoparametric according to the definition of Terng's. Finally, we classify Terng-isoparametric submanifolds of two-dimensional complex space forms.2010 Mathematics Subject Classification. 53C40, 53C12, 53C35. | 10.1007/s10455-017-9572-2 | [
"https://arxiv.org/pdf/1604.01237v1.pdf"
]
| 119,620,444 | 1604.01237 | 9da31a5a29ed17f15fd4775033c794e444e699d5 |
ISOPARAMETRIC SUBMANIFOLDS IN TWO-DIMENSIONAL COMPLEX SPACE FORMS
5 Apr 2016
José Carlos Díaz-Ramos
ANDMiguel Domínguez-Vázquez
Cristina Vidal-Castiñeira
ISOPARAMETRIC SUBMANIFOLDS IN TWO-DIMENSIONAL COMPLEX SPACE FORMS
5 Apr 2016
We show that an isoparametric submanifold of a complex hyperbolic plane, according to the definition of Heintze, Liu and Olmos', is an open part of a principal orbit of a polar action.We also show that there exists a non-isoparametric submanifold of the complex hyperbolic plane that is isoparametric according to the definition of Terng's. Finally, we classify Terng-isoparametric submanifolds of two-dimensional complex space forms.2010 Mathematics Subject Classification. 53C40, 53C12, 53C35.
Introduction
A submanifold M of a Riemannian manifold M̄ is said to be isoparametric according to Heintze, Liu and Olmos [15], henceforth simply isoparametric, if its normal bundle νM is flat, all nearby parallel submanifolds have constant mean curvature in the radial directions, and for any p ∈ M there exists a totally geodesic submanifold Σ_p through p such that T_p Σ_p = ν_p M.
We denote by M̄^2(c) a 2-dimensional complex space form of constant holomorphic sectional curvature c ≠ 0. Thus, M̄^2(c) is a complex projective plane CP^2 if c > 0, or a complex hyperbolic plane CH^2 if c < 0. The first main result of this paper is:

Theorem A. An isoparametric submanifold of M̄^2(c) is congruent to an open part of a principal orbit of a polar action on M̄^2(c).

The classification of isoparametric submanifolds of complex projective spaces CP^n, n ≠ 15, has been obtained in much greater generality using a different method in [14]. Here we deal with both cases simultaneously and obtain the result for CH^2. We prove this theorem in Section 4.
Recall that an isometric action of a Lie group on a Riemannian manifold is called polar if there exists a submanifold Σ (called section) that intersects all the orbits of the action, and such that Σ is orthogonal to the orbits at intersection points. Polar actions on complex projective spaces have been classified in [16], and polar actions on the complex hyperbolic plane have been classified in [5]. See also [11] for the more general classification in CH n . Therefore, our result implies the classification of isoparametric submanifolds inM 2 (c) of arbitrary codimension. The classification for CH 2 seems to be the first one of these characteristics in a symmetric space of noncompact type and nonconstant curvature.
A submanifold M of a Riemannian manifoldM is called Terng-isoparametric if it has flat normal bundle and the eigenvalues of the shape operator with respect to any parallel normal vector field are constant. In our setting, Terng's definition is less rigid than Heintze, Liu and Olmos', and thus, a new example appears in codimension two:
Theorem B. A submanifold ofM 2 (c) is Terng-isoparametric if and only if it is congruent to an open part of:
(i) an isoparametric submanifold ofM 2 (c), or (ii) a Chen's surface in CH 2 , or (iii) a circle inM 2 (c).
The proof of Theorem B is given in Section 5. Apart from circles, which are trivial examples of Terng-isoparametric submanifolds, we do not get new examples in complex projective planes. However, there exists a Terng-isoparametric submanifold in CH 2 that is neither a circle nor a principal orbit of a polar action. We have called this new example Chen's surface, which is homogeneous, that is, an orbit of an isometric action on the ambient space, and unique up to isometric congruence (see §3). It was introduced by Chen in [7], and a geometric characterization was given in [9]. In Section 3 we present a new Lie theoretic description of this submanifold in terms of the root space decomposition of the Lie algebra of the isometry group of CH 2 .
The motivation for this paper comes from the study of isoparametric submanifolds in symmetric spaces. The history of isoparametric submanifolds can be traced back at least to the works of Somigliana [18] and Segre [17] who classified isoparametric hypersurfaces in Euclidean spaces. Thorbergsson showed in [20] that compact, full and irreducible isoparametric submanifolds of codimension greater than 2 in Euclidean spaces are homogeneous, which implies that such submanifolds are principal orbits of polar actions, which in turn correspond to isotropy representations of symmetric spaces [10].
Thorbergsson's remarkable result [20] readily implies the classification of isoparametric submanifolds of codimension ≥ 2 in spheres. However, the classification of isoparametric hypersurfaces in spheres is open and still an active topic of research. See [21] for a recent survey on this and other related topics.
Isoparametric hypersurfaces in real hyperbolic spaces were classified by Cartan [6], whereas for higher codimension, Wu [23] reduced the classification problem to that of isoparametric hypersurfaces in spheres. We highlight that, in real space forms, homogeneous isoparametric submanifolds are always principal orbits of polar actions.
The general study of isoparametric submanifolds was started by Terng [19], whose definition was given for spaces of constant curvature. Nowadays the general definition of isoparametric submanifold is credited to Heintze, Liu and Olmos [15]. This is the notion that we use in this paper, although we also consider Terng's definition, which turns out to be less rigid when the ambient space is a complex hyperbolic plane. This contrasts with the situation in real space forms, where both definitions agree.
Isoparametric submanifolds of complex projective spaces $\mathbb{C}P^n$ have been studied by the second author, who gave a classification if $n \neq 15$. In this paper we also study Terng-isoparametric submanifolds of $\mathbb{C}P^2$ and conclude that no new interesting examples arise.
The classification of isoparametric hypersurfaces in complex hyperbolic spaces has recently been obtained in [12]. For higher codimension the problem seems to be much more complicated. We restrict to CH 2 in this paper and show that all examples are open parts of principal orbits of polar actions on CH 2 . Surprisingly, unlike in real space forms, there is a Terng-isoparametric submanifold of codimension 2 that is not isoparametric; this submanifold is homogeneous but not an orbit of a polar action.
Preliminaries
We start with some basic definitions and notations.
2.1. Submanifold geometry. We denote by $\bar M^2(c)$ a complex space form of dimension 2 and constant holomorphic curvature $c \neq 0$. Thus, $\bar M^2(c)$ is isometric to a complex projective plane $\mathbb{C}P^2$ endowed with the Fubini-Study metric of constant holomorphic sectional curvature $c > 0$, or to a complex hyperbolic plane $\mathbb{C}H^2$ endowed with the Bergman metric of constant holomorphic sectional curvature $c < 0$. We denote by $\langle\cdot\,,\cdot\rangle$ the Riemannian metric of $\bar M^2(c)$ and by $\bar\nabla$, $\bar R$ and $J$ its Levi-Civita connection, its curvature tensor and its complex structure, respectively. Thus,
\[
\bar R(X,Y)Z = \frac{c}{4}\bigl(\langle Y,Z\rangle X - \langle X,Z\rangle Y + \langle JY,Z\rangle JX - \langle JX,Z\rangle JY - 2\langle JX,Y\rangle JZ\bigr),
\]
for vector fields $X, Y, Z \in \Gamma(T\bar M^2(c))$. Now let $M$ be a submanifold of $\bar M^2(c)$. We denote its normal bundle by $\nu M$, and by $\nabla$ and $R$ its Levi-Civita connection and its curvature tensor, respectively. The extrinsic geometry of $M$ is determined by its second fundamental form $II$, which is defined by the formula $\bar\nabla_XY = \nabla_XY + II(X,Y)$, for $X, Y \in \Gamma(TM)$. If $\xi \in \Gamma(\nu M)$ is a normal vector, then the shape operator $S_\xi$ with respect to $\xi$ is defined by $\langle S_\xi X,Y\rangle = \langle II(X,Y),\xi\rangle$. We also denote by $\nabla^\perp$ the normal connection of the normal bundle $\nu M$, which is related to the shape operator via the Weingarten formula $\bar\nabla_X\xi = -S_\xi X + \nabla^\perp_X\xi$. To a large extent, the geometry of $M$ is governed by the Gauss, Codazzi and Ricci equations, that can be written as
\[
\begin{aligned}
\langle\bar R(X,Y)Z,W\rangle &= \langle R(X,Y)Z,W\rangle - \langle II(Y,Z),II(X,W)\rangle + \langle II(X,Z),II(Y,W)\rangle,\\
\langle\bar R(X,Y)Z,\xi\rangle &= \langle\nabla_XS_\xi Y,Z\rangle - \langle\nabla_XY,S_\xi Z\rangle - \langle S_{\nabla^\perp_X\xi}Y,Z\rangle - \langle\nabla_YS_\xi X,Z\rangle + \langle\nabla_YX,S_\xi Z\rangle + \langle S_{\nabla^\perp_Y\xi}X,Z\rangle,\\
\langle\bar R(X,Y)\xi,\eta\rangle &= \langle R^\perp(X,Y)\xi,\eta\rangle - \langle[S_\xi,S_\eta]X,Y\rangle,
\end{aligned}
\]
where X, Y , Z, W ∈ Γ(T M), ξ, η ∈ Γ(νM) and R ⊥ denotes the curvature tensor of ∇ ⊥ .
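As a quick consistency check of the curvature tensor of $\bar M^2(c)$ written above (a worked computation added here, not part of the original argument), take a unit tangent vector $X$ and evaluate on the holomorphic plane spanned by $X$ and $JX$:
\[
\bar R(X,JX)JX = \frac{c}{4}\bigl(\langle JX,JX\rangle X - \langle X,JX\rangle JX + \langle J^2X,JX\rangle JX - \langle JX,JX\rangle J^2X - 2\langle JX,JX\rangle J^2X\bigr) = \frac{c}{4}(X + 0 + 0 + X + 2X) = cX,
\]
so $\langle\bar R(X,JX)JX,X\rangle = c$: the holomorphic sectional curvature is indeed the constant $c$.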
We say that M has flat normal bundle if R ⊥ = 0. This is equivalent to requiring that each point has a neighborhood where there is an orthonormal frame of νM consisting of ∇ ⊥ -parallel normal vector fields.
It is easy to see that the shape operator with respect to a unit normal vector field $\xi$ is self-adjoint, that is, $\langle S_\xi X,Y\rangle = \langle S_\xi Y,X\rangle$ for all $X, Y \in TM$. Hence, each $S_\xi$ is diagonalizable with real eigenvalues and orthogonal eigenspaces. These eigenvalues are called the principal curvatures in the direction of $\xi$. The mean curvature in the direction of $\xi$ is $\frac1k\operatorname{tr}S_\xi$, while the mean curvature vector of $M$ is defined by
\[
H = \frac1k\sum_{i=1}^{k} II(E_i,E_i) = \frac1k\sum_{i=1}^{l}(\operatorname{tr}S_{\xi_i})\,\xi_i,
\]
where $\{E_1,\dots,E_k\}$ and $\{\xi_1,\dots,\xi_l\}$ are orthonormal frames of $TM$ and of $\nu M$, respectively.
A submanifold M is called totally umbilical if II(X, Y ) = X, Y H for any X, Y ∈ T M and totally geodesic if II = 0. Totally geodesic submanifolds ofM 2 (c) can be geodesics, real projective or hyperbolic planes RP 2 or RH 2 , and complex projective or hyperbolic lines CP 1 or CH 1 , depending on the sign of the holomorphic curvature c.
Isoparametric submanifolds.
LetM be a Riemannian manifold and M a submanifold ofM . The submanifold M is said to be almost isoparametric [15] if its normal bundle νM is flat and if, locally, the parallel submanifolds of M have constant mean curvature in radial directions.
The submanifold M is said to admit sections if for any point p ∈ M there is a totally geodesic submanifold Σ p , called the section through p, such that T p Σ p = ν p M. Then, we say M is isoparametric if it is almost isoparametric and admits sections. Throughout this paper whenever we consider an isoparametric submanifold, we understand that it is isoparametric according to this definition.
The submanifold M is said to have constant principal curvatures if for any curve σ : I → M and any parallel unit normal vector field ξ ∈ Γ(σ * ν 1 M) along σ the eigenvalues of the shape operator S ξ(t) with respect to ξ(t) are constant along σ. Then, M is called Terng-isoparametric (or isoparametric according to Terng [19]) if it has constant principal curvatures and flat normal bundle.
Chen's surface
In this section we give a Lie theoretic description of the surface that arises in Theorem B (ii). This surface was introduced by Chen in [7].
First we recall the characterizing properties of this surface according to [7]. A surface $M$ in $\mathbb{C}H^2$ is called slant if its tangent space has constant Kähler angle (called Wirtinger angle or slant angle in [7]), that is, if for each nonzero vector $v \in T_pM$ the angle between $Jv$ and $T_pM$ is independent of $p \in M$ and $v \in T_pM$. Such a surface is called proper slant if it is neither complex nor totally real, that is, if the Kähler angle is neither $0$ nor $\pi/2$. The Chen's surface that appears in Theorem B (ii) is, according to [7, Theorem A] and [9, Theorem 5.1], the unique (up to isometric congruence) proper slant surface of $\mathbb{C}H^2$ with Kähler angle $\theta = \arccos(1/3)$, and satisfying $\langle H,H\rangle = 2K - c(1+3\cos^2\theta)/2$, where $K$ is the Gaussian curvature of $M$.
Chen's surface turns out to be homogeneous, although not an orbit of a polar action, and the aim of this section is to give a subgroup of the isometry group of CH 2 one of whose orbits is precisely the Chen's surface. For that matter let G = SU(1, 2) and K = S(U(1)U(2)) ⊂ G, and write CH 2 = G/K. Then K is the isotropy group of G at the origin o = 1K. We denote by g and k the Lie algebras of G and K respectively. We have the Cartan decomposition with respect to o, g = k ⊕ p, where p is the orthogonal complement of k in g with respect to the Killing form of g. Let a be a maximal abelian subspace of p, which is known to be 1-dimensional. For a covector λ ∈ a * we define g λ = {X ∈ g : [H, X] = λ(H)X, ∀H ∈ a}. Then, one can write g = g −2α ⊕ g −α ⊕ g 0 ⊕ g α ⊕ g 2α , the so-called root space decomposition of g with respect to o and a. It is known that g 0 = k 0 ⊕ a, where k 0 = g 0 ∩ k, and that g 2α is 1-dimensional. We determine an ordering in a * so that α is positive, and define the nilpotent subalgebra n = g α ⊕ g 2α ; we denote by N the connected subgroup of G whose Lie algebra is n. The subspace a ⊕ n is then a solvable subalgebra of g and we denote by AN the connected subgroup of G whose Lie algebra is a ⊕ n. One can show that AN acts simply transitively on CH 2 , and that the metric of CH 2 induces a left-invariant metric in AN that we denote by · , · . We also denote by J the complex structure in a ⊕ n induced by the complex structure of T o CH 2 . This turns a ⊕ n into a complex vector space such that g α is J-invariant (that is, g α ∼ = C), and Ja = g 2α . Moreover, the decomposition a ⊕ g α ⊕ g 2α is orthogonal. We choose a unit vector B ∈ a and define Z = JB ∈ g 2α . The Levi-Civita connection of AN in terms of left-invariant vector fields is determined by
\[
\frac{1}{\sqrt{-c}}\,\bar\nabla_{aB+U+xZ}\bigl(bB+V+yZ\bigr) = \Bigl(xy + \tfrac12\langle U,V\rangle\Bigr)B - \tfrac12\bigl(bU + yJU + xJV\bigr) + \Bigl(-bx + \tfrac12\langle JU,V\rangle\Bigr)Z, \tag{1}
\]
where a, b, x, y ∈ R, and U, V ∈ g α . See for example [4]. Now assume that V ∈ g α is a unit vector. We have g α = RV ⊕ RJV . We define the following subalgebra of a ⊕ n:
\[
h = \mathbb{R}U_1 \oplus \mathbb{R}U_2, \qquad\text{with}\qquad U_1 = \tfrac{1}{\sqrt3}\bigl(\sqrt2\,B + JV\bigr), \quad\text{and}\quad U_2 = \tfrac{1}{\sqrt3}\bigl(V + \sqrt2\,Z\bigr).
\]
Let $H$ be the connected subgroup of $AN$ whose Lie algebra is $h$, and $M = H\cdot o$ the orbit through the origin. Since $AN$ acts simply transitively on $\mathbb{C}H^2$ we may identify $H$ with $M$ for the calculations that follow. First notice that $\{U_1,U_2\}$ is an orthonormal basis of the tangent space of $M$, and $\langle JU_1,U_2\rangle = 1/3$. By homogeneity we conclude that $M$ is a proper slant surface with Kähler angle $\theta = \arccos(1/3)$. Using (1) we get the mean curvature vector and the Gaussian curvature
\[
H = \frac{\sqrt{-c}}{3}\bigl(B - \sqrt2\,JV\bigr), \qquad\text{and}\qquad K = \frac{c}{6}.
\]
It readily follows from this equation that H, H = 2K − c(1 + 3 cos 2 θ)/2 and hence, [7, Theorem A] and [9, Theorem 5.1] imply that M is isometrically congruent to the Chen's surface.
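For the reader's convenience, here is a worked verification of the last claims (these computations are not in the original text, but follow directly from the definitions of $U_1$, $U_2$ and $J$ above). One has $|U_1|^2 = \tfrac13(2+1) = 1$, $|U_2|^2 = \tfrac13(1+2) = 1$, $\langle U_1,U_2\rangle = 0$, and $JU_1 = \tfrac{1}{\sqrt3}(\sqrt2\,Z - V)$, so
\[
\langle JU_1,U_2\rangle = \tfrac13\,\bigl\langle\sqrt2\,Z - V,\; V + \sqrt2\,Z\bigr\rangle = \tfrac13(2-1) = \tfrac13,
\]
which recovers the Kähler angle $\theta = \arccos(1/3)$. Moreover,
\[
\langle H,H\rangle = \frac{-c}{9}\,\bigl|B - \sqrt2\,JV\bigr|^2 = -\frac{c}{3}, \qquad 2K - \frac{c}{2}\bigl(1 + 3\cos^2\theta\bigr) = \frac{c}{3} - \frac{c}{2}\cdot\frac43 = -\frac{c}{3},
\]
so the identity $\langle H,H\rangle = 2K - c(1+3\cos^2\theta)/2$ indeed holds, as asserted.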
Proof of Theorem A
Let M be an isoparametric submanifold ofM 2 (c). By definition, M has a section at every point, that is, for each p ∈ M there exists a totally geodesic submanifold Σ p through p such that T p Σ p = ν p M. Totally geodesic submanifolds of complex space forms are known to be either complex or totally real.
First we assume that the section is complex. Then, M is an almost complex submanifold of a Kähler manifold, and hence, M is Kähler. Since the normal bundle of M is flat, [1,Theorem 19] implies that M is either a point or an open part ofM 2 (c).
Hence, we may assume from now on that sections are totally real. In this case, sections are either geodesics or totally geodesic real projective planes RP 2 in CP 2 or real hyperbolic planes RH 2 in CH 2 . If the section is a geodesic, M is an isoparametric hypersurface. The classification of isoparametric hypersurfaces in CP 2 follows from [14], and all examples are open parts of orbits of cohomogeneity one actions. Indeed, we get from [14] the full classification of isoparametric submanifolds of CP 2 , but the arguments that follow for higher codimension are also valid for this case. Isoparametric hypersurfaces in CH n have been classified in [12, Corollary 1.2] and it follows from here that M is an open part of a principal orbit of a cohomogeneity one action on CH 2 .
Therefore, we can assume that M has codimension 2. Since in this case sections are totally real, it follows that T M and νM are both totally real. Indeed, M is Lagrangian as
JT p M = ν p M for each p ∈ M.
If M is totally umbilical, then it follows from [8] that M is an open part of a totally geodesic real projective plane RP 2 in CP 2 or a totally geodesic real hyperbolic plane RH 2 in CH 2 . However, these are not isoparametric because their normal bundles are not flat.
We denote by ν 1 M the unit normal bundle of M. By assumption νM is flat. For a given parallel unit normal vector field ξ ∈ Γ(ν 1 M) and r > 0 we define Φ r,ξ : M →M 2 (c), p → exp p (rξ). Let γ ξp be the geodesic ofM 2 (c) with initial conditions γ ξp (0) = p, γ ′ ξp (0) = ξ p . We also define the vector field η r along Φ r,ξ by η r (p) = γ ′ ξp (r). Parallel submanifolds to M are of the form M r,ξ = Φ r,ξ (M). We calculate their mean curvature at Φ r,ξ (p) in the direction of η r (p).
We denote by $\lambda_1,\lambda_2\colon\nu^1M\to\mathbb{R}$ the principal curvature functions, which are given by the fact that $\lambda_1(\xi)$ and $\lambda_2(\xi)$ are the eigenvalues of the shape operator $S_\xi$. We have already seen that $M$ cannot be totally umbilical, so we may assume that there exists $\xi\in\nu^1M$ such that $\lambda_1(\xi)\neq\lambda_2(\xi)$. By continuity, the principal curvature functions are thus different on an open neighborhood of $\xi$ in $\nu^1M$. In the sequel we assume that calculations take place in such a neighborhood. We also denote by $U_1(\xi)$ and $U_2(\xi)$ a (local) orthonormal frame of $TM$ consisting of principal curvature vectors associated with $\lambda_1(\xi)$ and $\lambda_2(\xi)$.
Let p ∈ M. Using standard Jacobi field theory, we get that Φ r,ξ * p (v) = X v (r) for each v ∈ T p M, where X v denotes the Jacobi vector field along γ ξ with initial conditions X v (0) = v and X ′ v (0) = −S ξ (v). Here (·) ′ stands for covariant derivative along γ ξ . Recall that the Jacobi equation onM 2 (c) along γ ξ can be written as 4X ′′ + cX + 3c X, Jγ ′ ξ Jγ ′ ξ = 0. Moreover, it is known that the points where a Jacobi field vanishes correspond to the singularities of the Riemannian exponential map. Since the Riemannian exponential map is a local diffeomorphism, it is then clear that Φ r,ξ is a local diffeomorphism for sufficiently small values of r. Thus, we will take, if necessary, a sufficiently small neighborhood of p and sufficiently small values of r so that Φ r,ξ is a diffeomorphism. In order to simplify notation we define u i = U i (ξ p ), i = 1, 2, and set v = u i in the previous calculations. Then
\[
X_{u_i}(t) = f_{\lambda_i}(t)\,P^\xi_{u_i}(t) + \langle u_i,J\xi\rangle\, g_{\lambda_i}(t)\,J\gamma'_\xi(t),
\]
where $P^\xi_v(t)$ denotes parallel transport of $v\in T_pM$ along the geodesic $\gamma_\xi$. The functions $f_\lambda$ and $g_\lambda$ are defined by
\[
f_\lambda(t) = \cosh\frac{t\sqrt{-c}}{2} - \frac{2\lambda}{\sqrt{-c}}\sinh\frac{t\sqrt{-c}}{2},
\qquad
g_\lambda(t) = \Bigl(\cosh\frac{t\sqrt{-c}}{2} - 1\Bigr)\Bigl(1 + 2\cosh\frac{t\sqrt{-c}}{2} - \frac{2\lambda}{\sqrt{-c}}\sinh\frac{t\sqrt{-c}}{2}\Bigr).
\]
(For $c > 0$ one would have to replace hyperbolic trigonometric functions by standard trigonometric functions.) In other words, $X_{u_i}$ is the parallel transport along $\gamma_\xi$ of the tangent vector $f_{\lambda_i}u_i + \langle u_i,J\xi\rangle g_{\lambda_i}J\xi$. At this point we recall that $M$ has totally real tangent and normal bundles. Thus, $J\xi$ is tangent to $M$ and can be written as $J\xi = \langle U_1(\xi),J\xi\rangle U_1(\xi) + \langle U_2(\xi),J\xi\rangle U_2(\xi)$. Moreover, since $T_{\Phi_{r,\xi}(p)}M_{r,\xi} = \Phi_{r,\xi*p}(T_pM)$ and $\Phi_{r,\xi}$ is a diffeomorphism, it is then clear that $T_{\Phi_{r,\xi}(p)}M_{r,\xi} = P^\xi_{T_pM}(r)$, that is, the tangent space of $M_{r,\xi}$ at the point $\Phi_{r,\xi}(p)$ is obtained by parallel translation of $T_pM$ along the geodesic $\gamma_\xi$ from $p = \gamma_\xi(0)$ to $\Phi_{r,\xi}(p) = \gamma_\xi(r)$.
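It may help to record why $f_\lambda$ and $g_\lambda$ are the right functions (a worked check added here, not spelled out in the original). Substituting $X_{u_i} = f_{\lambda_i}P^\xi_{u_i} + \langle u_i,J\xi\rangle g_{\lambda_i}J\gamma'_\xi$ into $4X'' + cX + 3c\langle X,J\gamma'_\xi\rangle J\gamma'_\xi = 0$ with $X(0) = u_i$ and $X'(0) = -S_\xi u_i$, and using that $P^\xi_{u_i}$ and $J\gamma'_\xi$ are parallel, one finds
\[
4f_\lambda'' + cf_\lambda = 0, \quad f_\lambda(0)=1, \quad f_\lambda'(0)=-\lambda, \qquad\text{and}\qquad 4g_\lambda'' + 4cg_\lambda = -3cf_\lambda, \quad g_\lambda(0)=g_\lambda'(0)=0,
\]
and the functions displayed above are precisely the solutions of these two initial value problems.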
The previous considerations allow us to define the endomorphism-valued map of the tangent space D ξ (t) :
T Φ t,ξ (p) M t,ξ → T Φ t,ξ (p) M t,ξ by D ξ (t)(P ξ v (t)) = X v (t), for each v ∈ T p M.
Since we are assuming that r is sufficiently small, D ξ (r) is actually an isomorphism of the tangent space. We denote now by S r,ξ η r the shape operator of M r,ξ with respect to the radial vector η r . It follows from Jacobi field theory that the shape operator of M r,ξ is given by S r,ξ η r (Φ r,ξ * p (v)) = −X ′ v (r) ⊤ , where (·) ⊤ denotes the orthogonal projection onto the tangent space T Φ r,ξ (p) M r,ξ . By the previous calculations,
\[
X'_{u_i}(t) = f'_{\lambda_i}(t)\,P^\xi_{u_i}(t) + \langle u_i,J\xi\rangle\, g'_{\lambda_i}(t)\,J\gamma'_\xi(t) \in T_{\Phi_{r,\xi}(p)}M_{r,\xi}.
\]
This implies that $S^{r,\xi}_{\eta_r} = -D'_\xi(r)D_\xi(r)^{-1}$. Finally, the mean curvature in radial directions is the function $h_{r,\xi}\colon M_{r,\xi}\to\mathbb{R}$ determined by
\[
h_{r,\xi}(\Phi_{r,\xi}(p)) = \frac12\operatorname{tr}S^{r,\xi}_{\eta_r(p)} = -\frac12\operatorname{tr}\bigl(D'_\xi(r)D_\xi(r)^{-1}\bigr) = -\frac{\frac{d}{dr}\det D_\xi(r)}{2\det D_\xi(r)}.
\]
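The last equality is Jacobi's formula for the derivative of a determinant, spelled out here for convenience (this step is implicit in the original):
\[
\frac{d}{dr}\det D_\xi(r) = \det D_\xi(r)\,\operatorname{tr}\bigl(D'_\xi(r)D_\xi(r)^{-1}\bigr), \qquad\text{hence}\qquad -\frac12\operatorname{tr}\bigl(D'_\xi(r)D_\xi(r)^{-1}\bigr) = -\frac{\frac{d}{dr}\det D_\xi(r)}{2\det D_\xi(r)}.
\]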
It is easy to check that
\[
\det D_\xi = f_{\lambda_1}f_{\lambda_2} + \langle U_1(\xi),J\xi\rangle^2 f_{\lambda_2}g_{\lambda_1} + \langle U_2(\xi),J\xi\rangle^2 f_{\lambda_1}g_{\lambda_2}.
\]
The function $h_{r,\xi}\circ\Phi_{r,\xi}$ can be calculated explicitly, but for our purpose it suffices to calculate its Taylor power series expansion. After some relatively long but elementary calculations, and using $\langle U_1(\xi),J\xi\rangle^2 + \langle U_2(\xi),J\xi\rangle^2 = \langle J\xi,J\xi\rangle = 1$, we get
\[
h_{r,\xi}(\Phi_{r,\xi}(p)) = \frac12\bigl(\lambda_1(\xi_p)+\lambda_2(\xi_p)\bigr) + \frac{r}{2}\Bigl(\frac{5c}{4} + \lambda_1(\xi_p)^2 + \lambda_2(\xi_p)^2\Bigr) + \frac{r^2}{8}\Bigl(c\bigl(\lambda_1(\xi_p)+\lambda_2(\xi_p)\bigr) + 4\bigl(\lambda_1(\xi_p)^3+\lambda_2(\xi_p)^3\bigr) + 3c\bigl(\lambda_1(\xi_p)\langle U_1(\xi_p),J\xi_p\rangle^2 + \lambda_2(\xi_p)\langle U_2(\xi_p),J\xi_p\rangle^2\bigr)\Bigr) + O(r^3).
\]
Since M is isoparametric, the function h r,ξ is constant by assumption. Since Φ r,ξ is a diffeomorphism, this is equivalent to requiring that (h r,ξ • Φ r,ξ )(p) does not depend on the point p. More precisely, by hypothesis the expression (h r,ξ • Φ r,ξ )(p) depends both on r and on the choice of parallel unit normal vector field ξ ∈ Γ(ν 1 M), but not on the base point p of the vector ξ p . Therefore, using the above power series expansion we obtain that the functions p → λ i (ξ)(p) = λ i (ξ p ), and p → U i (ξ), Jξ (p) = U i (ξ p ), Jξ p , i = 1, 2, are constant for a fixed parallel vector field ξ ∈ Γ(ν 1 M). By linearity this argument readily implies:
Proposition 4.1. An isoparametric submanifold of $\bar M^2(c)$ is Terng-isoparametric.
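In more detail (a short unpacking of the preceding argument, added here): constancy of $h_{r,\xi}\circ\Phi_{r,\xi}$ in $p$ for all small $r$ forces each Taylor coefficient to be constant on $M$. The $r^0$ and $r^1$ coefficients give that $\lambda_1(\xi)+\lambda_2(\xi)$ and $\lambda_1(\xi)^2+\lambda_2(\xi)^2$ are constant, hence so are $\lambda_1(\xi)$ and $\lambda_2(\xi)$ themselves. The $r^2$ coefficient then shows that $\lambda_1(\xi)\langle U_1(\xi),J\xi\rangle^2 + \lambda_2(\xi)\langle U_2(\xi),J\xi\rangle^2$ is constant, and combining this with
\[
\langle U_1(\xi),J\xi\rangle^2 + \langle U_2(\xi),J\xi\rangle^2 = 1 \qquad\text{and}\qquad \lambda_1(\xi)\neq\lambda_2(\xi)
\]
forces each $\langle U_i(\xi),J\xi\rangle^2$ to be constant as well, which is the constancy obtained above.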
In order to conclude the proof of Theorem A we simply have to verify the following assertion:
Proposition 4.2. Let $M$ be a Lagrangian, Terng-isoparametric submanifold of $\bar M^2(c)$. Then, $M$ is an open part of a principal orbit of a cohomogeneity two polar action on $\bar M^2(c)$.
Proof. Since $M$ is Lagrangian, $J\nu_pM = T_pM$. Let $\xi\in\Gamma(\nu M)$ be a parallel normal vector field and $X\in\Gamma(TM)$. We denote by $(\cdot)^\perp$ the orthogonal projection onto $\nu M$. As $J\xi$ is tangent and since $\bar M^2(c)$ is Kähler, the definition of the second fundamental form yields
\[
0 = \nabla^\perp_X\xi = -\nabla^\perp_X J^2\xi = -\bigl(\bar\nabla_X J^2\xi\bigr)^\perp = -\bigl(J\bar\nabla_X J\xi\bigr)^\perp = -\bigl(J(\nabla_X J\xi + II(X,J\xi))\bigr)^\perp = -J\nabla_X J\xi.
\]
Therefore, ∇Jξ = 0 and it follows that M is flat. Since M has constant principal curvatures and flat normal bundle, it is clear that M has parallel mean curvature. Thus, M is a Lagrangian, flat surface ofM 2 (c) with parallel mean curvature and it was shown in [13, Theorem 2.1] that M is then an open part of a principal orbit of a cohomogeneity two polar action onM 2 (c).
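To make the flatness step explicit (a brief elaboration added here, not in the original): choosing a parallel orthonormal frame $\{\xi_1,\xi_2\}$ of $\nu M$, the computation above gives $\nabla J\xi_1 = \nabla J\xi_2 = 0$, and since $M$ is Lagrangian, $\{J\xi_1,J\xi_2\}$ is a parallel orthonormal frame of $TM$. Therefore
\[
R(X,Y)J\xi_i = \nabla_X\nabla_YJ\xi_i - \nabla_Y\nabla_XJ\xi_i - \nabla_{[X,Y]}J\xi_i = 0, \qquad i = 1,2,
\]
so $R = 0$ and $M$ is flat.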
In particular, Propositions 4.1 and 4.2, together with the fact that the principal orbits of a polar action are isoparametric submanifolds, imply
Corollary 4.3. A Lagrangian submanifold of $\bar M^2(c)$ is isoparametric if and only if it is Terng-isoparametric.
Remark 4.4. There is a shorter alternative proof of Theorem A that does not require working with Jacobi fields. Indeed, once the problem was reduced to the case of an isoparametric Lagrangian surface M, we could have argued as in the proof of Proposition 4.2 to show that M is flat. Since by assumption M is Lagrangian and has parallel mean curvature, by virtue of [13], M is a piece of a principal orbit of a cohomogeneity two polar action. However, we have preferred to include the longer argument because it shows that, in order to prove that an isoparametric submanifold inM 2 (c) has constant principal curvatures, it is not necessary to appeal to the strong result in [13].
Proof of Theorem B
We now consider a Terng-isoparametric submanifold M ofM 2 (c). In particular, the normal bundle of M is flat, and we have already seen in Section 4 that, if the normal bundle of M is complex, then M is either a point or an open subset ofM 2 (c). Thus, we may assume from now on that the normal bundle of M is not complex.
If the normal bundle of M is totally real, M is either a hypersurface or a Lagrangian submanifold. In the first case, M is a hypersurface ofM 2 (c) with constant principal curvatures. These were classified in [22] for CP 2 and in [3] for CH 2 where it was shown that such hypersurfaces are open parts of homogeneous hypersurfaces. In particular they are open parts of orbits of cohomogeneity one actions, which are polar.
If the normal bundle is totally real and has rank 2, then M is Lagrangian. Hence, it follows from Proposition 4.2 that M is an open part of a principal orbit of a cohomogeneity two polar action onM 2 (c).
Therefore, we can assume from now on that the normal bundle of M is neither complex nor totally real. If M is 1-dimensional, then M has to be a geodesic or a circle [2, §8.4], so we also assume that M is 2-dimensional.
Hence, we take, at least locally, a parallel orthonormal frame {ξ, η} of the normal bundle of M, and let {U 1 , U 2 } be an orthonormal frame of the tangent bundle of M such that S ξ U i = λ i U i , i = 1, 2. Since ξ is parallel, λ 1 and λ 2 are constant by assumption. At this point we observe that the mean curvature vector of M is parallel because the normal bundle is flat and the principal curvatures are constant (and hence the trace of each shape operator with respect to a parallel normal vector field is constant). Therefore, we may further assume that {ξ, η} is chosen so that η is perpendicular to the mean curvature vector field.
Using the fact that $TM$ and $\nu M$ are neither complex nor totally real we can write $J\xi = b_1U_1 + b_2U_2 + a\eta$, where $a, b_1, b_2\colon M\to\mathbb{R}$ are smooth functions with $b_1^2 + b_2^2 + a^2 = 1$, and $b_1^2 + b_2^2 \neq 0$, $a \neq 0$. Since $\{U_1,U_2,\xi,\eta\}$ is an orthonormal frame of $T\bar M^2(c)$ we can write
\[
\begin{aligned}
-\xi = J^2\xi &= b_1JU_1 + b_2JU_2 + aJ\eta\\
&= b_1\bigl(\langle JU_1,U_2\rangle U_2 - b_1\xi + \langle JU_1,\eta\rangle\eta\bigr) + b_2\bigl(-\langle JU_1,U_2\rangle U_1 - b_2\xi + \langle JU_2,\eta\rangle\eta\bigr) + a\bigl(-\langle JU_1,\eta\rangle U_1 - \langle JU_2,\eta\rangle U_2 - a\xi\bigr)\\
&= \bigl(-b_2\langle JU_1,U_2\rangle - a\langle JU_1,\eta\rangle\bigr)U_1 + \bigl(b_1\langle JU_1,U_2\rangle - a\langle JU_2,\eta\rangle\bigr)U_2 + \bigl(b_1\langle JU_1,\eta\rangle + b_2\langle JU_2,\eta\rangle\bigr)\eta - \xi.
\end{aligned}
\]
Thus, we have $-b_2\langle JU_1,U_2\rangle - a\langle JU_1,\eta\rangle = b_1\langle JU_1,U_2\rangle - a\langle JU_2,\eta\rangle = b_1\langle JU_1,\eta\rangle + b_2\langle JU_2,\eta\rangle = 0$.
Using these equalities and $b_1^2 + b_2^2 + a^2 = 1$, it is easy to show that we can write (up to a choice of orientation)
\[
J\xi = b_1U_1 + b_2U_2 + a\eta, \qquad J\eta = -b_2U_1 + b_1U_2 - a\xi, \qquad JU_1 = -aU_2 - b_1\xi + b_2\eta, \qquad JU_2 = aU_1 - b_2\xi - b_1\eta.
\]
For i ∈ {1, 2}, using the Codazzi equation, taking into account that ξ is parallel and that λ 1 and λ 2 are constant, we get
\[
-\frac{3cab_i}{4} = \langle\bar R(U_1,U_2)U_i,\xi\rangle = (\lambda_2-\lambda_i)\langle\nabla_{U_1}U_2,U_i\rangle - (\lambda_1-\lambda_i)\langle\nabla_{U_2}U_1,U_i\rangle.
\]
Since $a\neq 0$ and $b_1^2 + b_2^2 \neq 0$, we readily get $\lambda_1\neq\lambda_2$. Since $\{U_1,U_2\}$ is an orthonormal frame of the tangent bundle we obtain
\[
\nabla_{U_i}U_i = -\frac{3cab_i}{4(\lambda_1-\lambda_2)}\,U_j, \qquad \nabla_{U_i}U_j = \frac{3cab_i}{4(\lambda_1-\lambda_2)}\,U_i, \qquad i,j\in\{1,2\},\ i\neq j. \tag{2}
\]
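For instance (a worked instance of the previous step, added here), taking $i=1$ in the Codazzi identity above, the second summand vanishes because $\lambda_1-\lambda_1 = 0$, and hence
\[
-\frac{3cab_1}{4} = (\lambda_2-\lambda_1)\langle\nabla_{U_1}U_2,U_1\rangle = (\lambda_1-\lambda_2)\langle\nabla_{U_1}U_1,U_2\rangle,
\]
where the last equality uses $0 = U_1\langle U_1,U_2\rangle = \langle\nabla_{U_1}U_1,U_2\rangle + \langle U_1,\nabla_{U_1}U_2\rangle$. Dividing by $\lambda_1-\lambda_2$ gives the first relation in (2); the remaining relations follow in the same way.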
Now, since νM is flat, the Ricci equation implies
\[
\frac{c}{4}\bigl(-b_1^2 - b_2^2 + 2a^2\bigr) = \langle\bar R(U_1,U_2)\xi,\eta\rangle = \langle S_\xi U_1,S_\eta U_2\rangle - \langle S_\eta U_1,S_\xi U_2\rangle = (\lambda_1-\lambda_2)\langle S_\eta U_1,U_2\rangle.
\]
Recall that, since η is perpendicular to the mean curvature vector, we have tr S η = 0, and thus, with respect to the orthonormal basis {U 1 , U 2 } the shape operator S η can be written as
\[
S_\eta = \begin{pmatrix} \mu & -\dfrac{c(1-3a^2)}{4(\lambda_1-\lambda_2)} \\[2mm] -\dfrac{c(1-3a^2)}{4(\lambda_1-\lambda_2)} & -\mu \end{pmatrix} \tag{3}
\]
for some function µ : M → R.
By assumption, the eigenvalues of $S_\eta$ are constant, or equivalently, the functions
\[
\operatorname{tr}S_\eta = 0 \qquad\text{and}\qquad \operatorname{tr}S_\eta^2 = 2\mu^2 + \frac{c^2(1-3a^2)^2}{8(\lambda_1-\lambda_2)^2} \tag{4}
\]
are constant.
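Indeed (a short justification of this equivalence, added here): the matrix (3) is trace-free, so its eigenvalues are
\[
\pm\sqrt{\mu^2 + \frac{c^2(1-3a^2)^2}{16(\lambda_1-\lambda_2)^2}} = \pm\sqrt{\tfrac12\operatorname{tr}S_\eta^2},
\]
and hence they are constant along $M$ exactly when $\operatorname{tr}S_\eta^2$ is constant, which is the content of (4).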
Now we calculate the derivatives of $b_1$, $b_2$ and $a$. We take $i,j\in\{1,2\}$, $i\neq j$. Using (2) and (3) we obtain
\[
\begin{aligned}
U_ib_i &= U_i\langle U_i,J\xi\rangle = \langle\bar\nabla_{U_i}U_i,\,b_iU_i+b_jU_j+a\eta\rangle + \langle U_i,\bar\nabla_{U_i}J\xi\rangle = b_j\langle\nabla_{U_i}U_i,U_j\rangle + a\langle U_i,S_\eta U_i\rangle - \lambda_i\langle U_i,JU_i\rangle\\
&= -\frac{3cab_1b_2}{4(\lambda_1-\lambda_2)} - a(-1)^i\mu,\\
U_ib_j &= U_i\langle U_j,J\xi\rangle = \langle\bar\nabla_{U_i}U_j,\,b_iU_i+b_jU_j+a\eta\rangle + \langle U_j,\bar\nabla_{U_i}J\xi\rangle = b_i\langle\nabla_{U_i}U_j,U_i\rangle + a\langle U_j,S_\eta U_i\rangle - \lambda_i\langle U_j,JU_i\rangle\\
&= \frac{3cab_i^2}{4(\lambda_1-\lambda_2)} - \frac{ca(1-3a^2)}{4(\lambda_1-\lambda_2)} - a(-1)^i\lambda_i,\\
U_ia &= U_i\langle J\xi,\eta\rangle = \langle\bar\nabla_{U_i}J\xi,\eta\rangle + \langle b_iU_i+b_jU_j+a\eta,\,\bar\nabla_{U_i}\eta\rangle = -\lambda_i\langle JU_i,\eta\rangle - b_i\langle U_i,S_\eta U_i\rangle - b_j\langle U_j,S_\eta U_i\rangle\\
&= b_j(-1)^i\lambda_i + b_i(-1)^i\mu + \frac{cb_j(1-3a^2)}{4(\lambda_1-\lambda_2)}.
\end{aligned} \tag{5}
\]
In order to get a relation for the derivatives of µ, we use the Codazzi equation together with (2), (3) and (5) to get, after some calculations
\[
-\frac{3c(-1)^iab_j}{4} = \langle\bar R(U_1,U_2)U_i,\eta\rangle = \langle\nabla_{U_1}S_\eta U_2,U_i\rangle - \langle\nabla_{U_1}U_2,S_\eta U_i\rangle - \langle\nabla_{U_2}S_\eta U_1,U_i\rangle + \langle\nabla_{U_2}U_1,S_\eta U_i\rangle = -U_j\mu - \frac{3ca(b_j\lambda_i + 2b_i\mu)}{2(\lambda_1-\lambda_2)}.
\]
Thus, we obtain
\[
U_i\mu = \frac{3ca}{4(\lambda_1-\lambda_2)}\bigl(b_i\lambda_i - 3b_i\lambda_j - 4b_j\mu\bigr), \qquad i,j\in\{1,2\},\ i\neq j. \tag{6}
\]
The aim of the argument that follows is to show that the functions $b_1$, $b_2$, $a$ and $\mu$ are constant. We first have
Lemma 5.1. If the function $a\colon M\to\mathbb{R}$ is constant, then $b_1$, $b_2$ and $\mu$ are also constant.
Proof. If $a$ is constant, it readily follows from (4) that $\mu$ is constant. Hence, from (6) we get $(\lambda_1-3\lambda_2)b_1 - 4\mu b_2 = -4\mu b_1 + (\lambda_2-3\lambda_1)b_2 = 0$. This is a homogeneous linear system in the variables $b_1$ and $b_2$, whose coefficients are constant. It cannot have a unique solution because $b_1 = b_2 = 0$ is not possible, and thus the rank of the matrix of the system cannot be 2. The rank cannot be 0 because that would imply $\lambda_1 = \lambda_2 = 0$. Thus, it has rank one and we can write $b_2 = \nu b_1$ for some constant $\nu\in\mathbb{R}$. Then $1-a^2 = b_1^2+b_2^2 = (1+\nu^2)b_1^2$ implies that $b_1$ is constant, and hence also $b_2$.
In view of Lemma 5.1, the calculations that follow aim at proving that a is constant. Recall from (4) that tr S 2 η is constant. Hence there is k ∈ R such that
\[
\mu^2 = k - \frac{c^2(1-3a^2)^2}{16(\lambda_1-\lambda_2)^2}. \tag{7}
\]
Taking derivatives in (7) with respect to U i , using (5) and (6) and substituting µ 2 by (7) we get, after some calculations
\[
0 = b_j\bigl((-1)^jc^2(1-3a^2)^2 + 4c(1-3a^2)\lambda_i(\lambda_1-\lambda_2) + 32(-1)^ik(\lambda_1-\lambda_2)^2\bigr) + 4b_i(\lambda_1-\lambda_2)\bigl(c(1-3a^2) - 2(-1)^i(\lambda_1-\lambda_2)(\lambda_i-3\lambda_j)\bigr)\mu. \tag{8}
\]
If $c(1-3a^2) + 2(\lambda_1-\lambda_2)(\lambda_1-3\lambda_2)$ or $c(1-3a^2) - 2(\lambda_1-\lambda_2)(\lambda_2-3\lambda_1)$ is zero in an open set, then the function $a$ is constant and it follows from Lemma 5.1 that $b_1$, $b_2$ and $\mu$ are also constant. As a consequence, we may assume that there is a point in $M$ where these two functions do not vanish, and thus, they do not vanish in an open set. Moreover, if $b_i = 0$ in an open set, then it follows from the first equation in (5) that $\mu = 0$, so by (7), $a$ is constant, and thus also $b_j$. Hence, we also assume that $b_i$, $i=1,2$, is not zero on an open set. Thus, from (8) we get two possible expressions for $\mu$, and combining this with (7) yields
\[
\begin{aligned}
0 &= k - \frac{c^2(1-3a^2)^2}{16(\lambda_1-\lambda_2)^2} - \frac{-b_2\bigl(c^2(1-3a^2)^2 + 4c(1-3a^2)\lambda_1(\lambda_1-\lambda_2) - 32k(\lambda_1-\lambda_2)^2\bigr)}{4b_1(\lambda_1-\lambda_2)\bigl(c(1-3a^2)+2\lambda_1^2-8\lambda_1\lambda_2+6\lambda_2^2\bigr)}\cdot\frac{b_1\bigl(c^2(1-3a^2)^2 - 4c(1-3a^2)\lambda_2(\lambda_1-\lambda_2) - 32k(\lambda_1-\lambda_2)^2\bigr)}{4b_2(\lambda_1-\lambda_2)\bigl(c(1-3a^2)+6\lambda_1^2-8\lambda_1\lambda_2+2\lambda_2^2\bigr)}\\
&= \frac{-c^3(1-3a^2)^3 - 3c^2(1-3a^2)^2\bigl(4k+(\lambda_1-\lambda_2)^2\bigr) + 16k(\lambda_1-\lambda_2)^2\bigl(16k+3\lambda_1^2-10\lambda_1\lambda_2+3\lambda_2^2\bigr)}{4\bigl(c(1-3a^2)+2\lambda_1^2-8\lambda_1\lambda_2+6\lambda_2^2\bigr)\bigl(c(1-3a^2)+6\lambda_1^2-8\lambda_1\lambda_2+2\lambda_2^2\bigr)}.
\end{aligned}
\]
This equation implies that $1-3a^2$ is constant, and hence, by Lemma 5.1 we get that $b_1$, $b_2$ and $\mu$ are also constant.
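To see why $1-3a^2$ must be constant (a brief elaboration added here): clearing denominators, the identity above says that the continuous function $t = c(1-3a^2)$ satisfies the cubic equation with constant coefficients
\[
t^3 + 3\bigl(4k + (\lambda_1-\lambda_2)^2\bigr)t^2 - 16k(\lambda_1-\lambda_2)^2\bigl(16k + 3\lambda_1^2 - 10\lambda_1\lambda_2 + 3\lambda_2^2\bigr) = 0,
\]
so $t$ takes at most three values; on the connected neighborhood where we work it must therefore be constant.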
Using (5) we get
\[
0 = U_1b_1 + U_2b_2 = -\frac{3cab_1b_2}{2(\lambda_1-\lambda_2)},
\]
and since $a\neq 0$ we get $b_1 = 0$ or $b_2 = 0$. We may assume $b_1\neq 0$, $b_2 = 0$, $a^2 = 1-b_1^2$. Then, by (5) we obtain $0 = U_2b_2 = -a\mu$, so $\mu = 0$. Next, equation (6) implies that $0 = U_1\mu = 3cab_1(\lambda_1-3\lambda_2)/(4(\lambda_1-\lambda_2))$, and thus, $\lambda_1 = 3\lambda_2 \neq 0$. Finally, using (5) once more,
\[
0 = U_1b_2 = \frac{3cab_1^2 - ca(1-3a^2)}{4(\lambda_1-\lambda_2)} + a\lambda_1 = \frac{a(c+12\lambda_2^2)}{4\lambda_2}.
\]
Hence, if c > 0 we get a contradiction, which yields Proposition 5.2. A Terng-isoparametric surface of CP 2 is isoparametric.
Otherwise, if $c<0$ we have $\lambda_2 = \pm\sqrt{-3c}/6$. By changing the orientation if necessary, we may assume $\lambda_2>0$. Finally, (5) yields $0 = U_2a = cb_1(9b_1^2-8)/(4\sqrt{-3c})$. Altogether we have obtained
\[
S_\xi = \begin{pmatrix} \dfrac{\sqrt{-3c}}{2} & 0 \\[2mm] 0 & \dfrac{\sqrt{-3c}}{6} \end{pmatrix}, \qquad
S_\eta = \begin{pmatrix} 0 & \dfrac{\sqrt{-3c}}{6} \\[2mm] \dfrac{\sqrt{-3c}}{6} & 0 \end{pmatrix}, \qquad
a = \frac13, \quad b_1 = \frac{2\sqrt2}{3}, \quad b_2 = 0.
\]
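As a consistency check of these values (a worked verification added here): with $a = 1/3$ and $\lambda_1-\lambda_2 = \sqrt{-3c}/2 - \sqrt{-3c}/6 = \sqrt{-3c}/3$, the off-diagonal entry of (3) is
\[
-\frac{c(1-3a^2)}{4(\lambda_1-\lambda_2)} = -\frac{\tfrac23\,c}{\tfrac43\sqrt{-3c}} = \frac{-c}{2\sqrt{-3c}} = \frac{\sqrt{-3c}}{6},
\]
in agreement with the matrix $S_\eta$ above, and $\cos\theta = |\langle JU_1,U_2\rangle| = a = 1/3$ recovers the Kähler angle $\arccos(1/3)$ of the Chen's surface.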
Finally, it follows from [9, Theorem 5.1(vi)] that M is an open part of a Chen's surface, as we wanted to show.
References
[1] D. V. Alekseevsky, A. J. Di Scala: The normal holonomy group of Kähler submanifolds, Proc. London Math. Soc. (3) 89 (2004), no. 1, 193-216.
[2] J. Berndt, S. Console, C. Olmos: Submanifolds and holonomy, Chapman & Hall/CRC Research Notes in Mathematics, 434, Chapman & Hall/CRC, Boca Raton, FL, 2003.
[3] J. Berndt, J. C. Díaz-Ramos: Real hypersurfaces with constant principal curvatures in the complex hyperbolic plane, Proc. Amer. Math. Soc. 135 (2007), 3349-3357.
[4] J. Berndt, J. C. Díaz-Ramos: Homogeneous hypersurfaces in complex hyperbolic spaces, Geom. Dedicata 138 (2009), 129-150.
[5] J. Berndt, J. C. Díaz-Ramos: Polar actions on the complex hyperbolic plane, Ann. Global Anal. Geom. 43 (2013), 99-106.
[6] E. Cartan: Familles de surfaces isoparamétriques dans les espaces à courbure constante, Ann. Mat. Pura Appl. IV. Ser. 17 (1938), 177-191.
[7] B.-Y. Chen: Special slant surfaces and a basic inequality, Results Math. 33 (1998), no. 1-2, 65-78.
[8] B.-Y. Chen, K. Ogiue: Two theorems on Kaehler manifolds, Michigan Math. J. 21 (1974), 225-229 (1975).
[9] B.-Y. Chen, Y. Tazawa: Slant submanifolds of complex projective and complex hyperbolic spaces, Glasg. Math. J. 42 (2000), no. 3, 439-454.
[10] J. Dadok: Polar coordinates induced by actions of compact Lie groups, Trans. Amer. Math. Soc. 288 (1985), no. 1, 125-137.
[11] J. C. Díaz-Ramos, M. Domínguez-Vázquez, A. Kollross: Polar actions on complex hyperbolic spaces, arXiv:1208.2823v2 [math.DG].
[12] J. C. Díaz-Ramos, M. Domínguez-Vázquez, V. Sanmartín-López: Isoparametric hypersurfaces in complex hyperbolic spaces, arXiv:1509.02498 [math.DG].
[13] J. C. Díaz-Ramos, M. Domínguez-Vázquez, C. Vidal-Castiñeira: Real hypersurfaces with two principal curvatures in complex projective and hyperbolic planes, to appear in J. Geom. Anal.
[14] M. Domínguez-Vázquez: Isoparametric foliations on complex projective spaces, Trans. Amer. Math. Soc. 368 (2016), no. 2, 1211-1249.
[15] E. Heintze, X. Liu, C. Olmos: Isoparametric submanifolds and a Chevalley-type restriction theorem, in Integrable systems, geometry, and topology, 151-190, AMS/IP Stud. Adv. Math., 36, Amer. Math. Soc., Providence, RI, 2006.
[16] F. Podestà, G. Thorbergsson: Polar actions on rank-one symmetric spaces, J. Differential Geom. 53 (1999), 131-175.
[17] B. Segre: Famiglie di ipersuperficie isoparametriche negli spazi euclidei ad un qualunque numero di dimensioni, Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Natur. (6) 27 (1938), 203-207.
[18] C. Somigliana: Sulle relazioni fra il principio di Huygens e l'ottica geometrica, Atti Acc. Sc. Torino LIV (1918-1919), 974-979.
[19] C.-L. Terng: Isoparametric submanifolds and their Coxeter groups, J. Differential Geom. 21 (1985), no. 1, 79-107.
[20] G. Thorbergsson: Isoparametric foliations and their buildings, Ann. of Math. (2) 133 (1991), 429-446.
[21] G. Thorbergsson: Singular Riemannian foliations and isoparametric submanifolds, Milan J. Math. 78 (2010), no. 1, 355-370.
[22] Q.-M. Wang: Real hypersurfaces with constant principal curvatures in complex projective spaces (I), Sci. Sin., Ser. A 26 (1983), 1017-1024.
[23] B. Wu: Isoparametric submanifolds of hyperbolic spaces, Trans. Amer. Math. Soc. 331 (1992), no. 2, 609-626.
| []
|